Interview for the High Frequency Trading Review

Discussion in 'QuantDeveloper' started by cbi_luoy, Nov 1, 2011.

  1. An Interview with Stephane Leroy
    Sponsored by QuantHouse

    In this interview for the High Frequency Trading Review, Mike O’Hara talks to Stephane Leroy, Head of Global Sales and Marketing at technology vendor QuantHouse, an independent global provider of end-to-end systematic trading solutions.

    At QuantHouse, Mr. Leroy is responsible for developing and executing the company’s sales and marketing strategies.

    High Frequency Trading Review: Stephane, maybe we can start with you giving me some background on Quant House, who you are and what you do.

    Stephane Leroy: Sure. QuantHouse is a global company, created as a spin-off of a high frequency trading hedge fund. We are quite solid in the sense that one of our shareholders is Newedge, one of the largest prime brokers in the world. They own 25 percent of the company, so they don’t control us but they gave us the capital to develop our leading edge technology and also to maintain our unique fiber optic global network.

    We have more than 100 clients throughout the US, Europe and Asia. Since the beginning, the mission statement for QuantHouse has been to provide next generation trading technology to support the hedge fund community. We work with the most important hedge funds in the world, as well as prime brokers, MTFs and exchanges.

    We have what we call an “end to end solution” for our clients, which we break down into three product sets. QuantFEED gives our clients everything they need to manage any market data-related topic, from standardization to storage and replay to communication and so on. Exchanges such as BATS Trading, TOM MTF and Turquoise from the London Stock Exchange Group use this to feed their Smart Order Routing, and the hedge funds who use this technology include firms like Systematic Alpha and Quantitative Investment Management.

    We also have QuantFACTORY, which is a .NET development framework. In short, it’s a next generation trading tool that allows clients to research, develop, back-test and execute trading models. Companies such as Credit Suisse and QIM use this framework to develop their models and run them on the markets.

    The third leg of our product portfolio is QuantLINK, a set of trading infrastructure services covering proximity hosting, co-location, our order routing transportation layer and direct market access: basically any service a client needs to handle their trading infrastructure.

    So with QuantFEED we have clients who detect information before their competitors; with QuantFACTORY, we have clients who launch new models before their competitors; and with QuantLINK, we help clients to match their order flow before their competitors.

    HFTR: Is that how you mainly differentiate yourself, through speed?

    SL: Speed is important, yes, but it’s far from being the only factor. It is also very important to understand that we are a software house, so we control the end-to-end process in terms of R&D. This is quite a different approach from that of some other vendors in the HFT space, who are primarily integrators with no IP (Intellectual Property) of their own: they just buy and resell technology; they don’t create anything or own anything outright. So if something is new, or if their client has a very sensitive technical issue, they and their clients have to rely on someone else for help. If they run their network on an outsourced carrier service, for example, then they’re reliant upon their carrier to manage their routers, their network infrastructure, their fiber and so on. Whereas here at QuantHouse, we manage the entire trading infrastructure. We’re not only in full control of the middleware piece, but also of the end-to-end trading infrastructure, from data capture to data distribution, including data standardization.

    HFTR: OK, so in terms of where you see yourself competing in the market, you’ve indicated that on one side you have integration firms who might provide a low latency infrastructure, but on the back of other vendors’ products and services that they resell. But presumably you’re also competing with the legacy market data vendors, correct?

    SL: Yes, on the QuantFEED side, more and more of our clients are asking us either to replace the parts of their ticker plants that involve legacy technology or to replace that legacy technology altogether.

    With QuantFACTORY, although we compete with the other software houses, our product is quite unique in the market. So if you were to ask me who we’re competing with in this space, it’s actually our clients themselves, with their own internally developed solutions. Often when clients talk with us, they want to assess their own level of technology before they eventually switch over to ours. So most of the time we compete against their internal IT departments, because they’re benchmarking the capabilities of our technology against what they are able to do themselves. Most of the time, they sensibly reallocate their resources to something other than their development framework or their feed technology.

    HFTR: In that case, how do you actually work together with those clients who might traditionally take the approach of developing their own systems? Could QuantFACTORY give them the ability to continue building the models and the algos themselves, but on your next-generation development framework to speed up their development and back-testing process, for example?

    SL: Yes, absolutely. QuantFACTORY is in fact a plug-in within Visual Studio. In other words, you don’t have to learn any scripting language or proprietary development language to use QuantFACTORY. You just leverage your knowledge of industry-standard languages such as C# or C++. So yes, as a client you can carry over what you’ve done so far with your own model development into QuantFACTORY, because it’s the same programming language. In addition to that, you can use the thousands of APIs, callbacks and functions from QuantFACTORY to save time and speed up the development process.
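
    As a rough illustration of what this kind of callback-driven model development in plain C# can look like, here is a minimal sketch. The names (IExecutionContext, Tick, MomentumStrategy, OnTick) are hypothetical stand-ins, not QuantFACTORY’s actual API:

    ```csharp
    using System;

    // Hypothetical framework types: QuantFACTORY's real API is not shown in
    // this interview, so these names are illustrative only.
    public interface IExecutionContext
    {
        void SendMarketOrder(string instrument, int quantity); // + = buy, - = sell
    }

    public sealed class Tick
    {
        public string Instrument { get; init; } = "";
        public DateTime Timestamp { get; init; }  // exchange timestamp
        public double Bid { get; init; }
        public double Ask { get; init; }
    }

    // A toy momentum strategy: the framework drives it through callbacks, so
    // the developer writes plain C#, with no proprietary scripting language.
    public sealed class MomentumStrategy
    {
        private double? _lastMid;
        private readonly IExecutionContext _ctx;

        public MomentumStrategy(IExecutionContext ctx) => _ctx = ctx;

        // Called by the framework on every normalized tick.
        public void OnTick(Tick tick)
        {
            double mid = (tick.Bid + tick.Ask) / 2.0;
            if (_lastMid is double prev && mid > prev)
                _ctx.SendMarketOrder(tick.Instrument, +1); // toy logic only
            _lastMid = mid;
        }
    }
    ```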

    HFTR: OK, well maybe we can take a step back and look at why high frequency trading is forcing the trading community in general to move away from the old legacy systems. If a firm is involved in automated trading in any way, what do you think are the main reasons why they might need to move away from a typical legacy infrastructure to a complete “next-generation” type technology? What kind of problems do they have?

    SL: They have a number of problems, starting with keeping up with the rapid pace of change.
    The entire market is switching from screen-based trading to systematic trading technologies. As a consequence, the firms who were early adopters of systematic trading technology are seeing their number of potential competitors grow significantly. In addition, the technical barrier to entering this market has come down, because new entrants can leverage technology providers such as QuantHouse.

    QuantHouse’s message to this community is to reallocate their resources to model development rather than technology development, leveraging our state-of-the-art solutions.

    More specifically, there is a big issue of data quality, particularly the granularity of data. This data quality aspect is really critical. For example, all the legacy data vendors perform some kind of data consolidation. If the user is a human being, who cares, right? But if you’re trying to apply that kind of technology at a systematic trading firm, where they develop models and do back-testing, they want the finest-grained timestamps to make sure they’re able to back-test with the highest quality. It’s a complete disaster to have consolidated data, because a back-test is already an assumption about the market. So if you’re making an assumption on data that’s not a pure reflection of reality as it happened, it’s an assumption on top of an assumption. It’s really a disaster!

    In terms of back-testing, the quality of the data is absolutely critical for good results. You need real feedback on how a model would perform in real time. Using legacy data in your back-testing process is like putting water in the fuel tank of a Ferrari. It won’t work. Legacy data vendors are still building on technology that, for the last 20-25 years, was designed for human beings and human beings only. And a human being is a really poor machine: we are only able to see one piece of information every second, whereas in that same time period a machine is able to see millions.
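
    To make that concrete, here is a small self-contained C# sketch (illustrative code on synthetic data, not QuantHouse’s) showing how a once-per-second consolidated feed collapses many raw ticks into one and hides the intra-second price path a back-test would need:

    ```csharp
    using System;
    using System.Linq;

    class ConflationDemo
    {
        static void Main()
        {
            // Ten synthetic raw ticks inside a single second.
            var t0 = new DateTime(2011, 11, 1, 9, 30, 0);
            var rawTicks = Enumerable.Range(0, 10)
                .Select(i => (Time: t0.AddMilliseconds(i * 100),
                              Price: 100.0 + Math.Sin(i)))  // price moves intra-second
                .ToList();

            // A consolidated feed keeps only the last tick of each second.
            var conflated = rawTicks
                .GroupBy(t => new DateTime(t.Time.Year, t.Time.Month, t.Time.Day,
                                           t.Time.Hour, t.Time.Minute, t.Time.Second))
                .Select(g => g.Last())
                .ToList();

            Console.WriteLine($"Raw ticks: {rawTicks.Count}, conflated: {conflated.Count}");
            Console.WriteLine($"Intra-second high, raw feed:       {rawTicks.Max(t => t.Price):F4}");
            Console.WriteLine($"Intra-second high, conflated feed: {conflated.Max(t => t.Price):F4}");
            // A back-test on the conflated series never sees the real high, so any
            // fill or stop logic around it is an assumption on top of an assumption.
        }
    }
    ```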

    The concept of back-testing doesn’t exist in the screen-based trading world, because a trader doesn’t care about a back-test; for him it’s just a curve on a screen and that’s it. In reality, back-testing is a completely new thing for the financial community, including the legacy data vendors. And the reason why they were not, are not, and will not be able to adapt their existing technology is that the technology is simply too old to keep up with this kind of change. The best way is to start from a blank sheet, but legacy vendors can’t do that because they have their legacies to manage. So that’s problem number one. In addition, they have outsourced part or all of their software R&D, as well as their network infrastructure. As a consequence, the level of control and the technology edge they can provide to a client base looking for innovation is very limited.

    Problem number two is the speed of seeing an opportunity, which depends on the speed of capturing market data. Legacy data vendors use third party networks to collect data from the exchanges, and they then bring it into a central point. With QuantHouse, by contrast, we have a distributed network: we are co-located within each exchange and our optimization layer sits right next to the matching engine.

    The moment we capture the data coming out of the matching engine within the exchange, we start to standardize that data immediately within our technology. We optimize both the data capture and the data distribution process. So problem number two is the fact that legacy data is basically just historical data. What happened three, four, five hundred milliseconds ago is, for us, just historical data.

    The third problem is that systematic trading requires brand new market data features, such as storage and replay, which just don’t exist in legacy systems. For example, a client might use our market data framework to create new signals and then publish those new signals to their existing black box using the QuantHouse API. We are able to provide these features as standard services, whereas they are completely new to legacy data vendors, who are just not able to provide this kind of value-added technology. Most of the time, legacy data vendors are working in a mature market: all their products are commoditized, and they have already outsourced their development, to India for example.
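
    As an illustration of the storage-and-replay idea, the sketch below pushes prices (live, or replayed from a recording) through a single handler that computes a trivial moving-average signal and publishes it to a black box. SignalPipeline and the publish callback are hypothetical, not the QuantHouse API:

    ```csharp
    using System;
    using System.Collections.Generic;

    // Illustrative only: the same OnPrice handler can be driven by a live feed
    // or by a replay of stored ticks, which is what makes recorded data
    // directly reusable for research and signal generation.
    public sealed class SignalPipeline
    {
        private const int WindowSize = 20;
        private readonly Queue<double> _window = new();

        public void OnPrice(DateTime time, double price, Action<string> publish)
        {
            _window.Enqueue(price);
            if (_window.Count > WindowSize) _window.Dequeue();
            if (_window.Count < WindowSize) return;

            double mean = 0;
            foreach (var p in _window) mean += p;
            mean /= WindowSize;

            // Publish a simple signal to the client's existing black box.
            if (price > mean) publish($"{time:O} LONG_BIAS {price:F4}");
        }
    }
    ```

    Driven live, publish would hand the signal to the client’s trading application; driven from storage, the same code evaluates the signal historically.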

    As a next generation technology provider, the last thing we would do is outsource anything. We keep everything in house because technology is critical. So we have our own developers in house, we have our own network teams and fiber optic teams in house, and we own, control and manage our own technology, because this is really key for us and, obviously, for our clients.

    HFTR: So do you think the market has now become a two-tier structure, where on the one hand you have the high frequency traders who use next generation technology of the sort that QuantHouse provides, and on the other you have more traditional investment firms using legacy infrastructure? If so, how disadvantaged are those firms?

    SL: I don’t think that the second layer is at a disadvantage compared to the first, as long as they don’t want to compete with one another. They are just not in the same space. But the moment those guys want to compete in what we call “technology sensitive” trading, if they stick with legacy data, they won’t be able to participate. It’s just not possible.

    What companies such as QuantHouse do is help create an intermediary layer between those tiers you’ve identified. These companies are being created from scratch, by people coming out of the proprietary trading desks of investment banks as those desks are spun off into the market. And we help those firms compete with the tier ones because, although we have a couple of tier-one firms using our technology, there are only maybe 10 to 15 really huge high frequency trading firms out there, and they use their own internal technology. What we do is help the mainstream compete against those firms without the burden of developing the technology themselves and, of course, without the huge investment in human and financial resources.

    HFTR: So how do you go about helping that intermediary layer? How could QuantHouse help a start-up HFT firm or a systematic hedge fund put an infrastructure in place?

    SL: Well, we are able to provide the fastest infrastructure as well as the most efficient software on top of it, in a way that is very efficient for a new client, because we have nothing left to build; everything is built already. And we’ve standardized the way users access our technology to the point that it takes a new client just two or three hours to start using our QuantFEED ultra-low-latency market data technology. That’s it. Why? Because that’s just the time it takes for the hedge fund to integrate our API. Set up the API, and that’s it. Then it’s just a matter of connectivity if they want to be hosted with us. But from a technology standpoint, it’s just a couple of hours. If you wanted to build what we have built on your own, it would cost you a huge investment in time and money. But in a couple of hours, a new firm is able to compete against the tier ones on an equal footing.
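
    To show why the integration surface can be that small, here is a hedged sketch of a callback-style feed integration. FeedClient and its members are invented stand-ins (with a stub that emits fake quotes so the example runs), not QuantFEED’s real API:

    ```csharp
    using System;
    using System.Threading;

    // Hypothetical stand-in for a vendor feed client; the stub below just
    // emits synthetic quotes so the sketch is runnable end to end.
    public sealed class FeedClient
    {
        public event Action<string, double, double>? OnTick; // instrument, bid, ask
        private string? _subscribed;

        public void Subscribe(string instrument) => _subscribed = instrument;

        public void Run()
        {
            var rng = new Random(42);
            for (int i = 0; i < 5 && _subscribed != null; i++)
            {
                double bid = 1.37 + rng.NextDouble() / 1000;  // synthetic quote
                OnTick?.Invoke(_subscribed, bid, bid + 0.0001);
                Thread.Sleep(100);
            }
        }
    }

    class Integration
    {
        static void Main()
        {
            // The entire client-side integration: register a callback, subscribe.
            var feed = new FeedClient();
            feed.OnTick += (instrument, bid, ask) =>
                Console.WriteLine($"{instrument} {bid:F5}/{ask:F5}");
            feed.Subscribe("EUR/USD");
            feed.Run();
        }
    }
    ```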

    HFTR: In a fully hosted environment?

    SL: Absolutely. We host their black box, they integrate our API into their trading application, they use our direct market access across the globe, and off we go.

    HFTR: What if a firm is doing smart order routing across a number of exchanges, either running an equities trading book or running equities execution on behalf of their customers, requiring access to multiple trading venues and multiple exchanges around the world? That’s a pretty complex infrastructure to have to maintain. Can QuantHouse take care of all of that?

    SL: Yes, it’s just one API, and the client can then see all the production markets we are connected to, receive the service in a low latency manner and send his order flow to any of the 45 or so exchanges we have direct market access to. Then in terms of risk management, for example, the client is able to select whatever technology he wants, whether it’s homemade technology or technology available on the market, like FTEN, ULLINK or Object Trading. In fact, once the client has selected his technology, we host that for him too, and he (or his broker) is able to use that platform to risk-manage the order flow.

    HFTR: Some firms seem to be moving more towards hardware-based solutions for some of the things you’ve described. Do you develop everything in software or do you use any programmable hardware, like FPGAs (Field-Programmable Gate Arrays) for example?

    SL: FPGA is an approach that is specifically designed to be implemented next to the matching engine: to decode the raw feed directly, normalize it and send the normalized tick data to a trading application. An FPGA is not an ideal solution when trading models deal with several exchanges, because the incoming data comes from different sources. So, number one, what’s the point of having an FPGA that deals in microseconds or picoseconds when one of the sources of data is in fact seconds or milliseconds away from where the chip is? Number two, an FPGA is very fast but very limited in terms of value-added features: if the model requests a snapshot, or wants access to a directory lookup, or any other value-added service a black box needs, the FPGA is not able to do that. It has to go outside the chip and deal with the operating system, and that process is far less efficient than the equivalent with a software feed handler, for example.

    We don’t deny that FPGAs might be a good solution for pure speed trading on just one exchange, when you’re doing something very simple. But if that’s what you’re doing, it’s just arbitrage between instruments, that’s it. “Speed trading” had a beginning and it will have an end. The future of optimized trading is going to be based around more complex and more sophisticated trading models, rather than just comparing one instrument against another and trading the difference. That’s the reason why we believe the software approach will be dominant in the years to come.

    C# or C++ combined with Linux is the “secret sauce”, the “winning cocktail”, for high frequency trading firms when they deal with software technology. In fact, at QuantHouse we develop multi-threaded software running on 64-bit Linux with an optimized kernel.
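
    As a generic illustration of that kind of multi-threaded design (not QuantHouse’s actual code), the sketch below decouples a decoder thread from a consumer thread with a bounded queue, a common feed-handler layout:

    ```csharp
    using System;
    using System.Collections.Concurrent;
    using System.Threading;

    // One thread decodes packets, another consumes normalized ticks; the
    // bounded queue between them keeps a slow consumer from stalling decoding.
    class PipelineDemo
    {
        static void Main()
        {
            using var queue = new BlockingCollection<double>(boundedCapacity: 1024);

            var decoder = new Thread(() =>
            {
                for (int i = 0; i < 100; i++)
                    queue.Add(100.0 + i * 0.01); // stand-in for decoded tick prices
                queue.CompleteAdding();
            })
            { Name = "decoder" };

            var consumer = new Thread(() =>
            {
                foreach (var price in queue.GetConsumingEnumerable())
                    if (price > 100.5)                    // trivial strategy check
                        Console.WriteLine($"signal at {price:F2}");
            })
            { Name = "consumer" };

            decoder.Start();
            consumer.Start();
            decoder.Join();
            consumer.Join();
        }
    }
    ```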

    HFTR: And presumably you work closely with some of the hardware vendors like Intel and others?

    SL: Correct. In fact, if you Google “QuantHouse Intel”, you will see that around three years ago we were able to say that we were approaching 2.5 million messages per second with our feed handler. And it was Intel coming up with those figures, not us, because they use our technology to benchmark their latest chip generations. They ran the same test a couple of months ago, and now it’s 5.55 million messages per second, which is quite significant. So we have hardware vendors partnering with us to demonstrate that their latest generation of technology is really the best one.

    HFTR: Which other vendors do you partner with, whether on the hardware side, the infrastructure side, the operating system side, or the software side?

    SL: Well, we have Intel, and then we have a very specific, in-depth relationship with the Linux community. In fact, we fine-tune the Linux kernel, so we have our own version of it, to make sure it’s really tailored for data optimization and the specific applications our clients are using. The other partners we have are more those used by our clients in combination with QuantHouse products: OMS, SOR or risk management providers such as ULLINK, FTEN or Object Trading. Our clients, whether brokers or hedge funds, use those additional solutions to secure their trading processes.

    HFTR: In conclusion, I’d like to ask you about the total cost of ownership of these solutions and to get your views on the whole “buy versus build” argument. What are your thoughts there?

    SL: You need to understand that a couple of years ago there were only five or ten early adopters in the whole algo trading market worldwide. They were able to build everything themselves, and the reward was high because there were really no offerings on the market. But now companies such as QuantHouse exist, and it’s getting more and more complex for clients to deal with this kind of technology themselves. Just for market data with our QuantFEED product line, we cover around 40 or 45 exchanges here at QuantHouse, and we have an average of one to two major changes every week to manage, whether it’s a protocol change, an infrastructure change, a data center move, a symbology modification, or whatever else comes from the exchanges.

    And when I say more and more complex to maintain, it’s not only at the infrastructure level but also at the software level. So the cost to enter this market, if you want to build it yourself, is getting higher and higher.

    One of the key values we offer to new entrants, or to clients who want to change their business model, is that we allow them to have the latest performance in terms of technology without the burden of developing and maintaining it. The total cost of ownership includes the cost of development, the cost of maintenance, the cost of infrastructure and all sorts of other associated costs. And if you add all the connectivity costs the different exchanges ask you to pay, at the end of the day it runs into millions. This, together with the accelerated time to market, favors the buy over the build approach. This is the reason why a lot of clients are coming to us to help them out-task their existing technology, so they are able to reallocate their precious resources and people to value-added services.

    HFTR: Thank you Stephane.

    Biography
    In his role as Global Head of Sales and Marketing at QuantHouse, Stephane Leroy is responsible for developing and executing the company’s sales and marketing strategies. Before joining QuantHouse, Leroy was Area Manager for BT Radianz, where he established and developed the BT Radianz business throughout Continental Europe from its formation in 2000. Mr. Leroy began his career in 1992 at Borland, where he held several marketing positions. He then joined AT&T Global Information Solutions France as a marketing and sales manager, and later moved to NCR Global Services (formerly AT&T Global Information Solutions) to become Head of Global Account Sales for Europe. Leroy divides his time between the UK, the US and Continental Europe. Mr. Leroy holds a Master of Computer Science Engineering and Mathematics from ESME/SUDRIA in Paris. He also holds a Master of Strategic Marketing from HEC Paris.
