One of the more interesting panel sessions I attended at the FIX Community Americas Trading Briefing discussed how the technologies used on the buy-side and sell-side are changing and, as a result, how buy- and sell-side roles are changing with them.
The panelists agreed that, on the one hand, buy-side firms were becoming more empowered by technology: “there are more ways of getting to more places,” which means they can “take much more control of where their order flow goes and how it gets filled.”
On the other hand, this empowerment is also resulting in high levels of frustration with “the number of providers they need to integrate with” and “the amount of detail they need to understand about each provider’s algorithm.”
Some panelists believe these challenges and frustrations are the key drivers behind buy-side adoption of technologies and data analysis capabilities traditionally seen on the sell-side. As one put it, they are “seeing some of the larger buy-side firms build their own trading research and analytics divisions.”
We see these trends within our customer base as well. A growing number of buy-side firms have infrastructure that looks like what we see in traditional sell-side firms. One can surmise that this is partly because it is now possible: the technology and expertise are available, and the cost of low-latency direct market access (DMA) has come down. If a buy-side firm wants to invest the time and resources in building its own execution algorithms and taking direct control over execution strategy, it is now able to do so.
One interesting thing we’re seeing is that this infrastructure build-out is not limited to large buy-side firms, but extends to any firm (buy or sell side) with automated trading strategies it wants to tune and optimize for performance. For example, some trading strategies only work under specific infrastructure latency characteristics, such as receiving market data at a particular rate or accessing available liquidity within a particular period of time. When the underlying infrastructure changes significantly (e.g., when a strategy or venue moves), the conditions change and throw everything off. The automated strategy operating on the changed infrastructure is now potentially less reliable in its execution and needs retuning.
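To make the retuning trigger concrete, the latency conditions a strategy was tuned for can be captured as an explicit envelope and checked against live measurements. Here is a minimal sketch in Python; the class, field names, and thresholds are all hypothetical, not a real trading API:

```python
from dataclasses import dataclass
from statistics import median

@dataclass
class LatencyEnvelope:
    """Hypothetical record of the conditions a strategy was tuned for."""
    max_md_latency_us: float   # market data must arrive within this window
    max_order_rtt_us: float    # order round trip must complete within this window

def within_envelope(md_latencies_us, order_rtts_us, env):
    """Return True if recent infrastructure measurements still match
    the conditions the strategy was tuned for."""
    return (median(md_latencies_us) <= env.max_md_latency_us
            and median(order_rtts_us) <= env.max_order_rtt_us)

# A strategy tuned for 50us market data and 200us order round trips:
env = LatencyEnvelope(max_md_latency_us=50.0, max_order_rtt_us=200.0)

# Before a change, measurements sit inside the envelope:
print(within_envelope([40, 45, 48], [150, 180, 170], env))  # True

# After, say, a venue move, round trips drift out of the envelope,
# signalling that the strategy needs retuning before it keeps trading:
print(within_envelope([40, 45, 48], [150, 600, 590], env))  # False
```

The point of the sketch is that the tuning assumptions become data that can be monitored, rather than implicit knowledge that only surfaces when execution quality degrades.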
This is one reason transaction transparency becomes so important for algorithmic trading strategies. Accurately timestamped transaction data lets you understand precisely what an algorithm is doing, not just what you intended it to do. With the right correlation analytics, you gain insight into how infrastructure characteristics impact your trading strategy’s performance.
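As a sketch of what such correlation analytics might look like, the snippet below buckets orders by their measured infrastructure latency and computes the fill rate per bucket. The record format and bucket size are assumptions for illustration only:

```python
from collections import defaultdict

def fill_rate_by_latency(orders, bucket_us=100):
    """Bucket orders by measured latency (microseconds) and compute
    the fill rate in each bucket. Each order is a simplified,
    hypothetical record with 'latency_us' and 'filled' keys."""
    buckets = defaultdict(lambda: [0, 0])  # bucket start -> [filled, total]
    for o in orders:
        b = int(o["latency_us"] // bucket_us) * bucket_us
        buckets[b][1] += 1
        if o["filled"]:
            buckets[b][0] += 1
    return {b: filled / total for b, (filled, total) in sorted(buckets.items())}

orders = [
    {"latency_us": 80,  "filled": True},
    {"latency_us": 95,  "filled": True},
    {"latency_us": 240, "filled": True},
    {"latency_us": 260, "filled": False},
    {"latency_us": 410, "filled": False},
]
print(fill_rate_by_latency(orders))
# {0: 1.0, 200: 0.5, 400: 0.0} -- fill rate degrades as latency grows
```

Even this toy aggregation shows the shape of the insight: the correlation between infrastructure latency and execution outcome becomes visible only because each transaction carries an accurate timestamp.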
While transparency can mean different things to different people (even among the panelists), they all agreed that most buy- and sell-side technology trends are transparency related. The more transparency you can provide to your customers, the more confident they are in your service.
What we learned from working with the sell side is that you cannot separate the behavior of the infrastructure from the behavior of the trading algorithms. The connection is easiest to see when the strategy is based on speed, as in the example above. However, automated strategies not based on speed also have infrastructure performance implications.
Those “low speed” strategies are still based on an expectation of good execution performance. For example, when a trigger causes the algorithm to send out an order, the expectation is that latency performance is good enough for that order to get filled or hit. In other words, the performance of the infrastructure and counterparty shouldn’t be so slow that the order goes unfilled. We’ve seen cases on the sell side where there is good transparency on how an algorithm is designed to work but opaqueness on how it is actually behaving in machine time, which produces confusing and inconsistent results.
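One simple way to surface that machine-time behavior is to compare order-send and fill-receive timestamps against the round-trip window the strategy implicitly assumes. A sketch under assumed data, where the event tuples and threshold are illustrative:

```python
def slow_fills(events, max_rtt_ms):
    """Given accurately timestamped (order_id, sent_ms, fill_received_ms)
    tuples, return the orders whose fill round trip exceeded the window
    the strategy expects. Field names and threshold are illustrative."""
    return [
        (oid, fill - sent)
        for oid, sent, fill in events
        if fill - sent > max_rtt_ms
    ]

events = [
    ("ord-1", 1000.0, 1002.5),  # 2.5 ms round trip: as designed
    ("ord-2", 1010.0, 1085.0),  # 75 ms: machine-time behavior diverges
    ("ord-3", 1020.0, 1021.0),
]
print(slow_fills(events, max_rtt_ms=10.0))  # [('ord-2', 75.0)]
```

Outliers like the second order are exactly the cases where the designed behavior and the machine-time behavior disagree, and timestamped data is what makes them visible.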
We agree with the panel that transparency “isn’t just about getting more data, because you just get information overload.” Instead, new data should be analyzed with the goal of providing more insight into “what your clients are expecting from the products that you are offering them,” helping them understand “where their orders are going, whether they are getting filled” and, equally important, why they are acting that way.