It is fascinating to watch how FX trading systems are undergoing transformations similar to those that have already occurred in equities trading over the past several years. One central theme is that of performance, with our customers tackling a host of questions that cluster around a core set of performance concerns.
None of these questions have simple answers, and the answers depend on the particular needs of different players in the FX market. For example, take the second question: should I be faster or smarter? From a business perspective, this is a complex question, and the focus needs to be driven by considerations such as "Is this a pure market-making play?", "Is it an FX strategy only or is it part of a larger multi-asset strategy?", or "What is the timescale for holding positions?" which in turn is often determined by constraints on capital and leverage.
From a technology perspective, this question of "faster" versus "smarter" is easier to answer: you need both. This may sound glib, but the truth is that appropriate performance is an essential part of a trading strategy. The right trading decision executed at the wrong time will be ineffective and unprofitable, so no matter how smart your trading decisions, they must be executed reliably and in a timely fashion. This is true of trading in equities and options too (in fact, it is true of any business process with real-time constraints), so it is worth illustrating some of these considerations in FX with a concrete example.
We worked recently with an FX trading group that was dealt an interesting challenge. Their FX strategy was not performing as well as it was designed to, primarily due to occasional failures to execute in favorable trading conditions. It was not a high-frequency strategy and traded at a very modest pace, so they first directed their attention at the strategy itself and its implementation in software. That initial review took several weeks to complete, only to reveal that the problem lay not with the algorithms but with the incoming rate quotes that fed the strategy. The quotes were frequently stale on arrival; in other words, it was a latency issue. The FX group was able to leverage an existing deployment of Corvil in their organization to investigate. Since the quotes were stale when delivered, they suspected their rates engine might be having performance problems.
However, Corvil was able to exonerate the rates engine and instead identify a networking issue with a rather subtle cause. The rates engine was on the other side of the Atlantic from the strategy, and the transatlantic connection itself was in good shape, with a stable latency profile. The problem was that at one point along the path from the rates engine to the transatlantic link, the quotes crossed a LAN segment suffering from occasional packet loss. The loss rate was very low and caused no problems for other applications running locally, but it was enough to cause significant TCP timeouts and retransmission delays for the quotes being delivered across the Atlantic. (This is a well-known problem in networking: TCP connections with a high bandwidth-delay product are very sensitive to any packet loss.) The problem was then quickly solved by simply re-routing the quotes over a different, loss-free network segment.
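To see why such a small loss rate matters so much on a long-haul link, it helps to put rough numbers on it. The sketch below (with hypothetical link parameters, not figures from the case described above) computes the bandwidth-delay product of a transatlantic-style connection and applies the well-known Mathis et al. approximation, which bounds steady-state TCP throughput by roughly MSS / (RTT * sqrt(p)) for loss rate p:

```python
import math

def bdp_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    """Bandwidth-delay product: bytes that must be 'in flight' to fill the pipe."""
    return bandwidth_bps / 8 * rtt_s

def mathis_throughput_bps(mss_bytes: int, rtt_s: float, loss_rate: float) -> float:
    """Mathis et al. steady-state bound on TCP throughput:
    rate <= (MSS / RTT) * (C / sqrt(p)), with C ~= 1.22 for periodic loss."""
    return (mss_bytes * 8 / rtt_s) * (1.22 / math.sqrt(loss_rate))

# Hypothetical transatlantic link: 70 ms RTT, 1 Gb/s, typical 1460-byte MSS.
rtt, mss, link = 0.070, 1460, 1e9

print(f"BDP: {bdp_bytes(link, rtt) / 1e6:.2f} MB must be in flight to fill the link")
for p in (1e-5, 1e-4, 1e-3):
    cap = mathis_throughput_bps(mss, rtt, p)
    print(f"loss {p:.0e}: TCP throughput bound ~{cap / 1e6:.1f} Mb/s")
```

Even a loss rate of one packet in ten thousand caps a single TCP connection on this link at a few tens of megabits per second, far below the link capacity, and each loss can also stall delivery for a retransmission timeout. On a short LAN round trip the same loss rate is barely noticeable, which is exactly why the local applications in the anecdote saw no problem.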
The take-away from this anecdote is that, while the strategy was not a high-frequency strategy and didn't seem to have high-performance requirements, its success did ultimately depend sensitively on the performance of the infrastructure. The simple dichotomy between "trading faster" versus "trading smarter" can break down in interesting ways!