A debate has emerged with the SEC on the need to timestamp multi-party electronic trading data. What accuracy and precision are needed to determine the relative time between two events -- milliseconds, microseconds, or nanoseconds?
Originally published on WallStreet & Technology
I recently read Simon Sinek's book Start With Why. In the opening chapter, he gives a great illustration of how we often draw false conclusions that are confidently held, with data that is incomplete or inaccurate.
On a cold January day, a forty-three-year-old man was sworn in as the chief executive of his country. By his side stood his predecessor, a famous general who, fifteen years earlier, had commanded his nation's armed forces in a war that resulted in the defeat of Germany. The young leader was raised in the Roman Catholic faith. He spent the next five hours watching parades in his honor and stayed up celebrating until three o'clock in the morning.
Who am I describing?
Most people reading this article would conclude that the person in question is John F. Kennedy, but they would be wrong. What if I provided a critical piece of metadata?
The date in question is Jan. 30, 1933.
John F. Kennedy was sworn in as the 35th president of the United States on Jan. 20, 1961. The person described above is actually Adolf Hitler.
This is an important illustration of how an appropriately granular and accurate record of the event time, relative to the question at hand, leads to a completely different conclusion about who did what and what actually happened. Without this critical piece of data, we tend to draw conclusions based on what we think we know and what we expect the answer to be. This is very dangerous when people's reputations and businesses are on the line.
On Wall Street, a debate has emerged with the SEC on the need to collect multi-party electronic trading data and to provide sufficiently accurate timestamps. At the heart of this debate is the question of what is sufficiently accurate. Is it seconds, milliseconds, microseconds, or nanoseconds?
To answer this question, we must understand how the collected data will be used and what questions will be asked of it. If we assume that the primary question to be answered by the SEC's proposed Consolidated Audit Trail is "What exactly happened, and who caused it?", then one has to examine whether the quality and sufficiency of the data can answer that.
A number of important concepts need to be discussed and understood. First is the requirement to determine the correct sequence of events accurately. If two causally related events (A and B) occur, you need to be able to answer this question: Did A cause B, or did B cause A? To determine the sequence of events with sufficient accuracy, one has to determine the time at which each event occurred. The event that happened first can then be identified as the root cause of the event sequence. The order of events can be determined by creating an event timeline, which requires nothing more than ordering all events by their timestamps.
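As a minimal sketch (the event names and timestamps here are hypothetical), ordering events by their recorded timestamps is all it takes to build the timeline and identify the earliest event as the candidate root cause:

```python
from dataclasses import dataclass

@dataclass
class Event:
    name: str     # hypothetical event label
    t_us: float   # recorded timestamp, microseconds since some epoch

def build_timeline(events):
    """Order events by recorded timestamp; the first entry is the
    candidate root cause of the sequence."""
    return sorted(events, key=lambda e: e.t_us)

events = [
    Event("B: order placed", 1_000_107.0),
    Event("A: market tick received", 1_000_100.0),
]
timeline = build_timeline(events)
root_cause = timeline[0].name   # "A: market tick received"
```

Of course, this ordering is only as trustworthy as the timestamps themselves, which is exactly where accuracy enters the picture.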
The key determinant of the required timestamp accuracy is the frequency of the events of interest. In our case, the fundamental events of interest are executed trade orders. In the US equities markets, response times for orders placed on public exchanges are on the order of 100 µs. The response time for other events, such as the interval from a market tick update to an order being placed, is an order of magnitude shorter (i.e., less than 10 µs). The timestamp accuracy therefore needs to be at least better than the minimum period between events of interest (i.e., 10 µs). In fact, even this is not sufficient to replicate the event sequence accurately. Applying the principle behind Nyquist sampling theory -- sample at no less than twice the maximum event frequency -- we can reproduce the event sequence with sufficient fidelity (confidence). Therefore, we need a timestamp accuracy of at least 5 µs to answer the above questions for all events that happen on US equity markets. This is not a matter of opinion. It is simply the math.
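A toy simulation (assumed numbers: a 10 µs true gap between two events, independent uniform clock errors on each timestamp) illustrates why half the event spacing is the threshold: at ±5 µs the recorded order of events 10 µs apart can never invert, while a larger error bound starts producing misordered timelines:

```python
import random

random.seed(0)

EVENT_GAP_US = 10.0  # assumed minimum spacing between events of interest

def misordered_fraction(clock_error_us, trials=10_000):
    """Fraction of trials in which independent timestamp errors of
    up to +/-clock_error_us swap the recorded order of two events
    that truly occurred EVENT_GAP_US apart."""
    swaps = 0
    for _ in range(trials):
        t_a = 0.0 + random.uniform(-clock_error_us, clock_error_us)
        t_b = EVENT_GAP_US + random.uniform(-clock_error_us, clock_error_us)
        if t_b < t_a:
            swaps += 1
    return swaps / trials

safe = misordered_fraction(5.0)    # two errors can never sum past the 10 us gap
risky = misordered_fraction(20.0)  # order inversions now occur
```

With ±5 µs errors, the worst case exactly closes the 10 µs gap but never crosses it; any looser bound leaves the reconstructed timeline ambiguous.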
The second concept we must understand is that timestamp accuracy is not the same as timestamp precision; a timestamp may be precise but not accurate. Precision is the minimum time increment that a timestamp can record. (Note: Precision is often used interchangeably with granularity.) Accuracy is a measure of how close the timestamp is to the true time of the event. Precision is a necessary but not sufficient condition for accuracy: a timestamp's accuracy can never be better than its precision.
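To make the distinction concrete, here is a small sketch (the offset and tick size are hypothetical) of a clock that records with nanosecond precision while running 2 ms off the true time, so every timestamp it produces is precise yet inaccurate:

```python
def record_timestamp(true_time_s, clock_offset_s, precision_s):
    """Simulate a timestamper: apply the clock's offset from true
    time (the accuracy error), then quantize to the clock's tick
    size (the precision)."""
    raw = true_time_s + clock_offset_s
    return round(raw / precision_s) * precision_s

true_t = 12.000_000_500  # true event time, in seconds

# Nanosecond precision, but the clock runs 2 ms ahead of true time:
stamp = record_timestamp(true_t, clock_offset_s=0.002, precision_s=1e-9)
error_s = abs(stamp - true_t)  # roughly 2 ms, despite ns precision
```

The recorded value carries nine decimal places, but every one of them is 2 ms wrong -- precision without accuracy.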
For us, this means that we need a level of timestamp precision at least as good as the level of accuracy we expect. From the above analysis, we can conclude that, for electronic trading in US equities, we need timestamp precision of at least 5 µs -- in practice, 1 µs or better. However, it is critical to understand that simply attaching precise timestamps to collected data is the easy part of the problem. The harder part is assuring the accuracy of the recorded timestamps such that, when we recreate event timelines or determine the relative time between two events, we can do so with confidence and trust the results.
Deriving accurate time happens at two levels: relative and absolute.
Relative time accuracy means that a timestamp is accurate relative to some other timestamped event. For example, the latency between a market tick update and the placement of an order can be determined using relative time; we need only the time difference between the two events. Determining relative time does not require synchronizing all timestamps to a precise global time reference such as UTC (as maintained by NIST). Absolute time accuracy, however, does require synchronizing all timestamps to a global time reference. Absolute time accuracy is often preferred but harder to achieve, since everyone must be synchronized to the same reference time source.
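A short sketch (the clock offsets are invented for illustration) shows the difference: when both events are stamped by the same clock, the offset cancels and the relative latency is exact; when the two events are stamped by different, unsynchronized clocks, the apparent interval absorbs the difference of the offsets and can even invert the order:

```python
# Hypothetical clock offsets of two venues relative to a common
# reference time (unknown in practice), in microseconds.
VENUE_A_OFFSET = +40.0
VENUE_B_OFFSET = -25.0

def local_stamp(true_t_us, offset_us):
    """What a venue's local clock records for a true event time."""
    return true_t_us + offset_us

# Relative time: both events stamped by the SAME clock, so the
# offset cancels and the latency is exact.
tick = local_stamp(1_000.0, VENUE_A_OFFSET)
order = local_stamp(1_008.0, VENUE_A_OFFSET)
latency = order - tick      # 8.0 us: the true interval

# Cross-venue comparison: events stamped by DIFFERENT clocks.
# Without synchronization to a common reference, the apparent
# interval is polluted by the difference of the two offsets.
order_b = local_stamp(1_008.0, VENUE_B_OFFSET)
apparent = order_b - tick   # -57.0 us: the order appears reversed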
Practically speaking, for the levels of precision and accuracy we need, this means connecting to a master clock, most likely via a GPS antenna, and then distributing that clock to the actual trading machines via the Precision Time Protocol (PTP), the IEEE 1588 standard. Accurate absolute time is the ideal, since it allows us to answer a greater variety of questions across all events that occur. This should be the goal.
In the case of US equities, we conclude that all buy-side, sell-side, and exchange trade data should be provided with less than or equal to one-microsecond precision timestamps and with absolute time accuracy of less than or equal to five microseconds to meet the needs of today's electronically traded US equities markets.
What happens if we don't do this and we continue with the currently proposed compromise between Wall Street and the SEC -- time accuracy in the low seconds range? As illustrated at the beginning of this paper, the cost of incomplete data or inaccurate time qualification is false conclusions about what happened, why it happened, and who is responsible. The lack of sufficiently accurate timestamps will likely cause innocent parties to be accused of financial mishaps and potential wrongdoing. It will likely produce false explanations for critical market events. Finally, it will retard a true understanding of market operational dynamics and the evolution of the US equities markets. This is not good for Wall Street, and it is not good for the SEC.
Why can't we get accurate timestamps for the trading data? Wall Street says such a requirement would introduce an excessively burdensome level of complexity and cost. All machines and software would have to be retrofitted with precision timestamping capabilities, and accurate time synchronization would have to be distributed via costly technologies (PTP) to all connected machines trading US equities. Yes, this is true if you follow this approach, but it's neither the easiest nor the least expensive way to achieve the goal.
It turns out that timestamping, time synchronization, and real-time data collection have come a long way in the five years since the peak of high-frequency trading in 2009. First, it is not necessary to touch any software on any machine. One can do it all by timestamping the network data: tap the network pipes that carry the electronic trading messages and timestamp the messages directly off the wire. In fact, this is superior for a variety of use cases, including risk mitigation and anomaly analysis. Nanosecond-precision timestamping now comes standard in equipment from leading network vendors such as Arista and Cisco.
Time synchronization is a little trickier. Our starting point would be to recommend that all US equity exchanges (fewer than five entities) synchronize their network data captures for all trading activity to a time accuracy of at least five microseconds. This is not as complex as it seems; most of them do this today. Next, we believe that all major sell-side broker-dealers and market makers should synchronize their ingress and egress network-captured trading data to an accuracy of five microseconds. This, too, is reasonable, since fewer than 100 institutions are involved, and many already have access to or are using GPS to synchronize their internal trading plants. Finally, for the buy side, it is somewhat impractical to ask all buy-side institutions, of which there are many hundreds, to mount rooftop antennas and synchronize to GPS.
Therefore, the requirement of precision timestamping of one microsecond or better, with absolute time accuracy of five microseconds or better, across all players in US equities is not only feasible today; it is also practical and cost-effective. With the recent news of billions of dollars in fines, untold losses, and a lack of confidence in our markets, this seems like a very small price to pay to do the right thing for both Wall Street and the SEC.