This piece originally appeared on Finance Magnates, May 10, 2016.
A young European musical act takes the stage in Hamburg. Their meteoric trajectory will be historic, and their music will be played worldwide decades after their last concert. The public’s reaction to their stardom, which would include uncontrollable shrieking, fainting, and obsessive stalkers, would be described as a “mania.”
Who am I describing? Most people would conclude the celebrities in question are the Beatles. What if I provided a critical additional piece of information: the year is 1840. The Beatles came through Hamburg in 1960. The above passage actually refers to piano virtuoso Franz Liszt.
I present this as an important illustration of how adding a granular and accurate record of time relative to the question at hand leads to a completely different conclusion about who did what, and what actually happened.
Without this critical piece of data, we tend to draw conclusions based on what we think we know and what we expect the answer to be. Conclusions based on inaccurate or incomplete information are dangerous when reputations and businesses are on the line.
In the recently published Regulatory Technical Standards (RTS), ESMA takes a major step forward in trying to achieve meaningful transparency into the world of algorithmically traded financial instruments.
Specifically, the team at ESMA and the respective competent authorities have recognized the fundamental importance of having all reported event data be precisely timestamped and accurately synchronized to the world time standard, Coordinated Universal Time (UTC).
RTS 25 requires all trade-execution-related events to be captured and stored (for five years) with a timestamp granularity of 1 microsecond or better and a maximum divergence of 100 microseconds from UTC. This applies specifically to electronic trades executed on trading venues with order execution response times of 1 millisecond or faster, which covers effectively all electronic trading venues.
Why is ESMA mandating that investment firms now clock-sync their data to this level of precision and accuracy? Within RTS 25, the intent of the rule is described very clearly:
“Competent authorities need to be able to reconstruct all events relating to an order throughout the lifetime of each order in an accurate time sequence. Competent authorities need to be able to reconstruct these events over multiple trading venues on a consolidated level to be able to conduct effective cross-venue monitoring on market abuse.”
The concept and intent of RTS 25 is clear: it is about causal understanding. If two causally related events (A and B) occur, you need to be able to accurately answer the question: did A cause B, or did B cause A? To correctly determine the sequence of events with sufficient accuracy, as is the intent of RTS 25, one has to determine the time at which each event occurred.
The event that happened first can be concluded to be the root cause of the event sequence. We can then determine the order of events by building the event-sequence timeline, which requires nothing more than sorting all events by time.
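The ordering step above can be sketched in a few lines. The event names and timestamps here are purely illustrative (not real trade data); the point is that once every event carries a microsecond-granularity UTC timestamp, reconstructing the timeline is a plain sort, and the earliest entry is the candidate root cause.

```python
from datetime import datetime, timezone

# Hypothetical reportable events, each timestamped to microsecond
# granularity in UTC (names and times are made up for illustration).
events = [
    ("order_acknowledged", datetime(2016, 5, 10, 9, 30, 0, 150, tzinfo=timezone.utc)),
    ("order_received",     datetime(2016, 5, 10, 9, 30, 0, 42,  tzinfo=timezone.utc)),
    ("trade_executed",     datetime(2016, 5, 10, 9, 30, 0, 310, tzinfo=timezone.utc)),
]

# Reconstructing the event-sequence timeline is a simple sort by time.
timeline = sorted(events, key=lambda e: e[1])

for name, ts in timeline:
    print(ts.isoformat(timespec="microseconds"), name)

# The earliest event in the sorted timeline is the candidate root cause.
root_cause = timeline[0][0]
print("root cause:", root_cause)  # prints "root cause: order_received"
```

Note that this only works if the clocks that produced the timestamps are synchronized to a common reference (UTC) with an error smaller than the spacing between events, which is exactly the question the next paragraphs address.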
But does the currently specified precision and accuracy for time in RTS 25 meet the intended requirements of competent authorities’ use cases, such as surveillance, forensics, and detection of market-abuse behavior across multiple trading venues and their members? To meet the needs of these use cases, we must see the events of interest with sufficient resolution.
The key determinant is the frequency of the events. Nyquist sampling theory suggests we need to sample at twice the frequency of the maximum event frequency to reproduce the event sequence with sufficient fidelity (confidence).
In the case of MiFID II, the fundamental events of interest are executed trade orders. In the EU equities markets, the response times for orders placed on venues are on the order of ~100us, and some venues are faster than this. Other events, such as an algo’s response time from market tick update to order placed, are an order of magnitude faster: under 10us.
Therefore, we can say the accuracy of time needs to be at least as good as the minimum period between events of interest: 10us.
But 10us is not sufficient to accurately replicate the event sequence. Nyquist tells us that we need to sample at twice the frequency of the maximum event frequency.
Therefore, we need to have a timestamp accuracy of at least 5us to answer the above questions with confidence for reportable events that happen on EU equity markets.
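The arithmetic above, plus a simple worst-case check on whether two timestamps can be confidently ordered, can be sketched as follows. The `ordering_is_certain` helper and its error model (each of the two clocks may be off by up to its accuracy bound, so the combined worst-case error is twice the bound) are my illustration, not part of RTS 25.

```python
# Nyquist-style argument: if the shortest interval between events of
# interest is 10 microseconds, timestamps must be accurate to half
# that interval, i.e. 5 microseconds.
MIN_EVENT_PERIOD_US = 10.0
required_accuracy_us = MIN_EVENT_PERIOD_US / 2
print(required_accuracy_us)  # prints 5.0

def ordering_is_certain(t_a_us, t_b_us, accuracy_us):
    """Worst-case check: two timestamps from different clocks can be
    confidently ordered only if they differ by more than the combined
    maximum clock error (one accuracy band per clock)."""
    return abs(t_a_us - t_b_us) > 2 * accuracy_us

# At the RTS 25 floor of 100us divergence from UTC, two events 10us
# apart on different venues cannot be reliably sequenced:
print(ordering_is_certain(0.0, 10.0, 100.0))  # prints False

# With clocks held to a 4us bound (better than the 5us derived above),
# the same pair of events can be ordered with confidence:
print(ordering_is_certain(0.0, 10.0, 4.0))  # prints True
```

In other words, at the mandated 100us accuracy, an event timeline spanning two venues can appear in either order; only at single-digit-microsecond accuracy does the sequence become unambiguous.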
This is not a matter of opinion; it is simply the math. For some, this might be a surprise, but everyone will have to catch up to compliance, and fast. Without sufficiently accurate timestamped data, you might think you are hearing the Beatles when you are really hearing Liszt. Or, even worse, you might be hit with a compliance audit and find that you cannot definitively prove what happened.