The Flash Crash. The Facebook IPO. The Knightmare. The BATS IPO. We’ve seen how so-called “glitches” can have huge effects on the market and even the global economy. With the growth of automated trading and the removal of humans from the trading loop, the potential for seemingly minor technological problems to have a detrimental impact on markets and market confidence has become even greater. In order to mitigate some of that risk, the SEC recently adopted a rule known as Regulation Systems Compliance and Integrity, or RegSCI.
Donal O’Sullivan, our VP of Product Management, and Fergal Toomey, our Chief Scientist and Co-founder, recently sat down to talk RegSCI - what it does well, what it doesn’t do so well, and what it means for the financial industry.
We’ve known that RegSCI has been in preparation for two years, since before some of the more recent high-profile technology failures, so I think we can give the SEC some credit for foresight and proactive thinking. They recognized the threat to market confidence posed by automated trading systems. The regulations should definitely result in fewer outages, faster recovery, and better post-incident diagnosis of complex inter-party market events.
Thinking back to the Flash Crash, I know it took many months to reconstruct and understand the details, largely because of the overwhelming variety and less-than-optimal quality of the data sources that had to be gathered and analyzed, so any improvement to the systems for collecting that data is a big step in the right direction.
I agree that the SEC is taking timely action and, for the most part, trying to stay current with the ubiquity of automated trading, although they should be careful not to rest on their laurels. Automated trading is constantly evolving, but it is encouraging that the SEC seems to be doing its best to keep up.
One of the big concerns, however, is that RegSCI simply does not cover enough of the significant market participants. It would not have covered or protected, for example, Knight Capital, whose failure caused significant damage to equity market confidence. I think there’s an argument for broadening the Reg to cover any significant market participant.
The counterargument is, of course, that the Knight incident did not really cause widespread market disruption, but instead resulted in the collapse of Knight itself. That should be incentive enough for any business: if you do not have adequate safeguards in place, you could be wiped out in 45 minutes. So perhaps the question is: what better incentive can regulators provide to encourage firms to safeguard their trading systems? The SEC did end up hitting Knight with a fine for breaching other long-standing rules.
The collapse of Knight resulted in a huge flight of capital from the equity markets, and since the equity markets are, supposedly, all about capital formation, you could argue that this did amount to massive market disruption over a longer timescale. Perhaps we are too used to thinking in micro- and nanoseconds in today’s electronic trading, but incidents like Knight Capital definitely have longer-lasting consequences. To draw the comparison: where Corvil monitors for split-second problems, the SEC monitors and regulates for issues that play out over months or even years.
Very true. For cases like Knight Capital, we should also remember that the Boards and Risk committees of the major market participants have definitely not ignored Knight Capital. How could they? We’ve seen a dramatic shift in the attitude towards technology since the Knightmare. There’s now great awareness that “glitches” - a term that maybe far understates what these issues actually are - can sink you in minutes. We see much more of a “belts-and-braces” approach to risk data gathering and management now.
Yes, there have certainly been some huge changes in the last 24 months. We’ve seen a dramatic shift in budget allocation away from technologies that aim for greater speed and responsiveness, and a big increase in spending on technologies that mitigate risk and demonstrate compliance. In our own business, we used to see about 75% of our revenue come from latency and performance deployments, with 25% from data analysis for risk management. Today, that split has probably reversed, with at least 75% coming from data analysis for risk and compliance management.
Another issue: it is still unclear who will set the bar in terms of appropriate systems and procedures. Existing infrastructure is fast and effective at trading, but it can be splintered and non-normalized. There will likely be a period of comparison and level-finding before a new "norm" emerges, at which point it might be prudent to evaluate the state of the financial world once again.
And during that normalization period, firms will be thinking about the cost to their own businesses. The cost to firms of implementing the regulations is still not fully understood, but it is likely to be significant. There will likely be some foot-dragging over the costs incurred. But, as you said before, it’s a big step in the right direction.
Definitely. It may wear thin in places in the years to come, but the innovative momentum of the financial world and the regulatory foresight of the SEC will hopefully combine into an effective safeguard against overwhelming risk and instability. Most of all, I think these regs build and protect market confidence; and no matter how much money is floating around in the world and how fast it can move, we can’t have a real market without that confidence.