Opening Cross: The Bear Essentials of Data Latency
If, like me, you enjoy long hikes through the wilderness, you’ll be aware of the basics for staying safe: tell someone your route, carry plenty of fluids and layers of clothing, close gates behind you, and—when in bear country—make plenty of noise to alert bears to your presence so as not to surprise them. Should you encounter a grumpy grizzly face to face, there’s plenty of advice on what to do, but near-universal agreement on what not to do. Don’t bother running. A bear can easily outrun you.
In recent years, the metaphorical “bear” of the market data world has been latency, as evidenced by the plethora of stories in this week’s issue about speed, networks and latency monitoring: a beast built of massive, unwieldy volumes of data, hungry for ever more, strong enough to overpower even the most robust infrastructures, and yet able to move at lightning pace. And like its furry counterparts, latency inspires fear in those who encounter it: according to a survey by Sybase, 28 percent of capital markets executives in North America believe data latency (together with regulatory issues) will be their top concern over the next three years and will consume a “significant” part of their resources, a view shared by 31 percent of those surveyed in Europe and 30 percent in Asia-Pacific.
At last week’s North American Trading Architecture Summit hosted by WatersTechnology, Yuri Salkinder, director at Credit Suisse, described his first rule of latency as “You don’t have to run faster than the bear, you just have to be faster than another guy also running away from the bear”—meaning that you only have to spend enough to be the fastest among your peers: you don’t get extra points for investing a gazillion dollars of your firm’s money to break the speed of light when nobody else is even close. (Note to self: first rule is now “Never go hiking in bear country with Yuri Salkinder of Credit Suisse”).
Throwing people under the bus—or in the path of the bear—isn’t new, of course. Countless hikers have escaped predators by distracting them (though usually with a backpack or item of food), and the same tactic is becoming evident in financial markets, with firms—ostensibly accidentally—flooding exchanges with useless order or cancel messages. Nasdaq and Direct Edge both implemented penalties last month for firms with excessive message-to-trade ratios, and I expect exchanges to turn their attention to cancel ratios next, lest unscrupulous firms that find themselves behind the latency curve attempt to disrupt the market with bulk dumps of message volume to fend off faster rivals.
Of course, if you can’t outrun the bear—or if your friends will no longer hike with you for fear of becoming bear-breakfast (or if you’ve already sacrificed them all)—you can always try out-thinking it. After all, humans may have puny teeth and claws, but have pretty good brains. One use of that brainpower is this: instead of fighting over how to go faster down a defined route, build a shorter route. That’s what Spread Networks did between New York and Chicago, and what Perseus Telecom and others are now doing between New York and London. So tear the latency map limb from limb and rebuild it, smarter, before it tears you limb from limb and leaves you to be picked off by jackals.
However, some believe firms aren’t using all their brainpower. At the same event last week, Gil Tene, chief technology officer and co-founder of Azul Systems, said few firms are asking the right questions of latency-reducing technologies, or know how to get the best results. “I think clients want to be sophisticated, but 99 percent of them are not. If I ask 10 customers what their response time is, I might get 10 different answers about how they characterize it,” he said.
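Tene’s point is easy to see in numbers. As a purely illustrative sketch (the latency samples below are invented for this column, not drawn from the survey or the event), the same set of measurements can yield wildly different “response times” depending on whether a firm quotes the average, the median, a high percentile or the worst case:

```python
# Illustrative only: one set of hypothetical latency samples, four different
# answers to "what is your response time?", depending on characterization.
import statistics

# Hypothetical one-way latencies in microseconds, with a couple of outliers
samples = [85, 90, 92, 95, 97, 101, 110, 150, 420, 3800]

def percentile(data, pct):
    """Nearest-rank percentile of the sample."""
    ordered = sorted(data)
    rank = max(1, round(pct / 100 * len(ordered)))
    return ordered[rank - 1]

print(f"mean:   {statistics.mean(samples):.0f} us")    # dragged up by outliers
print(f"median: {statistics.median(samples):.0f} us")  # the "typical" message
print(f"p99:    {percentile(samples, 99)} us")          # the tail that hurts
print(f"max:    {max(samples)} us")                     # the single worst case
```

Here the mean (504 microseconds) and the median (99 microseconds) describe the same system, yet tell very different stories, which is exactly why ten customers can give ten different answers.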
Hence, just as there are guidelines for handling bears, the industry is working towards standards for defining, measuring and reporting latency—led by FIX Protocol’s FIX for Inter-Party Latency (FIPL) working group—without which getting faster may be a much slower process.