Let’s be honest: the term “machine learning” has been bastardized in recent years, with some companies using it interchangeably with artificial intelligence, or even just to describe a souped-up Excel spreadsheet. But machine learning (ML) is a specific type of artificial intelligence (AI), with subsets for supervised learning (regressions, decision trees and random forests), unsupervised learning (clustering and principal component analysis, while various forms of neural networks fall into both camps) and reinforcement learning (Monte Carlo algorithms and the Markov decision process). And to be honest, there are lots of gray areas where these techniques bleed into and combine with one another, which can make the subject of machine learning all the more confusing.
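To make that taxonomy a little more concrete, here is a toy sketch, using scikit-learn and synthetic data (nothing to do with any firm mentioned below), of the difference between supervised and unsupervised learning:

```python
# Toy illustration only: supervised vs. unsupervised learning with scikit-learn
# on synthetic data. Nothing here comes from any firm mentioned in this article.
import numpy as np
from sklearn.ensemble import RandomForestClassifier   # supervised
from sklearn.decomposition import PCA                 # unsupervised
from sklearn.cluster import KMeans                    # unsupervised

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))            # 500 samples, 10 features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # toy binary label

# Supervised learning: fit a mapping from features to known labels.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("in-sample accuracy:", clf.score(X, y))

# Unsupervised learning: look for structure with no labels at all.
components = PCA(n_components=2).fit_transform(X)   # dimensionality reduction
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(components)
print("cluster sizes:", np.bincount(clusters))
```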
In 2019, more than 100 articles on WatersTechnology.com talked, in some form or another, about machine learning. Now, I’m the editor of this publication and I’ll be honest with you—some of those articles only scratched the surface. But I also believe that, after the cloud, the spread of machine learning is the most profound evolution in trading technology in the capital markets.
Blockchain is an overhyped hammer looking for a nail that has yet to yield much value; machine learning is revolutionizing what traders, compliance professionals, portfolio managers, risk managers, regulators, and research associates can accomplish. It truly is a technology that, at its best, augments what an individual can accomplish rather than wholesale replacing the individual. (Though, let’s not be naïve: that happens, too.)
Below are some actual use cases that we wrote about in 2019. This is by no means a definitive showcase of ML-driven projects on Wall Street, but these are certainly some of the more interesting endeavors we covered during the year. Nor is the list in any particular order, though I did try to group it by end users, regulators and vendors.
Finally, these are relatively quick recaps of projects at specific institutions; for more information, click on the links. Hopefully, this list helps to show just how prevalent machine learning is becoming and sparks some ideas to kick around with your teams.
If I’m missing something or have something wrong, shoot me an email: anthony.malakian@infopro-digital.com
Nasdaq
The exchange operator has rolled out a new market surveillance tool for finding patterns of abusive behavior. Underpinning the tech are three subsets of machine learning: deep learning, for analyzing extremely complex relationships and uncovering hidden insights within massive data sets; transfer learning, which involves training new models based on older ones to allow scale and save time; and human-in-the-loop learning, which requires human interaction to separate signal from noise, especially where it’s not cut and dried.
Currently, the tool covers only US equities, but Nasdaq has already run some trials on fixed income and with other exchanges, which have shown promise. The project had been in the works for more than a year and was born from the company’s initial forays into machine learning, which were focused on alert scoring in the Nordics.
“We had very positive results; we had a lot of learnings, and we deployed that,” says Tony Sio, head of marketplace regulatory technology at Nasdaq. “Based on that experience, we felt we could do a lot more. So after that, we really pushed the deep learning project against the low-level data coming from our trading system to detect patterns of abuse.”
But those first explorations into machine and deep learning bumped into another project. A separate team was working on how to create a visual picture of spoofing, and the missing ingredient also lay somewhere in machine learning.
“We called it the signature of spoofing,” Sio says. “We showed it to a lot of different people, and people said, ‘Yes, I look at this picture, and I see it.’ And that coincided with some of the ML technologies out there, which are really about taking those visual patterns and being able to find them in the data. It’s two separate initiatives starting to overlap.”
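Nasdaq hasn’t published its models, but to give a flavor of the transfer-learning idea, the sketch below (purely hypothetical, written in PyTorch with dummy data) fine-tunes only the final layer of a network pretrained on an unrelated task:

```python
# Hypothetical sketch of transfer learning in PyTorch: reuse a network trained on an
# unrelated task (ImageNet here) and fine-tune only its final layer for a new binary
# classification problem. Dummy tensors stand in for rendered order-flow "pictures";
# none of this reflects Nasdaq's actual models.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # pretrained backbone

for param in model.parameters():   # freeze the pretrained layers
    param.requires_grad = False

# Replace the head with one sized for the new task: abusive pattern vs. benign.
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)  # train only the new head
criterion = nn.CrossEntropyLoss()

x = torch.randn(8, 3, 224, 224)   # dummy batch of image-like inputs
y = torch.randint(0, 2, (8,))     # dummy labels
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```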
JP Morgan
JP Morgan is using a new system that dumps conventional modeling techniques such as Black-Scholes and replication in favor of a purely data-driven approach underpinned by machine learning. In 2018, the bank began using the new technology to hedge its vanilla index options books. For 2020, it plans to roll it out for single stocks, baskets, and light exotics.
Hans Buehler, global head of equities analytics, automation and optimization at JP Morgan, was one of the co-authors of a recently published paper on deep hedging. The research is part of an ambitious project at the bank aimed at using machine learning to hedge positions multiple time-steps ahead.
https://www.waterstechnology.com/trading-tools/4488371/machine-learning-takes-aim-at-black-scholes
“The real advantage is we are able to increase volumes quoted—because we are faster,” he says. “If you have to manually manage this, you have to divert somebody’s time and sit them down to focus on it.”
One senior quant calls JP Morgan’s approach a “base-level rethink” of hedging, which he says will benefit illiquid markets in particular. He estimates the technique has the potential to cut hedging costs for certain commodity derivatives by as much as 80%.
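For readers who want a feel for what deep hedging means in practice, here is a heavily simplified sketch, using simulated price paths and a crude variance objective rather than anything resembling JP Morgan’s production setup, of a neural network learning hedge ratios across multiple time-steps:

```python
# Heavily simplified deep-hedging sketch: a small network picks hedge ratios at each
# time step to minimize the variance of terminal P&L on simulated price paths. This
# illustrates the concept only; it is not JP Morgan's model, objective, or data.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_paths, n_steps = 2048, 30
dt, sigma = 1.0 / 252, 0.2

# Simulated log-normal price paths, a stand-in for whatever market simulator is used.
returns = sigma * (dt ** 0.5) * torch.randn(n_paths, n_steps)
prices = 100.0 * torch.exp(torch.cumsum(returns, dim=1))
prices = torch.cat([torch.full((n_paths, 1), 100.0), prices], dim=1)

payoff = torch.clamp(prices[:, -1] - 100.0, min=0.0)   # short one call struck at 100

# Policy network: maps (time remaining, normalized price) to a hedge ratio.
policy = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

for epoch in range(200):
    pnl = -payoff
    for t in range(n_steps):
        state = torch.stack([torch.full((n_paths,), (n_steps - t) * dt),
                             prices[:, t] / 100.0], dim=1)
        delta = policy(state).squeeze(-1)
        pnl = pnl + delta * (prices[:, t + 1] - prices[:, t])
    loss = pnl.var()   # crude risk objective; real work uses richer measures and costs
    opt.zero_grad()
    loss.backward()
    opt.step()
```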
UBS
The Swiss bank is undertaking a project that uses machine learning to match information and find anomalies in customer information for know-your-customer (KYC) and anti-money-laundering (AML) reporting. UBS is working with unnamed partners to couple machine learning with natural language processing (NLP) that takes in data from public sources (disclosures and newswires, for example) and automatically connects it to customer information to find anomalies.
“We’re working with a few partners on this—I actually prefer not to disclose names, but we are already working with them,” says Mike Dargan, group chief information officer at UBS. “It’s something that’s ongoing; though with artificial intelligence, I don’t think you’re ever done-done. Ideally, you want to start early-ish in terms of what you want to do. It does need training to be better, so we are ingesting the data forms and then seeing the outcomes, and then comparing that to what we do already to get a better solution.”
He says the project—which has been underway for about one year—will eventually be rolled out across the whole bank and cover many of its activities, while its partners are looking to offer the KYC platform to other banks, possibly as a utility.
https://www.waterstechnology.com/operations/4659726/ubs-taps-machine-learning-for-kyc
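To give a sense of what the matching step might look like, here is a toy sketch, using character n-gram similarity on invented names rather than anything UBS or its partners have disclosed, of linking entities found in public sources to a customer file:

```python
# Toy entity-matching sketch: compare names from public sources against a customer file
# using character n-gram TF-IDF similarity and flag strong matches for KYC/AML review.
# The names and threshold are invented; UBS has not disclosed its pipeline in this detail.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

customers = ["Acme Trading GmbH", "Jane Q. Public", "Oceanic Holdings Ltd"]
news_entities = ["ACME Trading (Germany)", "Oceanic Holdings Limited", "Unrelated Corp"]

vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 3)).fit(customers + news_entities)
sim = cosine_similarity(vec.transform(news_entities), vec.transform(customers))

for i, entity in enumerate(news_entities):
    j = sim[i].argmax()
    if sim[i, j] > 0.5:   # illustrative threshold
        print(f"review: '{entity}' may refer to customer '{customers[j]}' ({sim[i, j]:.2f})")
```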
Brown Brothers Harriman
BBH is looking to bring greater efficiency to its net asset value (NAV) review process through the use of supervised machine learning. Securities pricing is reconciled each day at market close to make sure that the NAV figure is accurate, and the prices are reviewed to discover any significant day-to-day variation. Kevin Welch, managing director for investor services, says the process, when performed with traditional methods, resulted in a high proportion of exceptions, most of which were not true anomalies but nonetheless had to be reviewed by analysts.
Traditionally, if a security moved by a certain percentage, it generated an exception that needed review. Upon conducting an audit, the bank found that “we probably had 10,000 of these false-positives every single day that we had analysts going through,” he says.
To address the issue, BBH created a tool that uses supervised machine learning and predictive analysis to show how a security has moved against 800,000 others historically. “It will only generate an exception when the price is truly moving. So we have gone from 10,000 exceptions every single day that analysts needed to review, to under 500. We have eliminated 90% of the false-positives. This has been a key tool for us,” Welch says.
The algorithm is fed with several years of historical data showing how the movement of a given stock should correlate with all of the other securities in the bank’s fund accounting system. It then compares the intraday movements of all those securities against where each one is expected to move. Only if an anomaly occurs—say, a stock price drops by 5% when it was expected to rise—is it flagged for analyst review. Welch adds that having that correlation removes the need to spend a lot of time investigating normal market events, such as stock splits and responses to earnings reports.
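A minimal sketch of that exception logic, with synthetic returns and an illustrative threshold standing in for BBH’s actual model, might look like this:

```python
# Minimal sketch of the exception logic: predict a security's daily move from the moves
# of correlated securities and flag an exception only when the actual move deviates
# materially from that prediction. Data, model and threshold are illustrative, not BBH's.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n_days, n_peers = 750, 20   # roughly three years of daily returns
peer_returns = rng.normal(0, 0.01, size=(n_days, n_peers))
target_returns = peer_returns @ rng.normal(0.05, 0.02, n_peers) + rng.normal(0, 0.002, n_days)

model = LinearRegression().fit(peer_returns, target_returns)
residual_std = np.std(target_returns - model.predict(peer_returns))

def needs_review(todays_peer_returns, todays_target_return, k=4.0):
    """Flag only moves that cannot be explained by the correlated securities."""
    expected = model.predict(todays_peer_returns.reshape(1, -1))[0]
    return abs(todays_target_return - expected) > k * residual_std

# Example: a 5% drop on a day when peers were flat would go to an analyst.
print(needs_review(np.zeros(n_peers), -0.05))
```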
ING
ING is in the process of spinning out a financial technology arm that will commercialize its bond discovery platform for asset managers, a year after the bank finished testing the platform internally. The platform, called Katana, was first built for use within the bank and launched in 2017.
Katana uses machine-learning algorithms to scan the European and UK bond markets for pairs that have out-of-the-ordinary spreads or behave abnormally. It aims to help bond traders and asset managers find investment opportunities they might otherwise have missed. Santiago Braje, global head of credit trading at ING and the founder of Katana, says that in a pool of even just 2,000 bonds there can be almost two million potential pairs, and a machine-led platform finds these opportunities much faster than a human can.
The platform uses data analytics and machine learning to find abnormally behaving bonds and learn whether these patterns could represent an investment opportunity an asset manager might be interested in. It looks at the historical prices of a bond and compares them to others in the portfolio. Once it has identified potential investments, the trader gets an alert, and it’s up to them to decide whether to pursue that trade.
“That alerts investors of possible opportunities and the impact is that investors find opportunities faster to start with and also find and execute trades that they would otherwise miss,” Braje says. “It’s impossible for the human mind to really go through all of those possibilities fast enough, to detect what actually requires attention or what seems like an interesting opportunity.”
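As a rough illustration of the pair-scanning idea (synthetic yields, an arbitrary z-score threshold, and far fewer bonds than Katana actually covers), consider the following:

```python
# Rough sketch of the pair-scanning idea: for every pair of bonds, track the spread
# between their yields and flag pairs whose latest spread sits far outside its own
# history. Yields, universe size and the z-score threshold are all invented.
import itertools
import numpy as np

rng = np.random.default_rng(2)
n_bonds, n_days = 50, 250
yields = np.cumsum(rng.normal(0, 0.01, size=(n_days, n_bonds)), axis=0) + 2.0

alerts = []
for i, j in itertools.combinations(range(n_bonds), 2):   # ~2m pairs for 2,000 bonds
    spread = yields[:, i] - yields[:, j]
    z = (spread[-1] - spread[:-1].mean()) / (spread[:-1].std() + 1e-9)
    if abs(z) > 3.0:                                      # unusually wide or tight today
        alerts.append((i, j, round(float(z), 2)))

print(f"{len(alerts)} bond pairs flagged for review")
```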
Franklin Templeton
The mutual fund giant’s fixed-income team is working with vendor H2O.ai, using its Driverless AI product, which builds machine learning models that estimate the default risk of underlying loans in fixed-income assets, like mortgage-backed securities. Franklin Templeton wants to use the tool to predict bond defaults and model cashflows on other types of loans.
The fund manager came across H2O after Franklin bought a machine-learning credit investment firm that was using the vendor to analyze credit risk on small loans, says Tony Pecore, a senior data science expert at Franklin Templeton. “We really appreciated how they combined machine learning methods into their investment process,” he says.
https://www.waterstechnology.com/management-strategy/4492761/the-rise-of-the-robot-quant
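For illustration only, a generic loan-level default model of the kind described above, built here with scikit-learn’s gradient boosting on synthetic loan features rather than Driverless AI, might look like this:

```python
# Generic loan-level default model for illustration: gradient boosting turns loan
# attributes into a default probability. Features and data are synthetic; the actual
# Driverless AI pipeline automates this kind of model search and tuning.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 10_000
X = np.column_stack([
    rng.uniform(500, 850, n),    # borrower credit score
    rng.uniform(0.2, 1.2, n),    # loan-to-value ratio
    rng.uniform(0.05, 0.6, n),   # debt-to-income ratio
])
# Toy ground truth: riskier loans default more often.
p = 1 / (1 + np.exp(-(-4 + 0.004 * (700 - X[:, 0]) + 3 * X[:, 1] + 4 * X[:, 2])))
y = rng.random(n) < p

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)
print("holdout accuracy:", round(model.score(X_te, y_te), 3))
print("default probability for a 620-score, 95% LTV, 45% DTI loan:",
      round(model.predict_proba([[620, 0.95, 0.45]])[0, 1], 3))
```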
Northern Trust
The custodian has developed a pricing engine that uses machine learning and statistical analysis techniques to forecast loan rates in the securities lending markets. “For this project, our data scientists applied a time-series algorithm to the problem of securities lending. Specifically, we have used some techniques inspired by Google technologies,” says Chris Price, a specialist enterprise architect at Northern Trust.
Time-series analysis harnesses a set of machine learning and statistical tools for predicting future conditions based on past data. Northern Trust’s algorithm uses market data from various asset classes and regions to project the demand for equities in the securities lending market. The firm’s global securities lending traders can combine these projections with their own market intelligence to automatically broadcast lending rates for 34 markets to borrowers.
While securities lending is largely automated, pricing a subset of these securities is very labor-intensive, as traders need to look at a particular line item and understand where they should price it relative to supply and demand. Automation becomes necessary when that process has to be run across tens of thousands of assets. The key is to find the appropriate AI to fit the exact need of the trader, rather than treating machine learning as a hammer looking for a nail.
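A generic sketch of that kind of time-series projection, using synthetic demand data and an off-the-shelf exponential-smoothing model rather than Northern Trust’s actual techniques, is below:

```python
# Generic time-series sketch: fit an exponential-smoothing model to historical
# demand-to-borrow and project it forward, the way a rate engine might turn past data
# into forward-looking lending rates. Data, model choice and the rate mapping are
# illustrative, not Northern Trust's.
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(4)
days = pd.date_range("2019-01-01", periods=250, freq="B")
demand = 100 + 0.1 * np.arange(250) + 10 * np.sin(np.arange(250) / 21) + rng.normal(0, 3, 250)
series = pd.Series(demand, index=days)

model = ExponentialSmoothing(series, trend="add").fit()
forecast = model.forecast(5)   # projected demand over the next trading week

# A trader (or the engine) could translate projected demand into a quoted rate.
base_rate_bps = 25
projected_rates = base_rate_bps * forecast / series.iloc[-30:].mean()
print(projected_rates.round(1))
```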
Morgan Stanley
The bank is experimenting with machine learning and other forms of AI to figure out which techniques are best suited to recommending which algorithms to use when trading equities under particular market conditions. “We are also interested in looking at the problem whether last night’s stock price, which jumped up, is going to continue or whether it’s going to fade away in the morning,” says Kerr Hatrick, executive director and quantitative strategist at Morgan Stanley Asia. “We’re looking at the problem of whether there are too many people saying the same thing in the market and we’re looking to use machine-learning techniques to identify that.”
The bank is also looking at trade-volume curves to better understand when to trade with the least friction. It wants to use algorithms to “understand what is going to happen to the price over the next event, the next second, the next minute, the next half hour. And it may well be the different kinds of [AI] techniques that will be useful to tell you this,” he says.
Morgan Stanley is also using forms of machine learning to intelligently suggest indications of interest (IOIs) to clients, based on their expected investment behavior.
The Federal Reserve Bank of New York
The New York Fed and the companies it oversees often have to go back and forth to hammer out misreporting that needs to be corrected. The regulator is looking to use machine learning that takes into account banks’ historical reporting, as well as peer-to-peer comparisons, to “triangulate and be able to predict” instances of misreporting and to streamline the correction process, says Sri Malladi, senior director of the regulator’s data and statistics group.
“We want to be at the point where we know that distribution expert reports can say this percentage will likely misreport this way or that way on these different sections of the report, or different line items,” Malladi says. “Right now, we are kind of at the point where we can squeeze out more. We’re not there yet [but] we can be. So I think that’s our north star, where we want to get to.”
He says that because of resource constraints, they “can’t look at every single piece of information” that is reported to the regulator. Machine learning’s strength is to look at a massive amount of data and find connections—when architected well. “So where do we focus our attention? Which data is potentially erroneous? So I’m looking at those capabilities,” he says.
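A toy version of that triangulation, comparing a filing against the bank’s own history and against its peers, with invented numbers and thresholds, could look like this:

```python
# Toy triangulation: compare each bank's latest reported line item against its own
# history and against its peers, and flag filings where both checks look off.
# Numbers and thresholds are invented for illustration.
import numpy as np

rng = np.random.default_rng(5)
n_banks, n_quarters = 30, 20
reports = rng.normal(100, 10, size=(n_banks, n_quarters))
reports[7, -1] = 170   # one bank files an implausible latest value

latest = reports[:, -1]
own_hist_z = (latest - reports[:, :-1].mean(axis=1)) / reports[:, :-1].std(axis=1)
peer_z = (latest - np.median(latest)) / latest.std()

flagged = np.where((np.abs(own_hist_z) > 3) & (np.abs(peer_z) > 3))[0]
print("banks whose latest filing warrants a second look:", flagged)
```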
Finra
The Financial Industry Regulatory Authority (Finra) is in the process of expanding its use of machine learning for market surveillance as it continues to refine its algorithms to trace manipulation.
The regulator uses machine-learning algorithms to detect spoofing and layering activities. Steve Randich, chief information officer at Finra, says that the regulator plans to increase the use of AI for surveillance to handle easier-to-detect instances of fraud, thus freeing up human surveillance professionals to focus on more complex instances.
“We have used machine learning to make sure that the handling and disposition of the alerts has a higher level of certainty in that judgment. We are training the machine to do what the humans do in terms of their initial judgment and intuition and let the algorithm do that. That’s exactly where we’re at,” he says. “Market manipulators are getting smarter so when they notice that they’re being caught they will change tactics. That’s why we still need humans involved. Our plan this year is to continue doing more on the behaviors most commonly used by fraudsters. The roadmap is to implement machine learning so that the human is doing less of the redundant, low-value work of invalidating false positives.”
Finra, which handles cross-market monitoring, has historically relied on human judgment to determine if there was potential market manipulation, but Randich points out that fraud patterns are now spread across exchanges and trading venues, and human analysts may have a harder time spotting those patterns as quickly as machines can.
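As a hypothetical illustration of training a model on analysts’ past dispositions so that low-value false positives can be closed automatically (the features and labels here are invented, not Finra’s), consider:

```python
# Hypothetical sketch: train a classifier on analysts' past alert dispositions so that
# high-confidence false positives can be closed automatically and humans review the rest.
# Features and labels are synthetic; Finra's actual feature set is not public.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
n_alerts = 5_000
X = np.column_stack([
    rng.integers(1, 50, n_alerts),   # cancels within 1s of the flagged order
    rng.uniform(0, 1, n_alerts),     # share of order size cancelled
    rng.integers(1, 6, n_alerts),    # venues touched by the pattern
])
# Toy ground truth: analysts escalate aggressive, cancel-heavy, multi-venue patterns.
y = (X[:, 0] > 25) & (X[:, 1] > 0.7) & (X[:, 2] >= 3)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Route alerts: auto-close only when the model is very confident it is a false positive.
proba = clf.predict_proba(X_te)[:, 1]
auto_close = proba < 0.05
print(f"{auto_close.mean():.0%} of alerts auto-closed; the rest go to analysts")
```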
Universal-Investment
The Frankfurt-based fund administrator is using machine learning as part of a broader, ambitious project to build a solution that will allow clients to purchase funds as conveniently as readers buy books on Amazon.
Right now, the process of buying a fund is slow, says Daniel Andemeskel, head of innovation management at Universal-Investment: different intermediaries, such as banks, transfer agents and custodians, are involved; their processes are still manual and paper-based; and settlement cycles can take up to two days.
The service is going to be based on the Ethereum blockchain (though it might use other protocols in the future) and will use predictive analytics to identify clients’ interests and allow sales personnel to offer them better options.
“In the future, we will add to that with artificial intelligence based on the same set-up that Netflix or Amazon is doing, [giving] recommendations to their end clients to support our sales people having much more pointed and clear strategies to address our product offerings to clients,” Andemeskel says.
Universal-Investment will begin beta-testing the platform in 2020, but the project also needs regulatory approval. And make no mistake about it—it’s not just German regulators that are beginning to take long, hard looks at machine learning.
https://www.waterstechnology.com/regulation/4555726/regulatory-uncertainty-hinders-ai-innovation
Linedata
Still early days, Linedata is working on a project that will automatically fix system fragmentation using machine learning. The application will monitor fragmentation—or the level in which memory allocation is broken up within a system that causes slower performance, which presents security and performance risks—within a firm’s technology infrastructure, and predict security failures.
“We started an innovation project mid-year—it kind of got a little taken over by the fact that we prioritized a different project in security services—but at the end of the year [2019], we’re going to focus on this again. I hope that by February [2020] we will have a model to test,” says Jed Gardner, senior vice president of Linedata’s Technology Services unit. “We’re looking to use monitoring toolsets to use machine learning to learn behaviors of changes in the technical environment to be able to apply fixes automatically without the intervention of an engineer—that’s our next big push.”
The vendor is experimenting with ML in other areas, as well. For example, one large banking client uses Linedata to identify patterns in trade pricing and position amendments, while others use its ML algos for cybersecurity services, particularly when it comes to monitoring access to end-user devices.
Nice Actimize
The surveillance and compliance specialist is testing more machine-learning models to improve the results produced by its Surveil-X surveillance and analytics platform. While this will include adding more data points, charting capabilities, and visualization techniques, it is also building random forest, isolation forest, and variations of k-nearest neighbors techniques into the platform to spot anomalies more accurately. Lee Garf, general manager of Nice Actimize’s compliance business, says that for the k-nearest neighbors piece, the initial release of a new model “can take several months or longer.”
Garf adds that machine learning will be the basis for much of the improvements in the analytics platform in 2020, particularly as the technology already underpins many of its surveillance capabilities. By tweaking the models, the company will be able to set better parameters on what can trigger an alert.
“We’re not done with machine learning models—we introduced our third technique and we’re adding additional techniques as we go and learn,” he says. “A lot of it is experimentation to see what works well and what doesn’t and obviously push forward with what works well.”
That last piece is important: machine learning experimentation involves a lot of trial and error, which can seem like wasted effort, but is absolutely vital to improving an ML-driven model.
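For a flavor of how the techniques Garf mentions can be combined, here is an illustrative anomaly-detection sketch using isolation forest and a k-nearest-neighbors distance score on synthetic features; Surveil-X’s actual models and parameters are not public:

```python
# Illustrative anomaly detection combining two of the techniques Garf names, isolation
# forest and a k-nearest-neighbors distance score, on synthetic order features.
# Surveil-X's actual models and parameters are not public.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(7)
normal = rng.normal(0, 1, size=(2000, 4))   # typical order/trade features
odd = rng.normal(6, 1, size=(10, 4))        # a handful of outliers
X = np.vstack([normal, odd])

# Isolation forest: anomalies are isolated in fewer random splits.
iso_flags = IsolationForest(contamination=0.01, random_state=0).fit_predict(X) == -1

# KNN-style score: points far from their nearest neighbors look anomalous.
dist, _ = NearestNeighbors(n_neighbors=10).fit(X).kneighbors(X)
knn_flags = dist[:, -1] > np.quantile(dist[:, -1], 0.99)

# Escalate only where both techniques agree, to keep false positives down.
print("rows flagged by both:", np.where(iso_flags & knn_flags)[0])
```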
SmartStream
At Sibos 2019, SmartStream Technologies officially announced the launch of SmartStream Air, the firm’s cloud-native, AI-enabled reconciliations platform. The overall goal of the platform is to allow users to manage their reconciliation needs on an ad hoc basis, while simultaneously reducing reconciliations processing and configuration times.
Beyond that, though, this platform will also allow the vendor to better incorporate machine-learning techniques into its future rollouts, says Andreas Burner, chief innovation officer for blockchain and AI at SmartStream and head of the firm’s Innovation Lab in Vienna, which was responsible for Air’s conception and incubation.
“This is the birth of all our AI and machine learning products,” Burner explains. “We’ve been working heavily in this area for the past 18 months and now we’ve productized it through SmartStream Air—our first real AI product. We have been working with clients on it with their data and it has worked really nicely during our beta tests.”
Among the ways SmartStream is using machine learning: predicting cash flows and liquidity. The vendor also has a proof-of-concept that predicts when actual payments will settle and automatically delivers a timestamp for each forecasted flow.
https://www.waterstechnology.com/technology/4644671/smartstream-goes-to-market-with-fresh-air
Liquidnet
In 2017, Liquidnet acquired OTAS Technologies, a machine-learning specialist. In 2019, Liquidnet built a new business line that brings OTAS together with two other recent acquisitions: Prattle and RSRCHXchange. The new unit, dubbed Investment Analytics (IA), will combine Prattle’s experience in natural language processing, OTAS’ use of machine learning on structured market data, and RSRCHXchange’s investment research platform. The target audience comprises long-term investors and analysts, says Adam Sussman, head of market structure and liquidity partnerships at Liquidnet.
IA is still in its early stages, so the firm is experimenting with how best to fit the product into investors’ workflows and validate the concepts and models it is building on. But some potential use cases have already been defined.
Predata
Predata uses machine learning and predictive analytics to anticipate global events and market moves. The platform, which runs on an AWS backend, is language-agnostic and can analyze websites in English, Arabic, Swahili, Persian and other languages. Predata tracks more than 200,000 individual sources daily, organized by topic, country and issue, and uses machine learning—mostly sparse regression techniques and algorithms—to identify patterns in the data and detect anomalies, changes in behavior, and shifts in what people are interested in or concerned about.
Predata looks mainly at five sources of data: YouTube videos, which give an idea of interest around past events; Wikipedia pages, for a sense of engagement around research; Twitter, which helps show what people are interested in; individual websites, to measure their traffic levels; and internet service provider data, which illustrates the actual flow of traffic. It doesn’t care about the content of a video or story, per se; instead, it looks at how many people viewed and shared it, and how that has changed over the last few days, weeks or months.
“Especially when dealing with a black swan event, we’re not able to predict exactly what will happen, but we can quantify the level of interest around a group of websites related to this topic,” says Hazem Dawani, CEO of Predata. “Sometimes a hedge fund manager will have a hunch or a theory in their mind that they’re trying to build their portfolio around or manage their risk, we help them quantify these ideas and get confirmation or falsify these convictions that they have.”
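A rough sketch of the sparse-regression idea, regressing a target signal on attention metrics from many sources and letting an L1 penalty pick out the handful that matter (all of it synthetic, none of it Predata’s actual model), looks like this:

```python
# Rough sketch of the sparse-regression idea: regress a target signal on attention
# metrics from many tracked sources and let an L1 penalty pick out the handful of
# sources that matter. Everything here is synthetic; it is not Predata's model.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(8)
n_days, n_sources = 300, 500   # daily page/video attention metrics per source
attention = rng.normal(0, 1, size=(n_days, n_sources))

# Pretend only five of the 500 sources actually drive the signal of interest.
true_sources = [3, 42, 99, 250, 480]
signal = attention[:, true_sources].sum(axis=1) + rng.normal(0, 0.5, n_days)

model = Lasso(alpha=0.1).fit(attention, signal)
selected = np.flatnonzero(model.coef_)
print("sources the sparse model keeps:", selected)   # the five true sources, perhaps with a few extras
```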