Need to know
Podcast Timestamps
5:00 Shameek joins the podcast and talks about the speed and focus that moving to a startup has given him.
10:00 While there is broad adoption of AI within capital markets firms, the depth may not be there.
11:30 The question should not always be, ‘Why not use AI?’ It should sometimes be, ‘Why use AI?’
14:30 Shameek says infrastructure for building and deploying ML models is still more art than science.
15:30 The barrier to AI in capital markets is the lack of trustworthiness and reliability of those models over time.
21:30 Not many firms have a mature data and infrastructure blueprint for AI innovation.
23:00 Shameek says that in some ways, the data translator role is a stop-gap measure.
26:00 How are regulators looking at the use of AI?
36:00 Shameek is excited about the use of data in addressing the current and next generation’s problems.
Shameek Kundu, former chief data officer at Standard Chartered, and now head of financial services and chief strategy officer at Truera, a startup dedicated to building trust in AI, joined the Waters Wavelength Podcast to talk about AI explainability and how regulators approach the use of emerging technologies.
One of the topics discussed was how regulators are approaching the use of AI and ML, and whether they might introduce more prescriptive regulations around these technologies within the capital markets. (26:00)
In April, US prudential regulators, led by the Federal Reserve, issued a request for information (RFI) on the uses of AI and machine learning. This move has led some to worry that new regulations could stifle innovation.
While Kundu believes regulators’ approach to AI and ML has been thoughtful and nuanced so far, he warned that decisions against the use of some non-inherently explainable models could stifle innovation.
In response to the RFI by US prudential regulators, Kundu said there is a debate over whether only inherently explainable models should be permitted, or whether there is also a place for non-inherently explainable models, which must be explained post hoc.
“My personal view on that would be there’s a place for both kinds of models. If you just limit it to the former, we will potentially inhibit innovation,” he said.
Examples of inherently explainable models include generalized linear models, generalized additive models, and decision trees, whose structure can be read directly to understand how they arrive at a prediction.
Non-inherently explainable models, by contrast, can only be explained post hoc, after the model has been trained and predictions have been made. Examples include gradient-boosted models (GBMs) and several types of neural networks.
Many image-, text-, and voice-processing models fall into that category, he said. “There will probably be some categories where there isn’t an equivalent inherently explainable model that is anywhere close to the same level of performance today. That doesn’t mean it can’t change over time. But right now, there isn’t,” he said.
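To make the distinction concrete, here is a minimal sketch (not from the interview) using scikit-learn on synthetic data: a shallow decision tree can be read directly, while a gradient-boosted model has no single readable structure and is explained post hoc, here via permutation importance. All dataset and parameter choices are illustrative.

```python
# Illustrative sketch (not from the interview): contrasting an inherently
# explainable model with one that requires post hoc explanation.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# Inherently explainable: the fitted tree's decision rules can be printed
# and read directly.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree))

# Not inherently explainable: a gradient-boosted ensemble has no single
# readable structure, so it is explained after training (post hoc), here
# by measuring how much shuffling each feature degrades performance.
gbm = GradientBoostingClassifier(random_state=0).fit(X, y)
result = permutation_importance(gbm, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: permutation importance {imp:.3f}")
```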
He explained that one workaround he has seen banks and asset managers use is to run so-called ‘black box’ models as a pre-processing step to extract features, which are then fed into more inherently explainable models.
“In an inherently explainable model, you will not be allowed to say, ‘I don’t understand what happened in there,’ which means you need to know what, very simplistically, went into the funnel. And what you’re doing, in this case, is, you are deciding what to put into the funnel based on the output from a GBM, let’s say,” Kundu said.
“First, let’s try and justify what the GBM model said. Once we are convinced, now we can put it into our inherently explainable model as one of the factors for the decision making. So it takes away that regulatory or compliance risk because while a machine might have told you this might be a good feature, you’re actually assessing that yourself before you put it into the funnel.”
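Kundu did not describe a specific implementation, but a minimal sketch of such a pipeline, assuming scikit-learn and synthetic data, might look like the following: a GBM ranks candidate features, an analyst vets them (reduced here to accepting the top-ranked ones), and only the vetted features enter a logistic regression whose coefficients can be read and defended directly.

```python
# Hypothetical sketch of the workaround Kundu describes: a black-box GBM
# surfaces candidate features as a pre-processing step, a human vets them,
# and only vetted features go "into the funnel" of an explainable model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=20, n_informative=5,
                           random_state=0)

# Step 1: let the GBM rank candidate features by importance.
gbm = GradientBoostingClassifier(random_state=0).fit(X, y)
candidates = np.argsort(gbm.feature_importances_)[::-1][:5]

# Step 2 (manual in practice): an analyst reviews and justifies each
# candidate before it enters the funnel. Here we simply accept all five.
vetted = list(candidates)

# Step 3: fit the inherently explainable model on vetted features only.
# Its coefficients can be inspected and defended to a regulator.
glm = LogisticRegression().fit(X[:, vetted], y)
for feat, coef in zip(vetted, glm.coef_[0]):
    print(f"feature {feat}: coefficient {coef:+.3f}")
```

The design point is that the machine only proposes features; a human disposes, so the final decision-making model remains one whose inner workings can be stated in full.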
But again, he stressed that regulators aren’t out to stifle innovation.
“I genuinely think every regulator that I’ve spoken to—and probably across the world, there’s at least eight or nine major jurisdictions that I’ve spoken to on this topic—is approaching this in an extremely thoughtful and nuanced manner,” he said.
Take the Monetary Authority of Singapore as an example: it has been three years since the regulator released a set of principles to promote fairness, ethics, accountability, and transparency (Feat) in the use of AI and data analytics in Singapore’s financial sector.
While there is certainly regulatory guidance, as spelled out in the Feat principles, Kundu said there is not yet a single prescriptive rule dedicated to the use of AI or machine learning.
Some jurisdictions may start coming up with more prescriptive rules, though. Even so, Kundu said regulators’ approach has been “characterized by realism”: an acknowledgment that nobody has fully grasped this area and that it is rapidly evolving.
“I do think after two, three years of thinking about it, perhaps some of them will become more prescriptive in their guidance. But from every account I’ve had so far, it should not be something that stifles innovation too much. Of course, it will increase a level of governance and discipline as time goes by, but that’s to be desired,” he said.