Need to know
Podcast Timestamps
5:00 Shameek joins the podcast and talks about the speed and focus that moving to a startup has given him.
10:00 While there is broad adoption of AI within capital markets firms, the depth may not be there.
11:30 The question cannot always be, ‘Why not use AI?’ It should also be, ‘Why use AI?’
14:30 Shameek says infrastructure for building and deploying ML models is still more art than science.
15:30 The barrier to AI in capital markets is the lack of trustworthiness and reliability of those models over time.
21:30 Not many firms have a mature data and infrastructure blueprint for AI innovation.
23:00 Shameek says that in some ways, the data translator role is a stop-gap measure.
26:00 How are regulators looking at the use of AI?
36:00 Shameek is excited about the use of data in addressing the current and next generation’s problems.
Shameek Kundu, former chief data officer at Standard Chartered, and now head of financial services and chief strategy officer at TruEra, a startup dedicated to building trust in AI, joined the Waters Wavelength Podcast to talk about AI explainability and how regulators approach the use of emerging technologies.
One of the topics discussed was how regulators are approaching the use of AI and ML and how they could potentially introduce more prescriptive regulations around the use of these technologies within the capital markets. (26:00)
In April, US prudential regulators, led by the Federal Reserve, issued a request for information (RFI) on the uses of AI and machine learning. This move has led some to worry that new regulations could stifle innovation.
While Kundu believed that regulators’ approach to AI and ML has, overall, been thoughtful and nuanced so far, he warned that decisions ruling against the use of models that are not inherently explainable could stifle innovation.
In response to the RFI by US prudential regulators, Kundu said there is a debate over whether there is a place only for inherently explainable models, or also for non-inherently explainable models, whose predictions must be explained post hoc.
“My personal view on that would be there’s a place for both kinds of models. If you just limit it to the former, we will potentially inhibit innovation,” he said.
Examples of inherently explainable models include generalized linear models, generalized additive models, and decision trees, whose structure can be read directly to see how a prediction was reached.
Non-inherently explainable models, by contrast, require post hoc explanation: their predictions can only be explained after the model has been trained, using separate techniques. Examples include gradient-boosted models and several types of neural networks.
Many image, text, and voice-related processing models fall in that category, he said. “There will probably be some categories where there isn’t an equivalent inherently explainable model that is anywhere close to the same level of performance today. That doesn’t mean it can’t change over time. But right now, there isn’t,” he said.
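To make the distinction concrete, here is a minimal sketch using synthetic data and scikit-learn (an illustration, not anything discussed on the podcast). The logistic regression’s coefficients can be read off directly, while the gradient-boosted model must be explained after the fact; permutation importance stands in here for whatever post hoc technique a firm might choose.

```python
# A minimal sketch contrasting an inherently explainable model with a
# black-box model explained post hoc. Data and settings are synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Inherently explainable: a (generalized) linear model whose coefficients
# can be read directly as each feature's contribution to the prediction.
glm = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("GLM coefficients (directly interpretable):", glm.coef_.round(2))

# Not inherently explainable: a gradient-boosted model. Its predictions
# are explained post hoc, here via permutation importance.
gbm = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(gbm, X_test, y_test, n_repeats=10,
                                random_state=0)
print("GBM post hoc feature importances:", result.importances_mean.round(3))
```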
He explained that a workaround he has seen some banks and asset managers use is to employ so-called ‘black box’ models to extract features as a pre-processing step, and then feed those features into more inherently explainable models.
“In an inherently explainable model, you will not be allowed to say, ‘I don’t understand what happened in there,’ which means you need to know what, very simplistically, went into the funnel. And what you’re doing, in this case, is, you are deciding what to put into the funnel based on the output from a GBM, let’s say,” Kundu said.
“First, let’s try and justify what the GBM model said. Once we are convinced, now we can put it into our inherently explainable model as one of the factors for the decision making. So it takes away that regulatory or compliance risk because while a machine might have told you this might be a good feature, you’re actually assessing that yourself before you put it into the funnel.”
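A hypothetical sketch of that workaround, again on synthetic data with scikit-learn, might look like the following; the vetting step is where a human would justify each candidate feature before it enters the ‘funnel’.

```python
# Hypothetical sketch of the workaround Kundu describes: a black-box GBM
# surfaces candidate features, a human vets them, and the production model
# is an inherently explainable GLM trained only on the vetted features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=20,
                           n_informative=5, random_state=0)

# Step 1: the GBM is a pre-processing tool, not the decision-making model.
gbm = GradientBoostingClassifier(random_state=0).fit(X, y)
candidates = np.argsort(gbm.feature_importances_)[::-1][:5]
print("Candidate features suggested by the GBM:", candidates)

# Step 2 (human in the loop): each candidate is justified on its own merits
# before being accepted. Here we simply accept them all, but in practice a
# risk or compliance team would review this list.
vetted = list(candidates)

# Step 3: the production model is an inherently explainable GLM trained
# only on vetted features, so every input to the 'funnel' is understood.
glm = LogisticRegression(max_iter=1000).fit(X[:, vetted], y)
print("Coefficients on vetted features:", glm.coef_.round(2))
```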
But again, he stressed that regulators aren’t out to stifle innovation.
“I genuinely think every regulator that I’ve spoken to—and probably across the world, there’s at least eight or nine major jurisdictions that I’ve spoken to on this topic—is approaching this in an extremely thoughtful and nuanced manner,” he said.
Take the Monetary Authority of Singapore as an example: it has been three years since the regulator released a set of principles to promote fairness, ethics, accountability, and transparency (Feat) in the use of AI and data analytics in Singapore’s financial sector.
While there is certainly regulatory guidance, as spelled out in the Feat principles, Kundu said there is not yet a single prescriptive rule dedicated to the use of AI or machine learning.
Some jurisdictions may start coming up with more prescriptive rules, though. Even so, Kundu said the regulators’ approach has been “characterized by realism,” which is that this is an area that nobody has grasped fully, and it’s a space that’s rapidly evolving.
“I do think after two, three years of thinking about it, some of them will perhaps become more prescriptive in their guidance. But from every account I’ve had so far, it should not be something that stifles innovation too much. Of course, it will increase the level of governance and discipline as time goes by, but that’s to be desired,” he said.