Data management issues have always presented challenges for the financial services industry. Here, DTCC explores why, with the right technologies, disciplines and structures in place, there is now reason for optimism.
There is a common challenge linking all capital markets firms, regardless of size, location, complexity or the collective experience of their IT teams: data management. Data management is neither a set-and-forget nor a one-and-done proposition. It is a continuum along which firms move towards greater levels of quality, transparency and standardization, with no finish line. The challenge is exacerbated by an industry riddled with legacy and proprietary platforms, and by a proliferation of overlapping data standards that means few, if any, standards are adopted by all firms across the industry.
This is the context framing the Depository Trust & Clearing Corporation’s (DTCC’s) recently published whitepaper, Data strategy and management in financial markets, which outlines the challenges facing capital markets firms and proposes feasible strategies to address those problems. The paper focuses on four key challenges prevalent in the industry today:
1. Overlapping standards impinging on the efficacy of firms’ data exchange activities.
2. Data fragmentation and limited business context of data impacting firms’ ability to fully realize the latent value within their data.
3. Complex and legacy IT infrastructure leading to substandard data management practices.
4. Poor overall data quality, which impacts firms’ downstream systems and processes.
Impetus
Kapil Bansal, managing director, head of business architecture, data strategy and analytics at DTCC, explains that, in the aftermath of the global financial crisis that began in 2007–08, regulatory bodies introduced a set of regulations that either directly or indirectly required capital markets firms to focus on their data strategies as an organizational priority.
“That created the impetus for firms to start focusing more closely on their data,” he says. “Whether it was post-trade processes, liquidity and risk management, or regulatory reporting, they all had underlying data availability, accuracy and reporting as key enablers. Fifteen years on [from the financial crisis], we still have regulations like the Central Securities Depositories Regulation and the UK’s Securities Financing Transactions Regulation, which have a huge impact on data quality and management.”
Legacy issues prevail
As firms seek to generate incremental revenue via new products, or look to serve their clients better with increased transparency and business insights while simultaneously reducing their operational risk, data becomes central to those imperatives. However, getting data out of disparate systems—especially legacy systems, which are still prevalent across the industry—is a challenge for even the most tech-savvy firms. “Getting data out of legacy systems does have a friction cost,” Bansal explains. “Often the way the data is managed prevents organizations from leveraging that data to drive new business insights.”
It goes without saying that the capital markets landscape is complex. It is underpinned by an all-to-all model, allowing firms to trade and exchange information with each other. There are numerous service providers, each with their own areas of specialization; buy-side firms work with multiple custodians; and trades are cleared through large numbers of broker-dealers and clearing houses across a variety of asset classes. This means data needs to flow between all the entities involved in any form of market activity across the global markets, ideally with the minimum amount of friction. “There is a significant amount of data exchange that’s required and, while there has been a move towards common data standards, those standards are still fragmented, which creates a hurdle in terms of how effective the data exchange is,” Bansal explains. “There are also overlapping or duplicative standards, which means firms spend quite a bit of money trying to exchange data with each other and then bringing that data into their own organizations and converting it into their preferred business context and format.”
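To make that translation cost concrete, the short sketch below is purely illustrative: the field names, formats and records are hypothetical and not drawn from any published standard. It shows the kind of mapping layer a receiving firm ends up writing and maintaining whenever two counterparties describe the same trade under different conventions.

```python
"""Illustrative sketch only: the field names and formats are hypothetical and are
meant to show the translation burden created by overlapping data standards."""

from datetime import datetime

# The same trade, as two counterparties might represent it under different conventions.
counterparty_a_record = {"ISIN": "US0378331005", "TradeDate": "2023-06-01", "Qty": 100, "Px": 172.35}
counterparty_b_record = {"isin": "US0378331005", "trade_dt": "01/06/2023", "quantity": 100, "price": 172.35}

def normalise_a(rec: dict) -> dict:
    """Map counterparty A's convention into the firm's preferred internal format."""
    return {
        "isin": rec["ISIN"],
        "trade_date": datetime.strptime(rec["TradeDate"], "%Y-%m-%d").date(),
        "quantity": rec["Qty"],
        "price": rec["Px"],
    }

def normalise_b(rec: dict) -> dict:
    """Map counterparty B's convention into the same internal format."""
    return {
        "isin": rec["isin"],
        "trade_date": datetime.strptime(rec["trade_dt"], "%d/%m/%Y").date(),
        "quantity": rec["quantity"],
        "price": rec["price"],
    }

# Every additional standard or dialect means another mapping to write, test and maintain.
assert normalise_a(counterparty_a_record) == normalise_b(counterparty_b_record)
print(normalise_a(counterparty_a_record))
```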
Then there’s data quality, which, according to Bansal, needs to be a priority for firms at the very inception of any data strategy initiative. “Even then, it’s not a one-and-done proposition,” he explains. “You have to invest in maintaining high data-quality standards across the value chain because, if the data is not accurate and it’s not correctly maintained, then it won’t drive the functions for which it is intended.”
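The kind of ongoing investment Bansal describes can be pictured as quality rules that run on every load of data rather than once at project start. The sketch below is illustrative only: the record fields and rules are assumptions made for the example, not a prescription for any particular firm’s data-quality framework.

```python
"""Illustrative only: fields and rules are assumptions for the sketch."""

from datetime import date

def quality_issues(record: dict) -> list[str]:
    """Return a list of rule violations; an empty list means the record passes."""
    issues = []
    if not record.get("isin") or len(record["isin"]) != 12:
        issues.append("isin missing or malformed")
    if record.get("quantity", 0) <= 0:
        issues.append("quantity must be positive")
    if record.get("trade_date") and record["trade_date"] > date.today():
        issues.append("trade_date is in the future")
    return issues

# Checks like these run continuously across the value chain, not just at inception.
records = [
    {"isin": "US0378331005", "quantity": 100, "trade_date": date(2023, 6, 1)},
    {"isin": "BAD", "quantity": -5, "trade_date": date(2023, 6, 1)},
]
for rec in records:
    print(rec["isin"], quality_issues(rec) or "ok")
```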
Data as an asset
Capital markets firms typically sit on large volumes of data, which is invariably distributed across disparate systems and locked away within legacy platforms, making it a challenge to access and share throughout the business. Clients are increasingly looking for quality data in real time, in a format they can consume with the least friction. For firms to derive genuine value from their data, Bansal suggests, approaches need to change: data must be treated as an asset or product in its own right, with its own value chain.
“Data is created, then it is transformed by, for example, adding additional insight to it. Then it is distributed, reported and maintained,” he says. “So, there is a value chain and, to drive value out of data, you have to think about the decisions you’re going to make as an organization, the infrastructure you’re going to establish and the policies you are going to implement across the entire value chain. If you don’t focus on any part of the value chain, you will fall short as an organization in driving value in your data. But first, you have to establish a North Star of your data strategy at an enterprise level and strategically prioritize aspects of that North Star in alignment with business priorities.”
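As a rough illustration of that value chain, the toy sketch below uses the stages Bansal names (create, transform, distribute and report, maintain); the functions and data are invented for the example and stand in for far larger processes.

```python
"""A toy rendering of the data value chain; the stage names follow the article,
the code and data are illustrative assumptions."""

def create() -> list[dict]:
    # Data is created, e.g. captured from trade processing.
    return [{"isin": "US0378331005", "quantity": 100, "price": 172.35}]

def transform(records: list[dict]) -> list[dict]:
    # Data is transformed: additional insight (here, a notional value) is added.
    return [{**r, "notional": r["quantity"] * r["price"]} for r in records]

def maintain(records: list[dict]) -> list[dict]:
    # Data is maintained: corrections and quality rules applied over time.
    return [r for r in records if r["quantity"] > 0]

def distribute(records: list[dict]) -> None:
    # Data is distributed and reported to downstream consumers (stubbed as printing).
    for r in records:
        print("reporting:", r)

# Under-investing in any single stage limits the value the whole chain can deliver.
distribute(maintain(transform(create())))
```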
The way forward
The infrastructure to which Bansal refers is determined by the decisions individual firms take with respect to how they want to organize their data—for example, the extent to which they choose to centralize their data and the type of data they centralize—which is as much a question of strategy and data governance as it is one of pure infrastructure and enabling technology.
“Should firms set up their entire infrastructure/technology landscape so that it’s centralized or decentralized? Firms have tried different ways,” Bansal says. “When it comes to data capabilities, an enterprise strategy and end-state make sense. But then there are business-specific requirements, which need to be addressed via capabilities that are flexible enough to be decentralized. So, the answer lies somewhere in between.
“Firms should think about data strategy and data capabilities at the enterprise level to drive harmonization and efficiency in infrastructure costs. To that extent, a centralized view of data strategy makes sense. Having said that, data should be made available to businesses in a flexible way so the respective business insights can be drawn in a decentralized manner.”
APIs and the cloud
Bansal explains that the question of which technologies firms can use to manage their data better, from inception through to final consumption and analysis, should be prefaced by a clear understanding of the type of operating model they are looking to establish—centralized, regional or local. It should also be answered, he says, in the context of the data value chain, the first step of which entails cataloging data with its business definitions at inception, to ensure it is centrally located and discoverable by anyone across the business. Firms also need to use the right data standards to ensure standardization across the firm and its business units. Then, with respect to distribution, they need to establish a clear understanding of how they will send that data to the various market participants they interact with, as well as to internal consumers or even regulators. “That’s where application programming interfaces [APIs] come in, but it’s not a one-size-fits-all,” he says. “It’s a common understanding that APIs are the most feasible way of transferring data, but APIs without common data standards do not work.”
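Bansal’s point that APIs without common data standards do not work can be illustrated with a minimal sketch. The schema and field names below are assumptions made for the example, not any published industry standard: the idea is simply that an API payload is only useful if producer and consumer have agreed its structure and meaning in advance.

```python
"""Illustrative sketch: a hypothetical shared schema agreed before any API is built."""

SHARED_SCHEMA = {  # agreed by all parties up front; fields are invented for the example
    "isin": str,
    "quantity": int,
    "price": float,
    "currency": str,
}

def conforms(payload: dict) -> bool:
    """A payload is only consumable if both sides interpret it the same way."""
    return (payload.keys() == SHARED_SCHEMA.keys()
            and all(isinstance(payload[k], t) for k, t in SHARED_SCHEMA.items()))

# A payload built to the shared standard is immediately consumable...
print(conforms({"isin": "US0378331005", "quantity": 100, "price": 172.35, "currency": "USD"}))  # True

# ...while one built to a private convention forces bespoke translation on every consumer.
print(conforms({"ISIN": "US0378331005", "Qty": 100, "Px": 172.35, "Ccy": "USD"}))  # False
```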
Bansal sees cloud infrastructure as playing a critical role in storing, cataloging and distributing data, and says it can allow firms to decouple themselves from their legacy platforms. However, he warns that, unless firms invest up front in data quality and cataloging, they will not realize the full potential of their data, whether it sits in the cloud or not. “There are lots of technologies available that are dedicated to driving data quality, data cataloging, data maintenance and data reporting, and then there is the cloud infrastructure enabled by APIs,” he says.
“The advantage of the cloud is that, when you have invested in your data and it’s been subject to the right quality checks and cataloging, multiple users with completely different use-cases can derive value from it very, very quickly.”
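As a rough sketch of the cataloging idea, the example below registers a dataset once, with its business definition, owner and quality status, so that different consumers can discover it for completely different use-cases. The class and fields are assumptions made for illustration, not any product’s API.

```python
"""Illustrative sketch of dataset cataloging; the structure is an assumption."""

from dataclasses import dataclass, field

@dataclass
class CatalogEntry:
    name: str
    description: str           # business definition captured at inception
    owner: str
    quality_checked: bool      # has the dataset passed its quality rules?
    tags: list[str] = field(default_factory=list)

CATALOG: dict[str, CatalogEntry] = {}

def register(entry: CatalogEntry) -> None:
    """Add a dataset to the central catalog so it is discoverable firm-wide."""
    CATALOG[entry.name] = entry

def discover(tag: str) -> list[CatalogEntry]:
    """Different teams, different use-cases, same curated and quality-checked data."""
    return [e for e in CATALOG.values() if tag in e.tags and e.quality_checked]

register(CatalogEntry(
    name="settled_trades_daily",
    description="Trades settled per business day, net of cancellations",
    owner="post-trade-operations",
    quality_checked=True,
    tags=["settlement", "risk", "reporting"],
))

print([e.name for e in discover("risk")])
```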