Flood of Factors
Q&A with Deloitte's Dilip Krishna about risk data aggregation

Does it make sense to divide up risk data and evaluate or inspect it before aggregating it?
Risk data usually originates elsewhere in the organization, as booked trades, originated and serviced loans, and so on. It is enriched in a number of ways, most pertinently by adding risk metrics to it. To achieve high levels of risk data quality, the raw input itself must have high fidelity. High quality also requires the aggregation process to be free from corruption, so both are necessary conditions for the ultimate accuracy of risk data.
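To make this concrete, here is a minimal sketch in Python of the kind of pre-aggregation quality gate being described. The field names (trade_id, notional, book_date) and the individual checks are hypothetical, not a real schema; the point is only that raw input is validated for fidelity before it enters an aggregate.

```python
from datetime import date

# Hypothetical required fields for a raw trade record (assumption, not a real schema).
REQUIRED_FIELDS = {"trade_id", "notional", "currency", "book_date"}

def validate_raw_trade(record: dict) -> list:
    """Return a list of data-quality issues found in one raw trade record."""
    issues = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        issues.append("missing fields: " + ", ".join(sorted(missing)))
    if record.get("notional") is not None and record["notional"] < 0:
        issues.append("negative notional")
    if "book_date" in record and record["book_date"] > date.today():
        issues.append("book date in the future")
    return issues

def aggregate_notional(records: list) -> float:
    """Aggregate only records that pass validation, so corrupt input never enters the total."""
    clean = [r for r in records if not validate_raw_trade(r)]
    return float(sum(r["notional"] for r in clean))
```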
How should risk data be divided and organized to those ends?
Risk data has several components. The base input is the current actual financial state of the organization, as represented by trading positions and loan balances. Risk metrics also depend on other important information such as client, facility, and collateral data. In addition, to develop models for risk management, it is critical to have a sufficiently long historical record of such data (e.g., five years of loan history). Finally, external data may also be required to supplement internal historical data (e.g., operational loss history data).
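One possible way to organize these components is sketched below; the class and field names are hypothetical rather than any standard model, and simply keep current positions, reference data, internal history and external supplementary data as distinct parts of one risk data set.

```python
from dataclasses import dataclass, field

@dataclass
class RiskDataSet:
    # Current actual financial state: trading positions and loan balances.
    positions: list = field(default_factory=list)
    # Reference information the risk metrics depend on: client, facility, collateral.
    reference: dict = field(default_factory=dict)
    # Internal historical record, e.g., five years of loan history.
    internal_history: list = field(default_factory=list)
    # External data used to supplement internal history, e.g., operational loss data.
    external_history: list = field(default_factory=list)

    def has_sufficient_history(self, min_years: int = 5) -> bool:
        """Check whether the internal record covers enough years to support model development."""
        years = {rec.get("year") for rec in self.internal_history if "year" in rec}
        return len(years) >= min_years
```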
Are the stress-testing requirements of CCAR and BCBS 239 driving more attention to risk data aggregation, and is more being accomplished as a result?
Stress-testing requirements are driving significant changes in risk data aggregation infrastructures. These requirements go well beyond generating risk reports and demand that banks perform meaningful analysis of both the inputs and outputs of stress tests. In addition, there is a timeliness requirement that is hard to meet. Banks usually find these requirements difficult to meet with existing infrastructures, prompting their focus on risk data aggregation systems. Since BCBS 239 is consistent with these requirements but states them more explicitly, the two sets of requirements together are driving more coherence in risk data aggregation infrastructures.
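As a rough illustration of what such a check might look like, the sketch below, with invented field names and an arbitrary 24-hour deadline standing in for whatever the regulator actually requires, tests whether a stress-test run analyzed both its inputs and outputs and completed within the allowed time.

```python
from datetime import datetime, timedelta

def stress_run_acceptable(run: dict, deadline_hours: int = 24) -> bool:
    """Return True only if inputs and outputs were both analyzed and the run finished in time."""
    inputs_reviewed = bool(run.get("input_analysis_done"))
    outputs_reviewed = bool(run.get("output_analysis_done"))
    elapsed = run["finished_at"] - run["started_at"]
    return inputs_reviewed and outputs_reviewed and elapsed <= timedelta(hours=deadline_hours)

# Example with made-up timestamps:
run = {
    "input_analysis_done": True,
    "output_analysis_done": True,
    "started_at": datetime(2015, 6, 1, 8, 0),
    "finished_at": datetime(2015, 6, 1, 20, 0),
}
print(stress_run_acceptable(run))  # True
```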