While accurately calculating market and credit risk measures has always been imperative for financial institutions, the new calculations introduced by the Basel Committee on Banking Supervision through its revisions to the Fundamental Review of the Trading Book (FRTB) framework demand an unprecedented level of granularity and historical breadth.
Banks have already been contending with a significant increase in data for their risk management and regulatory reporting. That task has become even more difficult with the explosion of data volumes triggered by the new FRTB capital requirement rules, under which banks calculate the amount of capital they must hold to absorb losses from market risk. At stake are the accuracy and speed of risk management calculations and regulatory reporting, which in turn have a direct impact on banks' data infrastructure and related costs.
Capital Calculation Methods
Banks must choose between two methods when calculating capital under the new FRTB rules: a standardized approach or an internal models approach (IMA). Calculating capital under the IMA brings many new complexities beyond the requirement to align trading desk and risk management pricing. The result is a substantial increase in data volume, both transactional and historical. But the challenges do not stop there: there are also data management issues, such as the use of proxy data and managing rules across multiple jurisdictions, with full auditability and data versioning required throughout.
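Under the IMA, the core market risk measure is an expected shortfall at the 97.5% confidence level rather than the previous 99% value-at-risk. As a minimal sketch of the kind of tail calculation involved (the function name and the synthetic P&L series below are illustrative, not any bank's production code):

```python
import numpy as np

def expected_shortfall(pnl: np.ndarray, confidence: float = 0.975) -> float:
    """Historical expected shortfall: the average of the losses in the
    worst (1 - confidence) tail of the P&L distribution."""
    losses = np.sort(-pnl)[::-1]  # convert P&L to losses, largest first
    n_tail = max(1, int(np.ceil(len(pnl) * (1 - confidence))))
    return float(losses[:n_tail].mean())

# Illustrative only: one year of synthetic daily desk-level P&L
rng = np.random.default_rng(42)
daily_pnl = rng.normal(0.0, 1_000_000.0, size=250)
print(f"97.5% expected shortfall: {expected_shortfall(daily_pnl):,.0f}")
```

In practice this calculation is repeated across liquidity horizons, risk factor subsets and stressed periods, which is precisely where the data volumes multiply.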
Since the chosen methodology is applied at the trading desk level, the results of the simulations for either approach need to be analyzed at the most granular level, avoiding shortcuts that could call the soundness of the decision into question. Banks therefore need to be able to simulate scenarios and adapt quickly to new situations. Not only does this mean analyzing and processing more data, it also requires more flexibility from data management setups.
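For instance, a desk's continued eligibility for the IMA depends on desk-level tests such as backtesting, so these checks must be run, and re-run under new scenarios, at that granularity. A minimal, hypothetical sketch of counting backtesting exceptions against a one-day VaR series (all numbers synthetic):

```python
import numpy as np

rng = np.random.default_rng(7)
desk_pnl = rng.normal(0.0, 1.0, size=250)  # synthetic daily desk P&L
var_99 = np.full(250, 2.33)                # illustrative flat 99% one-day VaR

# An exception occurs when the daily loss exceeds the VaR estimate; under
# FRTB, a desk accumulating too many exceptions loses IMA eligibility and
# falls back to the standardized approach.
exceptions = int(np.sum(-desk_pnl > var_99))
print(f"Backtesting exceptions over 250 days: {exceptions}")
```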
Many international banks have therefore been forced to review their data analytics solutions to tackle these data challenges at scale and give their business users the autonomy to perform any aggregation, run calculations over ever-growing datasets and manage exponential data growth more efficiently. All of this needs to come at minimal cost, without compromising performance, whatever the volumes of data involved.
FRTB implementation presents a tremendous opportunity for banks to rethink their overall risk data structures, leveraging the full horizontal scalability offered by new technologies. In the past there was no choice other than to maintain distinct market risk and credit risk data structures, with several datasets for each. This was due especially to the limitations of in-memory technologies, which forced banks to split data between yesterday's and historical data, normal and stressed datasets, and so on.
When technology opens a new window of limitless opportunity, why stop there? Why not rethink the entire organization of the data structure? It becomes possible for risk managers to see and report a country's contribution to value-at-risk, exposure at default for the same country, dollar duration and more, such as profit-and-loss information. In one click they can analyze all of these numbers and look at the trends. No longer is access to the data delayed because it sits in several datasets, nor do joint reports need to be compiled manually from multiple sources. With an end to multiple datasets, the duplication of data and the high running costs of storing redundant data are also eliminated.
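To make this concrete: with market risk, credit risk and P&L attributes held side by side in a single model, a country-level view becomes one aggregation rather than a manual join across systems. The table, column names and figures below are hypothetical, purely for illustration:

```python
import pandas as pd

# Hypothetical unified risk table: one row per position, with market risk,
# credit risk and P&L measures side by side instead of in separate datasets.
positions = pd.DataFrame({
    "country":         ["FR", "FR", "DE", "DE", "US"],
    "var_contrib":     [1.2e6, 0.8e6, 2.1e6, 0.4e6, 3.3e6],  # VaR contribution
    "ead":             [50e6, 30e6, 80e6, 20e6, 120e6],      # exposure at default
    "dollar_duration": [4.1e4, 2.2e4, 6.5e4, 1.1e4, 9.8e4],
    "pnl":             [0.3e6, -0.1e6, 0.5e6, 0.2e6, -0.4e6],
})

# One group-by answers what previously required stitching together reports
# from several systems.
print(positions.groupby("country").sum())
```

The point is not the query itself but that every measure lives behind one model, so the same one-line aggregation works for any dimension a risk manager chooses.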
Data Storage Optimization
Opensee has been working with one large bank to redesign its entire data structure and data model with this vision in mind. The first step was to design a data model with real-time access, so users can request information on market and credit risks irrespective of the granularity or history of the data. This involved combining eight very large datasets totalling several hundred terabytes. It immediately removed at least 20% of the data points, which were duplicates, significantly reducing storage costs and, through a streamlined adjustments process, the operational risk of errors between the datasets. Thanks to an efficient abstraction model layer, which hides the complexity of the datasets, users do not need knowledge of the data model to calculate regulatory ratios; instead they can focus on understanding the various risk exposures from multiple angles and enriching their dashboards with relevant information.
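The deduplication can be pictured as follows: once the formerly separate extracts are keyed on the same position identifiers, overlapping rows collapse into one. A minimal sketch with invented dataset names and keys (not Opensee's actual model):

```python
import pandas as pd

# Hypothetical: the same positions appear in both the market risk and the
# credit risk extracts, keyed by (as_of_date, position_id).
market = pd.DataFrame({
    "as_of_date": ["2023-01-02"] * 3,
    "position_id": ["P1", "P2", "P3"],
    "var_contrib": [1.0, 2.0, 3.0],
})
credit = pd.DataFrame({
    "as_of_date": ["2023-01-02"] * 3,
    "position_id": ["P2", "P3", "P4"],
    "ead": [10.0, 20.0, 30.0],
})

# An outer merge keeps each position exactly once, carrying both measure
# sets; P2 and P3, previously stored twice, now occupy a single row each.
unified = market.merge(credit, on=["as_of_date", "position_id"], how="outer")
print(unified)
```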
This bank's experience illustrates how a single, scalable platform that optimizes daily data storage can enhance the entire risk management process. With longer historical data ranges, banks can build more meaningful trend analyses, ensure data consistency between stress-testing exercises and daily risk management, and ultimately offer their users more data capabilities with lower operational risk and better data quality.
Tackling the exponential growth in data volumes opens the door for banks to rethink the entire data structure, making it more efficient through real-time self-service analytics on all their data at a lower running cost. FRTB may turn out to be a blessing in disguise.
Sponsored content