Sponsor's Statement: Be in Control

John Randles

The common thinking behind implementing a reference data management system is that there are two approaches: build your own data model or buy one from a commercial vendor. The premise for buying a commercial data model is that such a product is a superset of all the fields, attributes, relationships, vendor adaptors, etc., of the client's requirements, making implementation faster than building it yourself. The data model will then be maintained and upgraded by the vendor as new feeds, asset classes, relationships, identifiers and downstream systems need to be supported. All sounds good!
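
To make the "superset" premise concrete, the sketch below shows, in deliberately simplified Python, the general shape of such a packaged model: a broad security record plus a vendor-adaptor interface that is maintained and upgraded alongside it. The class, field and feed names are hypothetical and purely illustrative; they are not taken from any actual commercial product.

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class SecurityRecord:
    """Illustrative 'superset' security master record: a commercial model
    bundles far more fields than any single client is likely to use."""
    isin: Optional[str] = None
    cusip: Optional[str] = None
    sedol: Optional[str] = None
    asset_class: Optional[str] = None
    issuer: Optional[str] = None
    currency: Optional[str] = None
    # ...hundreds of further attributes, relationships and classifications
    extra: dict = field(default_factory=dict)


class VendorAdaptor:
    """Base class for vendor-maintained feed adaptors: each adaptor maps a
    raw feed record into the common model and must be upgraded in lock-step
    with that model."""

    def to_model(self, raw: dict) -> SecurityRecord:
        raise NotImplementedError


class HypotheticalFeedAdaptor(VendorAdaptor):
    # The raw field names below are invented for illustration only.
    def to_model(self, raw: dict) -> SecurityRecord:
        return SecurityRecord(
            isin=raw.get("ID_ISIN"),
            asset_class=raw.get("SECURITY_TYPE"),
            currency=raw.get("CURRENCY"),
            extra={k: v for k, v in raw.items()
                   if k not in {"ID_ISIN", "SECURITY_TYPE", "CURRENCY"}},
        )
```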


The important thing to understand with this proposition is the market segment the solution is relevant for. If we are talking about smaller asset management companies and hedge funds with a few feeds and asset classes, the commercial data model might be a good fit and have the required superset of attributes, adaptors, relationships, etc., built into its model. However, if the problem is small enough (say, a couple of feeds for a couple of asset classes), it might be addressed just as easily by a cheaper home-grown solution, rather than by a mallet used to crack a small nut.


Conversely, at the other end of the market, in global investment banks, custodians and investment managers, the likelihood of the commercial data models having a superset of the data requirements is extremely low. This is particularly true when you consider the impact of globalization and the growth in importance of emerging markets and niche/local data vendors on the Tier 1 organizations. The footprint of these organizations across the world will never be matched by the "superset" theory of the commercial data models.


Another major issue with commercial data models is how these models are managed through the full lifecycle of upgrades and evolutions over 15-20 years. A core element of the commercial enterprise data management (EDM) value proposition is based on having a set of data vendor adaptors maintained and upgraded together with the underlying model (can't have one without the other). This requires the underlying data model to be upgraded to support the upgraded adaptors and vice versa.


It is interesting to contrast the data model upgrade experience with other domains, such as the world of enterprise resource planning (ERP) applications. Reference data in financial services firms is highly dynamic (multiple vendors/data sources, identifiers, cardinalities and conflicting classifications, with constant change driven by data vendors, the business and regulators). Contrast this with the ERP world (relatively static inputs determined mainly by the application vendor) and you can see it is a very different data management problem, and therefore a very different data model maintenance problem.
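
As a rough illustration of that dynamism, the sketch below shows two hypothetical vendor feeds describing the same instrument with only partially overlapping identifiers and conflicting classifications, and a simplistic precedence rule for merging them. The feed contents and the rule are assumptions made for illustration, not a description of any particular vendor or product.

```python
# Two hypothetical vendor views of the same instrument: identifiers overlap
# only partially and the classifications conflict.
feed_a = {"isin": "XS0000000001", "sedol": None, "classification": "Corporate Bond"}
feed_b = {"isin": "XS0000000001", "sedol": "B000001", "classification": "Convertible"}

# A simplistic precedence rule: prefer feed_a, falling back to feed_b for
# anything feed_a lacks. Real firms maintain large sets of such rules, and
# they change whenever a vendor, the business or a regulator changes
# something upstream.
golden_record = {
    "isin": feed_a["isin"] or feed_b["isin"],
    "sedol": feed_a["sedol"] or feed_b["sedol"],
    "classification": feed_a["classification"] or feed_b["classification"],
}

print(golden_record)
# {'isin': 'XS0000000001', 'sedol': 'B000001', 'classification': 'Corporate Bond'}
```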


The maturity and capability to upgrade ERP applications isn't just a function of the billions of dollars spent on R&D (that obviously helps) and almost 40 years of experience. The fundamental difference is that in ERP implementations the application vendor largely controls what data is input, stored and managed, making it a vastly simpler data management issue. Even so, the underlying ERP data problem still requires periodic large-scale upgrades to the data model, delivered as major releases that are in themselves large-scale projects.


The EDM vendors have for too long tried to make the world believe that a far more complex data problem than ERP's could successfully mimic its approach to large-scale upgrades of data models. This is why you rarely hear of a routine upgrade to a security master implementation; instead it turns into a new implementation and vendor selection process. And no matter how closely a firm "implemented according to best practice and vendor guidelines," the upgrade is not going to be as straightforward as advertised. Think about it: in the ERP world, the application vendor controls the data inputs and upgrades are still notoriously difficult. In reference data management, nobody controls the data inputs (N different data vendors), so upgrades are by definition more complex than the most complex ERP upgrades. So, is the easy upgrading of vendor reference data models an unsolvable problem? Quite possibly.


This is what attracts globalized firms to build and maintain the data model themselves. Without a virtually automatic upgrade path (which doesn't exist), a commercial data model can be a liability. It can constrain the business and force it to run to someone else's timeline and priorities. Large firms need to own their model and control its destiny, not for competitive advantage as argued in the past, but to keep control of its evolution for a globalized business. They also understand the difficulty of a model upgrade path and do not want to be tied to monolithic releases promised at some stage in the future. Their businesses are simply vastly more complex than anything they can buy off the shelf. And very few large firms think they can learn much about data modelling for the securities business from a third-party software company; they are probably right.


At PolarLake we believe there is a third way. If you would like to learn how these classic problems can be addressed through genuine technology innovation and the use of semantic technologies, please get in touch. Visit our website at www.polarlake.com.

John Randles is CEO of PolarLake.
