IRD Reference Data Technology Special Report—Virtual Roundtable

From Mainframes to Facebook


Reference data technology vendors regularly push out new tools. What new functionality do you expect to see available in the market in the coming years?

Ranko Batljan, managing director, asset-servicing information technology, BNY Mellon: If I just modify this question a bit and talk about features I would like to see available, two things come to mind.

Firstly, a feature that would help establish data lineage and improve governance. Data is considered intellectual property and, as such, there is a big push to make sure entitlement is managed and use is monitored down to the field level. For example, there is a need to know the source of every field on the golden copy.

Another, more complex example would be keeping track of all vendor-originated fields that are used in decision-making. Say that, based on the country of issue and the difference between the first coupon date and the maturity date, we choose to define an internal asset type and use it to drive a further decision. That further decision could be the type of cash payment (dividend or fixed interest, for example) that gets sent downstream.

In general, at every transformation point we should be able to capture the data elements that participate and their respective sources. That would allow us to determine the sources of every derived data element that is distributed. This quickly becomes a complex multi-dimensional model—using data from multiple sources to populate one golden copy field, or using multiple golden copy fields to derive an additional field.
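To make the lineage idea concrete, below is a minimal Python sketch of capturing, at a single transformation point, the fields and vendor sources that participate in a derived field. The vendor names, field names and the derivation rule itself are hypothetical illustrations, not an actual production model.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class FieldValue:
    """A golden-copy field value plus the vendor fields it came from."""
    name: str
    value: object
    sources: list = field(default_factory=list)   # e.g. ["VendorA.country_of_issue"]

def derive_internal_asset_type(country_of_issue: FieldValue,
                               first_coupon_date: FieldValue,
                               maturity_date: FieldValue) -> FieldValue:
    """Illustrative derivation: classify the instrument and carry forward
    every contributing field's source, so the derived field has full lineage."""
    tenor_days = (maturity_date.value - first_coupon_date.value).days
    asset_type = ("DOMESTIC_LONG_FIXED"
                  if country_of_issue.value == "US" and tenor_days > 365
                  else "OTHER_FIXED")
    return FieldValue(
        name="internal_asset_type",
        value=asset_type,
        # lineage: the derived field records every participating source
        sources=(country_of_issue.sources
                 + first_coupon_date.sources
                 + maturity_date.sources),
    )

country = FieldValue("country_of_issue", "US", ["VendorA.country_of_issue"])
first_coupon = FieldValue("first_coupon_date", date(2011, 6, 1), ["VendorB.first_coupon_date"])
maturity = FieldValue("maturity_date", date(2020, 6, 1), ["VendorB.maturity_date"])

derived = derive_internal_asset_type(country, first_coupon, maturity)
print(derived.value)    # DOMESTIC_LONG_FIXED
print(derived.sources)  # every source that participated in the derivation
```

Capturing sources this way at each transformation point is what would later allow the cost allocation and vendor analysis described below.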

Based on capturing this information, we could build a powerful management information system and use it to determine what data elements are being distributed so we can do correct cost allocation internally and externally. We could then study vendor sources, understand the cost structure, come up with alternative sources and manage our costs more effectively.

Secondly, accurate securities cross-referencing is an important area of security reference data management. While the expectation is for data vendors to provide inputs for this, technology vendors could help in presenting and mapping this data. Being able to visually explore relationships between all security identifiers, exchanges, currencies and places of settlement would be a useful feature.

Robert Brachowski, reference data management product manager, Eagle Investment Systems: There are a lot of tools being developed today, but the ones we see garnering the most attention contain new functions that streamline workflow, cut out manual intervention and result in time savings. Some of the new functionality will enable users to view the entire workflow of reference data—from the point it is requested from data vendors to the point it is sent to any downstream systems. It also provides notifications at various points throughout this process.

Additionally, functionality is emerging that allows users to understand reference data as a whole, rather than as individual components. This functionality provides users with a snapshot view of their reference data, such as ratings, schedules and different market data, and allows them to glance over different types of data and spot issues without the need to delve into each type. We are also starting to see more functionality around streamlining assignment processing. Clients have been asking for their systems to send alerts or query screens throughout the lifecycle of the data.

Lastly, there has been an increase in activity around third-party integration. Today's market is more demanding, and vendors are exploring options for aligning with industry-standard functionality to accelerate time to market for new features.

Tony Brownlee, managing director, data solutions, Kingland Systems: For the types of data we often think of throughout the industry, I expect to see tooling that enables better hierarchy management, better matching of client and counterparty data, and more efficient distribution of data. On the cutting edge, we are seeing tooling that enables the management of unstructured data as a critical area of progress, as well as tools that identify and infer changes based upon data of interest, such as specific clients or securities.

Martin Cole, managing director, SIX Telekurs: We expect to see an increasing number of services that compare data from different sources, potentially in XBRL, and help businesses produce golden copies of reference data for enterprise-wide use. As a data vendor, we already work closely with a number of partners all over the world who provide 'scrubbed' corporate actions. We are closely monitoring the use of XBRL, especially where it can be used for corporate actions reporting. The whole industry would benefit greatly from being able to reduce the burden of interpretation needed to process corporate actions announcements from issuers. It can only have a positive impact on the already award-winning quality of our corporate actions data. At the same time, we are seeing more demand for speed from the market—faster corporate actions in particular.

Brian Sentance, chief executive, Xenomorph: From a business perspective, I believe data management innovation in the coming years will be driven by the need to improve risk management, achieve regulatory compliance and deal with the business model changes that result from these regulations.

The following themes will support these business drivers:

Detail – far more granular detail on data should be available when required by internal processes and external compliance.

Front-to-back data management – to achieve data consistency, and reduce costs and operational risk, data management needs to be extended into the front office from the middle and back office.

Integrated data management – a more integrated approach to data management across all types of data and asset class is essential to reflect how users think about data and what business processes need in a multi-asset trading environment.

Data models – more standards will be adopted but clients still need the flexibility to add new attributes and asset classes themselves, without reverting back to the system vendor.

History and auditability – the need to 'reproduce the world' as at a point in time will increase in importance in explaining to clients and regulators why something was done and how it was done.

Real time – the processes driving data management are becoming more real-time and, as such, data management will have to change from being mainly overnight batch-driven to something more automated and near real time.

Analytics management – deriving data from data should also be part of the data management process, not something external to it.

More of everything is the summary picture, but one where the successful data management vendors will focus hard on how to de-risk data management projects for clients.

Ed Ventura, president, Ventura Management Associates: I believe there will be a number of new tools in the reference data space as technology and business continue to evolve. Looking back over time can help to define the next steps within the industry. I recently spoke at a CRM forum for asset managers and had the chance to take 'a walk down memory lane', where I explored the transition from mainframe transaction processing during the 1970s to the introduction of mass-produced personal computers during the 1980s, which served to analyze what the mainframes held. That was followed during the 1990s by distributed processing, which tied together the processing done on mainframes and PCs. During the 2000s we realized that distributed processing was redundant, so we made a push for centralized data, be it physical or virtual.

Now we see technology freely available in one's pocket, often with the capability of the early mainframes. These devices are being used for information, communication and entertainment—all of which is predictive of reference data capabilities and use. Looking at this trend, I think reference data vendors will probably offer tools that push information into one's pocket and allow for a Facebook-type 'conversation' with the data provider. Imagine knowing in real time, while you're in your automobile, that a company has issued a press release about an oil spill and that you own bonds of that company through a hierarchical relationship. At that point, you might want to drop a trade of that security, so the information that was pushed to you enables you to act.

Data managers are increasingly talking about a need for multiple golden copies. What are the best strategies for implementing this?

Batljan: When it comes to maintaining multiple golden copies, one condition that comes to mind is the need to preserve the integrity of all the data sources. In other words, any manual updates or overrides should be maintained separately from the electronic records that originate from vendors. By doing this, golden copies can be evaluated independently of data source updates; that is, a change in the underlying data can trigger re-evaluation.

Next, every field must be addressable so that different rules can be built based on different fields and different consumers of golden copies. This allows the flexibility to include different fields for different consumers. Further, rules can be used to enable golden copies to draw on different data sources, based on the specification given by the user of each golden copy.
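As a rough sketch of these field-addressable rules, the following Python example keeps vendor records and manual overrides in separate stores and composes a consumer-specific golden copy view from per-field source preferences. The consumer names, fields and vendors are purely illustrative assumptions.

```python
# Vendor-originated records and manual overrides are stored separately,
# preserving the integrity of each source (illustrative data only).
vendor_records = {
    "VendorA": {"coupon": 5.25, "maturity_date": "2020-06-01", "issuer_name": "ACME CORP"},
    "VendorB": {"coupon": 5.25, "maturity_date": "2020-06-01", "issuer_name": "Acme Corporation"},
}
manual_overrides = {"issuer_name": "Acme Corp (Group)"}  # never merged into vendor data

# Field-addressable rules: each consumer specifies, per field, an order of
# preference across sources ("MANUAL" meaning the override store).
consumer_rules = {
    "fund_accounting": {
        "coupon": ["VendorA", "VendorB"],
        "maturity_date": ["VendorA", "VendorB"],
        "issuer_name": ["MANUAL", "VendorA"],
    },
    "client_reporting": {
        "coupon": ["VendorB", "VendorA"],
        "issuer_name": ["VendorB", "MANUAL"],
    },
}

def build_golden_copy(consumer: str) -> dict:
    """Compose a consumer-specific golden copy view without mutating any source."""
    view = {}
    for field_name, preferences in consumer_rules[consumer].items():
        for source in preferences:
            pool = manual_overrides if source == "MANUAL" else vendor_records[source]
            if field_name in pool:
                view[field_name] = {"value": pool[field_name], "source": source}
                break
    return view

print(build_golden_copy("fund_accounting"))
print(build_golden_copy("client_reporting"))
```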

These points illustrate the complexity of building and maintaining multiple golden copies. Depending on circumstances, there are other alternatives that can be considered. They usually consist of maintaining a single golden copy with additional processing at the point of data delivery.

One alternative is to apply overrides and exceptions in the interface layer while maintaining a single golden copy. Another is to respond to specific business requests with data from a particular vendor, even if that value did not make it into the golden copy.

The bottom line here is that the integrity of data sources needs to be preserved to get maximum use of the data. This provides a good base to work from, whether to pursue a strategic direction or simply to meet the business need quickly at minimal cost.

Brachowski: The best strategy for implementing multiple golden copies starts with understanding the need for multiple golden copies. This will help to identify what data elements need to be unique in each golden copy and which ones should be consistent across all golden copies.  Multiple golden copies are typically only necessary for subjective data elements, such as descriptive values or classifications. They are typically not required for objective data elements, such as coupon or maturity date. The next part of the strategy is to determine who is responsible for the different data elements. Objective data elements should be managed in a central repository to ensure consistency across all data consumers. Subjective data may also be managed through a central group to reduce the possibility of requesting the same data from a single vendor multiple times by different business units.  

Brownlee: The phrase 'multiple golden copies' is misleading, but the fundamental business need to manage different business unit preferences is real and growing. Master data management (MDM) strategies, as they have matured in recent years, support these business needs. We suggest a reference data system be implemented in line with an MDM strategy, in which all data sources are stored and mastered in a way that supports both managed reference views ('golden copy' or 'authoritative data') and source-specific federated views of the data. This strategy gives consumers from different lines of business many options for how they receive their data.

Cole: Multiple golden copies are difficult to manage because they are generally fed by sources with different data dictionaries and have multiple downstream systems feeding from them. Recent risk management rules and accounting regulations now mean managing the interactions between such systems is a key challenge. To this end, strong, well-maintained cross-referencing functionality will need to be at the heart of any firm that has more than one golden copy. Probably the easiest way to achieve this is to make incremental changes to each golden copy to make sure the same (industry-standard) coding schemes are used in each – even if the attributes need to be different, the coding that refers to instruments and entities can be harmonized in each system, making it much easier for cross-referencing between systems.

Sentance: Everyone talks about establishing a 'single version of the truth', but the reality for many is that 'truth' is a subjective thing, often determined by external factors such as regulation and accounting standards, and further complicated by the needs of the business. For example, when looking at portfolio valuation from an accountancy point of view, the standards might insist on only using traded or quoted prices of appropriate size. This approach, particularly if you are forced to use non-traded broker quotes, can result in very different numbers from theoretical valuations in risk management, where more consistency with other market prices is instead desired (and understood by regulators), even if this means using more theoretical data and models. Xenomorph's approach to this is to combine the possibility of multiple-sourced data (for example, a 'price' or 'rate' property that can have multiple values determined by a 'data source' qualifier) with granular property and data source access permissioning (meaning that only certain groups of users can access or change certain properties of a particular data source type). Hence multiple copies can exist side by side in a controlled manner, but without duplicating databases, data structures or indeed the data itself.
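The general pattern of multiple-sourced values with permissioned access can be sketched in a few lines of Python. This is a generic illustration of the idea, not Xenomorph's actual API; the group names, sources and prices are assumptions.

```python
class QualifiedProperty:
    """One logical property (e.g. 'price') holding several values,
    each qualified by the data source that produced it."""
    def __init__(self, name: str):
        self.name = name
        self._values = {}        # data source -> value
        self._permissions = {}   # data source -> user groups allowed to read it

    def set(self, source: str, value: float, allowed_groups: set):
        self._values[source] = value
        self._permissions[source] = allowed_groups

    def get(self, source: str, user_group: str) -> float:
        if user_group not in self._permissions.get(source, set()):
            raise PermissionError(f"{user_group} may not read {self.name}[{source}]")
        return self._values[source]

# Hypothetical setup: accounting sees only traded/quoted prices,
# while risk can also use a theoretical model price.
price = QualifiedProperty("price")
price.set("exchange_traded", 101.25, {"accounting", "risk"})
price.set("broker_quote", 100.90, {"accounting", "risk"})
price.set("model_theoretical", 101.60, {"risk"})

print(price.get("exchange_traded", "accounting"))   # 101.25
print(price.get("model_theoretical", "risk"))       # 101.6
# price.get("model_theoretical", "accounting")      # would raise PermissionError
```

The point of the pattern is that the 'copies' are just qualified values of one property, so nothing is duplicated at the database level.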

Ventura: I might be the cynic in this, but I fail to see how anyone can consider having multiple golden copies. If I am cloned based upon my genetic structure, does that make the clone the same as me? Of course the answer is no – the second we both look at something we are processing it differently; even standing next to one another will give us different views and data to process. There will always be variances across multiple databases, no matter how hard one tries to maintain congruency. The purpose of a golden copy is to have a singular, authoritative source of information. More than one source will skew the data maintained. I believe that a single source is the only way to approach a golden copy. That doesn't mean a huge physical database, but rather a designation of the authoritative source for each entity being used as part of a virtual golden copy.

The convergence of market and reference data for risk management systems has been a key theme in the past year. What have system providers and their clients done to meet this need?

Batljan: Recent economic developments have brought a lot of attention to the subject of risk management. One particular area that also employs reference and market data is counterparty risk management. The general opinion is that the technology to do quick risk profiles and exposure assessments—by combining and processing market data and reference data—already exists. What has been lacking is a standardized numbering system to uniquely identify counterparties and the complex corporate structures behind them.

Another aspect is the motivation for companies to carry out diligent risk management. When times are good, it is extremely difficult to make a good business case for spending time and money on these systems. Recently we have witnessed some developments in standardizing legal entity identification. This, together with existing technology capabilities, will help companies manage their risk effectively.

Brachowski: Over the past year, clients have shifted focus from pure data acquisition and quality to making sure that data is being transformed into useful and accessible information. Having the correct reference data is no longer enough; the data must be easily consumable by the systems or users requiring the information. Clients want consistent data they can use for risk management, reporting and other purposes. This has been a tremendous differentiator for Eagle, which can provide, within a single solution, accurate reference data and the information delivery components necessary to produce exposure reports and distribute this information to other investment management systems. Wider integration with tools such as information delivery is extremely important. Clients need to deliver information to their audiences in a timely manner. Without information delivery, the best data in the world offers little value to clients.

Brownlee: There has been a significant focus on flexibly managing and distributing more types of data. For example, with our new 360 Data MDM suite, we have taken software that supports thousands of customers around the world managing customer, product and account data, and expanded it to manage securities data, hierarchies and other types of relationship data, all from one platform. Enterprise risk initiatives require this type of expansive flexibility and scalability to provide accurate data to downstream risk management systems. We have seen cases where using systems to manage this data has helped clients identify millions of dollars of previously unidentified risk concentration in particular client groups.

Cole: SIX Telekurs is in the fortunate position of having a fully integrated market and reference data offering. All our data is processed through a single system, which means that whichever product or solution a customer uses from us, they can integrate it simply with any other data set we provide. We are seeing the need for faster corporate actions. Our VDF Pulse service, which will be released this year, will cater for this need.

Sentance: Risk managers and the systems and processes they use need access to all types of data (such as reference, market, entity, positional and derived data) across all asset classes. So in many ways the risk management department is at the centre of a financial institution's data needs, and this is only likely to intensify as regulators add requirements to mitigate systemic risk as well as individual institutional risk. Recent additional requirements include the ability to drill into the detail of any particular reported number, implying the need to access data in a far easier and more granular manner than before the crisis.

Regulators remain focused on the quality, auditability and timeliness of data feeding risk management engines, and clients are becoming more demanding in terms of the granularity and timeliness of the risk and valuation reports they request. Spreadsheet management of data and analytics remains a problem at many institutions, in that it produces a whole world of data outside the data management mainstream, and hence uncontrolled and unaudited. The pricing analytics used often produce the most valuable data of all—the derived data for curves and valuations—and this needs managing too; that is, not just the standard reference and market data sets but more complex data objects such as spread curves and volatility surfaces. Over the past year Xenomorph has been working with a number of institutions on data management for risk and valuation, and we are pleased to say our approach to managing both data and analytics within a single system is producing productivity benefits for our customers, reducing data costs and helping them reduce their regulatory capital requirements.

Ventura: There seem to be some really cool new products on the market, most of which are geared toward identifying counterparties, hierarchies, corporate actions and family trees. These are helpful in identifying overall risk and in knowing what will affect your portfolios. Imagine if they had been in use before the meltdown!

Many data managers find themselves in a position where they have inherited data management systems, and they cannot disregard the investment that has already been made. How can clients leverage existing investment while combining systems with vendor products to improve reference data management?

Batljan: As with every established business activity, the priority should always be to make sure existing data management solutions keep functioning while finding new and creative ways to meet future demands. As markets evolve and new instruments are traded, settled and reported to customers, there is always work to keep the model up to date while maintaining what already works. Data management is a process of continuous improvement. In cases where legacy platforms are not flexible and agile enough, there are always new products that can help with integration. Web services, various screen emulators and transformation tools can all be used to harness the power of, and leverage the existing investment in, legacy platforms.

One way to be successful is to 'choose the battles' by freeing up resources that can deal with issues in a creative way instead of rebuilding the systems that work already.

By focusing on issues that are closer to the consumers of data, such as improving data distribution, providing new presentation layers and cutting down time to market, we can leverage the existing investments and provide efficiency.

Brachowski: The ideal situation would be to have one data management system within an organization. However, due to budgetary constraints or system requirements, this is not always possible. When using multiple systems together, clients must focus on the hand-off of data and workflow between them; the sharing of data across systems is the largest potential point of failure, so it must be timely and accurate. New data management systems can help by standardizing the communication between systems and by offering the flexibility to operate in a modular environment.

Also, systems that manage the workflow components of data management should be able to integrate updates and statuses from external systems. When looking to identify where to start replacing a component of a company's data management solution, the starting point should be to work on critical projects causing the most pain, continue to increase use of the more open and modular data management system and decommission the less useful system over time. It is not uncommon for a firm to have a main data management system and manage smaller subsets of data on a different system, so it is possible to run both systems concurrently.

Brownlee: Most data management initiatives have one of three major types of activity going on at any time: a focus on (1) strategy, process or governance; (2) technology; or (3) the data itself. If technology has already been the focus, I recommend data managers shift their focus to the data. Existing systems can often deliver much more value simply by adding a better source of data or replacing a less reliable source. Focus on the data; it is why the system exists.

Cole: It is a common problem as our clients move on from first-generation solutions and cope with the changes brought about by mergers and acquisitions. SIX Telekurs maintains a partner programme, which means many technology vendors have interfaces to our data, so hopefully we can support whatever migration process a client chooses. Each of our locations has a team of senior people who have many years of experience in providing consultancy to customers who need more support in implementing projects. Our main data feed, the Valordata Feed, is produced in numerous versions, including standard options such as XML. Any new additions are made in such a way that no one needs to upgrade their systems unless they want to benefit from new features that absolutely require an underlying format change.

Sentance: With this kind of problem there is often some historic confusion between 'centralized data management' and what I would term 'consistent data management'. Access to one consistent set of data does not necessarily require that the data be stored within one centralized data warehouse. Implementing a centralized data management approach necessarily means other systems are retired, which is both costly and risky. I believe a hybrid approach is less risky, where existing data and system assets are exposed into a larger distributed data management architecture, either as a goal in itself or as a migration towards retiring legacy systems. While, from a technical point of view, distributed data management can be taken to mean many things, exposing existing legacy systems through common layers (for example, web services) and the use of technologies such as distributed caching are likely to become prevalent. Within Xenomorph's architecture, the database layer is based around an add-in framework, so that both new and existing technologies can be exposed within the same system, appearing to the user as a single 'virtual' database.
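As a minimal sketch of this hybrid pattern, the following Python example exposes two hypothetical legacy stores behind a common interface and queries them as one 'virtual' database, with a simple cache in front. The adapter names and sample records are assumptions for illustration, not any vendor's actual architecture.

```python
from abc import ABC, abstractmethod

class SecuritySource(ABC):
    """Common layer that each existing system is exposed through."""
    @abstractmethod
    def lookup(self, identifier: str) -> dict | None: ...

class LegacyMainframeAdapter(SecuritySource):
    def __init__(self, records: dict):
        self._records = records          # stands in for a screen-scraped or file-based feed
    def lookup(self, identifier):
        return self._records.get(identifier)

class WebServiceAdapter(SecuritySource):
    def __init__(self, records: dict):
        self._records = records          # stands in for a remote web-service call
    def lookup(self, identifier):
        return self._records.get(identifier)

class VirtualDatabase:
    """Queries the underlying sources in order and caches results,
    so consumers see a single logical store."""
    def __init__(self, sources: list[SecuritySource]):
        self._sources = sources
        self._cache: dict[str, dict] = {}

    def lookup(self, identifier: str) -> dict | None:
        if identifier in self._cache:
            return self._cache[identifier]
        for source in self._sources:
            record = source.lookup(identifier)
            if record is not None:
                self._cache[identifier] = record
                return record
        return None

vdb = VirtualDatabase([
    LegacyMainframeAdapter({"US0378331005": {"name": "APPLE INC", "type": "EQUITY"}}),
    WebServiceAdapter({"XS0104440986": {"name": "SAMPLE BOND", "type": "FIXED_INCOME"}}),
])
print(vdb.lookup("XS0104440986"))
```

The existing stores are left in place; only the thin adapters and the routing layer are new, which is what keeps the migration risk low.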

Ventura: The core environments of many companies have been around since the mainframe era (the 1970s). Systems have been built around, over and in conjunction with those core capabilities. In a number of cases the stress placed on the environments has caused significant problems. The main issue is that many peripheral systems depend on data to be fed to them in a manner consistent with the existing capabilities. The expense of changing not only the core system, but the ancillary systems, is prohibitive. To mitigate the expense and to ensure capabilities are up-to-date we're seeing a move to services (service-oriented architecture) and widgets. The core data is maintained and embellished where needed; it is called by the consuming applications as a service, which is then applied to the respective function. As vendors expand their product suites and offer more in terms of services, we'll see the further incorporation of diversified data within legacy applications.

Following the financial crisis, the market has seen an increase in collaboration between different vendors. Is there enough collaboration in the reference data technology market? How can more collaboration help data consumers?

Batljan: Without going into concrete examples, it makes a lot of sense for technology vendors to create alliances with other service providers if they have compatible products. By combining complementary offerings, these vendors can create an 'off-the-shelf' product from the customer's perspective.

This approach insulates customers from the complexity of the implementation. It also draws on the proven economic theory of comparative advantage, freeing customer resources to focus on their core competencies. Considering how compelling the above reasons are, and recent economic conditions, one would expect to see even more examples of this kind of collaboration.

Brachowski: Today's reference data vendors are looking to fill product gaps and are more open to working with third parties to offer the market a more complete solution. Today's economy doesn't give us the luxury of time to develop and test new ideas, so there is always room for more collaboration. If the industry could agree on single standards for things such as code values and identifiers, consumers would have a much easier time comparing data across market data vendors.

Brownlee: Yes, I've seen great collaboration throughout the industry. At Kingland Systems, we have collaborated with many data vendors, as well as undertaking a significant collaboration with IBM to deliver financial services-optimized market data management solutions, which we announced in October 2009. Innovative technologies and practical data and industry expertise have contributed greatly to this collaboration and to our 360 Data solution suite.

Cole: Clearly, if you take a look at our history, SIX Telekurs is a very collaborative company. We take part in industry working groups, co-chair the FISD, co-chair the MDPUG for ISO 15022 development and chair an ISO working group subcommittee; we are a member of ANNA and we use as many ISO standards in our data presentation as possible. We are happy to collaborate on the development of standards for the industry we serve.

On the other hand, do our customers want us to collaborate too closely with our competitors? Would they really like a single product, or does the innovation at our company and at our competitors lead to an improved service for all of our customers? Can we do more? Of course we can always do more, but the problems of the reference data world are too big for a group of vendors alone to solve. The solution really needs collaboration between regulators, banks and vendors. I wonder if there really is the appetite in the market to address such a wide-ranging problem, given the constrained resources people work with now. We continue to support standards, and provide informed consultancy and support to help our customers get the best from the services they have subscribed to.

Sentance: Certainly the crisis has put momentum behind data management projects, and with the increased focus on risk this has led many vendors to re-examine their existing offerings against the requirements now demanded by the market. Xenomorph has been pleased to partner with both GoldenSource and Thomson Reuters in extending its data management offerings into managing the more complex data sets and analytics required by risk management departments. On a broader basis, the contribution of vendors to projects such as the EDM Council Semantics Repository is interesting, as is trying to understand and influence the direction of data standards and new ideas such as the 'data utility' put forward by the European Central Bank. There is so much change in the market, and I think collaboration between data management vendors, data vendors and clients is necessary to ensure new rules and regulations are grounded in reality, not in the political need to impress voters or in the bureaucratic aim of gaining new powers.

Ventura: There is never enough collaboration to further the industry, and it is true that conflict will bring people together. There are also a good number of issues and problems that need to be solved. A number of facilitators are out there, such as the EDM Council and the FISD; however, the bulk of change will come from the commercial side. The facilitators are clearly articulating the problems; the vendors are the ones who need to solve them, and recently they have worked together to bring strong products to market. Every problem solved, or methodology agreed on, helps consumers. We all want to succeed, and most do not want success at the expense of others. A new era has arrived, where working together doesn't necessarily mean buying your competition.
