December 2019

How to De-Risk Your Risk Data (with Self-Service)

By Keith Man, Head of Asia Pacific.

Maintaining risk data integrity in the financial markets has been a notoriously difficult problem for financial institutions to solve. 

Risk data is critical: financial institutions use it to monitor their own financial health, and regulators use it to monitor the market as a whole and determine how exposed firms are to each other and to market stresses. 

During the financial crisis, regulators found that banks struggled to provide accurate and complete aggregated risk data that could quickly demonstrate their financial health. The regulators needed to know how liquid the banks were, how exposed they were, and how much market stress they could handle. To derive this information, banks had to pull together high volumes of data in vastly different formats, which compounded the difficulty of providing complete and accurate answers.

Since the financial crisis, regulators have been demanding complete and accurate risk data, which means reconciling it back to source systems. In 2013, the Basel Committee on Banking Supervision issued its standard number 239 (known as BCBS 239), which set out principles for effective risk data aggregation and risk reporting, with compliance required of global systemically important banks from 2016. These principles include risk data reconciliation and validation. 

To date, most global and domestic systemically important banks are still struggling to comply with these principles. 

So why is it so difficult?

  1. The data is complex

The data underlying credit, market, liquidity and operational risk metrics is typically very wide (many attributes) and very long (many records). Institutions may also take differing views on how these metrics should be calculated, and those views can change over time and as regulators evolve their thinking.

  2. The regulation is complex

Risk practitioners have to navigate a maze of constantly changing global and national regulations that affect the data behind risk metric and capital calculations.

  3. The technology is inflexible

Risk managers and risk platforms are net receivers of data: they do not control the platforms that send them the data they run risk calculations on. They receive data from a broad range of systems built on different databases, architectures and schemas.

Historically, data integrity and reconciliation tools have been built around fixed schemas for matching data, which means data has to be pre-formatted and normalised (via ETL) before it even reaches the tool. Fixed-schema matching struggles to keep up in the risk space: it gives firms no flexibility to adapt as regulation and interpretations change, and it copes poorly with very wide data sets. Every change usually results in a costly and time-consuming IT project.
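To make the contrast concrete, here is a minimal sketch of schema-flexible matching. It is purely illustrative and is not Duco's implementation: the record layouts, field names, mapping and tolerance are all hypothetical. The idea is that a user-maintained field mapping lets two differently shaped feeds be reconciled directly, whereas a fixed-schema tool would first need an ETL step to force both feeds into one agreed layout.

    # Illustrative only: a user-editable field mapping reconciles two feeds with
    # different schemas directly, without a separate ETL normalisation step.
    # All record layouts, field names and tolerances here are hypothetical.

    # Extract from the risk platform (its own column names).
    risk_platform = [
        {"trade_id": "T1", "book": "RATES", "pv": 1_000_000.0},
        {"trade_id": "T2", "book": "FX", "pv": -250_000.0},
    ]

    # Extract from an upstream source system (different schema, extra fields).
    source_system = [
        {"TradeRef": "T1", "Desk": "RATES", "PresentValue": 1_000_000.0, "Trader": "AB"},
        {"TradeRef": "T2", "Desk": "FX", "PresentValue": -249_000.0, "Trader": "CD"},
    ]

    # The business user declares how fields correspond; a change in regulation or
    # interpretation means editing this mapping, not rebuilding an ETL pipeline.
    FIELD_MAP = {"trade_id": "TradeRef", "pv": "PresentValue"}

    def reconcile(platform_rows, source_rows, mapping, tolerance=0.0):
        """Match rows on the mapped key field and flag breaks outside the tolerance."""
        source_by_key = {row[mapping["trade_id"]]: row for row in source_rows}
        breaks = []
        for row in platform_rows:
            other = source_by_key.get(row["trade_id"])
            if other is None:
                breaks.append((row["trade_id"], "missing in source system"))
            elif abs(row["pv"] - other[mapping["pv"]]) > tolerance:
                breaks.append((row["trade_id"], "present value mismatch"))
        return breaks

    print(reconcile(risk_platform, source_system, FIELD_MAP, tolerance=500.0))
    # [('T2', 'present value mismatch')]

In this model, responding to a new regulatory interpretation means editing the mapping or the tolerance rather than commissioning a new ETL project, which is the kind of flexibility the risk space needs.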


Old problem. New technology.

At Duco, we believe the only way round this problem is to empower the experts. Risk practitioners live in a data world that is complex and voluminous, and whose formats they have limited control over. They are also in charge of critical systems performing important computations that should not be changed without very careful consideration. 

As long as experts such as risk practitioners are relying on code or time-consuming IT projects to deal with data challenges, they will forever be playing catch-up. With the Duco platform, by contrast, users can rapidly build integrity controls themselves, under a robust user governance structure, and so comply with new regulatory interpretations quickly.

The data flooding into risk departments, and into firms in general, is only going to grow in volume and complexity. It’s clear that legacy, IT-heavy solutions cannot cope with the data of today, let alone the data of the future. Only by using the right tools will firms be able to survive and thrive in this new environment.
