September 2022

Five ways financial data is scuppering your automation efforts

Data management causes financial organisations serious stress.

44% of senior professionals in financial services firms say the amount and complexity of data their business has to handle is “unmanageable with their current systems and processes.”

If that statistic worries you, spare a thought for the 41% who said they had actually lost sleep over data reconciliation.

Of course, financial firms’ handling of data is of the utmost importance, whether that’s trade reporting, collateral management, nostro accounts or anything in between.

For example, parties to a trade must carry out reconciliations to confirm the economics before it can move to clearing and settlement.
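To make that concrete, here is a minimal sketch, in Python, of the kind of check such a reconciliation performs: comparing the economic fields of two records of the same trade held by the two parties. The field names and the price tolerance are hypothetical illustrations, not a prescribed standard.

```python
# A minimal sketch of an economic-terms reconciliation between two records of
# the same trade. Field names and the tolerance are hypothetical.
from decimal import Decimal

ECONOMIC_FIELDS = ["trade_date", "quantity", "price", "currency"]
PRICE_TOLERANCE = Decimal("0.0001")  # allowable rounding difference on price

def reconcile(our_record: dict, their_record: dict) -> list:
    """Return a list of breaks; an empty list means the trade can move on to clearing."""
    breaks = []
    for field in ECONOMIC_FIELDS:
        ours, theirs = our_record.get(field), their_record.get(field)
        if field == "price":
            if abs(Decimal(str(ours)) - Decimal(str(theirs))) > PRICE_TOLERANCE:
                breaks.append(f"price mismatch: {ours} vs {theirs}")
        elif ours != theirs:
            breaks.append(f"{field} mismatch: {ours} vs {theirs}")
    return breaks

print(reconcile(
    {"trade_date": "2022-09-05", "quantity": 1000, "price": 101.25, "currency": "USD"},
    {"trade_date": "2022-09-05", "quantity": 1000, "price": 101.2501, "currency": "USD"},
))  # [] - the 0.0001 price difference is within tolerance
```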

Or consider the need to quickly and correctly reconcile multiple and growing data points to meet financial regulations such as BCBS 239, Sarbanes-Oxley, Basel III, and numerous others — or risk heavy fines.

Data errors and omissions can lead to inefficient, inflexible operations, slower time-to-market, financial loss and damage to a firm’s reputation. Conversely, handling data more quickly and accurately makes a company leaner, more efficient, more transparent and more competitive.

The prevalence of manual processes in financial services

In theory, automating business operations for mission-critical data (which means consuming, checking, processing, reporting and publishing data without manual work) can solve these problems, delivering the benefits outlined above, as well as eliminating human error.

While most data management processes are automated, the “last mile” has proven frustratingly difficult due to the enormous range of systems, data formats, and processes used.

Today, hundreds of thousands of people in capital markets firms still input, transform, reconcile and enrich data manually, often on spreadsheets. It’s reminiscent of the ranks of accountants in pre-PC offices.

They carry out tedious work that adds nothing of value to the business. This is bad for your team and your firm. Automation frees your people up from mindless, meaningless tasks and allows them to focus on doing something important. That’s a win for everyone.

Is automation really impossible?

Why is it that 44% of data professionals think “the different types of data they have to deal with as a company makes it difficult to reconcile anything without manual processes”?

Let’s look more closely at why automating end-to-end management of financial data has proven so difficult.

The data automation challenge in financial services can be broken down into five key areas: variety, change, scale, lifecycle and control.

A near-endless variety of financial data types and formats

Organisations must handle a near-infinite range of data categories. Investment managers wrangle transaction, client, product, security master, pricing and market data. Different trading systems, pricing sources and market data providers all present that data in different formats.

New initiatives and instruments give rise to new forms of data on a near-daily basis, and you need to be able to deal with all of them. It is impossible to draw a boundary around all the data your firm will need to manipulate at any given point in time.

The traditional response to all this variety of data? Procure or build a variety of solutions, each optimised for a particular format or use case.

However, selecting or building and implementing multiple point solutions requires time and investment. Each is hard-coded to accept data in a specific schema, meaning expensive and slow extract-transform-load (ETL) tools are required when moving data from one system to another.
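As an illustration, here is the kind of per-source mapping an ETL layer typically ends up hard-coding so that one system’s output fits the schema another system expects. All field names and formats below are invented for the example.

```python
# Illustrative only: a hard-coded mapping from one trading system's schema to
# a canonical schema a downstream system accepts. Field names are hypothetical.

def map_system_a_to_canonical(row: dict) -> dict:
    """Transform a record from 'system A' format into the canonical format."""
    return {
        # System A sends dates as YYYYMMDD strings; the target wants ISO dates.
        "trade_date": f"{row['TradeDate'][:4]}-{row['TradeDate'][4:6]}-{row['TradeDate'][6:]}",
        "quantity": int(row["Qty"]),
        "price": float(row["Px"]),
        "currency": row["Ccy"],
    }

print(map_system_a_to_canonical({"TradeDate": "20220905", "Qty": "1000", "Px": "101.25", "Ccy": "USD"}))
# Every new source or format needs another mapping like this one, plus testing
# and deployment, which is why point solutions are slow and expensive to change.
```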

A common worst-case scenario is that the resulting solutions are user-unfriendly. Simple actions, such as marking a reconciliation as failed, often require the IT team to write a script. It’s hardly surprising that data handlers resort to easier-seeming spreadsheets, despite the increased risk of errors.

The relentless pace of change

Even when the forms of data stay the same, the data points themselves can change.

Corporate actions, for example, can force data point alterations through changes to pricing, securities identifiers, dividend amounts and company industry data. Delays in updates and other system errors can leave data out of date, and that stale data leads to breaks when the team tries to automatically reconcile it against up-to-date records.
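For instance, in a simplified, hypothetical sketch, a corporate action that changes a security’s identifier will break an exact-match reconciliation in any system that hasn’t yet picked up the update:

```python
# Hypothetical example: an identifier change from a corporate action is applied
# in one system but not in another, so otherwise identical positions no longer match.

front_office_position = {"security_id": "NEWCO123", "quantity": 5000}  # identifier updated
back_office_position = {"security_id": "OLDCO999", "quantity": 5000}   # still holds the stale identifier

def positions_match(a: dict, b: dict) -> bool:
    return a["security_id"] == b["security_id"] and a["quantity"] == b["quantity"]

print(positions_match(front_office_position, back_office_position))
# False - a reconciliation break caused purely by stale reference data
```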

Change also comes about as regulators progressively tighten standards. For instance, the Commodity Futures Trading Commission (CFTC) rules rewrite goes live on 5 December 2022, incorporating 78 changes, from updating old data fields to creating new ones.

The rewrite also adds Unique Transaction Identifiers (UTIs) and Unique Product Identifiers (UPIs). There are new and updated validation rules, as well as revised reconciliation, reporting and error notification requirements.

Similarly, the European Market Infrastructure Regulation (EMIR) Refit will finish a stepwise implementation in Q4 2023 that introduces 80 new reportable fields, taking the total number up to more than 200.

Scaling makes everything harder

The volume of data flowing through the financial system is highly dynamic. Market volatility, such as a spike in trades, creates vastly more data. This can result in bottlenecks for capital markets firms.

For example, when Covid-19 hit in Q1 2020, the number of global equity trades rocketed 37.81% year-on-year to 8.6 billion. As a result, contracts allocated after the trade date rose by a factor of 15 during the period, exposing operational vulnerabilities at many firms.

These are particularly acute when the back office has been right-sized for business-as-usual. Managers can’t onboard new team members quickly enough, nor shed the extra resource when the rush is over.

As a result, a team sized to be “good enough” for normal conditions has to work overtime to get through the backlog during periods of market volatility.

Keeping track of your data throughout its lifecycle

Data often moves through many systems in an organisation during its lifecycle. Along the way, it can get transformed, manipulated and enriched by multiple teams in various departments who need to work on the same dataset for different purposes, such as reporting or risk modelling. This means many different versions of the data can exist in systems across the middle and back offices. Firms often struggle to match and reconcile these thousands of data points.

Spreadsheets create another problem here. Often the people who created the macros have moved on from the company, taking the knowledge of how they work with them and leaving no documentation. No one is left who clearly understands what is happening to the data.

This makes it incredibly difficult to accurately track data as it moves through your systems architecture. Yet that traceability is a vital requirement for financial services firms, which is why many of the brilliant and highly useful data manipulation tools on the market simply don’t work for this industry.

Ever more technology is being created to handle the amount and complexity of data, adding yet more systems for data to pass through, so an already challenging problem will only become more acute.

And end-to-end automation seems further away than ever.

Enabling agility – without compromising on control

Firms aspire to efficiency and agility so they can remain competitive and take advantage of market opportunities and software innovation. However, this fundamental business driver puts pressure on the traditional approach to data governance.

Under this model, the operations or IT department signs off on a requirements document created by the business. It details how to carry out certain tasks, such as data normalisation, which are then implemented, tested and deployed.

While this helps companies in their important quest to control their data and reduce business and regulatory risk, it is cumbersome and limits time-to-market. Bottlenecks form as the company tries to innovate, and the go-to workaround is often our old friend, the spreadsheet. Working in spreadsheets increases the risk of errors or omissions. In other words, financial services firms may be increasing, not mitigating, risk by clinging to their standard three-line controls.

Many firms are looking for a new, hybrid operating model in which operations and IT work more closely together. Technology such as no-code tooling gives end-users more power and frees up development resources. These solutions allow data experts to rapidly build controls to aggregate, cleanse and standardise data on the fly.
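As a rough sketch of what such a control might do, whether it is configured in a no-code tool or written in code, here is a simple standardisation step in Python. The rules and field names are hypothetical.

```python
# A sketch of a standardisation control a data expert might define.
# In a no-code tool this would be configured rather than written; the rules
# and field names here are hypothetical.

COUNTRY_ALIASES = {"UK": "GB", "U.K.": "GB", "UNITED KINGDOM": "GB"}

def standardise(record: dict) -> dict:
    """Cleanse and standardise one inbound record into a consistent shape."""
    country = record["country"].strip().upper()
    return {
        "isin": record["isin"].strip().upper(),
        "country": COUNTRY_ALIASES.get(country, country),
        "notional": round(float(str(record["notional"]).replace(",", "")), 2),
    }

print(standardise({"isin": " gb0000000000 ", "country": "U.K.", "notional": "1,250,000.00"}))
# {'isin': 'GB0000000000', 'country': 'GB', 'notional': 1250000.0}
```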

However, this shift worries data professionals, who don’t feel they have enough control over the processes being created to manipulate data.

Solving these challenges requires a different approach

These five data challenges – variety, change, scale, lifecycle and control – have long plagued financial services. Technology implemented to address them has often made them worse.

But new technology makes it possible to tackle these problems in a different way. No-code, natural language programming, machine learning and cloud computing are changing how firms can consolidate, standardise and reconcile their data.

Visit the Duco Platform page to see how new technology transforms how you think about data automation.