January 2024

Data-centricity: the operating model for T+1 and beyond

By James Maxfield, Head of Product and Solutions

We’ve previously discussed how the traditional operating model should adapt to meet the challenges of shortening settlement cycles, how firms can discover exceptions more quickly, and why batch processing needs a new approach. In this article, we explore the many benefits of rethinking your entire operating model.

The move to T+1 settlement cycles is an opportunity to effect meaningful change in your organisation. Some firms may have to deploy a more tactical fix, but others are looking to secure long-term benefits: at several firms we’ve spoken to, T+1 projects are being funded from the change-the-bank budget. And if transformation is the goal, nowhere is it more impactful than in re-examining your operating model.

Let’s explore why T+1 offers a prime opportunity to embrace a data-centric operating model, and what happens when you do.

The next stage in operating model evolution

Prior to the financial crisis, financial institutions were top-line driven. The focus changed as the industry emerged from the turbulence of 2007-2008 and tighter regulations came into force. Capital efficiency became the top priority, with risk allocation unsurprisingly given greater prominence.

From the mid-2010s, firms also began to think seriously about their bottom lines. This grew into a drive for operational efficiency, which continues today and has laid the foundations for another shift: towards data-driven decision making. Why? Because bad data is the cause of a significant amount of technology and personnel cost, as well as a source of operational risk.

The old way of managing data exceptions, which are driven by a convoluted technology landscape and the growing volume and complexity of data, is increasingly unsuitable. Instead, the trend is towards thinking holistically about the business and making it more efficient across the board.

And when it comes to efficiency, fixing data issues at source can deliver significant savings.

How does a data-centric operating model work?

Data-centricity, as the name suggests, means shifting your focus to data, which enables your processes, rather than exceptions, which disrupt them.

Exceptions, after all, are just symptoms of data problems upstream. Yet instead of fixing those issues in the first place, most firms continue to resource for cleaning up the mess those problems cause at the very end of the data lifecycle. A significant portion of operations workers perform the role of ‘human APIs’, manually moving, processing and checking data. Even when systems are involved, these are often point solutions that add further complexity and expense.

A data-centric approach would be to consider the data requirements of your process. For example, you’ll need information such as allocation details, settlement instructions, settlement location and reference data. Consider where the necessary data comes from, how trustworthy it is, and whether it needs matching, transforming, validating or reconciling. This way you can spot potential blockers and mitigate them before they disrupt the process.
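
To make that concrete, here is a minimal sketch of what such an upstream data-requirements check might look like. The field names and the ISIN length rule are illustrative assumptions for the example only, not a prescribed schema or a description of any particular platform.

```python
# Minimal sketch of an upstream data-requirements check for a trade record.
# Field names ("allocation_id", "ssi", "settlement_location", "isin") are
# illustrative placeholders, not a real schema.

REQUIRED_FIELDS = ["allocation_id", "ssi", "settlement_location", "isin"]

def check_trade_record(trade: dict) -> list[str]:
    """Return a list of data issues found in a single trade record."""
    issues = []
    for field in REQUIRED_FIELDS:
        value = trade.get(field)
        if value is None or str(value).strip() == "":
            issues.append(f"missing or empty field: {field}")
    # Example of a simple validation rule: ISINs are 12 characters long.
    isin = trade.get("isin", "")
    if isin and len(isin) != 12:
        issues.append(f"ISIN has unexpected length: {isin!r}")
    return issues

# Records that pass can flow straight through; records with issues are
# routed back to the data owner before they ever become settlement breaks.
trades = [
    {"allocation_id": "A1", "ssi": "DTC 0123", "settlement_location": "DTC", "isin": "US0378331005"},
    {"allocation_id": "A2", "ssi": "", "settlement_location": "DTC", "isin": "US037833"},
]
for trade in trades:
    print(trade["allocation_id"], check_trade_record(trade))
```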

This is a proactive way of thinking about data, as opposed to reactive exceptions management, where you have to wait until exceptions have surfaced before you can do anything about them.

A data-driven approach in action

Let’s look at a specific example to demonstrate how this kind of operating model works in practice.

Incorrect standard settlement instructions (SSIs) are a common cause of trade failures. It’s possible to deal with these exceptions in a T+2 world, but it’s much more challenging under a T+1 timeframe.

A data-centric approach would be to put upstream processes in place to verify the accuracy of SSIs with your counterparties. This could be a weekly check against ALERT or an internal data store at each counterparty. This ensures the data is accurate and removes a major source of downstream process errors.
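
As an illustration of the shape such a periodic check could take, here is a simple sketch in Python. The data sources and counterparty identifiers are hypothetical stand-ins for a firm’s golden source and booking system; this is not an ALERT integration or a specific vendor workflow.

```python
# Hypothetical sketch of a periodic SSI verification job. The reference
# lookup (load_reference_ssis) stands in for whatever golden source a firm
# uses, e.g. an internal data store agreed with each counterparty.

def load_reference_ssis() -> dict[str, str]:
    # In practice this would query the firm's golden source of SSIs.
    return {"CPTY-001": "DTC 0123 / ABA 021000021", "CPTY-002": "CREST 98765"}

def load_booked_ssis() -> dict[str, str]:
    # SSIs currently held against open trades in the booking system.
    return {"CPTY-001": "DTC 0123 / ABA 021000021", "CPTY-002": "CREST 12345"}

def verify_ssis() -> list[str]:
    """Compare booked SSIs to the reference source and report mismatches."""
    reference = load_reference_ssis()
    mismatches = []
    for counterparty, booked in load_booked_ssis().items():
        expected = reference.get(counterparty)
        if expected is None:
            mismatches.append(f"{counterparty}: no reference SSI on record")
        elif booked != expected:
            mismatches.append(f"{counterparty}: booked {booked!r} != reference {expected!r}")
    return mismatches

if __name__ == "__main__":
    for issue in verify_ssis():
        print(issue)  # flag upstream, before the trade is due to settle
```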

It’s worth noting that this isn’t just a case of taking the same data quality controls and putting them further upstream. A single piece of bad data can cause multiple breaks, especially when it is used by different teams and functions across your organisation. Fixing each data issue at source can prevent multiple breaks and create trust in your data (more on this later).

SSIs aren’t the only potential cause of trade failure. If you address all of the predictable causes in the same way, you’ll greatly enhance straight-through processing and remove the need for a resource-intensive exceptions-management process. There will still be exceptions (trade booking errors, for instance, are unavoidable), but they will be far fewer in number. Where possible, you’ll be able to use them to identify new data issues and fix those too.

And here’s another great example from collateral operations that one Duco customer has put into practice. They still run tri-party custody recs, cash and securities settlement recs and collateral subledger to general ledger reconciliations. But they are proactively addressing the cause of potential issues in these checks by implementing data quality controls upstream. They perform checks on their legal data and their transaction data at regular intervals and validate their margin calls on T0. All this means fewer problems downstream.
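
For illustration only, a simplified sketch of what a T0 margin call validation could look like is shown below. The fields (exposure, collateral, threshold, minimum transfer amount), the numbers and the tolerance are invented for the example and will differ from any real collateral agreement or customer setup.

```python
# Illustrative T0 margin call validation with made-up, simplified inputs.

def expected_call(exposure: float, collateral: float, threshold: float, mta: float) -> float:
    """Compute the margin call we expect, given current exposure and collateral."""
    shortfall = exposure - collateral - threshold
    return shortfall if shortfall > mta else 0.0

def validate_margin_call(counterparty_call: float, exposure: float,
                         collateral: float, threshold: float,
                         mta: float, tolerance: float = 1_000.0) -> bool:
    """Flag a call for review if it differs materially from our own calculation."""
    ours = expected_call(exposure, collateral, threshold, mta)
    return abs(ours - counterparty_call) <= tolerance

# A call that disagrees with our calculation is investigated on T0,
# rather than surfacing later as a settlement or subledger break.
print(validate_margin_call(counterparty_call=2_500_000,
                           exposure=12_000_000, collateral=9_000_000,
                           threshold=500_000, mta=250_000))
```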

This process isn’t new for firms. Asset managers onboarding a new fund, for instance, will already consider the operational needs and impact of doing so. But the traditional approach is to think about the reconciliation needs – and therefore exceptions – and how the team will manage those.

A data-centric approach still considers those operational needs, but from a different perspective: one where the resource and risk-heavy manual work can be mitigated by upstream processes.

The benefits of a data-centric operating model

Data-centricity applies to much more than post-trade. T+1 simply provides the opportunity to evolve the operating model and achieve efficiency, agility and trust in the long term.

You’ll find many benefits once you focus on your data.

Enhanced STP

Firms often anticipate that a certain percentage of trades will fall out of straight-through processing, and resource for these exceptions. But as trading volumes grow and system and process complexity increases over time, that small fraction becomes a significant number of breaks every day. Some institutions have hundreds of thousands to deal with.

A data-centric approach for T+1 targets the cause of automation failure – bad data – to further enhance post-trade STP. Exceptions become rare indicators of new data issues to address, not a fact of life for operations teams.

Lower costs

You may have hundreds, thousands, or even tens of thousands of staff in the middle and back offices whose work amounts to cleaning up after poor data. These teams could instead be working on activities that add value for the business, putting your talent onto meaningful tasks. Turnover among staff in these manual processing roles can be very high.

Data-centricity goes hand-in-hand with the latest in data automation technology, so you’ll also make significant savings on legacy on-premises systems. Because this operating model spans your whole organisation, it calls for a cloud-based SaaS platform.

Only this technology can provide the scalability and accessibility that high-volume, global data operations require. A key benefit of data automation is that it enables you to consolidate your processes onto a single flexible platform, which again removes the need for a complex web of expensive point solutions.

Reduced risk

Inaccurate data, and the complex and opaque processes used to try and fix it, introduce operational risk to your business. A data-centric operating model, enabled by a data automation platform, removes the manual effort in managing data and standardises processes. You can easily access accurate, timely data to monitor and report on risk.

Data you can trust

Fixing data at source ensures that everyone in the organisation has access to clean, trusted data. It removes the need for multiple teams across functions to transform, reconcile and validate the same data independently. It also improves transparency, reporting and compliance, as there are fewer processes and systems in the data lifecycle to understand.

It all starts with your data

Duco customers are increasingly moving towards data-centricity as they seek to unlock greater operational oversight and the ability to fix errors at source. T+1 is a good proving ground for this data-centric way of thinking.

Once you’ve seen the impact it can have in this area, the next step is to expand to other areas of your business until your entire enterprise is data driven. You can then achieve the benefits outlined above at massive scale.

Find out more about how Duco can help you meet the challenges of T+1 and move towards a data-centric operating model.