November 2023

GreySpark Partners on automating operations on mission-critical data

A longstanding problem with post-trade data management at major financial institutions is the notable deficiency of middle and back-office systems, even though these teams are among the most critical to a capital markets firm's performance and risk exposure.

For example, with firms relying on a wide array of siloed technology systems for both data sourcing and workflow execution, operations workflows have become splintered, complex and riddled with idiosyncrasies.

This leads to poor outcomes: pervasive duplication of effort, additional costs and, more importantly, an increased risk of error. Reconciliations in particular cause significant operational pain, often forcing staff to spend most of their day manipulating and checking data whose accuracy and reliability are poor, and to use spreadsheets to plug any gaps.

In partnership with financial software company Duco, GreySpark found that an answer to this data problem could lie in an enterprise-wide, data-centric operating model, characterised by a single data automation system for the entire post-trade landscape. This in itself could save firms time and money, both at the point of use and by ensuring that data entering any workflow is clean, reliable and auditable.
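To make that concrete, the minimal Python sketch below shows one way the idea could look in practice. It is a hypothetical illustration, not Duco's product or API, and field names such as trade_id and isin are assumptions: every record passes one shared set of checks at the point of entry and carries an audit trail of the checks it passed.

```python
# A minimal, hypothetical sketch (not Duco's product or API) of cleaning
# data once at the point of entry: every record passes one shared set of
# checks and carries an audit trail before any workflow can consume it.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Trade:
    trade_id: str   # illustrative field names, not from the report
    isin: str
    quantity: int
    price: float

@dataclass
class AuditedRecord:
    trade: Trade
    checks_passed: list[str] = field(default_factory=list)
    ingested_at: str = ""

def ingest(raw: dict) -> AuditedRecord:
    """Validate and normalise one raw record at the point of entry."""
    trade = Trade(
        trade_id=str(raw["trade_id"]).strip(),
        isin=str(raw["isin"]).upper().strip(),
        quantity=int(raw["quantity"]),
        price=float(raw["price"]),
    )
    checks = []
    if len(trade.isin) == 12:
        checks.append("isin_length")
    if trade.quantity != 0:
        checks.append("non_zero_quantity")
    if trade.price > 0:
        checks.append("positive_price")
    # Every downstream workflow sees the same record and the same audit trail.
    return AuditedRecord(
        trade=trade,
        checks_passed=checks,
        ingested_at=datetime.now(timezone.utc).isoformat(),
    )
```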

[Chart: data management technology strategies among middle and back offices at financial institutions. Source: GreySpark analysis]

As the chart shows, the data-centric (automation) model is leading the way among data management technology strategies in the middle and back offices of financial institutions. But how and why is it so effective?

This week, GreySpark analyst Elliott Playle sat down with Meri Paterson, manager of GreySpark's fintech advisory team, to discuss the data-centric model in more detail.

Elliott: Hi Meri, thank you for joining me today. In terms of the data-centric model, I'm guessing that some of its benefits are efficiency, falling long-run costs and interoperability?

Meri: Yes. For background, in the report with Duco we were tasked with talking to Duco's customers about what value they could get out of this model. We gathered those insights and presented them back to Duco.

So you've got your data sources, millions of them, and they spew data at you all day long. If you work in reconciliations, the question is how you get your job done throughout the day, match everything, and avoid mistakes.

You could approach it by developing a really good reconciliation tool that automates matching and uses AI. Or you could tackle the problem upstream, where the data comes from, and reduce the number of data inputs there are to match in the first place.
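As a rough illustration of that first approach, the core of a reconciliation step can be sketched as matching two feeds on a key and flagging the breaks. The field names and break categories below are illustrative assumptions, not taken from the report.

```python
# A minimal sketch of the downstream approach: a reconciliation step that
# matches two feeds on a key and flags the breaks for investigation.
def reconcile(internal: list[dict], external: list[dict], key: str = "trade_id"):
    """Match records from two sources on `key`; return matches and breaks."""
    ext_by_key = {row[key]: row for row in external}
    matched, breaks = [], []
    for row in internal:
        other = ext_by_key.pop(row[key], None)
        if other is None:
            breaks.append(("missing_in_external", row))
        elif row["quantity"] != other["quantity"] or row["price"] != other["price"]:
            breaks.append(("field_mismatch", row, other))
        else:
            matched.append(row)
    # Anything left in the external feed was never seen internally.
    breaks.extend(("missing_in_internal", row) for row in ext_by_key.values())
    return matched, breaks
```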

Maybe at the source of the data you can work out whether there is something you should enrich or format in some way. If the same data is going to several teams, format it up here in a way that benefits all of them, instead of each team having to format it individually. So yes, it's about efficiency and about not duplicating workflows, which is a real problem.

There will be so many people doing the exact same job, unaware of each other, so in theory it would be a good idea to reduce that.
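The upstream alternative Meri describes can be sketched just as simply: one shared enrichment and formatting step applied at the source, with the same clean records fanned out to every consuming team. The function and field names are, again, illustrative assumptions.

```python
# A sketch of the upstream approach: enrich and format a feed once, then
# fan the same normalised records out to every consuming team, rather
# than each team re-formatting the raw feed independently.
def normalise(raw: dict) -> dict:
    """One shared formatting/enrichment step applied at the source."""
    return {
        "trade_id": str(raw["trade_id"]).strip(),
        "isin": str(raw["isin"]).upper(),
        "quantity": int(raw["quantity"]),
        "price": round(float(raw["price"]), 4),
    }

def distribute(raw_feed: list[dict], consumers: list) -> None:
    """Format once, then deliver the same clean records to all teams."""
    clean = [normalise(raw) for raw in raw_feed]
    for consumer in consumers:
        consumer(clean)  # e.g. reconciliation, reporting, risk functions
```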

Elliott: Yeah, sure. Another thing: I guess if you're changing your infrastructure to accommodate the new data-centric model, there's the possibility of disruption?

Meri: There would have to be some kind of parallel implementation to get two systems working at the same time before the switchover. With something like reconciliations, it’s crucial that they’re done every day.
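In outline, such a parallel run might look like the hypothetical sketch below: both the legacy and the new system reconcile the same feed each day, and the switchover waits until their outputs stop diverging.

```python
# A hypothetical sketch of a parallel run: keep the legacy and the new
# system both producing reconciliation results, compare them daily, and
# only switch over once the outputs agree.
def parallel_run(legacy_recon, new_recon, feed) -> bool:
    """Return True when the new system reproduces the legacy results."""
    legacy_breaks = legacy_recon(feed)
    new_breaks = new_recon(feed)
    # Symmetric difference: breaks reported by one system but not the other.
    divergence = set(map(str, legacy_breaks)) ^ set(map(str, new_breaks))
    for item in sorted(divergence):
        print(f"divergence during parallel run: {item}")
    return not divergence
```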

Elliott: And I guess from a regulatory standpoint, you need to have a record of those reconciliations?

Meri: Actually, one benefit Duco did speak about, now that you mention regulation, is that the big use case is regulatory reporting of the reconciliations.

The model uses data automation, which checks data before it goes into regulatory reports to ensure accuracy, and "preps" the data, because reports have to be in a certain format that is not necessarily the same format used in internal processes.
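In outline, that pre-report step combines a validation pass with a mapping from the internal layout into the report's required layout. The sketch below assumes illustrative field names, not any real regulatory schema.

```python
# A minimal sketch of the reporting use case: check each record before it
# enters a regulatory report, then map it from the internal layout into
# the report's required layout (field names are illustrative only).
def prepare_report_rows(records: list[dict]) -> list[dict]:
    rows = []
    for rec in records:
        # Validation: reject anything that would fail at the regulator.
        if not rec.get("isin") or rec.get("quantity", 0) == 0:
            raise ValueError(f"record failed pre-report checks: {rec}")
        # Formatting: internal layout -> the report's required layout.
        rows.append({
            "InstrumentId": rec["isin"],
            "Qty": abs(rec["quantity"]),
            "Side": "BUY" if rec["quantity"] > 0 else "SELL",
            "Px": f"{rec['price']:.4f}",
        })
    return rows
```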

In regulatory reporting, so much has to be produced on a daily basis, and that ties up a lot of people. If you can remove the need for different teams to reconcile, that's potentially quite impactful.

And the other important thing is risk. If you can fix data at its source and prevent bad data from flowing downstream, that has a big risk-reduction impact: fewer mistakes, fewer fines from regulators and so on.

Collectively, there is a strong argument for financial institutions to adopt a data-centric model. It goes a long way towards central control of data, removing siloed, legacy and often error-prone workflows.

This article was originally published on the GreySpark Substack.