October 2023

Sibos highlights part 1: Securrency and Mizuho talk data automation barriers in capital markets

By Rewan Tremethick, Content Manager.

Sibos 2023 was packed full of great conversations, expert insights and – at the Duco booth – dinosaurs.

Our CEO Christian Nentwich moderated a discussion between Nadine Chakar, CEO of Securrency, and Ken Utsunomiya, Global Head of Operations at Mizuho, on the blockers to data automation and the paths around them.

We’ve been working with both of them for several years now and were excited for them to share their insights with the Sibos audience. Sure enough, it was an enlightening conversation – so much so that we’re splitting the highlights into two articles.

This article focuses on the current challenges and barriers to automation. The second part will explore how to overcome them, and what happens when you do.

So, whether you missed the live session or want a reminder of the great stuff that was shared, you’ll find a roundup of the top discussion points below.

Manual crisis: Has capital markets failed to automate?

Christian set the scene for the discussion with a story about a large derivatives broker he once visited. The company had a system they called ‘paint and pay’ that they used to process commission claims. They would print out all their trades for the month and all the claims from clients and pin them all to the wall. Staff would use highlighter pens to mark off matching claims and trades; these would be given to the payments team to process.

Remarkably, this was happening in 2015 – less than a decade ago. And it’s not something that’s been solved in the meantime; one big bank at Sibos told Christian they have 8,000 manual processes. Nadine gave an example from one of her previous roles where they ran out of tabs in Excel for just one process!

So why is the industry still like this?

Nadine and Ken think it’s largely down to mindset.

“There’s a cultural element here,” Nadine explained. “We’ve never taken data seriously, never looked at it as an asset. It’s not a technology issue, it’s more of a cultural shift in the way that we look at data. We look at data as just something that’s there. We only realise that it’s a problem when the report that we just printed to send to a client or a regulator is wrong. Then it’s usually: start blaming the technology that’s there.”

Geography plays a role as well. Ken described how Japan is a very paper-based culture, where manual work is often the default way of solving problems.

“One of the big changes for us culture wise was Covid. Many Japanese firms want to keep using paper. A lot of paperwork. Also we have a process called Inkan – this is stamped with a physical stamp. During Covid, many people had to go out just to put the stamp on. It was quite dangerous, right? So therefore the government decommissioned all stamp processes.”

That change, Ken said, helped to foster a shift in mindset in the financial services industry in Japan as well.

“Once government changed, we brought this to our industry. ‘Let’s try not to have any more stamps.’ Then, ‘Let’s try not to print paper.’ Then the automation processing mindset was started, but culture-wise there’s always a mindset to print. Two-eyes checks, sometimes four-eyes, sometimes six-eyes, is comfortable.”

Organisations have lost sight of the bigger picture

Capital markets firms are very siloed now, our panellists say. As Christian explains, “one of the problems you get is that people get a very narrow view. The margin person knows the margin process, the custody person knows the custody process; everybody’s looking through a very narrow lens. The opportunity to see the process end-to-end is lost.”

Nadine explains that this is a result of the way firms have scaled over time.

“When we grew up in the industry we used to do everything from beginning to end, so you got a chance to understand how the lifecycle works. And as the industry grew, our idea of scaling was to functionalise quite a bit. So today people understand a very small part of the process; it’s hard for them to stitch it.”

But a change in thinking can have a big impact here, Christian believes.

“One of the advantages of taking more of a data centric view is that data is a natural silo breaker. Data has a lifecycle that cuts across many, many different things.”

The price of bad data is there, if you look for it

Why is it so important for organisations to rediscover that top-down view of their data? Because low-quality data and errors are costing them.

“That’s the unfortunate thing that people still don’t realise,” Nadine explained. “There is a price for bad data, and it trickles out throughout the whole organisation. There is an indirect penalty that you’re paying because every single time that report is out, or it’s wrong or it’s late, or you’re irritating a regulator or a client, you’re paying that tax right there. So there is a cost to this.”

She points to regulatory fines as a great, tangible way to see the cost of bad data, or the example of providing an incorrect Net Asset Value to a transfer agent, the consequences of which will cost you ‘big bucks’ to fix. But, she says, the industry is beginning to realise this.

“I think we’re all waking up to the fact that it’s not your problem, it’s my problem: I’ve got to fix it, I’ve got to own it. Because at the end of the day I’m facing the regulator, I’m facing my client, I’ve got to explain to them why the NAV’s a month late or the report is wrong, or my cash flow projection isn’t correct. I can’t blame it on ‘He’s my Chief Data Officer and he didn’t show up to work today’. It’s my responsibility.”

Both Nadine and Ken stressed the fact that data quality goes beyond just the remit of the CDO. You need to involve the business users. They are the ones who understand the requirements and have the insight into the data.

“We have people like a Chief Data Officer who always work on data definitions that we’re going to have,” Ken said. “But in our case, we have a business domain model where we are setting our data governance with this type of forum authority. It is quite important for us to have a business user involved in the data definition to understand what is the purpose.”

Nadine added: “I agree with Ken. That infrastructure and the governance is really important. I think the key is: how far down in the organisation can you drive it? How do you become personally accountable for that?”

What are the barriers to fixing bad data with automation?

It’s not enough just to remove manual work. That alone won’t fix the problem of bad data – in fact, it often makes it worse. As Christian explained in his opening remarks, historically the approach was often to build another data lake, but then you just end up with all the same problems, plus another data lake.

As Nadine said, “If you automate the crap you do now, you just end up with crap but with more sophisticated technology. It doesn’t solve the data problem. Operations exists to make up for the shortfalls in technology. Even as technology has evolved, everyone’s retained a very complex way of processing things to make up for certain problems.”

“A lot of transformation projects end up in tears because they just try to automate the inefficiencies that already exist,” she added.

Which leads nicely on to the topic of Robotic Process Automation (RPA). RPA can be used to automate repetitive actions taken by users, such as scraping or entering data.

“RPA is always happening, but that just replicates what people are doing, without cleaning up,” Ken explained. “Having an RPA process, asking IT and many people to do testing, checking, whatever, takes quite a long time. Therefore, in Japan, as a natural way of thinking – why not hire a temp staff, rather than doing such an annoying process? I don’t want to spend lots of time doing manual work to clean up a process before doing RPA and asking IT to do some build.”

Ken also said that, historically, there was a tendency to try not to see the issue and instead to just keep hiring temp staff and solving problems manually. Nadine dubbed this ‘Human RPA’.

So, automation on its own can’t compensate for bad data – especially given that, as Christian pointed out, some of the complexities that need solving aren’t to do with the data processes themselves, but with the way organisations handle change.

“In one particular firm, the mandatory format for the business requirements document for process automation – say, a reconciliation process or some sort of data prep processes – was 33 pages,” he recalled. “And it’s agreed with the auditor, so then you have another barrier to change, because you can’t change the format because the auditor signs off on it. So one of the big KPIs we had was ‘How many pages can we reduce this document to?’ And we didn’t manage to eliminate it, we got it down to eight pages, from 33 pages. But at least they didn’t have to write down the rules of what they do anymore, it was more sort of ‘Why are we doing this?’”


Capital markets is a complex industry, full of complex products, companies and regulatory requirements. It’s only natural that its operations were built on complexity as well. But that has been compounded over time and firms are paying the price.

What’s needed is a mindset change. It’s time to put data first, share responsibility for its upkeep across the organisation and look at the root causes of issues rather than resourcing for more manual work. Only then will firms be able to stop paying the bad data tax, break down silos and deliver their transformation agendas.

In the next article we’ll hear from Nadine and Ken on how to actually do this – and how it benefits everyone from firms to operations teams to do so.

You can also check out the full recording of the session below.