17 October 2025

Agents of change: How agentic AI is reshaping Operations in financial services

Agentic AI was a ubiquitous topic at Sibos 2025. There’s so much information flying around about this complex and transformative technology. We wanted to help firms get a clear sense of how to deploy agents for maximum efficiency in their Operations, as well as highlight some of the key considerations they should keep in mind while doing so.

Our Meet the Experts session brought together a panel of AI and financial services leaders to discuss the impact, value and best practices of agentic AI. Joining our Chief Product Officer James Maxfield on stage were:

  • Richard Anton, Chief Client Officer at CIBC Mellon. Anton oversees all client relationships and the product organisation. He was previously Chief Operations Officer, so he “knows how the engine of the organisation works”.
  • Ruben Falk, Head of Agentic and Generative AI, Capital Markets at AWS. Falk is part of a team of industry specialists that help connect business problems with Amazon Web Services' technology solutions. He focuses on business value and solving business problems for customers.
  • Kieran Mullaley, Capital Markets Practice Lead at Capgemini. Mullaley is involved across business consulting and technology implementation, for services including AI and managed services from a business and IT perspective.
  • Nkateko Mabunda, Chief Digital Officer at Nedbank. Mabunda oversees the digitalisation efforts across the corporate investment bank to drive business optimisation.

Read on for the highlights of their conversation, or jump straight to the section that interests you.

Agentic AI is here

Firms have transitioned from testing agentic AI to deploying it. While a lot of this is concentrated in the front office, agentic solutions are proliferating downstream as well.

“We’ve seen it from three different components,” Anton said. “One is around how we're leveraging AI to deliver more data insights. It now tells the story immediately as opposed to having an analyst trying to interpret the data and figure it out.”

The second use case, Anton said, was personalisation. “How do you create a different engagement and really customise that interconnectivity with customers?”

The final key area was connecting front-to-back across the business, deploying agents “across that continuum”.

While CIBC Mellon have largely started with the front office, Nedbank have taken the opposite approach, Mabunda said.

“The focus for us has been in service and in Operations. More and more we're getting pressure to move towards the front of our business, which I think is the right thing. Moving on from these hero PoCs [proof of concepts] that we're running to actually embedding it into our channels and our engagements with our clients.”

He also agreed with a point Anton raised earlier that governance and risk must be the number one priority when deploying agentic AI. “[AI] is coupled with a strong focus on how we manage the risk,” Mabunda explained. “And really leaning into ethics. In our country's perspective, ethics and the outputs of our models have real impact in terms of socioeconomics. We have to pay attention to that.”

Mullaley also sees a mixture when it comes to deploying agentic AI. In the front office he’s seen firms using it in several ways, including:

  • Combining structured and unstructured data to produce better research and communicate it in an attractive way to clients
  • Supporting trading decisions
  • Supporting portfolio management and rebalancing
  • Tailoring interactions with clients in terms of services, advice and products

“But equally we're seeing a lot on the productivity side, the mid-back office automation space,” he added. “And the whole compliance and risk side; KYC [know your customer] agents or multi-agent support for financial crime processes, fraud monitoring, regulatory reporting. So, a lot in the compliance space, a lot in the operational space as well as the front office.”

Falk said that the use cases for agentic AI are “reasonably well understood”. That’s a big shift from the recent past, where figuring out what to use AI for posed a big challenge for firms.

What’s changed now, Falk explained, is that both vendors and firms are working out how to deliver AI that meets requirements of speed (latency), accuracy and cost - three factors that often pull against one another.

“Getting from PoC to production is not always easy. How do you get from something that has 75% accuracy, high latency and perhaps not the best cost-efficiency, to something that can be deployed with the right latency profile, 95% accuracy and a cost-efficiency to match?”

“Now, both we and our customers understand how to do it. Our customers have a better understanding of how to choose the right use cases with the right ROI [return on investment].”

Governance frameworks for agentic AI in Operations

Next, the panel dived deeper into the topic of governance. There’s a lot of fear over what happens if an agent goes rogue. But autonomous automation is not new within capital markets. Algorithmic trading has been around for 20 years and the industry has become comfortable with having the right controls around that.

So how were the panel mitigating risk and engaging with auditors or regulators around these issues?

“If you bring it down to how we're operating currently, it's not significantly different,” Anton said. “We are looking at adopting the same governance and control framework that we have across our organisation. It's very similar to having a rogue employee.”

“Do you have the right control framework? Do you have the right level of oversight? Importantly, do you have the right level of human engagement into that process? Having those control mechanisms throughout is important. Are we auditing across the board?”

An important aspect of control around any kind of automation is having some kind of ‘kill switch’ that disables the tool if something goes wrong, Anton said.

“The other element that you have to be conscious of is what kind of information are you exposing? And is it creating any confidentiality breaches? That's been the big one that regulators are really interested in: how much is this being contained and how much oversight is still being managed?”

Regulation is an evolving area, Mullaley added, with regulators taking different approaches. But existing regulatory frameworks provide a starting point.

“A lot of the existing regulations that are out there apply to AI in some cases as well. MiFID, for example. Europe has perhaps led the way with the AI Act and actually putting some formal standards out there and different tiers of requirements depending on the type and the risk of the model. UK [is] more principles-based, pro-innovation - putting sandboxes out there, AI labs trying to encourage development of AI. Different approaches across Asia and the US as well.”

On top of that, he said, are questions that firms need to be able to answer. How do you identify the agent? How do you get a log of its actions and trace issues when something goes wrong? 

“The approach we've taken is a little bit similar to how we do design on our digital channels,” said Mabunda. “We have a strong playbook that guides our designers in terms of design that delights our clients. And I think we're trying to apply the same in terms of model building. What are the principles? How do you bring in considerations of risk right up front? And I think it is a growing book. We are trying to step into it in similar ways that we've applied other principles and we're seeing it work pretty well in our space.”

“A lot of the existing regulations still apply,” Falk said. “It's a matter of how you evolve your risk management processes to take into account generative AI and agentic AI. With traditional machine learning, the data scientist would go to the risk management committee and you would have all the inputs, you would demonstrate the outputs, you demonstrate the lack of bias and how you tested that.”

“With generative AI, you're going in with a model that's not yours. It's Anthropic's or OpenAI's or whoever. The way that you can represent to your internal risk management committee as well as to regulators that you have this under control is very much by having guardrails, and you have to actually test that the output is coming from your own data.”
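A guardrail of the kind Falk describes can be sketched very simply. The snippet below is an illustrative grounding check, not a production guardrail: it flags answers whose content mostly doesn't appear in the firm's own source data, where real systems would use entailment models or citation verification. The threshold and overlap heuristic are assumptions for the sake of the example.

```python
# Minimal sketch of an output guardrail: check that a model's
# answer is grounded in your own data before releasing it.
# Word-overlap is an illustrative stand-in for the entailment
# or citation checks a production system would use.

def is_grounded(answer: str, sources: list[str], threshold: float = 0.5) -> bool:
    """Flag answers whose words mostly don't appear in any source document."""
    a_terms = set(answer.lower().split())
    if not a_terms:
        return False
    covered = {t for t in a_terms if any(t in s.lower().split() for s in sources)}
    return len(covered) / len(a_terms) >= threshold

sources = ["the fund settled trade 42 on friday"]
ok = is_grounded("trade 42 settled on friday", sources)        # grounded
bad = is_grounded("the ceo resigned yesterday amid scandal", sources)  # not grounded
```

In practice a failed check would block or escalate the answer rather than return it to the user.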

Another challenge, he added is that “if you have these multi-agent processes then the number of permutations that you can potentially test for are now an order of magnitude greater than what you had to deal with in the past.”
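Falk's point about test permutations can be made concrete with a little arithmetic. The branch counts below are hypothetical; the pattern is that end-to-end paths multiply with every agent added to a pipeline, which is why multi-agent testing scales so much worse than testing one model in isolation.

```python
# Illustrative arithmetic behind the multi-agent testing problem:
# the number of end-to-end paths through a pipeline is the
# product of each step's distinct behaviours.

from math import prod

def end_to_end_paths(behaviours_per_agent: list[int]) -> int:
    """Paths through a pipeline = product of each agent's branch count."""
    return prod(behaviours_per_agent)

single_agent = end_to_end_paths([4])        # one agent, four behaviours
pipeline = end_to_end_paths([4, 4, 4, 4])   # four such agents chained
```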

Agentic AI: When to build, when to buy

The marketplace for innovation has shifted to build and buy, instead of an either/or approach. This was certainly the view on the floor at Sibos. Organisations are looking to blend innovative partners with tools they're designing internally. One theme we're seeing is desire from institutions to control their internal agentic ecosystem, rather than opening up to third parties.

Interoperability clearly becomes important there. How does Nedbank approach assessing the right partner and the right tool for the job?

“In our business, it's an evolving perspective,” Mabunda said. “Over the last decade, the focus in our business has been to build ourselves. Not just in the agentic space, but across the myriad of technologies we've used, the culture has been to build rather than to buy. But in the last couple of years I've seen a good maturity to be a little bit more open.”

“[It’s] driven by the pressures in the market, especially in our space to capture the client. Whoever gets to engage with the client in this type of way is going to lead the space. So it's given us as a business quite a lot of pressure to shift our approach so we can innovate faster with scale.”

Buying is still an important part of the strategy. But what you need from a vendor is changing.

“There's lots of things that go into how we select a vendor partner,” Mabunda continued, explaining that Nedbank looks for partners that are equipped not just to meet their current needs, but also to help them drive their strategy forwards.

He shared that some of the factors Nedbank takes into consideration are:

  • Current capabilities of the vendor
  • Their potential to innovate alongside Nedbank
  • How connected the vendor is with other innovators
  • The cultural fit - are they aligned with Nedbank’s strategic direction?

“We do push hard to find partners who collaborate closely with us. And when I say collaborate, I mean in terms of managing risk, helping us figure out the governance around what it is that we're doing, sharing IP and knowledge. So not just building capability in their space but helping us build the knowledge in our space as well.”

Anton shared that he has ‘four Cs’ when it comes to selecting vendors:

  • Cultural fit - Does the vendor’s thinking about our business align with our own?
  • Capability - Does the vendor have the right features, not only to meet today’s challenges, but that will enable our strategic direction in the future?
  • Complementary - Will the vendor support our strategy in terms of where we want to build tooling internally?
  • Commercial - Is buying technology commercially viable? What’s the ROI?

“We built and modernised our applications to make it open architecture,” he explained. “That allows us to continue to plug in various partners - the introduction of Duco into our entire workflow models, both horizontally and vertically in our operating environments, has been very positive.”

'Why not just use ChatGPT?'

One of the questions that we bump into quite a lot at Duco is ‘Why can't we just use ChatGPT?’

Firms have seen the impact that large language models (LLMs) have had in lots of areas, so wonder why they need a specialist partner when they could just deploy something like ChatGPT across their organisation.

“Tools like ChatGPT, Copilot, and most general purpose AI models like that, actually bring a lot of benefit,” Mullaley said. “In terms of getting people more familiar with the use of AI, also in terms of individual productivity.”

“They're good for research, they're good for drafting documents, but if you try and use that sort of solution for domain-specific applications, if you try and do a query of regulatory rules, you're probably not going to get back the results that you really want.” 

“That tends to lead people onto the next stage of solution, maybe a RAG [retrieval augmented generation] tool where you can provide a more specific tailored dataset and prompt engineering, get a more tailored, specific response.”
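The progression Mullaley describes, from a general-purpose chat model to a RAG tool grounded in a tailored dataset, can be sketched as below. This is a deliberately minimal illustration: the keyword-overlap retrieval and in-memory corpus are stand-ins for the embeddings, vector store and model call a real deployment would use.

```python
# Minimal sketch of retrieval-augmented generation (RAG):
# retrieve the most relevant documents, then ground the model's
# prompt in them. Keyword overlap stands in for embedding search.

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    q_terms = set(query.lower().split())
    ranked = sorted(docs,
                    key=lambda d: len(q_terms & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble a grounded prompt: tailored context first, then the question."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using ONLY the context below.\n{context}\nQuestion: {query}"

corpus = [
    "MiFID II requires transaction reporting within T+1.",
    "KYC checks must be refreshed for high-risk clients annually.",
    "Our cafeteria opens at 8am.",
]
prompt = build_prompt("When is MiFID transaction reporting due?", corpus)
```

The prompt would then go to the model of choice; the point is that the answer is constrained to the firm's own domain-specific data rather than the model's general training.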

“If people are starting to look for how you get the process automation, the productivity benefits, then the other tools by nature are more reactive. People are starting to look at how you get a more proactive model, where something can take in data, make a reasoned decision, start to act on it.”

The ChatGPT question is part of a larger issue for firms: getting the right tool for the job. We’ve discussed before how firms are sometimes jumping to an AI solution when other automation tools can do the job more efficiently. But even when the situation calls for AI, you still have options to choose between.

“There's three categories of AI solutions,” Falk added. These are:

  1. AI baked into a vendor product, such as the Agentic Rule Builder capability that Duco offers
  2. Consumer products like ChatGPT
  3. Enterprise solutions where you use your own proprietary data

When it comes to those enterprise solutions, Falk explained, “You have licensed data that isn't available to ChatGPT and the like. And you want to be able to check that the answers are actually coming from data that you own or you control and you understand because, otherwise, it's very hard to get the compliance in place that we talked about earlier.”

“What we're seeing is that our customers want to have their own business processes reflected in the agentic execution plan. Bridgewater, for instance, they've been very public about using agentic AI for investment research, and they have their own investment process for macroeconomic analysis, and they want the agentic system to reflect their investment process.”

“So, they specify the execution path themselves, or they specify a bunch of different ones, and the first thing that happens is that the question gets matched to an execution plan, and then that execution plan invokes the agents in the order that it needs to. And then it comes up with an answer that's based on Bridgewater's philosophy to investment research.”

That, Falk said, is very different from asking ChatGPT to come up with an investment process. It’s “certainly not what you can represent to your investors”.

Unlocking the ROI of agents

Both vendors and financial institutions are evolving the way that they think about the value of agentic AI. Some vendors have started offering ‘jobscription’ pricing models, as opposed to the standard software-as-a-service (SaaS) model, where agents are sold to perform a particular role in the way that a human would.

How did our panelists feel about the topic of ROI in an agentic world?

“The ROI discussion with regards to AI has been an interesting dialogue,” Anton said, admitting that, when it came to reports on how little value firms are seeing, “My first reaction when I thought about it was shock and dismay, really. I didn't understand how people weren't tying the two together.”

“There's an element where you can really target opportunities to eliminate roles, especially in KYC and specific functions and tasks, versus looking at it on a holistic basis. And we've been looking at it more holistically, and now we're starting to do targeted approaches across our business lines.”

“Our goal is understanding the workflows,” Anton continued. “And, where we can deploy across them, the ability to have those agents start connecting with one another."

"But where do we keep that human element within that loop? It starts to change the roles and responsibilities of individuals, from the processing agents to overseers, and that's where that governance and risk framework comes into play.”

Mabunda added, “Thinking about agentic, you can tie it quite specifically to business outcomes in terms of what these models are supposed to generate. And because of that, commercial models can start being a little bit more flexible. To date, I haven't seen vendors who are willing to step into it in that way, but I think it will be interesting as we evolve.”

He also said that Nedbank’s view on the ROI of agentic has changed, because it’s not always easy to prove and it can slow down adoption. “Efficiency doesn't always translate into cost savings,” he pointed out. “But it can help us generate scale as we move capability out into other markets.”

Mabunda said that Nedbank’s primary focus is on compliance capabilities as they expand into greater Africa, while another hard-to-measure focus is on employee experience.

“You can't always tie that back into financial ROI. Retention is quite high on our priority list within our business as well. We are evolving how we perceive value, and I think it'll be interesting to partner with vendors who are also open to commercial models that are a little bit more creative as well.”

The employee experience in an agentic world

The employee experience is a critical, and often overlooked, aspect of agentic AI adoption. While much of the conversation around AI tends to focus on technological capability or business outcomes, it's increasingly clear that the human side of the equation matters just as much.

The future of work will involve not just AI agents working alongside humans, but new dynamics in how teams collaborate, how work is managed, and how employees are supported through transformation.

Some of the most insightful discussions about agentic AI have centered less on the tech itself and more on transformation culture. This reflects a growing understanding that successful AI adoption requires more than tools and platforms; it demands thoughtful attention to how people engage with, learn from, and grow alongside these systems.

“A lot of the big programs here [have] a very human, change management aspect,” Mullaley said. “You need to look at the whole process and the interaction of people with the agents as part of that."

"A lot of the roles do change. People may move from doing more execution, to more planning how the agent will do the task; how you control that, review the output, approve it. The roles that people have as part of the business processes will change.”

He also explained that part of the engagements Capgemini have with clients involve planning the change management aspect of agentic adoption. They focus on communicating with people, educating them on the technology and helping them understand how their roles will change.

“The more people start to explore and engage with the technology and the more familiar they get, the more real it becomes, and they start to understand the benefits a lot more,” Mullaley said.

A new way of buying technology

Agentic AI is changing the way firms buy and use technology. It is democratising technology ownership by enabling teams outside of the firm’s IT department.

“We're moving towards a world where probably one of the best ways, if not the best way, of productising your IP is by having it embellished within an agent,” Falk said. 

“You see companies like S&P, Salesforce, they all have agents now as a way of packaging their data and their expertise. And the beauty of these agents is that they're interoperable.”

“As a user, you're now not buying a software API [application programming interface] or professional service, you're buying an agent. That's how we are looking at it, and we're making a lot of our tools interoperable as well. All our technology tools within AWS can be operated by agents that may be our own or may be third-party agents.”

The operating model of the future

To wrap up the conversation, Maxfield asked each of our panellists what they thought the operating model of the future would look like. How will agentic AI have transformed the world in five years’ time?

Anton shared that CIBC Mellon had been having this discussion with global asset owners recently. “It was a very difficult conversation because, if you think about how much AI has advanced in the last year alone, to start to predict what it's going to look like in the next five years is very daunting,” he said. “You could have almost virtually autonomous organisations running a large volume of your front, back, middle office - everywhere along the continuum. The capabilities are there. Whether or not organisations are ready to dive into that framework is the big question.”

In terms of the biggest use cases for agentic AI, Anton thinks that data insights are going to be a big driver of adoption. For Mabunda, one key area of potential for agentic AI over the coming years is in augmenting the relationship aspect, both externally and internally.

“I'm in corporate investment banking, which is very much historically a relationship based business,” Mabunda explained. “And the perspective has always been you couldn't automate that and it had to be people-to-people. So I'm getting more and more excited around how not to remove the relationship bankers, but how we can couple them with these agents to drive real efficiency.”

“There's a real opportunity here to bridge the gap between experienced, older bankers, with the newcomers into the bank, but driving the same kind of output and effectiveness in market. Coupling them with some clever agents that help bridge the gap a bit more naturally.”

Mullaley referenced the debate around whether or not it’s possible to reach full automation. His background is in post-trade, and as he sees it the move would be towards straight-through processing (STP) with agents handling all the exception-related tasks. “We're a little way from that,” he concluded.

“At the moment there's still that human-agent interaction people are going to want. Somebody overseeing that process, making the decisions, reviewing the information. But you can see that starting to evolve.”

In terms of the use cases that will present the most opportunity, Mullaley reckons unstructured data is going to be a big one.

“We did one which was quite a common issue about just getting operational client queries by email. The initial step is automate the unstructured to structured data [process] and feed that in. Then it's extracting data to feed into the Ops user to help make the decision.” 

“But you can start to see that evolve to actually crafting the response back to the client. So you can see how you go from automating different steps of the process to, in the end, becoming fully automated. And you can track the ROI around removing Ops effort as part of that process.”
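The staged evolution Mullaley outlines, from structuring the email, to assisting the Ops decision, to drafting the reply, can be sketched as three functions. The field names and regex extraction below are illustrative stand-ins for the LLM-powered steps a real pipeline would use.

```python
# Sketch of the email-query pipeline described above:
# (1) unstructured email -> structured fields,
# (2) a decision an Ops user or agent can act on,
# (3) eventually, a drafted client response.
# The regex extraction stands in for an LLM step.

import re

def extract(email: str) -> dict:
    """Step 1: turn an unstructured client email into structured fields."""
    trade = re.search(r"trade\s+(\w+)", email, re.I)
    return {
        "intent": "settlement_query" if "settle" in email.lower() else "other",
        "trade_id": trade.group(1) if trade else None,
    }

def decide(fields: dict) -> str:
    """Step 2: the decision step, initially human-made, later automated."""
    if fields["intent"] == "settlement_query" and fields["trade_id"]:
        return f"lookup_settlement_status:{fields['trade_id']}"
    return "route_to_ops_inbox"

def draft_reply(fields: dict, decision: str) -> str:
    """Step 3: the part that evolves last, per the panel."""
    if decision.startswith("lookup_settlement_status"):
        return f"We are checking the settlement status of trade {fields['trade_id']}."
    return "Thanks for your message; an operations analyst will follow up."

email = "Hi, could you confirm when trade ABC123 will settle?"
fields = extract(email)
```

Automating step 1 alone already removes Ops effort, which is what makes the ROI of each stage trackable as the pipeline extends.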

Falk added: “It's going to be up to individual organisations to reimagine their own business processes to take advantage of all this additional capacity that AI offers. So if you take somebody like Bridgewater, they say their AI analyst agent can do the work of a first-year analyst, almost a second-year analyst. They can't do what a senior portfolio manager does and they don't think it'll get there anytime soon.”

“So they're not faced with a world where they have a thousand first-year analysts. They have to think about reimagining their business process with this additional capacity. Some of our customers are coming up with their own answer that fits their own business model, their own industry.”

He shared another example of an investment manager that has automated the writing of 70-page investment research reports across their “entire 2,000 company investment universe”. They then use more agents to analyse these reports and spot common themes among them so they can map how exposed they are to those themes.

“They now have a totally different way of implementing their investment process by having much more of a top-down, macro view of the themes that they might have missed in the past,” Falk explained.

“That's an example of how you might reimagine your business process with all this additional capacity. But that's going to be very different from company to company, and they're going to have to think about it themselves and come up with an answer.”

Conclusion

Our panel discussion made it clear that agentic AI is no longer a futuristic concept, but a real-world technology that is delivering value for many firms. Throughout the conversation, the thinking was less around asking what AI can do and more about how to build, govern, deploy and measure it.

The importance of robust governance and risk management came up multiple times. While this is an area of concern for many firms, our panellists also pointed out that controls already exist to limit what people can do. Those same principles can be adapted for AI - there’s no need to start from scratch.

The conversation also revealed that each organisation’s AI journey is unique. Some are starting with front office use cases, with a view to applying their learnings to the middle and back office. But others have started in Operations and are moving the other way. It’s up to each organisation to decide where agentic AI can be most effective at helping them achieve their strategic goals.

And, while this was a conversation about technology, the human factor remained a dominant theme. Agentic AI is changing everything - not just how people do their jobs, but the jobs that they do, how they interact with each other, and how they buy software.

Ultimately, the ROI of agentic AI extends far beyond cost-cutting. It involves unlocking massive new capacity, driving scale, improving regulatory compliance, and fundamentally reimagining business processes.

The time for experimentation is over; the time for decisive implementation has arrived.