February 2025

How financial services firms can unlock AI’s productivity promise

AI is hard to ignore. Yet despite all the coverage of this incredible technology, there is still a lot of confusion over how exactly to use it.

Financial firms are no exception. They know it’s a powerful technology, but with so much excitement and hype around, it’s difficult to focus on what matters.

We brought together a panel of experts, who all have a proven track record of delivering AI in production, to discuss the keys to AI success. Joining Duco founder Christian Nentwich at Innovation Day were:

  • Silvia Prieto, Head of GenAI and ML for Financial Services at AWS
  • Arif Khan, Strategic CSM for EMEA and APJ at Writer
  • Theresa Bercich, Co-founder and CPO at Lucinity
  • Richard Mabey, Co-founder and CEO of Juro

Here are the highlights of their discussion on how to unlock the value of AI for your firm.

The key to getting AI live in production

Christian began by outlining the state of enterprise AI: 80% of all projects fail – double the failure rate of IT projects in general. The issue is not that AI doesn’t work; it’s that firms aren’t necessarily applying it to the right problems.

So, what are the right problems? Christian asked our panel what they currently have live in production.

Examples included AI models for contract review, assembling information for financial crime investigations, and creating reports for relationship managers working with ultra-high-net-worth individuals.

The answers all had something in common: they were specific use cases with a narrowly defined scope. It was also easy to measure and prove the value being delivered – for example, the number of hours saved versus performing a task manually.

So, AI is having a tangible impact for many businesses right now. But how can you ensure successful adoption across your own organisation?

Driving AI adoption

Two key themes emerged from the conversation around adoption: mindset and trust.

The path firms will tread depends on their mindset. Some are willing to rethink their processes entirely to make the best use of AI, embracing full automation from the start. Most take a more cautious approach, however: they start small, tweaking processes to leverage AI and expanding the scope of the project as they see what’s possible.

A vital step here is ensuring that leadership is thinking about AI in the right way. It’s important to educate your executives so that they understand the need to change ways of working. Only then can they fully embrace a transformation mindset.

Issues around trust, though, can slow down adoption. Even firms willing to embrace AI completely need to move in phases because, as with any transformation, people are resistant to change.

How do you overcome this? Part of the problem is how software is traditionally sold: by telling people it will speed up their current tasks. The message is clear – keep doing what you’re doing, just faster.

It’s different with AI, which is going to remove those tasks entirely. People are naturally worried about their jobs and so trust in the technology can be low.

The flipside is that AI can upskill people – removing the laborious tasks they were doing and instead giving them knowledge and expertise in operating a cutting-edge tool. Staff are freed up to work on tasks that have a greater impact on the organisation, improving productivity.

This happens when you put the effort into empowering users to really explore what’s possible with the tools. Doing this both creates trust and fosters a more positive mindset towards AI. That’s how to ignite the spark of transformation.

Education and understanding are clearly important, but how much do users really need to know?

How well-versed in AI do users need to be?

“One thing that’s hard about AI still is that it gets super technical quickly. That’s difficult for the people who have the business problems to understand,” Christian said. (This sentiment was echoed in our COO transformation panel, where our experts pointed out that leaders are problem solvers; they don’t want features, they want solutions.)

There will inevitably be some fine-tuning of models or prompting involved when using an AI tool. So how much technical knowledge should business users have?

The panellists emphasised that there’s no one-size-fits-all approach with AI. The level of knowledge required will depend on what’s being done and how. Some firms buy models and try to fine-tune them themselves, but that can become complicated very quickly – and it’s data scientists, not business users, who need the technical knowledge to do it.

Part of the key to solving this is understanding what kind of AI is needed for a given task. Some use cases require the user to interact with the AI directly – for instance, exploring a graph database to investigate very complex money flows.
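
To make that concrete, here’s a minimal sketch of the kind of query an investigator might run, with networkx standing in for a real graph database; the accounts and amounts are invented for illustration, not drawn from any panellist’s system.

    # Tracing possible money flows through a transaction graph.
    # Accounts and amounts are invented for illustration.
    import networkx as nx

    g = nx.DiGraph()
    g.add_edge("acct_A", "acct_B", amount=50_000)
    g.add_edge("acct_B", "acct_C", amount=49_500)
    g.add_edge("acct_B", "acct_D", amount=400)
    g.add_edge("acct_C", "acct_E", amount=49_000)

    # Every route by which funds could have moved from A to E.
    for path in nx.all_simple_paths(g, source="acct_A", target="acct_E"):
        amounts = [g.edges[u, v]["amount"] for u, v in zip(path, path[1:])]
        print(" -> ".join(path), amounts)

In a real investigation the graph would hold millions of transactions – which is where an AI layer that surfaces the suspicious paths, rather than the human hunting for them, earns its keep.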

Some users may find that their job description changes. They become the managers of the AI that is now automating the tasks they used to perform. This requires some understanding of how it works and how to keep it running efficiently.

But at other times AI should just work in the background – generating the aforementioned graph, for example. In these cases, users shouldn’t even need to care how the AI is implemented. Of course, it needs to have gone through the necessary information security reviews and other checks around where the data is going and how it’s being used.

Which leads us nicely onto how to ensure you’re getting the right AI for the job, from the right people and with the right guardrails.

Evaluating AI – from models to vendors

There are three main criteria to consider when it comes to AI models: cost, latency (the time it takes to process input data and generate an output) and accuracy.

You can’t have the best of all three at once. For example, you can have a large model that has high accuracy and good latency, but the cost will be high. Meanwhile, a small model may be cheaper and faster, but the accuracy won’t be as good.
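
As a back-of-the-envelope sketch of how that trade-off might be weighed, consider screening candidate models against hard requirements; the model names, prices and scores below are invented for illustration.

    # Screening hypothetical models against cost, latency and accuracy
    # requirements. All figures are invented for illustration.
    candidates = [
        {"name": "large",  "cost_per_1k_tokens": 0.0300, "latency_s": 1.2, "accuracy": 0.95},
        {"name": "medium", "cost_per_1k_tokens": 0.0050, "latency_s": 0.6, "accuracy": 0.90},
        {"name": "small",  "cost_per_1k_tokens": 0.0005, "latency_s": 0.2, "accuracy": 0.82},
    ]

    # Hard limits for this hypothetical use case.
    MAX_COST, MAX_LATENCY, MIN_ACCURACY = 0.0100, 1.0, 0.88

    viable = [m for m in candidates
              if m["cost_per_1k_tokens"] <= MAX_COST
              and m["latency_s"] <= MAX_LATENCY
              and m["accuracy"] >= MIN_ACCURACY]

    # Among viable models, prefer the most accurate.
    best = max(viable, key=lambda m: m["accuracy"])
    print(best["name"])  # -> medium: the large model busts the budget

Change the thresholds and a different model wins – which is the point: the “right” model is defined by the use case, not by a leaderboard.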

When it comes to evaluating models, it’s important that the business users are involved. They are the ones who have been using the data and performing the tasks, sometimes for years. They are best placed to know what the desired output should be, and therefore the ones who can really put an AI model through its paces.

Once a model is in production, it’s possible to create automation that evaluates its output and flags unacceptable variations. But you should always have a human in the loop to check a model before it goes into production.
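
A minimal sketch of what that automation might look like – the golden examples, tolerance and extract_total function are all hypothetical stand-ins:

    # Checking a document-extraction model's output against a small
    # "golden set" of known answers. All values are invented.
    golden_set = [
        {"document": "invoice_001.pdf", "expected_total": 1200.00},
        {"document": "invoice_002.pdf", "expected_total": 87.50},
    ]

    TOLERANCE = 0.01  # flag anything more than 1% off the expected value

    def evaluate(extract_total):
        flagged = []
        for case in golden_set:
            got = extract_total(case["document"])
            drift = abs(got - case["expected_total"]) / case["expected_total"]
            if drift > TOLERANCE:
                flagged.append(case["document"])
        return flagged  # anything in here goes to a human reviewer

    # Example: a fake extractor that is 5% off on one invoice.
    print(evaluate(lambda doc: 1260.00 if doc == "invoice_001.pdf" else 87.50))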

Evaluation goes beyond the model; vendors should be assessed too. Things to consider include:

  • Where has the vendor already been successful with your use cases?
  • What do they provide beyond just the software (e.g. support)?
  • What does their onboarding journey look like? Who are the people who handle onboarding, and what are their backgrounds?
  • Are there any existing customers you can speak to?

According to our panel, credible vendors should be eager to answer these questions.

Agents – the future of AI?

Finally, the panel turned to agents: AI tools that can take actions independently. For instance, as discussed in our article on generative AI for intelligent document processing, an agent could recognise an error in a document, connect to an email system and automatically send a message asking for clarification.

The panel all agreed that agents are the future. The lesson here – as with any other type of AI – is that it depends on the use case and what you trust the machine to do. As Richard pointed out, you could trust a machine to accurately reply with a redline on a non-disclosure agreement (NDA), because these documents are relatively simple. But would you trust an agent to accurately mark up a sale and purchase agreement (SPA) for a global merger?

One of the big areas of focus now is the triaging of recommendations. How does the machine know when it can take an action, and when a human should get involved?
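
One common pattern – sketched below with an invented threshold and stand-in helper functions, not a real agent framework – is to let confidence decide: act automatically above a threshold, escalate to a person below it.

    # Triaging an agent's recommendation by confidence. The threshold
    # and helper functions are hypothetical stand-ins.
    AUTO_ACTION_THRESHOLD = 0.90  # tuned per use case in practice

    def send_email(recipient, body):        # stand-in for a real integration
        print(f"emailing {recipient}: {body}")

    def queue_for_review(rec):              # stand-in for a human review queue
        print(f"escalated to a human: {rec['summary']}")

    def triage(rec):
        if rec["confidence"] >= AUTO_ACTION_THRESHOLD:
            send_email(rec["recipient"], rec["draft"])
            return "acted"
        queue_for_review(rec)
        return "escalated"

    triage({"confidence": 0.97, "recipient": "ops@example.com",
            "draft": "Please clarify clause 4.2.", "summary": "NDA redline"})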

The panel said the industry is not yet mature enough to run agentic controls at scale. The future is not one agent but a hierarchy – agents controlled by agents. What’s important here is control: making sure an agent can’t just do whatever it likes.

Agents are clearly powerful tools that will unlock significant productivity gains – provided a human remains firmly in the loop where necessary.

Getting AI right

AI is already having a notable impact on the financial services industry, and the momentum continues to build. But hype and confusion are making it hard for firms to zero in on the use cases that really matter.

The answer is to get curious – investigate the market, interrogate vendors, ask your subject matter experts what they think. While the technology continues to advance, the financial services industry needs to focus on the basics.

Only then will firms be able to unlock the transformational power of AI.