A lot of a firm’s issues with data quality come back to its legacy on-premise technology. Technology like this is inefficient, expensive and inflexible. For starters, it requires you to own and manage the hardware it runs on, which ties up valuable IT resources in maintenance.
It costs a lot of money thanks to things like mandatory upgrades, and once established it becomes very sticky. Ripping out an on-premise system in favour of another takes the kind of time and resources firms don’t have – and vendors know this.
These systems don’t scale well – they’re provisioned with only enough computing power for the average volume of tasks required of them. When there’s a spike in activity, you get bottlenecks because you don’t have the capacity to run everything you need to.
But adding capacity isn’t easy – it means taking up space in your data centre with more hardware. And if the volume of tasks you rely on the on-premise system for varies significantly, that means spare hardware sitting idle until the moment it’s needed.
On-premise systems also invite complexity into your organisation. It’s not uncommon, for example, for multiple teams in a firm to have their own separate instances of a particular system. Because the technology runs on your own infrastructure, updating and upgrading it is a complex endeavour, involving costly (but often mandatory) support from the vendor.
And upgrades are made even more difficult by the likelihood that the vendor has customised the system for you, so each instance has to be upgraded one by one.
These systems are usually hard-coded and rely on fixed data schemas, too, meaning IT not only has to maintain the hardware, but also has to build the processes that run on it.
Sunset scare stories
One of the biggest challenges with legacy on-premise systems is when the vendor sunsets them. You usually have two choices when this happens: spend a lot of money upgrading to the vendor’s new system in a very short timeframe, or hire in more IT resources to keep the old system alive yourself.
The inflexibility of these systems means that firms will often have multiple systems running for different use cases. This creates a complicated web of point solutions, and yet there will still be plenty of gaps where manual work is required.
The data landscape in financial services is simply too vast for it to be feasible to run a different solution for every single use case or nuance – not that firms haven’t tried…