September 2019

Reengineering Financial Operations with an Eye on Data Integrity

Given the increasing cost pressures and the ever-important choreographed ballet of electronic messages among client, firm, trading venue, custodian, clearing bank, and clearing utilities, the seamless processing of financial information has become increasingly critical. Unfortunately, most companies’ operating infrastructures and the collective financial industry processing ecosystem continue to leak, says Larry Tabb. But easy-to-implement AI-based operational infrastructure that leverages machine learning can help firms ingest, compare, and spot differences in data, improving data integrity.

Around a decade ago, Till Guldimann, one of the luminaries behind the advent of Value at Risk at JP Morgan and the retired vice chairman of SunGard (now part of FIS), raised the issue of algorithmic operations: the ability to leverage analytics and algorithms to make financial institutions’ operations more efficient. Given the post-crisis atmosphere – a time when banks, brokers, and insurance companies were unsure of their financial health and a few were on the brink of failure – Guldimann theorized that leveraging the same advanced technologies used to trade could help solve financial institutions’ operational challenges.

While solving post-trade operational problems seems pretty simple compared to determining what asset to buy, when to trade, and how much of the organization’s capital to put at risk, in reality it is, in many cases, much more challenging than determining the relative value of one or more over- or undervalued assets.

Understanding just why solving these operational issues is so challenging requires a bit of insight into financial institutions’ technology infrastructures.

Just like our markets, many firms’ operational infrastructures are fragmented. Banks and financial institutions have grown up through the “integration” of fragmented back-office infrastructures. Whether by acquisition, a best-of-breed technology acquisition strategy, or a parallel and loosely coordinated internal development strategy, virtually all financial firms have a hodgepodge of technological infrastructure. The heterogeneity of firms’ infrastructure, compounded by the complexity of the data these systems manage, requires firms to invest in solutions that integrate these complex and heterogeneous data structures so the fragmented infrastructure behaves as a single operating fabric. This is not only expensive but hard to accomplish. And given the increasing cost pressures and the ever-important choreographed ballet of electronic messages among client, firm, trading venue, custodian, clearing bank, and clearing utilities, the seamless processing of financial information across both tightly and loosely integrated systems has become increasingly critical.

While it may seem self-evident that an efficient and seamless data and processing infrastructure would be a first-order priority for firms looking to improve their operations and customer satisfaction, plugging these operational leaks is challenging. Certainly, if every transaction created were problematic, these issues would bubble to the surface and quickly be corrected. But many of these discrepancies, while they may occur frequently, are not consistent and often are generated by non-standard transactions or data that in many instances are hard to diagnose, difficult to remedy, and for which it is virtually impossible to secure the technical resources needed for a fix.

In addition, financial institutions’ inability to plug these holes has spawned various stopgap solutions that, in many cases, complicate the investigation and resolution of these issues.

Further, while two decades ago many firms’ operational staff were co-located with or geographically adjacent to their business groups (portfolio managers, traders, and bankers), today many of these operational centers have been moved offshore or to “near-shore” but lower-cost processing centers. Though having operational staff working in counter-cyclical time zones may enable the operational challenges created today to be fixed tonight by professionals halfway around the world, in many instances this time shifting has made it difficult for the operational and business teams to communicate effectively. That hampers resolution and lengthens the delays associated with problems that require two, three, four, or even five parties spread across various divisions, geographies, and even institutions to collaborate to identify the problem, determine a resolution, and implement a fix.

Which brings us back to Mr. Guldimann. Till believed that if we could just focus on these problems, many of them could be spotted, analyzed, and even resolved using the same pattern-recognizing and advanced technologies used to trade. Unfortunately, it’s a decade later, and while firms do a fairly good job at finding and plugging some of the largest operational gaps, most companies’ operating infrastructures and the collective industry processing ecosystem continue to leak.

Technology begins to catch up

The industry has attempted to leverage technology to solve some of these operational challenges. First, workflow-oriented business processing automation (BPA) virtualized multiple back-end systems into a single bridging infrastructure. Then robotic process automation (RPA), which sat on this virtualized environment, provided simple macro-type tools that allowed power users to develop scripts to resolve routine problems. But while BPA and RPA have helped streamline operations, the problem of a leaky processing environment has not been solved.

One of the most significant challenges in this age of electronic trading is maintaining the integrity of data stored in multiple places. With data moving through clients, brokers, exchanges, custodians, clearing banks, and central counterparties – and the number of internal platforms at each of these organizations proliferating – the disparate systems capturing and reconciling the same data from multiple perspectives are almost too numerous to count. And when these systems, which often process hundreds of thousands or even millions of daily transactions, fall out of alignment, simply finding the causes can be a nightmare.
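At its core, the reconciliation problem described above is a matter of comparing the same population of transactions as recorded by two systems and surfacing the breaks. A minimal sketch, using invented trade IDs, field names, and values (no vendor’s actual schema), might look like this:

```python
# Hypothetical sketch: reconciling the same trades as recorded by two
# systems (e.g., an internal ledger vs. a custodian feed). All data,
# field names, and IDs here are illustrative assumptions.

from decimal import Decimal

ledger = {
    "T1001": {"qty": 500, "price": Decimal("101.25")},
    "T1002": {"qty": 250, "price": Decimal("99.10")},
    "T1003": {"qty": 100, "price": Decimal("45.00")},
}
custodian = {
    "T1001": {"qty": 500, "price": Decimal("101.25")},
    "T1002": {"qty": 250, "price": Decimal("99.15")},  # divergent price
    "T1004": {"qty": 75,  "price": Decimal("12.30")},  # absent from ledger
}

def reconcile(a, b):
    """Return breaks: trades missing on either side, or present on both
    sides but with one or more mismatched fields."""
    breaks = []
    for trade_id in sorted(a.keys() | b.keys()):
        if trade_id not in b:
            breaks.append((trade_id, "missing from side B"))
        elif trade_id not in a:
            breaks.append((trade_id, "missing from side A"))
        elif a[trade_id] != b[trade_id]:
            diffs = [f for f in a[trade_id] if a[trade_id][f] != b[trade_id][f]]
            breaks.append((trade_id, "mismatch: " + ", ".join(diffs)))
    return breaks

for trade_id, reason in reconcile(ledger, custodian):
    print(trade_id, reason)
```

The hard part in practice is not this comparison loop but everything around it: at millions of transactions a day, with keys and formats that differ between systems, even deciding which two records represent "the same" trade becomes non-trivial.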

Just reconciling two similar but divergent sets of information can be challenging, even for a computer. And automating the process can introduce additional complexities, particularly across divisions or organizations that do not share a consistent symbology. We need an additional logic layer to understand small but critically important data nuances, such as counterparty/customer names (think: JP Morgan, J.P. Morgan, and J.P. Morgan Chase). While Legal Entity Identifiers (LEIs) have been defined, not all firms, systems, and protocols have adopted them. And this extends past legal entities, CUSIPs, and ISINs to prices, yields, terms and conditions, and many other non-standardized data elements. This analysis needs a higher level of intelligence than a simple ASCII or Microsoft “compare” function will provide.
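To make the entity-name point concrete, here is a minimal sketch of that extra logic layer, assuming a simple two-stage approach (the threshold and the matching strategy are illustrative assumptions, not how any particular vendor does it): normalize punctuation, whitespace, and case first, then fall back to a fuzzy similarity score.

```python
# Hypothetical sketch of an extra matching layer for counterparty names.
# Naive string equality fails on "JP Morgan" vs. "J.P. Morgan"; the
# normalization step handles punctuation variants, and the fuzzy score
# (via difflib's SequenceMatcher) catches near-misses. The 0.85
# threshold is an illustrative assumption.

import re
from difflib import SequenceMatcher

def normalize(name: str) -> str:
    """Strip punctuation, collapse whitespace, lowercase."""
    return re.sub(r"\s+", " ", re.sub(r"[^\w\s]", "", name)).strip().lower()

def similar(a: str, b: str, threshold: float = 0.85) -> bool:
    na, nb = normalize(a), normalize(b)
    if na == nb:  # exact match once punctuation/case noise is removed
        return True
    return SequenceMatcher(None, na, nb).ratio() >= threshold
```

Under these assumptions, `similar("JP Morgan", "J.P. Morgan")` matches via normalization alone, while `similar("J.P. Morgan", "J.P. Morgan Chase")` falls below the threshold and would be routed to an investigator or resolved against LEI/alias reference data – exactly the kind of judgment a plain “compare” function cannot make.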

Enter: artificial intelligence and machine learning. Firms such as DUCO are offering easy-to-implement AI-based operational infrastructure that leverages machine learning strategies to help financial firms ingest, compare, and spot differences in data, and to queue tasks either to help investigators fix complex operational challenges or to automatically fix some of the more basic and clear-cut discrepancies, improving data integrity. Crucially, newer technology means that standardization of data is no longer a prerequisite, as systems’ abilities to adapt to arbitrary data improve. That makes it possible to work with legacy infrastructure without the nightmare of resource prioritization, budgeting, and implementation typically involved in authorizing technology projects in this day of heightened budgetary oversight and governance.

Given that our financial world is becoming ever more cost-constrained, interconnected, and electronic, plugging these transactional leaks, operational differences, and technological misalignments only becomes more important. While streamlining or thoroughly integrating the vast number of technology islands that comprise a modern financial markets organization would be fantastic – as would migrating to a blockchain-type environment where everyone reads off the same record and reconciliation errors are eliminated entirely – unfortunately, that day is far away. In the meantime, firms continually need to improve their operational efficiency, reduce costs, and improve their customer interaction. To do this, firms increasingly will need to leverage intelligent infrastructure to aggregate, normalize, reconcile, and fix their operational discrepancies.

Until firms either can fully integrate their heterogeneous environments or migrate completely to blockchain, financial institutions will not be able to attain the clarity needed to power the next level of algorithmic operations that Till Guldimann foresaw or the level of operational efficiency needed to power the next generation of global financial institutions. Operational AI and machine learning are the first technologies that truly are moving the industry toward this goal.