This blog post was written by Frontline’s Senior Advisor and Head of Equities, Dan Davies, in collaboration with IBM.
Talking to financial sector companies and investors about the future of their technology systems, I have been surprised to discover that one of the biggest attractions of cloud computing for them is something as simple as being able to automate software updates. One might think that the sum of all fears in terms of cyber resilience in the financial sector would be the – so far hypothetical – possibility of hackers either stealing enough to bankrupt a major institution or holding the world to ransom by disrupting the payment system. Over the last ten years, however, simple botched software updates have been a much bigger threat.
Updating mainframe systems is fraught with danger; it has caused major systems outages at more than one bank in the last decade. Even on desktop systems and applications, unpatched vulnerabilities are one of the most common avenues for attack. If incumbent financial services companies occasionally seem somewhat slow-moving when compared to disruptive fintechs, a large part of the reason why is that this is an industry where technology failures end up on the evening news.
This is why “legacy” systems exist. In most cases where they are found in financial services, the presence of antiquated architectures, COBOL and so on is not the result of laziness or conservatism. Old systems are there because their very age means that they have had a lot of things thrown at them and survived. They are often slow and inefficient, but people are reluctant to change them because they work.
But the world moves on. A robust system is only robust with respect to a particular environment, and environments change. One of the biggest attractions of cloud computing for the financial sector over the last few years has been the chance to move on and redeem decades of accumulated technical debt. Some of the biggest public cloud projects in 2020 have been designed to rationalise partially-integrated systems and help banks stay off the regulatory ‘trouble’ lists.
As services move into the cloud, the attackable surface of the financial services industry is likely to expand. The more applications that work over networks, the greater the potential for them to be disrupted by malicious actors. The vulnerability of financial firms is exacerbated by two fairly unusual properties of the industry. First, financial firms are special targets for attack, because some of their database entries literally are money – every bit as much as the notes in the vaults.
Second, equally importantly and somewhat less obviously, financial firms are unusual in that they have very few systems which are not mission-critical. Responsibility for, and access to, systems which can transfer money – often in large amounts – is not confined to senior management. In many cases the control environment has been designed around limiting how much damage any one employee can do, but in the new world a corrupt insider could be in a position to compromise the security of the whole system.
This is the kind of challenge which J.K. Galbraith described as “the art of choosing between the unpalatable and the disastrous”. Migration to the cloud brings risks, while continuing with legacy systems may mean certain eventual failure. The best that can be asked of the financial services industry is that it should try to follow sensible principles of risk management when thinking about its exposure to operational resilience risks.
Even that choice is unclear, however. A sensible principle might be to respect the benefits of diversification. It is notable that fintech firms starting de novo are generally expected to demonstrate that they are not dependent on any single vendor. Certainly, a financial system that has painfully learned the dangers of creating “too big to fail” banks ought to be extremely cautious about allowing the ecosystem to be dominated by a small number of public cloud providers. Although it has been argued that specialists in cloud computing can benefit from economies of scale and their ability to command scarce talent in cyber security and operational resilience, this looks like dangerous thinking in light of the last financial crisis. Nearly every disaster in the financial sector has started with something that was widely believed to be extremely low risk.
It’s for this reason, in my opinion, that hybrid cloud solutions are more likely to end up being the answer for the financial services industry than single-provider partnerships. Some things are big because they are complicated; others are complicated because they are big, and many financial services companies are both. A hybrid model allows diversification across platforms and, equally importantly, allows workloads to be balanced across private and public clouds depending on how mission-critical each one is and how disruptive the migration might be to clients. It even allows “legacy” systems to be kept on until their replacements have been thoroughly mapped and tested.