The writer is the US acting comptroller of the currency
In 1961, Popular Science magazine envisioned self-driving cars. The reality arrived sooner than anyone anticipated, and before safety regulators could adapt. Most automotive laws — on speed limits, signalling, drink-driving — were designed to protect against dangerous drivers, not dangerous cars. Autonomous vehicles brought new risks that legacy rules never considered. As one headline on the Wired website put it: “Who’s Regulating Self-Driving Cars? Often, No One”.
Banking is headed down the same road. And it’s being driven by the technology behind decentralised finance, or DeFi. But just as the original rules of the road protected us from other drivers, so our current bank regulations exist mainly to prevent human failings.
At the US Office of the Comptroller of the Currency, we require every bank to have officers responsible for its safety — such as a chief risk officer and a chief audit executive. We limit how much banks can lend to their directors. We even make some bank employees take a certain amount of vacation so others can sit at their desks and identify potential fraud. We call it bank regulation, but we’re really regulating bankers.
DeFi turns all this on its head. It uses blockchain technology to deliver financial services with no human intermediation. One example is creating money markets with algorithmically derived interest rates, set by supply and demand — rates that traditional banks set by committee. Other DeFi projects include decentralised exchanges that allow users to trade without brokers, and lending protocols that involve no loan officers or credit committees. Although these “self-driving banks” are new, they are not small. They are likely to be mainstream before self-driving cars start to fly.
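How might an interest rate be “algorithmically derived” from supply and demand? Protocols differ in their exact mechanics, but one common design ties the rate to utilisation: the share of supplied funds currently lent out. The sketch below is purely illustrative — the linear formula and its parameter values are assumptions for exposition, not any real protocol’s model.

```python
def borrow_rate(total_borrowed: float, total_supplied: float,
                base_rate: float = 0.02, slope: float = 0.20) -> float:
    """Derive an annual borrow rate from supply and demand.

    Utilisation (borrowed / supplied) stands in for demand: the
    scarcer the unlent funds, the higher the rate. No committee sets
    it; the number falls out of the pool's current state.
    Parameter values here are illustrative, not any protocol's.
    """
    if total_supplied == 0:
        return base_rate
    utilisation = total_borrowed / total_supplied
    return base_rate + slope * utilisation

# As borrowing demand rises relative to supply, the quoted rate
# rises with it, mechanically and transparently:
low = borrow_rate(100_000, 1_000_000)   # 10% utilisation
high = borrow_rate(900_000, 1_000_000)  # 90% utilisation
```

The point is not the particular formula but its legibility: the rule that replaces the rate-setting committee is a few lines of arithmetic anyone can inspect.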
Like autonomous vehicles, self-driving banks present both challenges and opportunities. On the opportunity side, they can allow savers to stop shopping around for the best interest rates by having algorithms do this for them. They can also end discrimination against certain borrowers by having software make credit decisions. They could even eliminate the risk of fraud or corruption by no longer being run by humans at all.
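The rate-shopping opportunity is the simplest to picture: an algorithm that routes a saver’s funds to whichever venue quotes the best rate. A toy sketch — the venue names and rates below are hypothetical:

```python
# Hypothetical deposit rates quoted by three venues at some moment.
quoted_rates = {"venue_a": 0.031, "venue_b": 0.045, "venue_c": 0.027}

def best_venue(rates: dict[str, float]) -> str:
    """Return the venue currently offering the highest deposit rate.

    The saver never shops around; the comparison runs continuously
    on their behalf.
    """
    return max(rates, key=rates.get)
```

A human saver might check rates once a year; software can re-run this comparison every block.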
Self-driving banks also present new risks, though. If technology accelerates the withdrawal of depositors’ funds, just as high-frequency trading can accelerate equity sell-offs, liquidity risk could be greater than at traditional banks. Asset volatility could be a concern for similar reasons. And managing loan collateral could be harder if humans are not involved in valuations.
There is also a risk that, in the absence of federal regulatory clarity, US states rush to fill the void and create a patchwork of inconsistent rules that impede the orderly development of a national market. This is exactly what happened with self-driving cars.
Federal regulators must therefore determine what a regulatory scheme for self-driving banks should look like. Could they ensure fair treatment of customers by such a bank? Sure. Most bias and compliance problems are failures of software: not the software we write, but the kind hard-wired into human brains. Bias can also creep into an algorithm’s rules, but there it is easier to find and root out.
Could regulators properly examine a bank that exists only as software? Yes. It may even be easier than supervising banks today. Our examiners could be retrained to read the algorithms that make deposit pricing or credit decisions and work out whether they comply with legal requirements.
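What would examining an algorithm look like in practice? Consider a deliberately transparent credit-decision rule of the kind an examiner could read line by line, together with a mechanical check that it never consumes a protected attribute. Everything here — the field names, the thresholds, the list of protected attributes — is an illustrative assumption, not any real lender’s policy or any regulator’s actual procedure.

```python
# Attributes an examiner has flagged as off-limits to the decision rule.
PROTECTED_ATTRIBUTES = {"race", "sex", "religion", "national_origin", "age"}

def credit_decision(applicant: dict) -> bool:
    """Approve if debt-to-income is under 40% and there is no recent
    default. The entire credit policy is visible in this one rule."""
    dti = applicant["monthly_debt"] / applicant["monthly_income"]
    return dti < 0.40 and not applicant["recent_default"]

def uses_protected_inputs(decision_inputs: set) -> bool:
    """A mechanical compliance check: does the rule read any field
    flagged as a protected attribute?"""
    return bool(decision_inputs & PROTECTED_ATTRIBUTES)

# The fields the rule above actually reads:
inputs_read = {"monthly_debt", "monthly_income", "recent_default"}
```

With a human loan officer, an examiner can only sample past decisions and infer the policy; with code, the policy itself is the exhibit, and checks like `uses_protected_inputs` can run over every rule a protocol deploys.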
Could regulators ensure self-driving banks properly serve their communities? Absolutely. Their greater efficiency would free significant amounts of capital that today is lost to operating costs or slowed by decisions dependent on human grey matter. Of course, algorithmic banks would change the nature of employment in the financial sector — with far fewer bank tellers and more coders. But creating better-compensated, higher-value jobs may prove a societal benefit in the long run.
Could the OCC even grant a national bank charter to open-source software that manages deposit-taking, lending, or payments, if it doesn’t have officers or directors? Not yet. Under current law, drawn up on the assumptions of the early 20th century, charters can only be issued to human beings. But those antiquated rules should be revisited, just as regulations that still mandate the use of fax machines should be.
Could we usher in a future where we eliminate error, stop discrimination, and achieve universal access for all? Optimists like me think so. How different would banking in the US be today if regulators, bankers, and policymakers were as bold as carmakers 10 years ago?