The rapidly increasing use of generative artificial intelligence by lenders is creating new risks to the financial system and could be included in the annual stress tests that assess the resilience of the sector, a deputy governor of the Bank of England has said.
“The power and use of AI is increasing rapidly and we must not be complacent,” Sarah Breeden said on Thursday, adding that while the British central bank was concerned, it was not yet prepared to alter its approach to regulating generative AI.
According to a recent BoE survey, around 75 percent of financial firms are using the rapidly evolving technology – up from 53 percent two years ago – and more than half of use cases feature some level of automated decision-making.
Generative AI systems can produce text, code and videos in seconds, and Breeden said the central bank was concerned that AI, when used for trading, could lead to “sophisticated forms of manipulation or more crowded trades in normal times, exacerbating market volatility in stress”.
The BoE could use its annual stress tests of British banks, which assess how well lenders are prepared for various crisis scenarios, “to understand how AI models used for bank or non-bank trading might interact with one another”, said Breeden, who oversees financial stability at the central bank.
The BoE is organising an “AI consortium” with private sector experts to review the risks.
Breeden warned: “If such crowded trades are financed through leverage, a shock that leads to losses in such trading strategies could be amplified into much more serious market stress through feedback loops of forced selling and adverse price movements.”
Her comments at a conference in Hong Kong follow the IMF's warning in its financial stability report last week that AI may lead to faster swings in financial markets and greater volatility under stress.
Breeden, who took up her post in November last year, said rules that hold senior bankers more accountable for the areas they oversee could be adjusted to ensure they are held accountable for decisions made autonomously by AI systems.
“In particular, we need to focus on ensuring that managers of financial firms are able to understand and manage what their AI models are doing as they evolve autonomously beneath their feet,” she said.
While most applications of AI in financial services “pose relatively low risk from a financial stability perspective”, she said, increasingly significant use cases are emerging, such as credit risk assessment and algorithmic trading.
In its survey, the central bank found that 41 percent of firms surveyed use AI to optimise internal processes, more than a quarter for customer support and at least a third to combat fraud.
The survey found that AI is being used by 16 percent of firms to assess credit risk, with another 19 percent saying they plan to do so in the next three years.
Eleven percent of the groups used the technology for algorithmic trading, with another 9 percent planning to adopt it for this work in the next three years.
According to Breeden, AI usage by financial firms was roughly evenly split between “semi-autonomous decision-making”, with some human involvement, and fully automated processes without human involvement.
“This clearly presents challenges for the management and governance of financial firms, as well as for regulators,” she said.