The Treasury Committee called on Monday for evidence from the finance industry, the AI sector, consumers and experts on how financial services can use AI while protecting consumers and safeguarding financial stability. The deadline for submissions is March 17, the committee said.
The parliamentary committee noted the Bank of England's recent finding that 75% of financial services firms already use AI, with a further 10% planning to do so within the next three years. But January's market reaction to DeepSeek's emergence underscored the volatility and rapid evolution of the AI market.
"Successive governments have made clear their intention to embed and expand the use of AI to modernize the economy," the Treasury Committee chair, Labour MP Meg Hillier, said in a statement. "My committee wants to understand what that will look like for the financial sector."
Hillier said that the City should capitalize on innovations in AI, but with safeguards.
"MPs also want to understand what safeguards may be needed to protect financial consumers, particularly vulnerable ones who may be at risk of bias," the committee said.
The committee said it will review how far AI could jeopardize financial stability, questioning whether there are increased cybersecurity risks. The inquiry will consider how AI is used in different sectors of financial services and how this is likely to change over the next 10 years.
The MPs will consider whether some parts of financial services are adopting AI more quickly than others and whether financial technology companies are better suited to it. The committee will also consider what proportion of trading is driven by algorithms.
The inquiry will consider how far AI can improve productivity in financial services. This will cover which transactions may benefit and the main barriers to adoption, as well as any areas where generative AI tools such as ChatGPT or DeepSeek, which create new content from existing data, could be used with little or no risk.
Another area of investigation will be whether AI in financial services will cause job losses and, if so, where. The committee will also scrutinize whether the U.K.'s financial sector is well-placed to use AI compared with other countries.
The risks to financial stability will be another area of focus. This includes whether AI increases cybersecurity threats, and the risks from dependence on external parties and from model complexity.
The probe will consider the risks of GenAI hallucination, where model outputs are nonsensical or false, and how far AI is linked to herding behavior, in which many investors imitate the transactions of others.
Also under consideration will be the benefits to consumers from AI in financial services, including in identifying and helping vulnerable customers. Another issue is whether AI is likely to be more biased than humans.
The inquiry will focus on the data-sharing needed to make AI more effective in financial services and any related need to change legislation, as well as data protection concerns. The safeguards needed to protect customer data and prevent bias will come under scrutiny.
The committee will consider how the government and financial services can strike the right balance between seizing the opportunities of AI and protecting both consumers and financial stability. It may also consider whether regulations need to change as a result of AI.
Another consideration will be whether the government and regulators need additional information, resources or expertise to monitor and regulate AI use in financial services.
--Additional reporting by Georgia Kromrei. Editing by Robert Rudinger.