The Reserve Bank of New Zealand has warned that the rapid integration of AI into financial services poses potential risks to financial stability. The central bank’s May 2025 Financial Stability Report portrays AI as both a transformative tool and a source of new challenges for the financial sector.

AI technologies are being adopted at an unprecedented pace, with advanced models enhancing productivity, improving modelling accuracy, bolstering risk assessment capabilities, and strengthening cyber resilience. These innovations help financial institutions detect and manage threats more effectively, potentially reshaping the efficiency and security landscape within the finance industry.

Despite these benefits, the report identifies significant vulnerabilities associated with AI deployment. It points out risks such as errors within AI systems, data privacy concerns, and possible market distortions. The growing dependency on a limited number of third-party AI providers could lead to market concentration, which may create new contagion pathways and increase the impact of cyberattacks.

“There is still considerable uncertainty around how AI will shape the financial system,” said Reserve Bank of New Zealand financial stability assessment and strategy director Kerry Watt. “While its impact could be positive, especially in enhancing resilience, it could also introduce or amplify vulnerabilities.” 

The report underscores that financial institutions must comprehend and mitigate AI-related risks as part of their regulatory obligations. It stresses the necessity for regulatory frameworks to evolve alongside technological advancements to support effective risk management. Continuous monitoring of AI technology developments is essential to ensure stability within the financial system.

The concentration of market power among a few key third-party providers introduces additional systemic vulnerabilities, especially in environments with limited competition. Inaccurate AI models can have substantial consequences: shared faults in models used by insurers, for example, could result in major under-pricing of policies and unexpected losses during claims settlements.

AI models can inherit biases present in historical data, leading to flawed outcomes such as biased loan origination standards that heighten credit risks. Additionally, there is concern about AI systems becoming misaligned when their objectives diverge from human intentions. An AI designed to maximise profits might exploit policy loopholes or engage in anti-competitive practices without adequate oversight, negatively affecting financial inclusion.

Cybersecurity threats posed by evolving generative AI technologies

AI models are vulnerable to cyberattacks in which attackers manipulate data or model parameters to extract sensitive information. As generative AI evolves, it could lower the barriers to mounting attacks and increase their sophistication, for instance through convincing impersonation. The report emphasises the need for vigilant monitoring and adaptation by regulatory bodies to guard against these evolving threats.

Meanwhile, Bank of England Governor Andrew Bailey, in a March 2025 address, highlighted AI as a potential catalyst for economic growth, comparing its transformative potential to historical innovations like electricity. During his speech at the University of Leicester, Bailey suggested that AI could boost long-term growth rates and enhance national income, particularly as the UK economy continues to experience sluggish growth.
