APRA issues caution on AI rollout in financial sector
The prudential regulator says humans must still be in control of systems using AI.
Banks, insurers, and superannuation funds have been warned to be wary in their introduction of artificial intelligence-powered systems, but the prudential regulator says it has no plans to introduce new rules around the technology.
Australian Prudential Regulation Authority member Therese McCarthy Hockey said the agency believed its current rules were appropriate to manage the rise of AI-powered systems within the financial system, but said it would weigh in on consultations for national rules.
Speaking at an Australian Finance Industry Association event on Wednesday, Ms McCarthy Hockey said APRA’s prudential framework already had regulations “to deal with generative AI for the time being”, noting that while they did not specifically refer to AI, “nor do they need to at the moment”.
“They have intentionally been designed to be high-level, principles-based and technology neutral,” she said.
“So while we are watching closely, we are confident for now that we have the tools to act, including formal enforcement powers, should it be necessary to intervene to preserve financial safety and protect the community.”
This comes after APRA offered guidance to the financial sector last August to “tread carefully” when using AI technologies, amid a rollout of new systems across insurers, banks, and super funds.
ASX-listed QBE revealed in early May it had plans to scale up the use of AI across the insurer, after making investments in generative platform Snorkel AI in January.
Ms McCarthy Hockey said AI offered huge opportunities but also risked amplifying potential downsides.
“APRA’s message to the entities we regulate is that firm board oversight, robust technology platforms and strong risk management are essential for companies that want to begin experimenting with new ways of harnessing AI,” she said.
Ms McCarthy Hockey, one of the governing members of the regulator charged with the stability of the financial system, said AI could help companies make better decisions and improve their risk management.
But she noted global regulators were growing increasingly concerned about the use of AI systems, warning such systems still demanded human oversight.
Ms McCarthy Hockey said failures around AI “would undermine public trust”, noting this presented a “risk for financial stability”.
“Companies cannot delegate full responsibility to an AI program. This becomes even more important when we consider that generative AI will involve automated decision-making,” she said.
“Entities must have, to use the industry jargon, a ‘human in the loop’: an actual person who is accountable for ensuring it operates as intended.”