The Australian Securities and Investments Commission (ASIC) has said the finance sector must balance innovation with the responsible use of AI, and reminded the industry that governance obligations don’t change with new technology.

ASIC chair Joe Longo laid out key points on the current and future state of AI regulation and governance at the UTS Human Technology Institute Shaping Our Future Symposium on Wednesday.
Longo quoted the federal government's interim report on AI regulation, stating, “Existing laws likely do not adequately prevent AI-facilitated harms before they occur, and more work is needed to ensure there is an adequate response to harms after they occur.”
He said it was “clear” that “a divide exists between our current regulatory environment and the ideal.”
While acknowledging that the current framework fails to prevent AI-facilitated harms, Longo made it “clear that any future AI regulatory changes should not be taken to mean that AI isn’t already regulated. It is.”
“Earlier this month, Microsoft’s AI Tech & Policy Lead in Asia said that ‘2024 will be the year that we start to build sensible, safe, and expandable regulation around the use of AI technologies’.”
“While I agree with the sentiment, statements like this imply that AI is some kind of ‘Wild West’, without law or regulation of any kind. Nothing could be further from the truth,” Longo said.
He said the interim report noted that “businesses and individuals who develop and use AI are already subject to various Australian laws.”
“These include laws such as those relating to privacy, online safety, corporations, intellectual property and anti-discrimination, which apply to all sectors of the economy,” Longo added.
“For example, current directors’ obligations under the Corporations Act aren’t specific duties – they’re principle-based.
“They apply broadly, and as companies increasingly deploy AI, this is something directors must pay special attention to, in terms of their directors’ duties,” he said.
He said, "The responsibility towards good governance is not changed just because the technology is new."
“Whatever may come, there’s plenty of scope right now for making the best use of our existing regulatory toolkit,” Longo said.
He explained that businesses, boards, and directors “shouldn’t allow the international discussion around AI regulation to let them think AI isn’t already regulated.”
ASIC “will continue to act, and act early, to deter bad behaviour whenever appropriate and however caused.”
“We’re willing to test the regulatory parameters where they’re unclear or where corporations seek to exploit perceived gaps.
“Among other things, that means probing the oversight, risk management, and governance arrangements entities have in place. We’re already conducting a review into the use of AI in the banking, credit, insurance, and advice sectors.
“This will give us a better understanding of the actual AI use cases being deployed and developed in the Australian market – and how they impact consumers.
“We’re testing what risks to consumers licensees are identifying from the use of AI, and how they’re mitigating against these risks,” Longo said.
More to be done
Longo said, “Much has already been made of 2024 as ‘the year AI grows up’. Phrases like ‘leaps forward’, ‘rapid progress’ and others abound, suggesting an endless stream of benefits to consumers and businesses in the wake of AI’s growth.”
Longo said the open question “is how regulation can adapt to such rapidity”, with the “clear question” being whether the existing regulatory framework is sufficient to meet that challenge.
“So, even as AI ‘leaps forward,’ at a rate never seen before, questions around transparency and explainability become paramount if we’re to protect consumers from harm – intended or not.
“One question may be, will the ‘rapid progress’ of AI carry along with it the vulnerable man or woman struggling to pay their bills in the midst of a cost-of-living crisis, whose credit score is at the whim of AI-driven credit scoring models that may be inadvertently biased?” Longo said.
Longo also pointed to the use of AI in fraud detection and prevention, posing questions about the difficulties financial consumers may face with AI-based decisions.
“The point is, there's a need for transparency and oversight to prevent unfair practices – accidental or intended. But can our current regulatory framework ensure that happens? I’m not so sure.”
In summary, Longo said ASIC will remain focused on “the safety and integrity of the financial system” and “positive outcomes for consumers and investors.”
“AI may be able to help us achieve these ends; it can ‘create new jobs, power new industries, boost productivity and benefit consumers’.
“But, as yet, no clear consensus has emerged on how best to regulate it. Business practices that deliberately or accidentally mislead and deceive consumers have existed for a long time – and are something we have a long history of dealing with.”
He said for now, “existing obligations around good governance and the provision of financial services don’t change with new technology.”
“That means all participants in the financial system have a duty to balance innovation with the responsible, safe, and ethical use of emerging technologies.
“Bridging the governance gap means strengthening our current regulatory framework where it’s good, and shoring it up where it needs further development.
"But above all, it means asking the right questions. And one question we should be asking ourselves again and again is this: “Is this enough?” Longo concluded.