The federal government is considering mandatory guardrails for AI development and deployment in high-risk settings and will create a new advisory group, according to its interim response.

The government said the protections may be established via changes to existing regulations or the creation of new AI-specific laws.
It released its interim response to the Safe and Responsible AI in Australia consultation on Wednesday, stating its approach is targeted at the use of AI in high-risk settings.
The government said while further consultation will take place, its “immediate actions” include creating a voluntary AI Safety Standard, as well as the voluntary labelling and watermarking of AI-generated materials.
It will also seek to establish an “expert advisory group” to support the development of options for mandatory guardrails.
Requirements under possible mandatory guardrails could cover the safety testing of products and transparency about AI usage.
Guardrails will also include accountability, covering “training for developers and deployers of AI systems, possible forms of certification, and clearer expectations of accountability for organisations developing, deploying and relying on AI systems.”
The Safe and Responsible AI in Australia consultation [pdf] was released in June 2023 and outlines “opportunities and challenges associated with AI”, noting benefits of the technology “are not yet fully understood” and the serious risk of deepfakes and misinformation.
Ed Husic, Minister for Industry and Science, said, “Australians understand the value of artificial intelligence, but they want to see the risks identified and tackled.”
“We have heard loud and clear that Australians want stronger guardrails to manage higher-risk AI,” he said.
“The Albanese government moved quickly to consult with the public and industry on how to do this, so we start building the trust and transparency in AI that Australians expect.
“We want safe and responsible thinking baked in early as AI is designed, developed and deployed,” Husic concluded.
Industry response
The Australian Academy of Technological Sciences and Engineering (ATSE) welcomed the release of the interim report, stating it “identifies future actions including expert testing of AI systems and public reporting on AI system usage.”
“ATSE has also urged strengthening misinformation and disinformation laws and welcomes future reforms suggested in the interim report,” the organisation said.
Dr Katherine Woodthorpe, ATSE president, said, “It is also essential to involve and consult consumer advocates when setting risk thresholds for AI applications.”
“Their inclusion will ensure a balanced and comprehensive approach to AI regulation, reflecting a wide range of societal needs and concerns.
“This is Australia’s AI moment. With the right actions, we can lead in both technological and regulatory innovation in AI, setting a global standard for responsible and effective AI development and deployment,” said Dr Woodthorpe.
Kate Pounder, Tech Council CEO, also said, “Providing clarity on the Australian government’s approach to AI regulation is good for business and consumer confidence.”
“It means businesses can better plan for building, investing in and adopting AI products and services, and the public can take confidence that AI risks are being safely managed and regulated in Australia.”
“This approach is the best way to balance innovation with the need to ensure that AI is developed safely and responsibly,” said Pounder.
Professor Geoff Webb, of the department of data science and AI in the faculty of information technology at Monash University, said the response “strikes a measured balance between controlling risks and stifling innovation”.
“An important next focus will need to be education, both of the public so that they understand how to safely interact with the new technologies and also of experts so that we can best deploy the technologies in the national interest,” Webb said.
Dr Dana McKay, senior lecturer in innovative interactive technologies at RMIT University, said “AI is affecting more of people’s lives than they realise.”
“Ultimately, AI is a tool like any other and needs principles-based legislation to ensure that it is beneficial for all of Australian society, not just those who benefit most from productivity gains or those who own the technologies,” McKay said.
“A vanilla booklet”
Despite most industry responses welcoming the interim report, David Coleman, shadow minister for communications and federal member for Banks, said the time should have been spent elsewhere.
“The government should have spent this time developing serious policies to deal with the giant issues heading our way on artificial intelligence,” Coleman said.
“Instead, we have a vanilla booklet which just kicks the can down the road.”
He said other countries “are leaving Australia behind in planning for AI.”
“In Australia, under the Albanese government, there is too much drift with a lot of talk about roundtables, guardrails and watermarks, but no actual outcomes.
“Not only do we have no outcomes – there is not even a timetable for any future outcomes,” he said.
Coleman stated Australia risks being “left standing still when it comes to the effective management of what is the next great industrial revolution.”