A new artificial intelligence expert group has been established as part of the federal government's interim response to its AI consultation.

The Artificial Intelligence Expert Group will advise the Department of Industry, Science and Resources on transparency, testing and accountability, including options for AI guardrails in high-risk settings.
The group was announced as part of the federal government's interim response to the Safe and Responsible AI in Australia consultation [pdf], released in January this year.
The interim response flagged mandatory guardrails for AI development and deployment in high-risk settings, alongside the establishment of the new group.
The group is expected to be in place until 30 June 2024; however, the government said it is considering longer-term preparations.
Minister for Industry and Science Ed Husic announced the group on Wednesday, saying it “brings the right mix of skills to steer the formation of mandatory guardrails for high-risk AI settings.”
Husic said, “With expertise in law, ethics and technology, I’m confident this group will get the balance right.”
“It’s imperative sophisticated models underpinning high-risk AI systems are transparent and well tested.
“Last month I announced our interim response to the Safe and Responsible AI consultation, and now we’ve delivered this impressive group to work on next steps,” Husic said.
The 12-member group has already begun work, holding its first meeting on Friday 2 February 2024, with members drawn from fields spanning law, ethics and technology. The members are:
- Professor Bronwyn Fox: CSIRO Chief Scientist, represents Australia on the panel overseeing the international Frontier AI State of the Science report.
- Aurélie Jacquet: A leading figure in the development of responsible artificial intelligence systems, Chair of Australia’s national AI standards committee, OECD expert on AI risks, and advisor on international AI certification initiatives.
- Dr Terri Janke: An international authority on Indigenous Cultural and Intellectual Property (ICIP).
- Angus Lang SC: A leading legal practitioner and sought-after contributor on intellectual property law and AI, addressing developments in Australia and Europe.
- Professor Simon Lucey: Director of the Australian Institute for Machine Learning at the University of Adelaide, with a background in artificial intelligence, autonomous vehicles, and research spanning computer vision, machine learning, and robotics.
- Professor Jeannie Paterson: Founding co-director of the Centre for AI and Digital Ethics, and leading contributor to legal and regulatory reform processes in Australia and internationally.
- Professor Ed Santow: Co-founder of the Human Technology Institute, leading major initiatives to promote human-centred artificial intelligence.
- Professor Nicolas Suzor: A Future Fellow at QUT, a Chief Investigator of the ARC Centre of Excellence for Automated Decision-Making and Society, and an expert on the governance of digital technologies.
- Professor Toby Walsh: Widely recognised voice on AI development, with leading roles at Data61 and UNSW, and numerous international fellowships.
- Professor Kimberlee Weatherall: A Chief Investigator with the ARC Centre of Excellence for Automated Decision-Making and Society.
- Professor Peta Wyeth: An internationally recognised researcher on human computer interaction, human-centred artificial intelligence, and design practice and management.
- Bill Simpson-Young: Co-founder and CEO of Gradient Institute, which was founded to accelerate the ethical progress of AI-based systems, and a leading technologist for the safe and responsible use of AI.
“Australia is being left behind”
Despite the announcement, the new group is already facing criticism from Coalition members, who said the government has missed the mark in establishing Australia as an AI world leader.
In a joint statement, shadow minister for the digital economy Paul Fletcher and shadow minister for communications David Coleman raised concerns over the group's limited lifespan.
Fletcher said, “After a lengthy consultation and industry submissions, the best Labor can do is announce an advisory body.”
He added, “The new advisory body is comprised of esteemed individuals but how can the public and industry expect serious public policy advice to be given to government on AI when the body is scheduled to cease on 30 June 2024?
“AI will transform our economy and it is critical that our regulations keep pace with this fast moving and evolving technology.
“But what we’re seeing from Labor is an incompetent and slow government that is allowing Australia to be left behind other countries when it comes to AI,” Fletcher said.
Fletcher said he understood “the government needs to be alive to potential risks and the implications for our regulatory frameworks” however it was important attention was focussed on “ensuring Australia does not fall behind globally in this competitive field.”
“Australia produces fewer than 200 PhDs a year in AI. This isn’t enough and we must grow our skilled AI workforce for Australia to stay competitive as AI continues to be taken up across the economy,” Fletcher said.
Coleman added, “Holding roundtables, commissioning reports and announcing advisory bodies is not the dynamic action that is required on such a critical issue.
“Australia is being left behind by other countries on developing serious policies related to AI.
“We need clear policy to ensure we capture the immense opportunities of AI, and address the risks,” Coleman concluded.