Australian government organisations are slowly coming to grips with artificial intelligence (AI) technologies and are increasing investments to improve service delivery, drive automation and efficiency, and deliver better citizen experiences. But the rise of generative AI, with models like ChatGPT, has created a new challenge and requires a new mindset.

While the use of generative AI in the commercial sector is ramping up, with the gains from high-risk innovation outweighing potential problems, the Australian public has much higher expectations of government, including ethical, unbiased behaviour, accountability, transparency and privacy. Despite strong interest from governments across Australia, adoption of generative AI remains problematic.
The work experience intern
The biggest problem for government organisations is that generative AI relies on statistical algorithms to generate a plausible, usable response to a prompt, not on actual cognitive models.
A good way to think about generative AI is as a well-read, articulate, but inexperienced intern who hasn’t signed a non-disclosure agreement. You wouldn’t give them access to any sensitive information and you’d make sure a more experienced staff member reviews their output before it’s published or used externally.
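To make the point concrete, the toy sketch below illustrates the core mechanic with made-up numbers: a model samples each next word from a probability distribution, so equally "plausible" outputs vary from run to run, and nothing in the process understands the sentence it produces.

```python
# Toy illustration only: generation is statistical, not cognitive.
import random

# Hypothetical next-word probabilities a language model might assign
# after the prompt "The department will" (numbers are illustrative).
next_word_probs = {
    "review": 0.40,
    "consider": 0.30,
    "publish": 0.20,
    "refuse": 0.10,
}

words = list(next_word_probs)
weights = list(next_word_probs.values())

# Each run can yield a different, equally "plausible" continuation.
for _ in range(3):
    print("The department will", random.choices(words, weights=weights)[0])
```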
Unfettered use of these platforms creates potential risks around disclosure, exposure of policy thinking, inadvertent bias and use of copyrighted content within government records or reports.
Department of Home Affairs secretary Mike Pezzullo recently told a Senate Estimates hearing that employees are currently barred from using ChatGPT, and recommended that a whole-of-government approach to the technology be established first.
For now, the department is exploring ChatGPT's capabilities as a tool for experimentation and learning, said chief operating officer Justine Saunders.
This is a good starting point for any government organisation considering the use of generative AI. Explore the limits of the technology in a low-impact way, where the value delivered far exceeds the residual risk.
Despite these challenges, generative AI applications open the door to innovation across government. Used in the right way, the technology works well for drafting multiple versions of the same communication in different languages or for different demographics.
It can be used to summarise long or complex cases to support improved decision-making, and to give customer service agents guidance in responding to complex queries. Generative AI can also classify and collate large volumes of unstructured text to improve the quality of data used by policymakers.
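As a minimal sketch of the classification use case, the snippet below labels free-text citizen feedback with one of a hypothetical set of categories. It assumes the `openai` Python client; the model name, categories and prompt are placeholders, and any real deployment would need the human review and privacy controls discussed above.

```python
# Minimal sketch: classify unstructured citizen feedback with a
# generative model. Categories and model name are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CATEGORIES = ["service complaint", "policy feedback", "information request"]

def classify(text: str) -> str:
    """Ask the model to label one piece of free text with a single category."""
    prompt = (
        "Classify the following citizen feedback into exactly one of these "
        f"categories: {', '.join(CATEGORIES)}.\n\n"
        f"Feedback: {text}\n\n"
        "Reply with the category name only."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name; substitute as needed
        messages=[{"role": "user", "content": prompt}],
        temperature=0,        # reduce variability for a classification task
    )
    return response.choices[0].message.content.strip()

print(classify("The online form kept timing out before I could lodge my claim."))
```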
Impact on government operations
While governments have been wrestling with what to do with generative AI internally, they have been more concerned about its overall impact on society. Italy became the first Western country to block ChatGPT entirely – a ban lifted a month later in April when new data protection features were added.
Without delving into legislators' concerns, government organisations should consider how generative AI will be used by the communities they serve and the impact that could have on their operations.
Citizens might take it upon themselves to use generative AI to translate or simplify government communications, potentially producing misleading results.
Generative AI applications can create realistic communications at volume, purporting to be from government or directed to it, potentially overwhelming an administration's capacity to respond.
Deepfake attacks depicting a political leader or executive are a real possibility. The most startling recent example came during opening remarks at a US Senate hearing on AI, when Democratic senator Richard Blumenthal played a deepfake recording of his own voice, scripted by ChatGPT and vocalised by AI voice-cloning software.
A structured approach is required
Since generative AI is not engineered to give a specific result, traditional governance and assurance practices cannot be applied in the same way.
A structured approach toward understanding, using and implementing responsible AI practices is essential for government organisations leveraging the technology.
At the beginning of June, the Australian Government released its ‘Safe and responsible AI in Australia’ discussion paper, recognising that it needs to “boost its practices to support responsible AI in recognition of public expectations that governments model best practice and lead by example.”
While considerable work has already been done on developing AI policy at federal and state levels, very little has actually been mandated to direct how government departments and agencies use the technology.
At the federal level, the Commonwealth Ombudsman has published its ‘Better Practice Guide for Automated Decision-Making’, which is advisory at best. While government agencies must comply with existing administrative law principles, privacy requirements and human rights obligations (which are expected of any system, not just AI), the additional checklist provided isn’t mandatory.
The Victorian Government has followed a similar path, requiring public sector organisations using AI systems to comply with their pre-existing privacy and data protection obligations. Currently, the only state government to mandate AI development controls is NSW, requiring its AI Assurance Framework be used for all AI-related projects, including the use of ChatGPT.
Ultimately, there are significant benefits to be gained from government use of generative AI, but before widespread adoption can occur, urgent government action is required to set mandatory requirements, policies and frameworks around its use.
Dean Lacheca is a VP analyst at Gartner focused on supporting public sector CIOs and technology leaders around the transition to digital government.