The NSW government has updated its AI risk assessment framework, requiring departments and agencies to adhere to a more rigid decision-making process based on "clearer guidance" from the state.
The new assessment framework "replaces lengthy, subjective self-assessments with a faster, standards-aligned approach", the NSW Office for AI said in a statement.
The approach brings the state’s practices into line with those of national and international bodies.
It could also fast-track approvals for low-risk deployments by giving departments and agencies an assessment process that "automatically" determines appropriate oversight levels according to risk.
NSW Minister for Customer Service and Digital Government Jihad Dib said that the framework would be essential as AI use became “business as usual” across state departments and agencies.
“AI can transform government services, but we have a responsibility to use it safely,” Dib said.
“By aligning our system with national and international standards, we’re building a trustworthy digital government for the future.”
To support the new framework, the state has released an Excel spreadsheet tool designed to help departments and agencies conduct their assessments.
The NSW government said that the new tool, co-designed with CSIRO’s Data61, would help departments rapidly sort AI deployments into risk categories that determine the level of safeguards required.
“It reduces assessment time from days to less than 30 minutes, with low-risk systems able to move through the process quickly, while higher-risk or critical systems are automatically identified for review,” the NSW Office for AI said.
By contrast, public-facing AI chatbots that collect sensitive personal information or influence advice are automatically classified as high-risk under the new system.
Such applications trigger mandatory privacy impact assessments, cyber security reviews, legal scrutiny and accessibility requirements before deployment. They are also submitted to the AI Review Committee for independent oversight.
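The framework itself is a policy document and an Excel model rather than software, but the triage rule it describes maps naturally onto a simple decision procedure. The sketch below is purely illustrative: the attribute names (`public_facing`, `collects_sensitive_data`, `influences_advice`) and the exact rule are assumptions drawn from the description above, not the NSW Office for AI's actual tool.

```python
from dataclasses import dataclass

# Illustrative sketch only: a minimal risk-tier triage in the spirit of the
# framework described in this article. All field names and the decision rule
# are hypothetical assumptions, not the NSW Office for AI's Excel model.

@dataclass
class AISystem:
    name: str
    public_facing: bool = False
    collects_sensitive_data: bool = False
    influences_advice: bool = False

def assess(system: AISystem) -> tuple[str, list[str]]:
    """Return a risk tier and the mandatory safeguards it triggers."""
    # Per the article, public-facing chatbots that collect sensitive
    # personal information or influence advice are automatically high-risk.
    high_risk = system.public_facing and (
        system.collects_sensitive_data or system.influences_advice
    )
    if high_risk:
        return "high", [
            "privacy impact assessment",
            "cyber security review",
            "legal scrutiny",
            "accessibility requirements",
            "AI Review Committee referral",
        ]
    # Low-risk systems are fast-tracked with standard oversight only.
    return "low", ["standard oversight"]

if __name__ == "__main__":
    chatbot = AISystem("benefits chatbot", public_facing=True,
                       collects_sensitive_data=True)
    tier, controls = assess(chatbot)
    print(f"{chatbot.name}: {tier}-risk -> {', '.join(controls)}")
```

In the real framework, these factors would presumably be captured as questionnaire responses in the spreadsheet model rather than code, with the risk tier computed from the answers.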
The office said that the framework would improve safety by identifying factors that warrant tighter controls, including “bias testing, accessibility checks and human-rights screening, meaning appropriate safeguards can be in place before deployment”.
iTnews understands that existing AI deployments won't need to be re-assessed under the new framework, provided their scope, risk and features haven't changed significantly since their prior evaluation.
