A "former contractor" to the NSW Reconstruction Authority uploaded an Excel spreadsheet with "over 12,000 rows of information" to ChatGPT, exposing "up to 3000" people’s data.

The victims of the breach are applicants to the Northern Rivers Resilient Homes Program, under which the government offers to buy back flood-prone homes, contribute to the cost of rebuilding, or fund resilience measures such as raising houses.
The program was set up after the devastating 2022 floods in northern NSW.
The AI-related data breach was disclosed on Monday, a public holiday in NSW, although the breach itself occurred more than six months earlier, between March 12 and 15.
The authority said in a breach notice that the upload “was not authorised”.
Analysts at Cyber Security NSW have since been poring over the Excel file, with the line-by-line analysis so far finding “up to 3000” people’s data.
“Every row is being carefully reviewed to understand what information may have been compromised,” the authority said.
“This process has been complex and time-consuming and we acknowledge that it has taken time to notify people.
“Our focus has been on ensuring we had the right information to contact every impacted person accurately and completely.”
It added: “We expect the forensic analysis to be completed within the coming days. This will give us a clearer understanding of the extent of the breach and the specific data involved.”
Slow data breach disclosure is the bane of mandatory notification schemes.
The authority said that breached data included “names and addresses, email addresses, phone numbers” and “some personal and health information”.
It said there is “no evidence that any of the uploaded data has been accessed by a third party”, though such access would be difficult to monitor, since public AI tools are uncontrolled environments.
The agency indicated it had put in place “safeguards” to prevent a recurrence.
“We’ve reviewed and strengthened our internal systems and processes and issued clear guidance to staff on the use of unauthorised AI platforms, like ChatGPT,” it said.
“Safeguards are now in place to prevent similar incidents in future.”