State of Data & AI: Security and Privacy

Proudly sponsored by AIRIA

If there was one use case that quickly established the value of generative AI, it was its adoption by cybercriminals.

Cybercriminals have not been content with simply improving the grammar and spelling in phishing emails: AI has become the basis for industrial-scale deepfake scam production that Deloitte has estimated cost US$12.3 billion ($18.7 billion) globally in 2023, rising to US$27 billion ($41.0 billion) by 2027.

Not surprisingly, many of those same capabilities are also being adopted by organisations seeking to protect their AI investments, with various AI technologies now deeply embedded in cyber security toolsets.

However, the rapid adoption of AI has also brought its own discrete challenges in ensuring that AI projects are both safe and secure, and not all of them are technical.

Unique challenges

One of the defining traits of AI projects is that they require large volumes of data for training and analysis purposes, which can lead organisations to create data repositories that are attractive targets for cyber criminals.

Another risk is the potential for sensitive data to ‘leak’ through inappropriate handling or training protocols that lead to the sharing of data across platforms.

Issues can also arise from the quality of the training data, especially when that data leads to bias or errors in the output of the AI system. This challenge is exacerbated by the ‘black box’ nature of AI systems, which can make it impossible to determine how the system arrived at a decision.

These are only the tip of the iceberg in terms of the security and privacy challenges AI presents, and while none are insurmountable, each requires time and attention to ensure that the positive outcomes of AI investments outweigh the potential downsides.

They also require significant investment in protective systems and processes, with McKinsey & Company estimating that securing AI projects has become a stand-alone cyber-market segment in its own right, valued at US$122 million ($185.4 million) today and poised to grow to US$255 million ($387.5 million) by 2027, as part of a total addressable market estimated to be worth between US$10 billion and US$15 billion.

Human considerations

While the technical requirements for securing AI systems are significant, many organisations are finding it pays to take a humanistic approach to the challenge.

At the international advertising agency TBWA, chief AI and innovation officer for Australia Lucio Ribeiro said both ethics and security had been front of mind for management from the earliest stages of their AI journey. The company has established a Collective AI framework, which provides structure across governance, risk, and transparency for AI investments and has proved vital when assessing their suitability and security.

“We’ve turned down plenty of tools that don’t meet our standards - no matter how good they look on social media,” said Ribeiro.

“A lot of what people publish online - GenAI videos and advertisements, workflow automations, image generators, or AI-generated case studies - may look impressive, but many of those tools can't be responsibly used in an enterprise setting.

“They often lack clear IP rights, data protections, or commercial terms. We simply can’t afford to experiment irresponsibly. Our clients—and we—can’t afford to be AI cowboys.”

Ribeiro said that TBWA’s quest to embrace AI safely had seen the business take steps to ensure that this requirement did not impede its ability to innovate using AI. This had led to the creation of ‘safe-to-fail’ environments with clear boundaries that supported AI-based experimentation.

“If trust is compromised, creativity is too,” Ribeiro said.

“So our principle remains: build fast, test safely, scale only what’s secure.”

Secure AI foundations

Security has been embedded as a fundamental pillar of the transformation program being undertaken at the Australian National University, where the adoption of AI technology has introduced new security dimensions.

According to the university’s director of digital infrastructure and information security, Sajid Hassan, the concentration of computational power for AI workloads has required ANU to develop secure compute environments with isolated processing capabilities for sensitive research.

“Compliance with evolving AI regulations and guidelines has become a moving target that requires constant attention,” Hassan said.

“We've had to carefully balance the openness required for research collaboration with protection of intellectual property, particularly as AI models themselves become valuable research outputs.

“These considerations have led us to develop specific governance frameworks for AI research that go beyond traditional IT security measures.”

Investments in AI have also spurred an evolution in the university’s ethical frameworks, leading to the creation of comprehensive guidelines for responsible AI use in research that address issues from bias in algorithms to transparency in AI-driven decision-making.

“We've worked to ensure alignment between AI innovation and university values, including commitments to equity, accessibility, and research integrity,” Hassan said.

“This has involved extensive consultation with researchers, ethicists, and the broader university community to develop frameworks that enable innovation while maintaining ethical standards.”

Data governance frameworks are being enhanced to address privacy concerns specific to AI applications, particularly regarding the use of personal data in research. The university has also implemented transparency requirements for AI-driven decisions that affect students or staff to ensure there is always human oversight and the ability to understand and challenge automated decisions.

“New review processes have been established for AI research with potential ethical implications, working closely with our existing human research ethics committees.” - Sajid Hassan, director of digital infrastructure and information security, Australian National University

In AI we trust

A focus on human considerations has been a defining trait of AI-driven security and privacy investments at NAB, where chief data and analytics officer Christian Nelissen described data security, privacy, and ethical use of AI as foundational to its strategy.

“In banking we sell trust,” Nelissen said. “If we lose trust, customers go elsewhere.”

In 2019 NAB developed its Data Ethics Principles and Data Ethics Framework, and in 2023 it engaged with the Australian Human Rights Commission (AHRC) on the development of an assessment tool for AI-informed decision-making systems in banking.

“We recognised that to maintain customer trust, we needed to develop a tool that could assist in looking at any human rights risks,” Nelissen said.

“This work was in addition to the development of our Data Ethics Principles and Data Ethics Framework.”

From a technical perspective, Nelissen said the bank had taken a multicloud approach for its Data Intelligence Platform, with observability, access controls, and operational safeguards embedded consistently across environments.

“This ensures that data is protected, models are monitored, and any anomalies can be quickly addressed,” Nelissen said.

“As we continue to scale AI across NAB, we remain focused on using it responsibly - guided by strong governance and ethics frameworks to ensure we deliver positive customer outcomes that maintain their trust.”
