Australian business leaders understand the commercial opportunities of AI, but many are unsure how to navigate the fast-moving landscape responsibly while meeting customer expectations.

To address this, the CSIRO has released a new report to help organisations develop responsible AI approaches grounded in the federal government’s eight AI ethics principles.
The report highlights impact assessments, data curation, fairness measures, pilot studies and organisational training as “simple but effective” approaches.
Stela Solar, Director of the National AI Centre at the CSIRO, said the report is designed to help businesses build responsible AI approaches within their organisations.
“We hear from businesses that their ability to innovate with AI is directly correlated with their ability to earn trust from the communities they serve,” she said.
“AI systems that are developed without appropriate checks and balances can have unintended consequences that can significantly damage company reputation and customer loyalty.”
The recent Australian Responsible AI Index found that despite 82 per cent of businesses believing they were practising AI responsibly, less than 24 per cent had actual measures in place to ensure they were aligned with responsible AI practices.
The report, Implementing Australia’s AI ethics principles: A selection of responsible AI practices and resources, was developed by Gradient Institute.
Bill Simpson-Young, CEO of Gradient Institute, said he hoped the report would encourage more businesses to begin the journey towards responsible AI practices.
"Even though Responsible AI practices, resources and standards will keep evolving at a fast pace, this should not distract organisations from implementing practices that are known to be effective today,” he said.
“For example, when an AI system is engaging with people, informing users of an AI’s operation builds trust and empowers them to make informed decisions. Transparency for impacted individuals could be as simple as informing the user when they are interacting with an AI system.”
While it is broadly accepted that fairness is important, what constitutes fair outcomes or fair treatment is open to interpretation and highly contextual, Simpson-Young said.
“What constitutes a fair outcome can depend on the harms and benefits of the system and how impactful they are,” he said.
“It is the role of the system owner to consult relevant affected parties, domain and legal experts and system stakeholders to determine how to contextualise fairness to their specific AI use case. The report helps organisations address these challenges.”