Artificial intelligence (AI) brings both benefits and risks to every organisation, which means business leaders need to understand the ethical questions it raises.
Catriona Wallace, director at the Gradient Institute, spoke with Digital Nation Australia about why those at board level need to understand the importance of AI governance in their organisations.
She said, “I know Accenture has done a study recently and they saw that 58 percent of CEOs do lie awake at night worrying about the unintended consequences and harms of their AI systems. We know it's top of mind for executives.
“That's why those governance models should be at the top. We also recommend that a centre of excellence around responsible AI potentially be set up as part of the overall governance, and then multi-functional teams have to be involved.”
Wallace said leaders need to be constantly auditing their AI systems.
“Whether that's an assessment or an audit team internally, who are auditing the data, the algorithms, the outputs, testing with customers, different outcomes. Or whether it's an external company, like the Gradient Institute, which I'm a director of,” she said.
Wallace explained that there are various tools and mechanisms data scientists can use to determine how their AI functions will work from an ethical perspective.
“There are a bunch of algorithmic tools, software tools, and processes for that. There is the monitoring and reporting function, so how are you getting constant feedback on the results of your AI,” she said.
Having a contestability program is imperative for any company that wants to act responsibly, Wallace said.
“If your customers have been discriminated against by your AI, they need to be able to contest it, and then you would need to be able to explain what your algorithms did. Because you and I need to be transparent, and then the organisation needs to be held accountable for any harm or damage that they've done to their clients or to the public,” she added.