As the federal government continues to take submissions on the safe and responsible use of AI, Australian business leaders are gearing up for impending AI regulation that will more than likely affect them.
Dana McKay, senior lecturer in innovative interactive technologies at RMIT's School of Computing Technologies, told Digital Nation how AI regulation could impact organisations.
McKay said at the moment there is the “threat of regulation” hanging over leaders and things are unclear.
But what is most peculiar, she said, is that there are many unregulated technologies in Australia, yet AI is the one decision-makers are worried about.
“I find it interesting because we've got lots of technologies that have created all sorts of problems that we haven't regulated,” she explained.
“This particular technology seems to have caught the imagination of the public in a way that other technologies haven't. But whatever the reason, regulation is coming.”
There is some certainty around what the regulations might mean for businesses, especially those already implementing AI-based solutions, McKay said.
“Xero, for example, is using AI to support small businesses with accounting, but I haven't seen an example of an Australian business using AI in the ways that the generative AI regulations are particularly concerned about; it's more the automated decision-making,” she said.
McKay explained that the new regulations could impact automated decision-making, which is used in critical areas such as medical diagnosis.
“There is some stuff in the new regulations around automated decision-making that will need to be teased out in the future. That could potentially have a chilling effect on some of the development in Australia,” she said.
“That could affect the likes of Xero, for example. Does making a recommendation for a small business about what to do next count as automated decision-making or not? Well, I don't know.”
McKay said the challenges of AI are not necessarily the ones constantly seen in the news, but rather issues like the cost of running AI and the complexities of discrimination.
“We're living in an interesting time. The big challenges of AI are the ones that people aren't necessarily worrying about: the environmental cost of running these models, as they do take a lot of electricity to run,” she said.
Given these complexities, McKay said this could be an opportunity to use explainable AI.
“If a decision is made that affects someone, they can say, ‘hang on a minute, that doesn't feel true for me, where did that decision come from?’ and challenge the decision if necessary. Those are things that we should be concerned about,” she added.