The European parliament has become the first in the world to create a legal framework on artificial intelligence aimed at addressing the risks of AI, leading many to wonder when, or if, Australia will follow suit.

The proposed law sorts AI applications into risk categories, including unacceptable risk and high risk, while “applications not explicitly banned or listed as high-risk are largely left unregulated.”
The rules apply to any business using or providing AI-enabled capabilities in Europe, meaning any Australian organisation wishing to do business within the 27 member states will need to comply.
While Australia already has some existing laws, the Australian Securities and Investments Commission (ASIC) has acknowledged these don’t effectively prevent AI-facilitated harms and that more work is needed.
With the EU passing the first comprehensive law on AI by a major regulator anywhere, experts told Digital Nation what this could mean for Australia.
Nader Henein, VP analyst at Gartner, said of the EU AI Act: “Australia hasn’t been at the forefront of data regulation. In the medium term, we expect at most an ‘optional’ code of conduct, following a similar approach to the US”.
“Organisations will have to catalogue and risk-assess their AI-enabled capabilities into one of the four risk tiers defined by the AI Act.
“Where things get challenging is the fact that they are not merely responsible for the AI capabilities they build, but also those they buy or more accurately ‘have bought’.
“Going back and conducting the cataloguing and risk-assessment exercises with vendors and service providers will be challenging until those third parties have sufficient transparency for their clients with ongoing insight into new AI-enabled capabilities,” Henein said.
Henein said if organisations adapted well to the introduction of the EU’s General Data Protection Regulation (GDPR) “then they will be able to short-circuit and substantially reduce the effort needed to catalogue AI-enabled capabilities.”
“Otherwise, it will be a labour-intensive and lengthy exercise”.
Non-compliance could lead to hefty fines, and Henein said “the size of the fine will push boards to mandate a very detailed and very time-consuming assessment of their use of AI.”
“Organisations need to be aware of the staggered enforcement timeline of the AI act,” Henein said.
“Unlike the GDPR, the AI Act comes into effect in stages. In six months, around November, the rules associated with prohibited AI systems come into effect.
“Six months later at the 12-month mark, the rules associated with general purpose AI and penalties come into effect.
Henein said at the “two-year mark, a year later, most of the remaining rules come into effect”. Other rules come into effect at the three-year mark.
Forrester’s principal analyst, Enza Iannopollo, said, “Adoption of the AI Act marks the beginning of a new AI era and its importance cannot be overstated.
“The EU AI Act is the world’s first and only set of binding requirements to mitigate AI risks”.
Iannopollo said “Like it or not” the regulation allows the EU to set up “a de facto standard for trustworthy AI, AI risk mitigation, and responsible AI”.
“Every other region can only play catch-up.”
“The fact that the EU brought this vote forward by a month also demonstrates that they recognise that the technology and its adoption is moving so fast that there is no time to waste, especially when there isn't an alternative framework available,” Iannopollo said.
With some of the act’s requirements due to be enforced later this year, businesses can prepare for the new AI regulations by assembling “their ‘AI compliance team’ to get started.”
“Meeting the requirements effectively will require strong collaboration among teams, from IT and data science to legal and risk management, and close support from the C-suite,” Iannopollo said.
Neil Thacker, CISO for EMEA at Netskope, said, “With the growing presence of AI in all aspects of daily life, the question of legal frameworks has become urgent and necessary to regulate its uses and protect data.”
“However, it is essential that this is done with precise and transparent legal precepts that evolve with the technologies so that we strike the right balance of enabling innovation while respecting ethical principles,” Thacker said.
Thacker added, “Informed decision-making is crucial to implementing AI that is ethical and meets the requirements of the new law.”
“Knowing and documenting the use of both machine learning and AI systems within an organisation is a simple way to understand and anticipate vulnerabilities to business-critical data while ensuring responsible use of all," Thacker said.