NAB keeps a close watch on budding Euro, Australian AI rules

Looks for future direction on ethical use of data and algorithms.

NAB is once again looking to Europe - and specifically its work on ‘trustworthy AI’ regulation - as a benchmark for future standards in the development and use of AI and machine learning systems.

Like others operating in the space, the bank has long held Europe up as the “high-water benchmark” for regulatory settings around approaches to data use, analytics and privacy.

Speaking at an EDM Council virtual conference held in mid-July, NAB’s chief privacy and data ethics officer Stephen Bolinger said prescriptive AI regulations being worked on by the European Commission (EC) were likely to shape AI and machine learning (ML) system development and use, both inside and outside of the bank, in coming years.

Rules are being drawn up to assign different risk ratings to AI technologies and systems; the rating determines the level of compliance obligations and oversight a system faces before it is considered trustworthy enough to use.

Bolinger noted that the European approach would essentially create “out-of-bounds areas for AI” while forcing all developers and users of systems to apply considerable rigour within their chosen use cases for the technology.

“If you have any contacts with Europe, even tangentially, you should be reading the proposed AI regulation from the European Commission - that is a pretty shocking document when you go through it, if you consider the level of prescription that is included in it,” Bolinger said.

“I think anyone who's distributing, using, building a component of AI, would be touched by it. 

“It's still in draft so I expect that there's going to be significant changes to that as it goes down the regulatory path, but there's no indication that that's going to go away.”

Bolinger noted that the “heavy safety approach” to AI use bore similarities to approaches used to certify the safety of medical devices, a sector he has previously worked in.

“In medical devices, the safety group is basically core to the business because if you don't get approval from your regulator that your product is safe for use, you're out of business,” Bolinger said.

“[The EC AI proposal] very much takes that very strong, heavily regulated approach to AI and it has quite broad extraterritoriality built into it. 

“So that's why I said anyone who's just tangentially looking the wrong way in Europe's direction … is going to be caught by this at some point down the road.”

Bolinger contrasted the European approach with the more principles-based approach of the Australian Human Rights Commission, which also made recommendations around the ethical use of AI this year.

“It is a much less prescriptive approach that is essentially about building up knowledge and empowering existing regulatory functions to address AI,” he said.

While neither effort would result in "imminent" regulation of AI systems and use cases, Bolinger noted that both were likely to be refined over time and were unlikely to be dropped.

“Nothing's going to happen this year on them, and probably not next year from an enforceability standpoint,” he said.

“But it is certainly setting the direction.”

That direction is likely to then influence how NAB continues to use data analytics, AI and ML tools in its business.

Bolinger kept his commentary relatively high-level when it came to specific NAB work, though he noted that some AI/ML uses were about “enabling direct commercial opportunities” while others were “enabling [NAB] to identify customers who are vulnerable, and reach out to them and see if they need assistance.”

“There's a pretty broad spectrum of potential uses of AI and machine learning for the bank and other financial services companies,” he said.

“My role is looking after the privacy and data ethics elements of it, so it's really about making sure that when we do that, we're taking into account a broad set of stakeholders that includes our customers, of course, our colleagues, but also broader communities looking at group-based harms. 

“There's an element of not wanting to slow down or inhibit the good things that we want to do with data, but making sure that we do that in a respectful way that's really going to be sustainable long-term for us.

“If we make decisions today that people are disappointed by tomorrow, people are going to stop trusting us with their data, and - as a bank - with their money, and that's not a good business proposition for us.”
