Current ethical frameworks within organisations are “meaningless” as they don't properly connect to the data, according to Dr Ian Oppermann, chief data scientist at the NSW government.

Speaking to Digital Nation, Oppermann said these ethical frameworks are a “pointless waste of time” as you cannot connect the principles to the risk frameworks and algorithms.
“People who spend time digging into ethical principles are largely wasting their time because they haven't empowered the people to work with mechanisms to connect to those principles,” he explained.
Ethical frameworks add a layer of complexity, he explained. For example, writing a list of ethics for office stationery is straightforward: people understand what not to do with a stapler.
However, confusion arises when leaders start discussing the specifics of data.
“What does this mean for someone sitting in front of a keyboard? Can I mix data set A with B, but not with C? Can I use this data for this algorithm in a way that is appropriate?” he said.
Oppermann said this means that standards, frameworks and existing tool sets are not much more than a way of making people feel good.
“They are useless from the perspective of ensuring that AI is not enacted or embodied in a harmful way,” he added.
While he finds ethical frameworks “meaningless”, he said developing a set of principles in a workplace is a “good thing to do”.
“They reflect what you're supposed to be doing as an organisation…But behind those technology-specific issues, you say, what are we trying to do? We try not to be unfair. We're trying not to act in a dangerous way. We're trying to protect safety,” Oppermann said.
“Principles are great, most organisations already have the principles sorted, if they don't, they should.”
AI is a different type of system
When an organisation implements AI, Oppermann explained, it needs to be aware that the technology is unlike anything it has used before.
“AI is different to a calculator, it's different to a computer, it's different to a stapler and a hole punch. Because it amplifies; it can take an existing system and amplify it,” he said.
“It accelerates, all digital systems can accelerate, but AI uniquely can adapt. The way you design it isn't necessarily the way it operates. That was all true pre-November 2022, and now it generates, it synthesises and it translates.”
Oppermann said when thinking about the ethics of AI, principles of ethics are true irrespective of what technology a leader is talking about.
“Do you understand that if your algorithm was trained on skin tones of people that don't include Indigenous people, it is an issue if you then try to apply it to Indigenous people?” he explained.
“It is about understanding what's different about the technology, testing your principles against that and then if you want the ethics of AI, it becomes filling in the gaps between any technology and what AI is particularly unique at.”
Discrimination in ethical frameworks
Current ethical frameworks are used to limit harm, Oppermann said, but they also contain some form of discrimination.
When implementing these frameworks, Oppermann explained, it is all about ensuring the appropriate discriminations are in place.
“Whether you're doing that using pencil and paper, using a data-driven system or using AI, your understanding of what you are doing is quite important,” he said.
When Oppermann implemented the AI strategy and AI ethics policy at the NSW government, he said they had to understand what made data-driven AI different from the current internal processing system.
“Can we get those pieces clearly understood and managed so we don't make statements like ‘do no harm’, which are good statements, but ultimately meaningless if you don't know what to do when you're sitting in front of a keyboard or what to do when you are developing your algorithm,” he explained.
“We have tried hard to connect the principles of all ethics into an understanding of what's different about AI and then map that to understand what to do with the data, the algorithm and with what the AI does.”
What should organisations do?
For organisations that want to embed ethical principles within their AI systems, Oppermann recommends implementing an assurance framework.
In March 2022, the NSW government unveiled its AI assurance framework which assists agencies to design, build and use AI-enabled products and solutions.
According to the NSW government, the framework is consistent with its AI ethics principles and is designed to help agencies identify risks that may be associated with their projects.
To support compliance with this framework, the NSW government has a review committee of professionals with backgrounds in ethics, data, standards and AI.
Oppermann explained how the committee examines a project’s implementation of AI to decide whether or not it is ethical.
“If we're looking at a project and we don't like it, we take the AI out and see whether we still don't like it. We will take the data out and see whether we still don't like it. If we are uncomfortable with what's being done, then it's how AI is being applied, which is the problem,” he explained.
Once the issue has been identified, Oppermann advises what should be put in place in terms of risk mitigations, data quality and algorithms.
“Then we get the whole project back and say these are the nuts and bolts, the hard parts, as well as the appropriate parts, as well as the design thinking parts you need to think about to go forward,” he added.