A year ago, the term generative AI was largely unknown to organisational leaders outside the data and analytics space. Nowadays it is ubiquitous, with most companies trying to find ways to embed the technology into their workflows.
In Australia, people are showing strong interest in the technology: according to Telsyte, one in five Australians aged 16 and older were already aware of ChatGPT, and one million Australians were already using it.
Those statistics were recorded in mid-January 2023, a month and a half after the release of ChatGPT.
However, with this boom in popularity comes concern from leaders and experts about how generative AI and its applications are affecting ethical frameworks.
Foad Fadaghi, principal analyst and managing director at Telsyte, said, “Generative AI has the potential to transform many industries and sectors, but it also poses some ethical and social implications that need to be carefully considered and addressed.”
Brian Burke, research vice president at Gartner, said generative AI has added some new twists to the discussion around AI and ethics.
“AI and ethics typically related to various kinds of model risks that were inherent in the models that we were building. What [generative AI] has extended to is a lot of misuse risks and usage risks,” he explained.
Burke said generative AI has added more complexity to an environment that was already very complex.
Because of these complexities, Burke said organisations that have implemented AI or are thinking about implementing AI should already have ethical frameworks embedded within the business.
“Issues around privacy, security and explainability, all of those ethical issues should have been already addressed to some extent by organisations and reinforcing that with generative AI becomes an even greater concern,” he explained.
He said generative AI has exacerbated ethical issues within the AI space.
“Specifically with ethics, there's a lot of potential harm that can also come from generative AI and we have to be conscious and aware of what those potential harms are and learn as we go to mitigate those risks,” he said.
“So we can present outcomes that are fair for all the people that are using generative AI or are impacted by generative AI.”
What’s changed?
While large language models (LLMs) like ChatGPT and other generative AI apps have caused a stir by creating organisational efficiencies, not much has changed in the way of ethics.
Dr Alex Antic, co-founder at data and analytics consultancy Two Twigs, said the biggest change is the scale of use within businesses.
He said, “Fundamentally, there’s not much that’s different from an ethical perspective with this tool and many others that wouldn’t be captured under either ethics or privacy and legal frameworks.”
Nevertheless, generative AI has reinforced the need for these ethical frameworks to exist, Antic explained.
“It’s added urgency to them, given the many issues that abound and that we’ve seen across the media,” he said.
“Ultimately, the same issues exist as with most similar AI technology, especially around perpetuating and amplifying bias, discrimination and misinformation at scale.”
He added, “A key part of that is that technology moves faster than regulation and law. There’s a real need for ethical frameworks to fill this gap and to deal with those things that don’t easily fall under legal policies and frameworks.”
What the frameworks need to resolve
If organisations are to implement ethical frameworks, those frameworks need to help resolve seven broad risk categories, according to Antic.
The first is around data usage, “including potential data privacy violations, both intentional and unintentional, that we’ve heard about and that have generated a lot of discussion in the media.
“The second is around responsibility and accountability for using and disseminating information from these technologies: who’s ultimately responsible for that and held accountable when things go wrong.”
The third is around transparency, explainability and fairness.
He said, “Are organisations transparent enough in how they use this technology? Are the organisations that create this technology transparent enough about what data the models are trained on?
“How is the information they capture through the use of these technologies being used? Is it all done in a fair way? There's a lot of risks around that.”
The next point is around ownership and control, in particular copyright and IP infringement and confidentiality breaches.
“This is something that’s become more at the forefront of the use of this technology, and we’re seeing a lot more of these issues discussed in the media,” Antic said.
The fifth is around accuracy and truth in the information that’s disseminated and generated by such AI tools. Following that are discrimination and bias, and misuse and abuse.
“Discrimination and bias is a huge one, as in most AI solutions that are deployed at scale. The last one is misuse and abuse: things like deepfakes and other nefarious uses of such technologies,” he said.
“Those are the broad categories that I see have had a big impact on how generative AI is used and how the ethical frameworks need to be constructed in a way that helps mitigate all these risks.”
Everyone’s responsibility
While generative AI-based apps have created ethical concerns, it is up to both leaders and employees to understand how to use them responsibly.
Matthew Newman, CEO of digital ethics consultancy TechInnocens, said there is a need to empower the entire organisation to make the right decisions.
“Rather than having one or two people, we need to be able to educate the whole organisation on appropriate use, be able to help them understand what the technology is and when it's right and wrong to use it,” he said.
Currently, when dealing with AI and ethics, organisations tend to have a specific task force or department make ethical decisions, but Newman said that needs to change.
“We don’t have that for internet privacy. If you publish something on the internet there are controls, but there are also expectations that you, as an individual at the company, make the right decisions,” he said.
“You don't start posting company results on Facebook when you've been educated and empowered to make those decisions. That is exactly what we need to do with our organisations.
“These aren’t decisions that are going to be made in some ivory tower by a group of elite people; these need to be made by individuals.”
Newman said the technology is too pervasive and will soon enough be everywhere: on desktops, phones and in other applications.
“We need to educate and empower people to be able to make the right decisions and have the right information to make those decisions,” he added.
Burke said additional research and understanding are required around generative AI and the potential implications of using it.
“It is not something that’s completely new; we’ve been dealing with deepfakes and deepfake detection for some time. We need to have more awareness about what these models are capable of and less trust in these models.”
Burke explained that one of the challenges of models like ChatGPT is that they “hallucinate”, meaning they can generate text that is misinformation.
“They do that very convincingly. We need better awareness about what these models are capable of, both in terms of benefits but also in terms of risks to understand what the downstream implications are of generative AI," he added.
A silver lining
The surge of generative AI and the concern around its ethics have brought some silver linings. Newman said this popularity has brought a degree of pragmatism to discussions of responsible use.
He said, “The roots of AI ethics can be fairly academic. Quite a lot of the discussions are cerebral and thought experiments about ethics that maybe didn’t resonate with a lot of the community. Our business leaders want straight answers and solutions to problems.
“This massive growth in generative AI arriving on the scene has forced the AI ethics community to be far more pragmatic and give very straightforward answers,” he said.
He gave examples of some of the questions leaders should be asking themselves about generative AI use.
“What can we do now? What should we do in the next five days? What can we do in the next two months, rather than setting up large frameworks or new boards? What do I tell my staff, what should I advise them? Should I tell them not to use it? Practical, pragmatic questions.”
He added, “That is good for the discipline. It sharpens us up, it makes us give answers rather than just considering the issues, and I find that positive.”
Antic at Two Twigs said this surge in popularity has increased awareness and understanding of AI ethics.
“It’s raising awareness and creating that urgency. More people are aware of the need for ethical AI and have realised the urgency in trying to put something in place, in terms of making sure their organisation is thinking about it and doing something practical, rather than sitting back and thinking this doesn’t affect us.
“That’s the main thing it has done, which is positive for us and society in general,” he concluded.