Last year saw generative AI take over the technology landscape, leaving no industry untouched by its influence.
One year on from organisations kicking off pilots and experiments to integrate the emerging technology, three experts provided insights on the year that was and what lies in store for 2024.
Arun Chandrasekaran, VP analyst for generative AI, cloud and innovation at Gartner, told Digital Nation the technology shows no signs of deceleration.
“I don't see any signs of slowing down or dying down. I think the way I would frame it is 2023 was the year of planning and trying to figure out what these technologies mean for most consumers and enterprise customers,” Chandrasekaran said.
He added 2024 is likely going to be the year of execution, stating “2023 was a year of intense innovation”.
Chandrasekaran said that in 2024, multiple industries will feel the impact of generative AI.
“There are several industries where the degree of impact will be significant, for example, gaming or media and entertainment.
“The other industry where the impact is going to be significant is technology product and services companies.”
Chandrasekaran explained that because technology product and services companies can now build products faster and deliver services less expensively, “you could now see the rise of new product companies or new services companies that are building capabilities on top of AI.”
“The industries where the impact of AI will be slower are the more regulated industries, for example healthcare or even the public sector.”
This year will see generative AI advance its capabilities in the form of multimodal technology, Chandrasekaran said.
“The first trend that we are seeing is AI models becoming more multimodal. For example, when ChatGPT came, it was primarily a text-based chatbot and then it started becoming good with code generation.
“Now, if you're a ChatGPT Plus customer or if you're using one of their business products, you could generate images using that.”
Chandrasekaran said the second trend to watch over the year is the slimming down of models.
He added Gartner has noted that “regulations are going to emerge more strongly in 2024.”
“The final trend I would argue would be more autonomous agents and more action-oriented AI.
“Imagine a scenario where you're able to give it a higher-level intent. I truly believe that autonomous AI and more agent or action-oriented AI is a definite possibility,” Chandrasekaran said.
As generative AI becomes further embedded into organisations, the risks will also need to be tackled, he said.
“More and more application vendors will start embedding generative AI as a feature or functionality into their workflow and business applications.
“I also believe that organisations will start thinking about risk more holistically, because today when they think about risk of AI and generative AI in particular, they're mostly concerned about data privacy risk or hallucination risks.
“But beyond that, we should also think about bias and toxicity and harmful information that these systems can generate.”
Chandrasekaran said that given the advancements in AI, the ability to create deep fakes and misinformation at scale is enormous.
“This is a humongous problem to solve because if you can't figure this out, it's going to erode some of the trust that exists in AI.”
Still asking the same questions
Nikita Atkins, artificial intelligence lead at NCS Group, said one year on, people are still grappling with the best practice for generative AI implementation.
“What's happened...over the last 12 months is people still asking the same questions,” Atkins said.
Atkins said the next 12 months will see the technology shift out of the proof-of-concept stage and into deployment.
“We're going to move to see a lot more examples of people implementing them. I think we'll also start to see even more negative sides of things.
“The other interesting thing that I think will happen over the next 12 months is that, just like social media, the AI companies are going to have to put things in place to help manage and police misinformation and deep fakes.”
Much like Chandrasekaran, Atkins also pointed to the rise of multimodal generative AI.
“What will end up happening is that the natural evolution of generative AI is that you will have an AI agent that controls multiple models.
“So, you will ask one person or one agent a question and it understands enough that it goes, okay, for me to answer that I need to go and get some text on a large language model like ChatGPT, I need to produce some images. I might need to create some slides.
“I'll ask the models everything it needs to ask, then take its outputs and combine that all together into a single output,” Atkins said.
According to Atkins, 2024 will also see the private and public sectors face challenges from generative AI usage.
“There's so many obstacles for generative AI in the next 12 months, all around copyright [and] intellectual property,” he said.
Atkins said the other challenge is around transparency, and there will be “an interesting transition” where questions revolve around AI trustworthiness.
“New testing frameworks, new accuracy frameworks around large language models and generative AI will be needed, and the flip side of that is combating misinformation and deep fakes,” Atkins said.
He said the next year might see the Australian government require disclaimer statements that make clear where content is created with AI.
Reaching full potential
Huw Kwon, who leads Accenture’s Centre of Advanced AI in Australia, told Digital Nation that 2023 was the year many organisations began to see the technology's full potential.
“Before 2023, whenever the topic AI came up, not many people understood it. It was primarily because most of it was numbers.
“But the difference with ChatGPT or LLMs, large language models, is the language, and OpenAI created that curtain-ripping moment where this AI product talked back.”
With telcos, retailers and financial institutions already ahead with AI work, Kwon said 2024 will see these industries and many more look to set strong foundations and frameworks.
“This year I see most clients will be on this journey. However, there are requirements that we shouldn't forget.
“We should also think about why finance institutes, retailers or telecoms are faster than others. It's not just because they had an earlier start but because the nature of their business was data,” Kwon said.
He said if organisations don’t have core foundational requirements of AI, future work is “not going to scale”.
“This is a universal challenge that we will face for most of the clients who missed or who rushed too fast without considering the data foundation, platform foundation and the cloud foundation that you need,” Kwon said.
“On top of all this, the responsible use of AI is absolute. It has to be pervasive inside out. There cannot be any exception. Because one data breach is a disaster.”
Kwon said this year businesses should tackle generative AI through a measured approach.
“It's always a matter of balancing, and especially with a technology so young, we should be brave and, within the right framework, maximise an experience within your business context now, rather than later.
“Because you cannot buy the experience and contextualise LLM usage in your company one or two years down the line.”
Echoing the other experts, Kwon said multimodal technology is set to advance in 2024.
“Generative AI, it's not just LLMs, it's expanding beyond text generation; voice, image and video generation are also areas to look out for.
“But not just all those individually: in a multimodal LLM, this can all be combined, text and video.”