When generative AI sprang into prominence in late 2022, it ushered in not just one, but a plethora of new tools, all vying for attention.
While OpenAI’s ChatGPT may have won the initial battle for mindshare, it wasn’t long before tools such as Google’s Gemini, Meta’s Llama, Anthropic’s Claude, and others began finding favour.
For would-be users, this sudden emergence of a range of newly minted tools – each with their own specific capabilities – presented additional complications when weighing up what to use as the basis for AI projects.
According to Gartner senior director analyst Tony Zhang, there are numerous criteria by which organisations should assess an AI model.
“Number one will be the capability from those models,” Zhang said.
“The second thing they need to really consider is the cost.”
This first question relates to specific capabilities, such as advanced reasoning or greater nuance in particular tasks such as coding or summarisation. Some models are also stronger at handling different media formats or providing knowledge within specific professional domains.
This second question can be especially important, as costs can vary dramatically based on use cases. While prices for AI API tokens can seem low, at less than US$1 (A$1.50) per million tokens for basic or smaller models (with each token consisting of approximately four characters), not all tokens are equal, and they can be consumed incredibly quickly if usage is not carefully managed.
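As a rough illustration of how quickly token costs compound, the Python sketch below estimates a monthly API bill from assumed usage figures. The prices, request volumes and four-characters-per-token rule of thumb are illustrative assumptions rather than any vendor’s actual rates.

```python
# Back-of-the-envelope estimate of monthly LLM API spend.
# All prices and usage figures below are illustrative assumptions,
# not any vendor's actual rates.

CHARS_PER_TOKEN = 4  # common rule of thumb: ~4 characters per token


def estimate_monthly_cost(
    requests_per_day: int,
    avg_prompt_chars: int,
    avg_response_chars: int,
    input_price_per_million: float,   # USD per 1M input tokens
    output_price_per_million: float,  # USD per 1M output tokens
) -> float:
    """Return an approximate monthly cost in US dollars."""
    prompt_tokens = avg_prompt_chars / CHARS_PER_TOKEN
    response_tokens = avg_response_chars / CHARS_PER_TOKEN
    daily_cost = requests_per_day * (
        prompt_tokens / 1_000_000 * input_price_per_million
        + response_tokens / 1_000_000 * output_price_per_million
    )
    return daily_cost * 30


# Example: a modest internal chatbot can still add up.
cost = estimate_monthly_cost(
    requests_per_day=10_000,
    avg_prompt_chars=4_000,    # ~1,000 tokens of context per request
    avg_response_chars=2_000,  # ~500 tokens of output
    input_price_per_million=0.50,
    output_price_per_million=1.50,
)
print(f"Estimated monthly spend: US${cost:,.2f}")
```

Even at sub-dollar per-million-token pricing, ten thousand modest requests a day works out to hundreds of dollars a month in this scenario, and longer prompts or premium models multiply that figure quickly.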
While enterprise licences for models have become common, these also require a degree of sophistication to ensure that organisations scope their requirements appropriately and do not overspend relative to their actual needs.
Beyond capability and cost, another consideration when choosing a model is whether to run a commercial or an open-source model. The latter option provides more flexibility in terms of where the model can be run, including on private clouds or on-premises infrastructure, and may be preferable for organisations seeking a higher degree of security or privacy than commercial cloud-based solutions can offer, but it brings associated complications in setup, maintenance and fine-tuning.
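For organisations weighing the open-source route, the minimal Python sketch below shows what self-hosting can look like using the open-source Hugging Face transformers library. The model identifier and prompt are arbitrary examples, and a real deployment would add hardware sizing, quantisation and serving infrastructure on top.

```python
# Minimal sketch of running an open-weights model on your own infrastructure
# with the Hugging Face transformers library. The model below is an arbitrary
# example of a small open-weights model; substitute one that suits your
# hardware and use case. After the one-off weights download, inference runs
# entirely locally, so prompt data never leaves your environment.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # example open-weights model
    device_map="auto",  # use a GPU if one is available, otherwise CPU
)

result = generator(
    "Summarise the trade-offs of hosting AI models on-premises:",
    max_new_tokens=200,
)
print(result[0]["generated_text"])
```

The trade-off is visible even at this scale: there are no per-token fees and data stays in-house, but the organisation now owns model selection, updates, fine-tuning and the hardware the model runs on.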
Finally, buyers must consider how their chosen model will interact with their existing application suite and specifically ask whether the native integrations offered by some models offer a compelling advantage.
These choices have been on the mind of Glenn Mason, head of technology, AI, and data at the childhood cancer family support charity Redkite.
“We've been deliberately evaluating AI platforms to ensure they are secure, ethical, and truly integrate with our technology stack,” Mason said.
“It's been about asking tough questions early. Can this platform scale, does it align with our values, and will it improve outcomes for those families battling childhood cancer who we support?”
Redkite’s AI program is linked to three goals: managing external interaction; driving analysis to understand trends in client needs and impact; and piloting workplace AI to give Redkite’s team more time for meaningful work by taking mundane tasks off their plates.
For Mason, the overriding consideration has been to lay the appropriate groundwork to achieve these goals, rather than to chase the hype.
“ROI at a not-for-profit isn't just about cost savings,” Mason said.
“We've built a benefits framework that measures financial impact alongside improvements in client experience and team wellbeing. That broader view helps us invest where AI genuinely makes a difference.”
One group that has thrown itself enthusiastically into exploring the power of LLMs has been Australia’s banks, which represent prime candidates for AI usage due to the massive amounts of data they hold and their ongoing desire to provide improved services with greater efficiency.
At NAB, chief data and analytics officer Christian Nelissen said the bank’s intention was to make data ‘like electricity’ in its ability to power simple, safe, and more personalised banking experiences for customers.
He said that for the past three years NAB had been building a powerful, modern data platform that used data to deliver more relevant, personalised interactions, and that harnessing generative AI was another way the bank could simplify processes so bankers could spend more time with customers.
Before it could achieve that goal, however, the bank had some cleaning up to do.
“When I joined NAB, our environment was very complex,” Nelissen said.
“We had two legacy data warehouses and an existing data lake. One of our biggest shifts was moving away from these legacy platforms such as Teradata, which was a 26-year-old platform.”
A key foundation of NAB’s data and AI strategy was an earlier decision to take a cloud-first approach to new systems adoption. Today more than 85 per cent of NAB’s applications run in the cloud.
“By consolidating and modernising our data infrastructure into a cloud-native architecture, we’ve unlocked greater agility, scalability, and real-time decision making,” Nelissen said.
“This has improved resilience, streamlined data access, reduced latency, and enabled us to respond to customer needs with speed and precision.”
Another key decision had been the adoption of Databricks as NAB’s strategic data platform, which was dubbed ‘Ada’ in honour of pioneering female computer programmer Ada Lovelace. Nelissen said this AWS-based cloud-native data lakehouse had become NAB’s single platform for all data, tooling, and consumption needs.
“In three years, we’ve quickly gone from establishing foundations and building pipelines to a data platform that now manages all AI and BI data workloads, powers business critical use cases and is delivering at massive scale and in near real-time,” Nelissen said.
Today Ada contains 1.1 petabytes of live data, processes one million queries a month, and is the foundation for NAB’s so-called ‘Customer Brain’ and GenAI platform.
“Our Customer Brain utilises AI and machine learning across 1,200 adaptive models, leading to more than 50 million customer interactions every month,” Nelissen said.
“In three years, the Brain now has 220 different ‘actions’ it can prompt a customer about, depending on their situation and needs. These actions can be service related, engagement related or sales and product related. Around 60 per cent of actions are service and engagement focused because being helpful builds trust and trust builds loyalty.
“This is delivering tailored, timely and relevant experiences at significant scale for our 10 million customers.”
Nelissen said the cloud-first strategy had also been important for enabling NAB to quickly adopt new AI platforms as they emerged.
“It’s a huge thing for us as it is for everybody,” Nelissen said.
“Every time you think you’ve got a handle on this, something new emerges with it and it kicks up another gear. Given the significant maturity and scale of our Data Platform, we have been able to quickly and safely build and scale our GenAI platform on that foundation. It has strict data and AI guardrails, supported by our Data Ethics Framework, to help ideas go from test-and-learn to scale and delivering value.”
We are proud to present this year's State of Data & AI champions and to showcase the work they do.