Meaningful generative AI rewards may only materialise when companies dive deeper into organisational structure and rewire how they work, according to the latest McKinsey Quarterly.

In its March quarterly [pdf], the strategy and management consulting firm said 2024 is “shaping up to be the year for gen AI to prove its value”, but that competitive advantage comes from “rewiring the business for distributed digital and AI innovation”.
“It’s time for a generative AI reset,” the report stated.
“The initial enthusiasm and flurry of activity in 2023 is giving way to second thoughts and recalibrations as companies realise that capturing generative AI’s enormous potential value is harder than expected.”
The report added, “Companies looking to score early wins with generative AI should move quickly.”
“But those hoping that generative AI offers a shortcut past the tough—and necessary—organisational surgery are likely to meet with disappointing results.
“Launching pilots is (relatively) easy; getting pilots to scale and create meaningful value is hard because they require a broad set of changes to the way work gets done,” it stated.
The report recommended businesses “figure out where generative AI copilots can give you a real competitive advantage”.
The report also recommended that upskilling the current pool of talent play a part, and that businesses be “clear about the gen-AI specific skills you need”.
“By now, most companies have a decent understanding of the technical gen AI skills they need, such as model fine-tuning, vector database administration, prompt engineering, and context engineering. In many cases, these are skills that you can train your existing workforce to develop.”
However, this can take a few months to reach “a decent level of competence”.
“While current upskilling is largely based on a ‘learn on the job’ approach, we see a rapid market emerging for people who have learned these skills over the past year.”
The market is moving quickly, with the report noting that GitHub has seen developers working on generative AI projects in large numbers and that 65,000 public generative AI projects were created on its platform in 2023.
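To make two of the skills named above more concrete for readers, here is a minimal, self-contained sketch of retrieval plus prompt engineering in Python. It is purely illustrative and not drawn from the report: the bag-of-words “embedding”, the in-memory index and the prompt wording are all assumptions standing in for a real embedding model and vector database.

```python
"""Toy sketch of two skills the report names: vector search and prompt engineering.

Everything here is illustrative. A real system would use a proper embedding
model and a managed vector database rather than this bag-of-words index.
"""
from collections import Counter
import math


def embed(text: str) -> Counter:
    # Stand-in embedding: a bag-of-words term-frequency vector.
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    shared = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in shared)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]


def build_prompt(query: str, documents: list[str]) -> str:
    # Prompt engineering: give the model grounded context and explicit instructions.
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return (
        "Answer the question using only the context below. "
        "If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )


if __name__ == "__main__":
    docs = [
        "Invoices are approved by the finance team within five working days.",
        "Expense claims over 500 euros require a director's signature.",
        "The cafeteria is open from 8am to 3pm on weekdays.",
    ]
    print(build_prompt("Who approves invoices?", docs))
```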
The report also said companies should look to “form a centralised team to establish standards that enable responsible scaling”.
“To ensure that all parts of the business can scale gen AI capabilities, centralising competencies is a natural first move.
“The critical focus for this central team will be to develop and put in place protocols and standards to support scale, ensuring that teams can access models while also minimising risk and containing costs.
“The team’s work could include, for example, procuring models and prescribing ways to access them, developing standards for data readiness, setting up approved prompt libraries, and allocating resources,” the report said.
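The report does not spell out what an “approved prompt library” would look like in practice. One minimal way a central team might implement the idea is sketched below; the schema (name, version, owner, approval flag) and the example entry are assumptions for illustration, not anything McKinsey prescribes.

```python
"""Minimal sketch of an 'approved prompt library' a central team might maintain.

The fields used here are assumptions for illustration; the report only names
the concept.
"""
from dataclasses import dataclass


@dataclass(frozen=True)
class PromptTemplate:
    name: str
    version: int
    owner: str
    approved: bool
    template: str  # Python format string with named placeholders.

    def render(self, **kwargs: str) -> str:
        if not self.approved:
            raise PermissionError(f"Prompt '{self.name}' v{self.version} is not approved for use.")
        return self.template.format(**kwargs)


# A central team would curate entries like these and expose them to product teams.
LIBRARY: dict[str, PromptTemplate] = {
    "summarise_ticket": PromptTemplate(
        name="summarise_ticket",
        version=3,
        owner="platform-ai-team",
        approved=True,
        template="Summarise the following support ticket in three bullet points:\n{ticket_text}",
    ),
}


def get_prompt(name: str) -> PromptTemplate:
    # Teams fetch prompts by name so updates and audits happen in one place.
    return LIBRARY[name]


if __name__ == "__main__":
    prompt = get_prompt("summarise_ticket").render(ticket_text="Customer cannot reset password.")
    print(prompt)
```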
Businesses should also “set up the technology architecture to scale”.
“Building a generative AI model is often relatively straightforward, but making it fully operational at scale is a different matter entirely.
“We’ve seen engineers build a basic chatbot in a week, but releasing a stable, accurate, and compliant version that scales can take four months.”
The report said for this reason, “the actual model costs may be less than 10 to 15 percent of the total costs of the solution.”
It also stated that “building for scale doesn’t mean building a new technology architecture” but rather “focusing on a few core decisions that simplify and speed up processes without breaking the bank.”
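The gap between a week-long prototype and a scaled service can be made concrete with a small sketch. The code below wraps a placeholder model call in a thin service layer; the retries, logging and crude usage counter stand in for the kind of production concerns the report says consume most of the effort. The stubbed call_model() function and the retry settings are assumptions, not the report’s architecture.

```python
"""Sketch of why a 'basic chatbot' and a production service differ.

The prototype is the single call_model() call; everything around it, the
retries, logging and simple usage accounting, stands in for the scaling work
the report describes. call_model() is a stub, not a real API.
"""
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("chat-service")


def call_model(prompt: str) -> str:
    # Placeholder for a real model endpoint; assumed for illustration only.
    return f"(model response to: {prompt[:40]}...)"


class ChatService:
    def __init__(self, max_retries: int = 3, backoff_seconds: float = 1.0):
        self.max_retries = max_retries
        self.backoff_seconds = backoff_seconds
        self.requests_served = 0  # crude stand-in for cost/usage tracking

    def reply(self, prompt: str) -> str:
        for attempt in range(1, self.max_retries + 1):
            try:
                answer = call_model(prompt)
                self.requests_served += 1
                log.info("served request %d (attempt %d)", self.requests_served, attempt)
                return answer
            except Exception:  # a real service would catch specific transport errors
                log.warning("model call failed, attempt %d of %d", attempt, self.max_retries)
                time.sleep(self.backoff_seconds * attempt)
        raise RuntimeError("model unavailable after retries")


if __name__ == "__main__":
    print(ChatService().reply("How do I reset my password?"))
```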
“Ensure data quality and focus on unstructured data to fuel your models,” was another key point the report highlighted.
It found that “the ability of a business to generate and scale value from gen AI models will depend on how well it takes advantage of its data.
“As with technology, targeted upgrades to existing data architecture are needed to maximize the future strategic benefits of generative AI.”
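One routine piece of the data-readiness work the report alludes to is preparing unstructured documents so models can use them. The chunking sketch below is an illustrative assumption about what that might involve; the chunk sizes and metadata fields are not taken from the report.

```python
"""Illustrative sketch of preparing unstructured text for a gen AI pipeline.

Splitting documents into overlapping chunks with source metadata is one common
'data readiness' step; the sizes and fields here are assumptions.
"""


def chunk_document(doc_id: str, text: str, chunk_size: int = 400, overlap: int = 50) -> list[dict]:
    """Split a document into overlapping character windows, keeping provenance."""
    chunks = []
    step = chunk_size - overlap
    for start in range(0, max(len(text), 1), step):
        piece = text[start:start + chunk_size].strip()
        if piece:
            chunks.append({
                "doc_id": doc_id,   # lets answers be traced back to a source
                "offset": start,    # position within the original document
                "text": piece,      # the chunk a model or index will consume
            })
    return chunks


if __name__ == "__main__":
    sample = "Policy: refunds are issued within 14 days of a valid return request. " * 20
    for chunk in chunk_document("refund-policy.txt", sample)[:2]:
        print(chunk["doc_id"], chunk["offset"], chunk["text"][:60])
```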
Its last point highlighted building “trust and reusability to drive adoption and scale”, noting that “people have concerns about generative AI”.
Given those concerns, the report said, “the bar on explaining how these tools work is much higher than for most solutions.”
“People who use the tools want to know how they work, not just what they do. It’s important to invest extra time and money to build trust by ensuring model accuracy and making it easy to check answers,” the findings said.
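The report’s point about making it easy to check answers can be illustrated with a small sketch that returns source references alongside every response, so users can verify an answer themselves. The response structure and the stubbed matching logic below are assumptions for illustration, not a method from the report.

```python
"""Sketch of 'making it easy to check answers': return sources with every response.

The answer_with_sources() logic is a stub; the point is only that users see
which documents an answer draws on, so they can check it themselves.
"""
from dataclasses import dataclass, field


@dataclass
class AnsweredQuestion:
    question: str
    answer: str
    sources: list[str] = field(default_factory=list)  # documents the answer relies on

    def for_display(self) -> str:
        cited = "\n".join(f"  [{i + 1}] {s}" for i, s in enumerate(self.sources))
        return f"Q: {self.question}\nA: {self.answer}\nCheck against:\n{cited}"


def answer_with_sources(question: str, documents: dict[str, str]) -> AnsweredQuestion:
    # Naive keyword matching stands in for a real retrieval-plus-generation pipeline.
    relevant = [name for name, text in documents.items()
                if any(word in text.lower() for word in question.lower().split())]
    answer = "See the cited documents." if relevant else "No supporting document found."
    return AnsweredQuestion(question=question, answer=answer, sources=relevant)


if __name__ == "__main__":
    docs = {
        "leave-policy.md": "Employees accrue 25 days of annual leave per year.",
        "expenses.md": "Claims are reimbursed within two pay cycles.",
    }
    print(answer_with_sources("How many days of annual leave do employees get?", docs).for_display())
```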