Over the past two years, most large organisations have proven AI can work. The challenge for 2026 is different: can it work responsibly and economically across the enterprise?
Boards are asking tougher questions. What does each decision cost to serve? How is the system governed? Where, beyond customer service, marketing, and software engineering, will AI actually drive impact? Against that backdrop, four trends will define the year ahead.
Inference costs will bite
As AI moves from training to everyday use, the conversation shifts to inference—the compute and energy needed to run models millions of times a day. This makes AI not just a technical endeavour but an operating expense conversation for CFOs. Global data centre CAPEX could reach the multi-trillion-dollar range by 2030, with a significant share tied to AI infrastructure.
The implication is pragmatic: treat cost-to-serve per AI decision as a first-class metric. Select models purpose-built for the task. Consider where workloads run, because the price of a millisecond includes the price of a kilowatt. Industries operating at real-time scale, such as payments, retail media, streaming, and industrial IoT, will feel this early. Winners will design for efficiency from day one rather than retrofitting after costs balloon.
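To make the metric concrete, a minimal sketch of cost-to-serve per decision is below. The prices, overhead figure and token counts are illustrative assumptions, not benchmarks or any vendor's actual rates.

    # Illustrative sketch: cost-to-serve per AI decision.
    # All prices and volumes are assumed placeholder values.

    PRICE_PER_1K_INPUT_TOKENS = 0.0005   # USD, assumed model pricing
    PRICE_PER_1K_OUTPUT_TOKENS = 0.0015  # USD, assumed model pricing
    INFRA_OVERHEAD_PER_CALL = 0.0002     # USD, amortised serving cost (assumed)

    def cost_per_decision(input_tokens: int, output_tokens: int) -> float:
        """Estimate the cost in USD of a single AI decision."""
        token_cost = (input_tokens / 1000) * PRICE_PER_1K_INPUT_TOKENS \
                   + (output_tokens / 1000) * PRICE_PER_1K_OUTPUT_TOKENS
        return token_cost + INFRA_OVERHEAD_PER_CALL

    # A fraud check that reads 1,200 tokens and writes 150:
    unit_cost = cost_per_decision(1200, 150)
    print(f"{unit_cost:.6f} USD per decision")
    # At 10 million decisions a day, fractions of a cent compound:
    print(f"{unit_cost * 10_000_000:,.0f} USD per day")

Tracked alongside the revenue or risk avoided per decision, that number tells a CFO whether a workload belongs on a frontier model, a smaller purpose-built one, or neither.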
Trust gaps threaten scale
Responsible AI now sits at the Peak of Inflated Expectations on Gartner’s 2025 AI Hype Cycle. Attention is high and promises are ambitious, yet many programmes remain early in execution.
Payments offers a clear signal of what's at stake. In HCLTech's recent research, 99% of leaders say they already use AI in operations, yet 47% report they don't have AI policies in place. That trust gap slows scaling and invites risk.
Across large enterprises, the response is moving into operations, though maturity is uneven. Committees and decision rights are being established. Bias, robustness and privacy are expressed in business terms non-technical leaders can review. Human oversight is reserved for sensitive decisions. Testing and monitoring are embedded in playbooks so issues surface early.
In regulated sectors like financial services and healthcare, evidence is the basis for scale. Validation records, live monitoring and auditable data lineage are becoming standard—organisations that make this evidence available progress more quickly from pilots to deployment.
Agentic AI won't deliver overnight
Agentic AI, meaning systems that autonomously plan, act and interact across tools, is drawing strong interest. But Gartner notes that buyers are rethinking vendor approaches as they scale, signalling caution on timelines.
The payments industry reflects this reality. Many leaders are exploring autonomous journeys, yet only 18% say they're fully prepared to deploy secure agent-pay solutions. Ambition is running ahead of foundations.
HCLTech's research finds that companies with product-aligned structures are four times more likely to maximise returns from AI investments. These firms align outcomes, funding and ownership, making value visible across the lifecycle.
In practice, enterprises are starting with contained workflows—customer onboarding checks, billing reconciliation, IT service requests—moving forward in steps as controls mature. ROI arrives over time, confidence grows with evidence, and risk stays manageable because scope is limited.
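One way to picture "contained, with controls" is an agent that proposes actions while a policy gate decides what executes autonomously. A hedged sketch, with the action names and risk threshold invented purely for illustration:

    # Illustrative sketch of a contained agentic workflow: the agent
    # proposes, a policy gate decides whether to auto-execute or route
    # to a human. Action names and thresholds are invented.

    AUTO_APPROVED_ACTIONS = {"send_reminder", "reconcile_invoice"}
    HUMAN_REVIEW_THRESHOLD = 500.0  # USD, assumed risk limit

    def gate(action: str, amount: float) -> str:
        """Decide how a proposed agent action is handled."""
        if action in AUTO_APPROVED_ACTIONS and amount <= HUMAN_REVIEW_THRESHOLD:
            return "execute"          # low-risk, in-scope: run autonomously
        return "queue_for_human"      # everything else waits for sign-off

    print(gate("reconcile_invoice", 120.0))   # execute
    print(gate("issue_refund", 120.0))        # queue_for_human
    print(gate("reconcile_invoice", 9000.0))  # queue_for_human

Widening the approved set and raising the threshold as evidence accumulates is how scope expands without outrunning the controls.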
Industry-specific applications will dominate
AI is moving to industry-specific applications. By 2027, organisations are expected to use small, task-specific models three times more than general-purpose LLMs.
In manufacturing, vision models at the edge support real-time quality inspection and predictive maintenance. In healthcare, AI assists clinical review by summarising records and producing auditable recommendations. In energy and utilities, GenAI improves field operations through faster knowledge retrieval and procedure guidance.
For senior decision-makers, the priority is a managed catalogue of small, reusable models—each with an owner, defined risk profile and clear KPI—rather than a single monolithic system.
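In practice, such a catalogue can be as simple as a registry in which no model ships without those three fields. A sketch, with entries and field values invented for illustration:

    # Illustrative sketch of a managed model catalogue. Field values
    # are invented; the point is that owner, risk profile and KPI are
    # mandatory metadata, not afterthoughts.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class CatalogueEntry:
        model_id: str
        owner: str         # accountable business owner
        risk_profile: str  # e.g. "low", "medium", "high"
        kpi: str           # the metric this model is judged on

    CATALOGUE = [
        CatalogueEntry("defect-vision-v3", "plant-ops", "medium",
                       "false-negative rate < 0.5%"),
        CatalogueEntry("claims-summariser-v1", "clinical-review", "high",
                       "reviewer time saved per case"),
    ]

    for entry in CATALOGUE:
        print(f"{entry.model_id}: owner={entry.owner}, "
              f"risk={entry.risk_profile}, kpi='{entry.kpi}'")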
The capability behind the technology
Sovereign IT capability is also emerging as a critical enabler of responsible AI at scale, as organisations move AI into production and confront rising inference costs and trust expectations. According to Sonia Eland, Executive Vice President and Country Head for Australia and New Zealand at HCLTech, there is growing interest in sovereign infrastructure and neocloud environments designed to host large language models. “These platforms give enterprises greater control over performance, pricing and data residency, particularly in regulated or geopolitically sensitive markets,” she said.
Rather than relying solely on hyperscale public clouds, Eland expects hybrid AI strategies to become the norm in 2026, combining global platforms with sovereign stacks closer to customers and regulators. This approach helps organisations manage cost-to-serve more predictably while providing greater assurance over how data is governed and where it lives, factors that are increasingly central to board-level AI decisions.
As AI adoption accelerates, the human capability behind it becomes just as important as the technology itself, Eland said.
"The next wave of AI value will depend on a workforce that blends technical excellence with creativity, cultural understanding and diverse lived experience," Eland said. "Creative problem-solving, multidisciplinary thinking and diverse teams are increasingly critical as enterprises shift from experiments to accountable AI in production.
"AI needs human imagination as much as engineering. When we broaden who builds and supervises these systems, we make them safer, smarter and more reflective of the communities they serve."
Under her leadership, HCLTech has expanded local programmes that open pathways into AI for early-career talent and under-represented communities, reflecting her view that inclusive skilling is essential to building trustworthy, future-fit systems.
The bottom line
If 2024–2025 was about proving AI can work, 2026 is about proving it can work responsibly and economically. That means accepting inference cost realities, making trust an explicit promise, resisting the urge to sprint into agentic everything without an operating model, and pivoting toward industry-specific applications where outcomes matter more than hype.
Leaders who take that path will turn AI from experiments into durable advantage.
