UII UPDATE 459 | JANUARY 2026

Intelligence Update

Enterprises begin to demand returns from generative AI

OPINION

2025 was a big year for generative AI. Unprecedented investment and intense hype pushed generative AI into the core of CIO strategies, onto product roadmaps and into boardrooms. Consumers and businesses have experimented aggressively, and AI features have become commonplace in both consumer and enterprise applications.

But this hype will plateau. Generative AI has enormous promise, but it will not be integrated into every application, process or device, while some of its limitations — and its immaturity — will be exposed in 2026. Consequently, Uptime Intelligence expects that enterprises will become more selective; they will prioritize use cases that deliver measurable value and shut down those that do not. Some AI applications will prove genuinely useful and become part of day-to-day life for consumers and employees, but others will quietly disappear before costs spiral out of control. In 2026, expect fewer headline-grabbing experiments and more mature, grounded implementations as enterprises seek better returns on their AI investments.

Gen AI takes off

Traditional AI systems — such as predictive analytics, computer vision and optimization models — are already embedded in systems operating across industries, including financial services, manufacturing, healthcare, retail, energy and telecommunications. Enterprises will continue to expand these deployments to improve efficiency, resiliency and decision-making, and successes in this area will no doubt be confused, sometimes deliberately, with newer forms of AI.

Most of the recent hype around AI refers to generative AI: models that can create new content — such as text, code, images, audio or video — based on patterns learned from large datasets, rather than simply analyzing or classifying existing data. In contrast to traditional AI, it is newer, more compute-intensive and less predictable (see AI in data: sorting reality from hallucination). Its novelty has fueled media attention, market speculation and investment.

As a result, over the past 12 months, generative AI has experienced a dramatic surge in both investment and adoption. Global venture capital flows into generative AI startups nearly doubled in 2024, compared with 2023, rising from roughly $24 billion to around $45 billion, according to US law firm Mintz.

Enterprise adoption is increasing, too. McKinsey’s State of AI in 2025 survey reports that 79% of organizations are using generative AI in at least one business function, up from 71% a year earlier. Around 62% of respondents are also experimenting with AI agents — tools that can autonomously plan, decide and act across multiple steps to complete tasks on users’ behalf.

Value is unclear

Despite rapid adoption, a growing sense of disillusionment regarding generative AI is emerging across enterprises, driven by a widening gap between experimentation and realized value. A global PwC survey found that 56% of chief executives have seen no significant financial benefit from AI, and only 12% reported both increased revenues and reduced costs. Furthermore, a report from MIT’s NANDA initiative found that 95% of AI projects fail, with only around 5% achieving measurable, scaled impact.

While AI leaders argue the technology will eventually deliver productivity gains, most organizations remain stuck in pilot phases and struggle to scale deployments that produce measurable business impact.

At the same time, enterprises have underestimated the operational complexity of deploying generative AI at scale. Data quality, governance, security and compliance remain persistent barriers, while hallucinations and regulatory exposure continue to limit deployment in highly regulated sectors such as financial services, healthcare and the public sector. Many organizations are running large portfolios of AI pilots in parallel but struggle to industrialize more than a small fraction of them, reinforcing the perception that AI is powerful but difficult, expensive and uneven in its returns.

The market is complex

Meanwhile, vendor and platform volatility continues — model architectures, pricing structures and deployment patterns are shifting so quickly that organizations are unsure where to place their bets.

An interdependent ecosystem has emerged among neoclouds, hyperscalers, chip manufacturers and colocation operators. Neoclouds act as brokers of GPU capacity, enabling hyperscalers and enterprises to scale AI workloads without heavy upfront capital investments (see Neoclouds are AI shock-absorbers). Hyperscalers, such as Google and Microsoft, provide financial commitments and long-term leases to neoclouds, which help these smaller players secure funding for data center expansion, often through colocation providers.

This self-reinforcing ecosystem has begun to concern investors, who increasingly question the long-term financial stability of neoclouds and the broader technology industry.

Mixed opportunities

For all the attention given to large infrastructure, cloud and software companies, we expect the focus to shift more toward applications over the next few years (see AI uncertainty: bubble trouble brewing). Generative AI is working and delivering value across a narrow but expanding set of use cases, especially when augmenting existing human processes rather than attempting full automation.

The strongest and most repeatable gains are emerging in customer service, software development, document processing and knowledge retrieval. Studies from MIT and Stanford show that AI copilots can increase call center productivity by up to 35%, depending on agent experience, while developers using coding assistants complete tasks around 55% faster, according to GitHub research. Summarization and drafting tools can reduce task time by up to 80%.

A recent Uptime Intelligence report (see How financial institutions are using AI and cloud today) explains how financial services companies are deploying AI. Some organizations have prioritized specific use cases to deliver fast, measurable returns with low deployment risk. These include:

  • Morgan Stanley's internal AI assistant — used by 16,000 advisors to search a library of roughly 100,000 documents; reduced information retrieval time from minutes to seconds and achieved 98% adoption within months.
  • Walmart’s support team assistant — delivered double-digit productivity gains for support staff.
  • Siemens’ maintenance assistant — reduced fault diagnosis times and maintenance cycles for engineers.
  • Mayo Clinic’s note-taker — captures notes for clinicians, enabling them to focus on diagnosis.

These use cases generally augment employees rather than directly replacing them.

By contrast, generative AI is struggling to deliver consistent value in areas that require complex reasoning, high accuracy, deep domain expertise or complete process replacement. Large-scale automation of professional judgment — such as legal work, medical diagnosis, financial decision-making or fully autonomous operations — remains costly, risky and highly regulated. Attempts to replace humans, rather than support them, have led to several well-publicized failures:

  • Air Canada chatbot — invented a refund policy that did not exist; a tribunal held the airline liable for the misinformation.
  • Klarna’s customer service bot — replaced 700 support agents with AI, but later rehired staff when customer experience suffered.
  • Epic sepsis model — high false-positive rates and missed diagnoses led to a reduction in its automated use.
  • Sports Illustrated AI journalist — generated articles under fictitious personas, triggering advertiser backlash, which halted the practice.

No enterprise is likely to reject generative AI outright; instead, adoption is uneven, marked by a mix of clear successes and inevitable failures.

Enterprises will be more cautious

Enterprises are experimenting with and deploying generative AI where it makes financial sense — factoring in deployment costs, expected return on investment, and exposure to regulatory, legal and economic risks.

In 2026, enterprises will increasingly frame generative AI adoption as a financial decision:

  • They will scrutinize whether a generative AI project will deliver a return before making an investment.
  • They will abandon projects sooner if spending is unlikely to generate rapid returns.
  • They will concentrate on smaller, lower-cost projects proven to deliver value quickly.

In practice, this means focusing on use cases that deliver the most substantial and repeatable gains, such as customer service, software development, document processing and knowledge retrieval. These projects often do not require large budgets or a reinvention of the organization or its structure. Instead, much of the required capability can be delivered out of the box using pretrained foundation models and technologies such as retrieval-augmented generation, which combines existing data sources with generative AI.
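The sketch below illustrates the retrieval-augmented generation pattern described above in plain Python. It is a minimal illustration rather than a production design: the word-overlap scoring stands in for a real embedding model, generate_answer() is a placeholder for a call to whichever pretrained foundation model an enterprise has licensed, and the sample documents and function names are invented for the example.

    # Minimal retrieval-augmented generation (RAG) sketch in plain Python.
    # Word-overlap scoring stands in for a real embedding model, and
    # generate_answer() is a placeholder for a foundation model API call.

    from collections import Counter

    # Existing enterprise data sources, e.g. policy or knowledge-base documents.
    DOCUMENTS = [
        "Refunds are issued within 14 days of an approved return request.",
        "Support tickets are escalated to tier two after 24 hours without resolution.",
        "Travel expenses above 500 USD require written manager approval.",
    ]

    def score(query: str, document: str) -> float:
        """Crude relevance score: word overlap between query and document."""
        q_words = Counter(query.lower().split())
        d_words = Counter(document.lower().split())
        return sum((q_words & d_words).values())

    def retrieve(query: str, k: int = 2) -> list[str]:
        """Return the k documents most relevant to the query."""
        ranked = sorted(DOCUMENTS, key=lambda d: score(query, d), reverse=True)
        return ranked[:k]

    def build_prompt(query: str, context: list[str]) -> str:
        """Ground the model's answer in the retrieved enterprise data."""
        joined = "\n".join(f"- {c}" for c in context)
        return (
            "Answer the question using only the context below.\n"
            f"Context:\n{joined}\n"
            f"Question: {query}\n"
        )

    def generate_answer(prompt: str) -> str:
        """Placeholder for a call to a pretrained foundation model."""
        return f"[model response to prompt of {len(prompt)} characters]"

    if __name__ == "__main__":
        question = "How quickly are refunds issued?"
        prompt = build_prompt(question, retrieve(question))
        print(generate_answer(prompt))

In a real deployment, the retrieval step would typically query a vector index of embedded enterprise documents; the point of the pattern is that answers are grounded in approved internal sources rather than in the model's training data, which is why such projects can be built on off-the-shelf models with comparatively small budgets.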

Crucially, these projects are designed to aid employees rather than replace them entirely. Keeping employees in the loop reduces the risk of errors reaching customers. In principle, employees remain accountable and responsible for verifying AI output, and they can draw on broader context, such as a customer's specific circumstances, in a way that AI models struggle to match.

Most enterprises will adopt generative AI for these use cases, and it will become increasingly embedded in the day-to-day lives of both employees and consumers.

Some speculative bets are likely

Although enterprises are becoming more realistic about generative AI, they will still take calculated risks where appropriate. Ultimately, these decisions come down to return on investment and risk tolerance. For example, building dedicated infrastructure and training AI models to discover new drugs and treatments does not guarantee a return. However, the payoff from such a gamble can be substantial if breakthroughs are achieved.

Enterprises will continue to invest in more complex generative AI projects where:

  • The cost of inaction is a loss of competitive advantage or a threat to the organization.
  • The project is likely to create a strong differentiator or a new revenue-generating product tied to proprietary data or intellectual property owned by the enterprise.
  • The potential payoff from a reasonable gamble is significant if the project is successful.

In 2026, companies will demand more evidence before making such investments.

Impact on the ecosystem

Generative AI growth will continue. While enterprises may act more cautiously, they will still spend on AI in areas where it makes sense.

However, much of this continued growth will be driven by hyperscalers and model developers. These players sit firmly in the speculative-bet camp, locked in a race to develop the most advanced models and make them available to the broadest range of markets. They also risk losing out if they do not keep pace with competitors. For these organizations, new AI products represent new market opportunities and deeper integration with existing products and services. New research might also lead to products yet to be roadmapped. Google, Microsoft, Meta and Amazon are expected to spend $350 billion collectively in 2025, according to investment firm KKR — around 30% more than the previous year. Much of this investment will go toward data center builds, colocation facilities, and server and networking infrastructure.

Generative AI is here to stay, but its trajectory will become more realistic. Growth and interest will continue, but perhaps not as much as markets expect.

As a result, a market correction is possible as enterprises and investors reassess the true economic potential of AI. Overheated valuations, spending plans and expectations may retreat toward realistic fundamentals. Any market correction is likely to reprice assets and redirect capital flows rather than reverse the underlying adoption of AI in enterprise workflows.

However, uncertainty remains over which players will dominate this maturing landscape, and not all will survive such a correction. Chipmakers, hyperscalers, neoclouds, model developers and AI startups increasingly compete, collaborate and depend on each other in complex ways, making the fallout of a correction difficult to predict. There will likely be consolidation, bankruptcies and emerging dominant players.

Such a correction, however, will not derail AI. Instead, it will clarify where value exists and move the industry away from hype toward practical, tangible impact. Generative AI will not be less important — only less remarkable — a sign that it is maturing.

The Uptime Intelligence View

High valuations, rapidly forming partnerships, cutting-edge innovation and a changing geopolitical landscape have made generative AI's real and economic value difficult to assess. At the core of this complex landscape, however, are AI applications. Evidence suggests that enterprises and hyperscalers will continue to develop AI applications, sustaining demand for AI infrastructure and data centers. At some point, though, enterprises will need to work out what they should do with AI rather than simply what they can do. When this shift happens, AI demand may not grow as quickly as markets expect.

About the Author

Dr. Owen Rogers

Dr. Owen Rogers is Uptime Institute's Senior Research Director of Cloud Computing. He has been analyzing the economics of cloud computing for over a decade as a chartered engineer, product manager and industry analyst, and covers all areas of cloud, including AI, FinOps, sustainability, hybrid infrastructure and quantum computing.
