UII UPDATE 415 | OCTOBER 2025

Intelligence Update

Neoclouds: AI’s shock absorbers

With booming interest in generative AI, a new breed of cloud providers — known as neoclouds — has emerged. Neoclouds are the broker-dealers of AI infrastructure, supplying access to large clusters of GPUs to enterprises and hyperscalers looking to build ever-larger AI models.

Neoclouds are primarily commodity players (see What is the outlook for GPU cloud providers?). Although they may offer some additional services, they predominantly match supply with demand, pricing their offerings inexpensively to attract volume and focusing on a single commodity — GPUs. This price-driven focus contrasts with hyperscale cloud providers, who offer a wide range of services, adding value through the management of infrastructure, platforms and software.

Neoclouds face substantial risks: their business model relies on continued high demand for AI infrastructure, combined with limited supply. But with risk comes opportunity: the neocloud segment has reportedly grown at an 85% compound annual growth rate since 2021, across an estimated 190 companies.
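The reported 85% compound annual growth rate compounds quickly. The sketch below shows what that rate implies for the segment's relative size over a few years; the rate is taken from the figure above, while the normalized base value is an arbitrary assumption for illustration.

```python
# Illustrative compounding of the reported 85% CAGR for the neocloud segment.
# The normalized 2021 base of 1.0 is an assumption for illustration only.
def compound_growth(base: float, rate: float, years: int) -> float:
    """Return the value of `base` after `years` of compound growth at `rate`."""
    return base * (1 + rate) ** years

base = 1.0   # normalized segment size in 2021 (assumed)
rate = 0.85  # 85% CAGR, as reported
for year in range(1, 5):
    print(f"After {year} year(s): {compound_growth(base, rate, year):.1f}x")
# At 85% CAGR, the segment grows roughly 11.7x in four years.
```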

If demand for AI continues to grow, neoclouds are well placed to deliver and capitalize on it. Chip manufacturers, equity funds and even hyperscalers themselves are placing strategic bets on neoclouds, often accompanied by much media fanfare. But the relationships between these entities can be confusing and counterintuitive. For now, the existence of neoclouds benefits Nvidia, colocation providers, hyperscale cloud operators and investors. Their future, however, is uncertain given that, unlike other infrastructure businesses, their main purpose is to distribute risk throughout the ecosystem.

Nvidia broadens its horizons

Recent reports have identified three unnamed customers accounting for 53% of Nvidia’s data center revenue — widely assumed to be major hyperscalers: AWS, Google and Microsoft.

Hyperscalers generate a significant volume of sales for Nvidia, but there is a downside to relying on such a limited pool of buyers. With so few customers competing for Nvidia products, each hyperscaler retains substantial bargaining power, which erodes Nvidia's gross margins. Nvidia and other chip manufacturers are keen to work with neoclouds because neoclouds expand the buyer pool and tend to pay closer to list prices, thereby propping up margins.

By expanding its customer base, Nvidia also dilutes its concentration risk: the changing buying patterns of a single customer are less likely to impact overall revenue and profitability.

Finally, neoclouds act as distributors, providing GPU access to a broader enterprise audience, some of whom might otherwise be unable to afford GPUs through capital purchases or leases from traditional cloud vendors.

Nvidia has a vested interest in the success of neoclouds. To help them compete with hyperscalers, Nvidia often provides preferential access to hardware, which can accelerate lead times for capacity allocation. Nvidia has also invested directly in neoclouds: according to reports, the firm owns a 7% stake in CoreWeave and a small stake in Nebius. Meanwhile, AMD has formed a strategic partnership with Crusoe.

Neoclouds need funds

Counterintuitively, hyperscalers have also invested in neoclouds. In August, Google made a $3.2 billion financial commitment to TeraWulf, increasing its equity stake to 14%; and in September, Microsoft signed a $17–19 billion, five-year GPU lease with Nebius.

This investment is worthwhile because hyperscalers need more GPU capacity. By renting capacity from neoclouds, cloud providers can temporarily increase their own capacity to meet demand. More importantly, neoclouds enable hyperscalers to purchase capacity as operating expenditure, rather than through a substantial capital investment. Consuming GPUs as a service avoids locking in capital assets and helps maintain reasonable debt ratios. It also allows cloud providers to scale their spending up or down in response to demand, rather than incurring depreciation and stranded assets.
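The capex-versus-opex reasoning above can be sketched numerically. This is a minimal illustration, not actual hyperscaler or neocloud pricing: all figures are assumptions chosen only to show how leased capacity lets spending track demand while a capital purchase is committed regardless.

```python
# Minimal sketch of capex vs opex under falling demand.
# All prices and the demand profile are illustrative assumptions.
def opex_cost(annual_lease: float, utilization_by_year: list[float]) -> float:
    # Leased capacity can be scaled down as demand falls, so spending
    # tracks utilization rather than the installed asset.
    return sum(annual_lease * u for u in utilization_by_year)

purchase_price = 10.0           # assumed capital cost (arbitrary units)
annual_lease = 3.0              # assumed annual lease for equivalent capacity
demand = [1.0, 1.0, 0.5, 0.25]  # demand falls after year two (assumed)

capex = purchase_price          # committed up front, regardless of demand
opex = opex_cost(annual_lease, demand)
print(f"capex committed: {capex}, opex paid: {opex}")
# With this assumed demand profile, opex totals 8.25 vs 10.0 committed capital.
```

If demand had stayed at 1.0 every year, the lease route would have cost more (12.0 vs 10.0) — the premium buys flexibility, which is exactly what the hyperscalers are paying neoclouds for.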

What neoclouds lack is collateral for the rapid expansion of their data centers. Financial commitments from hyperscalers lend credibility to the neoclouds' plans, helping them to raise capital. Through these commitments, hyperscalers agree that if the neocloud defaults or cannot sell excess capacity, the hyperscaler will buy back the GPUs or absorb the lease obligations. These backstop commitments typically remain off-balance-sheet as contingent liabilities unless activated — and give lenders confidence that a financial return is likely, even if a neocloud fails.

Nvidia, too, is effectively underwriting neocloud risk. In September 2025, CoreWeave and Nvidia signed a $6.3 billion cloud capacity agreement, under which Nvidia is obligated to purchase any unsold compute capacity from CoreWeave until 2032.

With the backing of Nvidia and hyperscalers, neoclouds are better equipped to attract capital investment. GPU infrastructure is a substantial investment, costing significantly more than traditional, CPU-focused hardware. For instance, an Nvidia DGX server with eight GPUs can cost more than $400,000, and hundreds of these servers may be needed to train a large language model (LLM).
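The scale of that investment is easy to check with back-of-envelope arithmetic. The per-server price comes from the figure above; the server count of 500 is an assumption chosen only to illustrate "hundreds of servers".

```python
# Back-of-envelope cluster cost, using the article's ~$400,000 figure
# for an eight-GPU Nvidia DGX server. The server count is assumed.
server_price = 400_000   # USD per DGX server (from the text)
gpus_per_server = 8
servers = 500            # "hundreds of servers" — assumed count

total_cost = servers * server_price
total_gpus = servers * gpus_per_server
print(f"{total_gpus:,} GPUs, ~${total_cost / 1e6:,.0f}M in servers alone")
# 4,000 GPUs, ~$200M — before data center, networking and power costs
```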

Some capital is also used for data center builds. Data centers built specifically for high-density AI workloads are two to three times more expensive per MW than data centers built for traditional IT.

With demand high, neoclouds must also use colocation data centers to meet their primary objective of matching supply to demand. As brokers, the value they provide is in giving buyers access to capacity whenever they need it. Without colocation data centers, they are unlikely to scale quickly enough to fulfil their purpose.

According to Uptime Institute's 2025 global survey of data center managers, 62% of colocation providers host hyperscalers, with 44% of facility space allocated to their operations. Neoclouds, too, are likely to emerge as significant colocation users.

CoreWeave, the world’s largest neocloud, has historically relied on colocation providers CyrusOne and Switch for rapid deployments before expanding its own campuses. In May 2025, colocation provider Aligned Data Centers announced a partnership that will integrate Lambda's AI cloud platform with Aligned's new data center in Plano (Texas, US).

Figure 1 shows how different entities in the Nvidia GPU ecosystem work symbiotically to derive mutual benefits.

Figure 1 Relationship benefits in the Nvidia GPU ecosystem


In both neoclouds and hyperscaler clouds, customers are primarily enterprises and AI startups that are training AI models. However, the use case is different. Hyperscaler cloud providers offer a range of services that are used in combination to deliver interactive applications. In contrast, neoclouds are primarily used for training AI models. Neoclouds' revenue is driven solely by the delivery of AI capacity; hyperscalers are not so reliant on AI, having diverse revenue streams through multiple products and services.

Where lies the risk?

If demand for GPU capacity continues to grow, the investments and relationships between these parties make financial sense. In this case, enterprises will want to train larger and more advanced AI models, which demand new hardware — and more of it. Neoclouds will consume increasing amounts of colocation space to situate more infrastructure, funded by eager lenders. As neoclouds compete for precious GPUs, they bid for more infrastructure, consequently increasing Nvidia's margins — and generating further demand and investment. The hyperscalers' and Nvidia's commitments are fully utilized, thereby generating value. No commitments are written off as losses.

For some neoclouds, there are signs that this level of demand will continue. In March 2025, CoreWeave announced an agreement to supply AI infrastructure to OpenAI, with a deal value of up to $11.9 billion over a five-year period. In June 2025, Applied Digital signed two 15-year leases with CoreWeave, valued at $7 billion over the lease period.

Such demand guarantees are not available for all neoclouds. Demand can dip for several reasons:

  • Difficulty in identifying or quantifying the value of training or retraining ever-larger AI models.
  • The adoption of cheaper hyperscaler ASICs over GPUs.
  • Increased competition from established vendors, such as AMD and Intel, or from new market entrants.
  • A glut of GPU capacity accumulated by cloud providers, exceeding market demand.

If demand for AI infrastructure drops, hyperscalers have some protection in that they can consume less from their neocloud partners and let leases expire. Hyperscalers offer additional services; this diverse portfolio of revenue streams could help mitigate a decline in demand for AI. Their unused financial commitments would be categorized as economic losses, but these losses would be lower than a substantial investment in data centers. Similarly, Nvidia's commitments are not large enough to present an existential threat to its business.

Neoclouds and their investors bear the bulk of the risk. They are the ones who have invested capital in data centers and AI infrastructure. Their revenue depends on AI; there are no diverse revenue streams to reduce the impact of declining demand. If a neocloud relies on a single anchor client, any reduction in spend from that client would result in a meaningful drop in revenue.

In response, colocation providers might see leases cancelled or demand for services decline. But it is unlikely that colocation data centers will face an existential threat to their business — they are generally not the ones speculating on the continued growth of AI: they are the enablers, not the risk bearers.

The Uptime Intelligence View

To fund rapid expansion, hyperscalers and hardware manufacturers are financially supporting and underwriting the development of neoclouds. To meet rapid growth in AI, many neoclouds use colocation data centers to host their infrastructure. These neoclouds operate in a manner more resembling broker-dealers than cloud providers. They match supply to demand, but they also absorb the brunt of capacity planning shocks. If demand drops, hyperscalers, colocation providers and chip manufacturers will not necessarily be the parties with the loss-generating assets; it will be the neoclouds. Some neoclouds will thrive, others will be acquired and merged — and some will not survive.

About the Author

Dr. Owen Rogers

Dr. Owen Rogers is Uptime Institute’s Senior Research Director of Cloud Computing. Dr. Rogers has been analyzing the economics of cloud for over a decade as a chartered engineer, product manager and industry analyst. Rogers covers all areas of cloud, including AI, FinOps, sustainability, hybrid infrastructure and quantum computing.
