UII UPDATE 408 | SEPTEMBER 2025

Intelligence Update

Guiding questions for liquid-cooled colocation planning

Many organizations deploy their liquid-cooled high-density IT in colocation data centers, and this demand is likely to grow over the next few years. Providers offering “liquid-ready” colocation capacity often need to accommodate a varying (or uncertain) set of tenant cooling equipment and design specifications, as well as changes to operational procedures. The corresponding service level agreements (SLAs) also need to be tailored to individual tenant needs.

Uptime Intelligence has received multiple inquiries requesting guidance on liquid-cooled colocation planning and has held in-depth briefings with colocation providers that currently accommodate direct liquid cooling (DLC). Colocation providers agree on the central questions that need to be addressed to support DLC in their facilities economically. However, concrete or quantitative answers to these questions often remain confidential, either because of tenant agreements or to safeguard competitive advantage.

This report summarizes guiding questions about DLC design and operations that Uptime Intelligence has discussed with current and future providers of liquid-cooled colocation space. Uptime Intelligence will continue to research colocation providers’ successful approaches to hosting DLC in their facilities. As DLC proliferates and operators accumulate experience, Uptime Intelligence anticipates that more operators may be willing to share their guidance.

Defining design objectives

The following questions summarize some of the most important considerations in designing colocation space for liquid-cooled IT. Colocation providers expect tenants to provide detailed specifications for their IT hardware’s thermal management needs. Otherwise, colocation providers planning liquid-ready space without a tenant already secured may choose to proceed based on informed estimates.

What is the thermal design rack density for the tenant IT?

Rack density is a key parameter in the design of data center power and cooling infrastructure. Providers building liquid-ready capacity today are often doing so for a single large tenant based on detailed (and often homogenous) specifications. Colocation providers planning to build liquid-ready capacity to be shared between smaller tenants (especially if building speculatively for future tenants) often need to allow for some uncertainty in terms of future tenant requirements.

Can the facility operate economically if tenants’ needs differ from the design rack density? How can the facility design accommodate future changes to infrastructure with minimal cost and disruption?

Tenants’ IT may call for rack density to be higher or lower than the design density, which represents a financial risk to the colocation operator. Capital expenditure can be challenging to recoup within the facility’s lifespan if cooling or power capacity is stranded. Flexibility, including the ability to make changes to infrastructure, is a significant advantage.

What type (or types) of DLC do the tenants require the facility to support?

The current slate of commercially available DLC equipment is technically fragmented. Multiple cooling technologies are available from various vendors. Many of these vendors offer equipment that is functionally similar but not interoperable. Although water cold plate systems continue to dominate DLC deployments, their coolant quality and pressure requirements, for example, often differ in seemingly minor yet technically important ways, preventing them from sharing the same coolant network. Standardization efforts are ongoing, and many IT vendors offer design guidance to support their own products, but heterogeneity (and resulting uncertainty) is likely to persist for several more years.

Will the facility need to accommodate DLC equipment of multiple types, or from multiple manufacturers, using the same space and cooling infrastructure?

Optimal facility water temperature setpoints and flow rates depend on the particular IT and DLC equipment they will support. When chilled water infrastructure supports a varied mix of DLC equipment, colocation providers may need to follow the most restrictive requirements (e.g., lowest required supply temperature, highest required flow rate), possibly to the detriment of energy efficiency.
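
A minimal sketch of this “most restrictive requirement” logic, using hypothetical tenant specifications (the values below are illustrative, not drawn from this report):

    # Minimal sketch, with hypothetical tenant specifications: when one
    # chilled water loop serves a mix of DLC equipment, facility setpoints
    # default to the most restrictive requirement across tenants.
    tenant_requirements = [
        # (max facility water supply temperature in deg C, required flow in L/min per kW)
        (32.0, 1.0),
        (28.0, 1.2),
        (30.0, 1.5),
    ]
    supply_setpoint_c = min(t for t, _ in tenant_requirements)  # lowest allowed supply temperature
    flow_l_min_per_kw = max(f for _, f in tenant_requirements)  # highest required flow rate
    print(supply_setpoint_c, flow_l_min_per_kw)                 # 28.0 1.5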

Will the facility need to install and maintain some DLC components, such as a coolant distribution unit (CDU)?

Some tenants may request that their colocation provider install CDUs in the gallery and supply IT coolant into the data hall rather than facility water. Without a committed tenant, colocation providers may wish to design their facilities to accommodate this equipment later. This places responsibility for the CDU and IT coolant on the colocation staff (implications for operations and SLAs are discussed in the following section).

What are the key design conditions for the DLC system?

Nominal CDU capacity ratings are typically not directly comparable because they are based on different assumptions. Standards bodies are working to issue guidance on design assumptions and test conditions. The nominal capacity of a DLC system is defined by several factors:

  • Approach temperature (delta).
  • Absolute operating temperatures.
  • Pumping capacity and secondary fluid network pressure.
  • Coolant flow-rate requirements of the IT load.
  • Thermal stability performance objectives.

Heat transfer capacity is largely defined by the temperature differential (“approach”) between the two streams: the facility water supply (CDU primary inlet) and the coolant supply (CDU secondary outlet).

Capacity is also affected by the absolute temperatures, because thermal conductivity and convection improve significantly at higher operating temperatures. Modern CDUs are often optimized for a narrow approach, such as a 3-5°C (5.4-9°F) delta, to support higher facility water temperatures for efficient heat rejection by the chiller plant, although this requires greater pumping power.

The capacity derived from available pumping power will be dictated by the pressure in the DLC fluid network and the flow rate needed at a given coolant temperature to keep IT hardware below target temperatures. Pressure behavior in a water cold plate system is hydraulically complex, depending on factors such as the granularity of filtration and the internal structure of the cold plates. Vendors and standards bodies expect typical flow rates of 1-1.5 L/min·kW (liters per minute per kilowatt) for current and future water cold plate installations.
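
As a rough illustration of how these figures relate, the basic heat balance Q = ṁ·cp·ΔT links heat load, coolant flow rate and coolant temperature rise. The sketch below assumes water-like coolant properties and illustrative loads; it is not taken from any vendor specification.

    # Minimal sketch: relate coolant flow rate to heat load and temperature
    # rise using the basic heat balance Q = m_dot * cp * delta_T.
    # Density and specific heat are approximate values for water; real
    # coolant blends and design margins will differ.
    RHO_KG_M3 = 1000.0   # approximate density of water
    CP_J_KG_K = 4186.0   # approximate specific heat of water

    def flow_l_per_min(heat_kw: float, delta_t_k: float) -> float:
        """Volumetric flow (L/min) needed to absorb heat_kw with a coolant temperature rise of delta_t_k."""
        mass_flow_kg_s = (heat_kw * 1000.0) / (CP_J_KG_K * delta_t_k)
        return mass_flow_kg_s / RHO_KG_M3 * 1000.0 * 60.0

    # Hypothetical 80 kW rack with 60 kW captured by cold plates
    for delta_t in (10.0, 14.0):
        flow = flow_l_per_min(60.0, delta_t)
        print(f"dT = {delta_t} K -> {flow:.0f} L/min ({flow / 60.0:.2f} L/min per kW)")

At a coolant temperature rise of roughly 10-14 K, the result falls within the 1-1.5 L/min·kW range cited above.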

A further consideration is thermal stability, as dictated by IT hardware specifications or the customer: the permitted temperature tolerances and rate of temperature change. These are typically specified at either the facility water inlet of the CDU’s heat exchanger or the cold plate supply. Sudden changes in thermal load (as seen with large AI training clusters) are difficult for mechanical equipment to track. Strict thermal stability requirements will likely call for thermal energy storage (e.g., water tanks) on either or both sides of the CDU. For further discussion of cooling infrastructure considerations, see Uptime Intelligence’s report AI and cooling: chilled water system topologies.
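
As a back-of-the-envelope illustration only (the sizing approach and all values below are assumptions, not guidance from this report), a first-order estimate of buffer tank volume can be derived from the size of the load step, the ride-through period and the allowable temperature deviation:

    # Back-of-the-envelope sketch (illustrative assumptions only): water volume
    # needed to absorb a sudden load step for a short ride-through period while
    # limiting the coolant temperature rise, before chillers and pumps respond.
    RHO_KG_M3 = 1000.0   # approximate density of water
    CP_J_KG_K = 4186.0   # approximate specific heat of water

    def buffer_volume_m3(load_step_kw: float, ride_through_s: float, allowed_rise_k: float) -> float:
        """Water volume (m^3) that limits the temperature rise to allowed_rise_k
        while absorbing load_step_kw for ride_through_s seconds."""
        energy_j = load_step_kw * 1000.0 * ride_through_s
        return energy_j / (RHO_KG_M3 * CP_J_KG_K * allowed_rise_k)

    # Hypothetical example: 1,000 kW load swing, 30-second ride-through, 2 K allowed deviation
    print(f"{buffer_volume_m3(1000.0, 30.0, 2.0):.1f} m^3")  # ~3.6 m^3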

How much air-cooling capacity will the data hall require?

Even a data hall with exclusively liquid-cooled compute will almost always require air cooling capacity. Storage and networking equipment is frequently air-cooled, and servers equipped with cold plates typically rely on air cooling for roughly 15% to 30% of their heat output. Detailed information about the tenant’s IT hardware can clarify how the heat load is split between air and liquid cooling. Crucially, many tenants may request that cooling be overprovisioned to provide some margin for later changes to either IT hardware or coolant temperature. For example, if a tenant is deploying 80 kW racks whose cold plates capture 75% of the heat load, they would theoretically require 60 kW of liquid cooling and 20 kW of air cooling capacity per rack. The tenant might specify as much as 68 kW of liquid cooling and 36 kW of air cooling per rack (overprovisioning by a total of 30% in this case).
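
The arithmetic in this example can be sketched as follows (a minimal illustration using the figures above; the function and variable names are hypothetical):

    # Minimal sketch of the arithmetic in the example above (illustrative only).
    def cooling_split(rack_kw: float, liquid_fraction: float):
        """Per-rack liquid and air heat loads from rack power and cold plate capture ratio."""
        return rack_kw * liquid_fraction, rack_kw * (1.0 - liquid_fraction)

    liquid_kw, air_kw = cooling_split(80.0, 0.75)        # 60.0 kW liquid, 20.0 kW air
    specified_liquid_kw, specified_air_kw = 68.0, 36.0   # tenant-specified capacities from the example
    overprovision = (specified_liquid_kw + specified_air_kw) / 80.0 - 1.0
    print(liquid_kw, air_kw, f"{overprovision:.0%}")     # 60.0 20.0 30%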

Adapting operations and SLAs

DLC deployment brings changes to operations and maintenance. In colocation, this can require adding new DLC-specific terms to SLAs or restructuring them entirely. Often, SLAs need to be tailored to individual tenants’ equipment choices and resiliency objectives.

What tasks will be performed by colocation facility staff and what tasks will be performed by tenant IT staff?

The industry has not established a consensus on how to divide ownership and maintenance of DLC components between facilities teams and IT teams (see Hold the line: liquid cooling’s division of labor). Even in cases where DLC equipment is contained entirely within the white space, some tenants may prefer to rely on facilities teams for maintenance of IT coolant quality, CDUs, manifolds, or tubing connections, especially if their own IT teams lack the relevant skills. Conversely, tenants leasing entire data halls may prefer that only their IT staff are permitted to enter. Colocation providers may wish to begin discussing this division of responsibility with their prospective tenants as early as possible.

If facility staff will be responsible for multiple types of DLC, what safeguards can ensure they use proper procedures, maintenance intervals and parts?

In some cases, colocation staff will perform maintenance on several different types of DLC equipment (sometimes in the same data hall), each with its own specifications for IT coolant chemistry, filtration, and so on. This can force colocation providers to stock and track more service parts and fluid additives. Strict adherence to the correct maintenance procedure for each piece of equipment is crucial.

Can the provider draft a few baseline SLA versions that suit the most common tenant needs?

Colocation providers hosting DLC need to write SLAs differently than they would for air cooling. All the preceding questions must be codified in the SLA, which demands some degree of customization. Colocation providers who are building speculatively for tenants with common (or known) requirements may prefer to draft several baseline versions of the SLA to help minimize the number of changes needed for each individual tenant.

Smaller tenants need answers

Currently, demand for colocation capacity (and data center capacity overall) outpaces supply. DLC-capable colocation capacity is still a niche offering, and smaller organizations tend to move more cautiously with their adoption of high-density IT. There are likely very few of these smaller enterprises seeking colocation space for liquid-cooled IT today. Colocation providers are likely to prefer wholesale leasing to hyperscale customers, given that these customers are more likely to bring detailed specifications and relatively homogeneous equipment.

However, the rollout of DLC will almost certainly include smaller enterprises — even if it takes several more years. The ongoing development of standards promises to simplify the questions in this report and reduce the fragmentation that makes liquid-cooled colocation challenging today. If demand for liquid-ready retail colocation capacity grows substantially, colocation providers will need to be prepared to address the above questions to meet this demand.

The Uptime Intelligence View

Colocation providers who wish to support DLC for tenants’ high-density IT need to address fragmentation and new pressures on facility design, operations and SLAs. Although providers generally agree on the core questions to address, they tend to prioritize large tenants with detailed specifications and more homogenous equipment. Standardization may ease fragmentation, but rising demand for DLC capability in shared colocation space may press providers to become more comfortable with uncertainty.

 

Other related reports published by Uptime Institute include:
Hold the line: liquid cooling’s division of labor
AI and cooling: chilled water system topologies

About the Author

Jacqueline Davis

Jacqueline is a Research Analyst at Uptime Institute covering global trends and technologies that underpin critical digital infrastructure. Her background includes environmental monitoring and data interpretation in the environmental compliance and health and safety fields.
