UII UPDATE 421 | OCTOBER 2025

Intelligence Update

Liquid-to-air eases DLC rollout, but mind the setpoints

In its idealized form, a direct liquid cooling (DLC) system enables compressor-free heat rejection by operating at high temperatures. Such a system removes heat from IT electronics using a comparatively hot coolant, then dissipates the heat through a warm facility water system and dry coolers. This approach can offer a substantial reduction in capital costs and energy use, as well as the ability to support more IT capacity in the same site power envelope.

In reality, however, few facilities are designed and optimized for DLC systems, and such implementations remain the exception. There are various reasons for this, the biggest of which is the need to support air-cooled IT loads using the same heat transport and rejection infrastructure. There is also a common preference to extract maximum performance from expensive compute silicon while also lowering failure rates, a major availability concern for operators of large compute clusters. Both dictate lower temperatures.

There are also thousands of data centers worldwide with no facility water system at all. According to the Uptime Institute Data Center Cooling Systems Survey 2024, about three-quarters of data center operators report having one or more sites in their fleets that use other heat rejection methods, such as direct expansion (DX) air conditioning, indirect air-to-air heat exchange systems or direct outside air cooling.

Figure 1 Three-quarters of operators have sites in their fleet without a chilled water system

What type of air-cooling approaches does your organization utilize for cooling the IT in its data center(s)? Choose all that apply.


In these facilities, the most straightforward way to support DLC loads is through existing air-cooling systems combined with air-cooled coolant distribution units (CDUs). These thermal management solutions are known by several names: liquid-to-air (L2A), air-assisted liquid cooling (AALC), heat dissipation units (HDUs), heat rejection units (HRUs) or simply sidecars. Coolant options include water and dielectric engineered fluids, including two-phase coolants that remove heat from electronics through nucleate boiling.

This report discusses the general performance characteristics of air-cooled CDUs for single-phase (water or dielectric) cold plates. Future reports will provide an overview of all the liquid-to-air heat transport options that are commercially available to data center operators, including dielectric cold plates and immersion tanks.

Liquid-to-air systems have growing utility

Although not entirely new, larger-capacity air-cooled CDUs are a relatively recent development in data centers. What has arguably changed in 2025 is their positioning: contrary to earlier expectations, air-cooled CDUs now appear poised to play a larger role in DLC adoption. This means they are scaling up to support high-density racks and increasingly operate in group mode, where operational settings apply to all CDUs in the group, which requires appropriate network connections. Recently, Uptime Intelligence has seen a rise in inquiries on the topic from data center operators in the Uptime Institute Network.

Reference designs by Nvidia and AMD for current and next-generation compute clusters that support the training and inference workloads of large generative AI models — specifically generative pre-trained transformers (GPTs) — are predominantly liquid-cooled. Such compute racks can be managed by air-cooled CDUs even as their thermal loads approach or surpass 100 kW.

Even at scale, liquid-to-air heat transport can be a viable option, as demonstrated by Meta with its custom-designed rack solution for GPU clusters installed in the company’s large fleet of air-cooled facilities. At Hot Chips 2025, Meta engineers discussed the technical choices for its rack system — codenamed Catalina — which comprises two OCP compute racks (forming a 72-GPU compute block) and four air-cooled CDUs (AALCs in Meta’s parlance), each occupying a full-sized OCP rack.

Generic IT is also gradually moving closer to thermal power density points where the switch to liquid cooling becomes justified, if not a necessity. IT operators will want the option to install liquid-cooled hardware in any data center, not only in those where CDUs can be plumbed into a facility water system. Even in facilities with chilled water, bringing water pipes to the IT racks may not be attractive due to space constraints or concerns about large leaks from facility water pipes. Some colocation providers, in particular, may have reservations about introducing water into retail spaces where there is a mix of tenant racks side by side.

Air-cooled CDUs that support liquid-cooled IT racks offer ease, speed and flexibility of installation without the added complexities and risks often associated with plumbed CDUs. While some concerns over leaks persist and call for extensive leak detection (in large part due to the high cost of potential damage to dense compute racks), the volume of water coolant in an air-cooled CDU system is relatively small.

Still, both data center facility and IT systems operators should consider several key factors when comparing the thermal performance of air-cooled CDUs with water-cooled CDUs:

  • Wide approach temperatures. Plumbed CDUs can support narrow approach temperature deltas — the difference between the facility water supply and the technology coolant supply — of 2-3°C (3.6-5.4°F) while maintaining heat exchange at near-nominal capacity. In contrast, air-to-liquid heat exchangers typically require a wide approach temperature in the range of 10-20°C (18-36°F) to achieve desired capacity levels.
  • Balancing flow rate needs with heat exchange capacity. A wide approach temperature means that the technology coolant temperatures supported by air-cooled CDUs are elevated. A supply temperature starting at 35°C (95°F) is practical, but it will probably trend closer to 40°C (104°F). In contrast, water-cooled CDUs can achieve coolant temperatures as low as 20°C (68°F), even with elevated chilled water supply.
    The temperature rise (∆T) between the supply and return sides of the technology coolant defines the thermal capacity of a given amount of coolant. If the elevated supply temperature results in a smaller ∆T than the IT hardware vendor assumed due to return temperature restrictions, the required flow rate will be higher than in the hardware’s specification. Return temperature restrictions exist to protect silicon performance, ensure long-term reliability and mitigate operational safety concerns about fluid temperatures above 60°C (140°F). 
    Higher flow rates, in turn, can significantly reduce the effective capacity of the liquid-to-air heat exchanger, a trade-off that needs balancing when selecting the right CDU for a load (see the illustrative calculation after this list).
  • Airflow requirements. The coolant removes highly concentrated thermal power from the relatively small surface area of packaged IT electronics, then the CDU dissipates the same thermal power using the facility’s cold air supply. Still, this process can use air more efficiently than standard air-cooled IT hardware because of the higher return coolant temperature, a much larger heat exchanger surface area, and no need for additional airflow to prevent components downstream of CPUs or GPUs from overheating.
    As a result, operators can achieve a lower total room-level airflow requirement, at under 170 cubic meters per hour (100 cubic feet per minute) per kilowatt of IT load with a 15°C (27°F) approach. However, delivering all that air to a relatively small space still calls for strong airflow management practices to avoid hot spots and loss of CDU cooling capacity. Total infrastructure energy performance (including both facility and IT energy) should also improve, as air-cooled CDUs use much larger, more efficient fans than those in IT systems.
  • Space and distance between compute racks. The most visible trade-off when deploying air-cooled CDUs, compared with much denser water-cooled CDUs, is their large footprint on the IT floor. Depending on product design, operational conditions and IT rack power, air-cooled CDUs can require anywhere from half to double the space of the IT racks they cool. This excludes any redundant equipment installed for concurrent maintainability.
    The nameplate capacity of air-cooled CDUs tends to fall in the range of 70-150 kW per standard 19-inch rack space, depending on operational assumptions. The added footprint also means that multi-rack compute clusters may require longer cable runs for high-speed interconnect fabrics, which can increase costs and, in some cases, affect application performance. However, space itself is unlikely to be a limiting factor: even if air-cooled CDUs take up as much space as, or more than, the supported IT racks, the net average power density will still be several times the design density (e.g., 25-50 kW versus 10 kW per rack) of most facilities originally designed for air cooling.
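
To put rough numbers on the flow rate and airflow trade-offs above, the Python sketch below estimates the coolant flow needed to remove a given rack load at a given coolant ∆T, and the room airflow needed to reject the same heat at an assumed air temperature rise. The rack power, temperature limits and fluid property values are illustrative assumptions, not vendor specifications.

    # Illustrative sketch: coolant flow and room airflow for an air-cooled CDU.
    # All inputs are assumptions for demonstration, not vendor specifications.

    WATER_DENSITY = 985.0   # kg/m3, water at roughly 50-55C (approximate)
    WATER_CP = 4180.0       # J/(kg*K), specific heat of water (approximate)
    AIR_DENSITY = 1.2       # kg/m3, air at roughly 20C (approximate)
    AIR_CP = 1006.0         # J/(kg*K), specific heat of air (approximate)

    def coolant_flow_lpm(rack_kw: float, coolant_dt_c: float) -> float:
        """Coolant flow (liters per minute) needed to remove rack_kw at coolant_dt_c."""
        mass_flow_kg_s = rack_kw * 1000.0 / (WATER_CP * coolant_dt_c)
        return mass_flow_kg_s / WATER_DENSITY * 1000.0 * 60.0

    def room_airflow_m3h(rack_kw: float, air_rise_c: float) -> float:
        """Room airflow (cubic meters per hour) needed to reject rack_kw at air_rise_c."""
        mass_flow_kg_s = rack_kw * 1000.0 / (AIR_CP * air_rise_c)
        return mass_flow_kg_s / AIR_DENSITY * 3600.0

    if __name__ == "__main__":
        rack_kw = 100.0  # assumed rack thermal load

        # Coolant side: with an assumed 55C return limit, a 40C supply leaves a
        # 15C delta-T, while a 30C supply would allow 25C and need far less flow.
        for dt in (15.0, 25.0):
            print(f"coolant dT {dt:4.1f} C -> {coolant_flow_lpm(rack_kw, dt):6.1f} L/min")

        # Air side: an air temperature rise of about 17.5C works out to roughly
        # 170 m3/h (about 100 CFM) per kW, in line with the figure cited above.
        for rise in (12.0, 17.5):
            per_kw = room_airflow_m3h(rack_kw, rise) / rack_kw
            print(f"air rise   {rise:4.1f} C -> {per_kw:6.0f} m3/h per kW")

Under these assumptions, narrowing the coolant ∆T from 25°C to 15°C raises the required flow by roughly two-thirds, the kind of increase that can erode an air-cooled CDU's effective heat exchange capacity.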

In summary, DLC with air-cooled CDUs uses more power and space than a comparable solution using water-cooled CDUs, and it requires more careful product selection to support the thermal loads of modern high-density IT hardware. However, it remains a viable upgrade option for operators of data centers without a facility water system that want to support DLC racks.

An upcoming Uptime Intelligence report will examine a selection of air-cooled CDU products by major equipment manufacturers and assess how they can practically support high-density liquid-cooled IT racks and immersion tanks.

The Uptime Intelligence View

When full-sized air-cooled CDUs first appeared only a few years ago, the industry consensus was that they would serve merely as a stop-gap: supporting sporadic, small- and mid-sized IT installations before an operator transitioned to a fully liquid-cooled infrastructure. However, it is now clear that liquid-to-air cooling systems are important in supporting the rollout of dense compute racks across a much wider range of data centers, giving IT operators greater deployment choice while extending the useful life of thousands of data centers globally. These benefits far outweigh the negatives of increased power consumption and space requirements.

 

Other related reports published by Uptime Institute include:
Self-contained liquid cooling: the low-friction option 
Consensus weakens on rack density tipping point for DLC 
Operators warming up to dielectric cold plates 
Guiding questions for liquid-cooled colocation planning

 

About the Author

Daniel Bizo

Over the past 15 years, Daniel has covered the business and technology of enterprise IT and infrastructure in various roles, including industry analyst and advisor. His research includes sustainability, operations, and energy efficiency within the data center, on topics like emerging battery technologies, thermal operation guidelines, and processor chip technology.
