UII UPDATE 421 | OCTOBER 2025
In its idealized form, a direct liquid cooling (DLC) system enables compressor-free heat rejection by operating at high temperatures. Such a system removes heat from IT electronics using a comparatively hot coolant, then dissipates the heat through a warm facility water system and dry coolers. This approach can offer a substantial reduction in capital costs and energy use, as well as the ability to support more IT capacity in the same site power envelope.
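The economics of compressor-free operation come down to a temperature stack-up: the coolant reaching the cold plates sits above ambient by the sum of each heat exchanger's approach temperature, and must still stay below the limit the IT hardware tolerates. The sketch below illustrates this arithmetic with hypothetical approach values (the 45°C ceiling loosely echoes ASHRAE's W45 facility water class; all other figures are illustrative assumptions, not design guidance):

```python
# Illustrative temperature stack-up for compressor-free DLC heat rejection.
# All numbers are hypothetical assumptions for illustration only.

ambient_c = 35.0            # design ambient dry-bulb temperature
dry_cooler_approach = 5.0   # facility water leaves the dry cooler this far above ambient
cdu_approach = 3.0          # IT coolant supply sits this far above facility water supply

facility_supply = ambient_c + dry_cooler_approach  # 40.0 C facility water supply
coolant_supply = facility_supply + cdu_approach    # 43.0 C coolant reaching cold plates

w45_limit = 45.0                                   # assumed maximum coolant supply temperature
margin = w45_limit - coolant_supply                # 2.0 C of headroom left

print(facility_supply, coolant_supply, margin)
```

With hotter ambients, or IT hardware that demands cooler supply temperatures for performance and reliability, the margin vanishes and some form of mechanical cooling (or trim chiller) re-enters the design, which is precisely the tension the next paragraphs describe.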
In reality, however, few facilities are designed and optimized for DLC systems, and such implementations remain the exception. There are various reasons for this, the biggest being the need to support air-cooled IT loads using the same heat transport and rejection infrastructure. There is also a common preference to extract maximum performance from expensive compute silicon while also lowering failure rates, a major availability concern for operators of large compute clusters. Both dictate lower temperatures.
There are also thousands of data centers worldwide with no facility water system at all. According to the Uptime Institute Data Center Cooling Systems Survey 2024, about three-quarters of data center operators report having one or more sites in their fleets that use other heat rejection methods, such as direct expansion (DX) air conditioning, indirect air-to-air heat exchange systems or direct outside air cooling.
Figure 1. Three-quarters of operators have sites in their fleet without a chilled water system. (Survey question: "What type of air-cooling approaches does your organization utilize for cooling the IT in its data center(s)? Choose all that apply.")
In these facilities, the most straightforward way to support DLC loads is through existing air-cooling systems combined with air-cooled coolant distribution units (CDUs). These thermal management solutions are known by several names: liquid-to-air (L2A), air-assisted liquid cooling (AALC), heat dissipation units (HDUs), heat rejection units (HRUs) or simply sidecars. Coolant options include water and dielectric engineered fluids, including two-phase coolants that remove heat from electronics through nucleate boiling.
This report discusses the general performance characteristics of air-cooled CDUs for single-phase (water or dielectric) cold plates. Future reports will provide an overview of all the liquid-to-air heat transport options that are commercially available to data center operators, including dielectric cold plates and immersion tanks.
Although not entirely new, larger-capacity air-cooled CDUs are a relatively recent development in data centers. What has arguably changed in 2025 is their positioning: contrary to some earlier expectations, air-cooled CDUs now appear poised to play a larger role in DLC adoption than previously thought. This means they are scaling up to support high-density racks and increasingly operate in group mode, where operational settings apply to all CDUs in the group, requiring appropriate network connections. Recently, Uptime Intelligence has seen a rise in inquiries on the topic from data center operators in the Uptime Institute Network.
Reference designs by Nvidia and AMD for current and next-generation compute clusters that support the training and inference workloads of large generative AI models — specifically generative pre-trained transformers (GPTs) — are predominantly liquid-cooled. Such compute racks, even as their thermal loads approach or surpass 100 kW, can be managed by air-cooled CDUs.
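A back-of-the-envelope calculation shows what moving 100 kW through a liquid-to-air CDU implies for coolant flow and, on the air side, for the room's airflow. The figures below (coolant and air temperature rises, air density) are illustrative assumptions, not vendor specifications:

```python
# Back-of-the-envelope sizing for a hypothetical 100 kW liquid-cooled rack
# served by an air-cooled CDU, using Q = m_dot * cp * dT on both sides.

rack_load_w = 100_000.0      # rack thermal load in watts (assumed)

# Secondary (coolant) side: water with an assumed 10 K rise across the cold plates.
cp_water = 4186.0            # specific heat of water, J/(kg*K)
dt_coolant = 10.0            # assumed coolant temperature rise, K
coolant_kg_s = rack_load_w / (cp_water * dt_coolant)  # mass flow, kg/s
coolant_l_min = coolant_kg_s * 60.0                   # volume flow, L/min (1 kg of water ~ 1 L)

# Air side: the CDU's coils must reject the same 100 kW into room air.
cp_air = 1005.0              # specific heat of air, J/(kg*K)
rho_air = 1.2                # air density, kg/m^3 (assumed)
dt_air = 15.0                # assumed air temperature rise across the coil, K
air_kg_s = rack_load_w / (cp_air * dt_air)            # air mass flow, kg/s
air_m3_s = air_kg_s / rho_air                         # air volume flow, m^3/s

print(round(coolant_l_min), round(air_m3_s, 1))
```

Under these assumptions, the rack needs on the order of 140 L/min of coolant, while the CDU must move roughly 5.5 m³/s of room air, which helps explain why air-cooled CDUs consume rack positions (Meta's Catalina, discussed below, dedicates a full OCP rack to each one) and why the facility's air-handling capacity remains a hard constraint.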
Even at scale, liquid-to-air heat transport can be a viable option, as demonstrated by Meta with its custom-designed rack solution for GPU clusters installed in the company’s large fleet of air-cooled facilities. At Hot Chips 2025, Meta engineers discussed the technical choices for its rack system — codenamed Catalina — which comprises two OCP compute racks (forming a 72-GPU compute block) and four air-cooled CDUs (AALCs in Meta’s parlance), each occupying a full-sized OCP rack.
Generic IT is also gradually moving closer to thermal power density points where the switch to liquid cooling becomes justified, if not a necessity. IT operators will want the option to install liquid-cooled hardware in any data center, not only in those where CDUs can be plumbed into a facility water system. Even in facilities with chilled water, the idea of bringing water pipes to the IT racks may not be attractive due to space constraints or concerns about large leaks from facility water pipes. Some colocation providers may, in particular, have reservations about introducing water into retail spaces where there is a mix of tenant racks side by side.
Air-cooled CDUs that support liquid-cooled IT racks offer ease, speed and flexibility in installation without the added complexities and risks often associated with plumbed CDUs. While some concerns over leaks persist and call for extensive leak-detection — in large part due to the high cost of potential damage to dense compute racks — the volume of water coolant in an air-cooled CDU system is relatively small.
Still, both data center facility and IT systems operators should consider several key factors when comparing the thermal performance of air-cooled CDUs with water-cooled CDUs, chief among them power consumption, space requirements and the cooling capacity limits of specific products.
In summary, DLC with air-cooled CDUs uses more power and space, and requires more careful product selection to support the thermal loads of modern high-density IT hardware, than a comparable solution using water-cooled CDUs. However, it remains a viable upgrade option for operators of data centers without a facility water system that want to support DLC racks.
An upcoming Uptime Intelligence report will examine a selection of air-cooled CDU products by major equipment manufacturers and assess how they can practically support high-density liquid-cooled IT racks and immersion tanks.
When full-sized air-cooled CDUs first appeared only a few years ago, the industry consensus was that they would serve merely in a stop-gap role, supporting sporadic, small- and mid-sized IT installations before an operator transitioned to a fully liquid-cooled infrastructure. However, it is now clear that liquid-to-air cooling systems are important in supporting the rollout of dense compute racks across a much wider range of data centers, giving IT operators greater deployment choice while extending the useful life of thousands of data centers globally. These benefits far outweigh the negatives of increased power consumption and space requirements.
Other related reports published by Uptime Institute include:
Self-contained liquid cooling: the low-friction option
Consensus weakens on rack density tipping point for DLC
Operators warming up to dielectric cold plates
Guiding questions for liquid-cooled colocation planning