UII UPDATE 495 | MAY 2026
For many years — well before AI compute's "big bang" in late 2022 — the data center industry has been preoccupied with the notion of designing denser racks to improve overall infrastructure economics. It has also regularly advocated for the use of direct liquid cooling (DLC). Indeed, the underlying megatrend is clearly toward increased density, propelled by the gradual rise in power of IT silicon components. This is compounded by a trend toward richer server configurations and the economic drive to increase system utilization — despite real-world data on the latter being sparse. Typical (modal) rack densities are now shifting toward 10 kW, with more than a quarter of operators reporting densities above this threshold (see Uptime Institute Global Data Center Survey 2025).
However, an exclusive focus on densification and DLC (as if they were inevitable) risks becoming tunnel vision that ignores costs and alternative choices. There are several factors to consider when planning a technical roadmap for IT thermal management:
An alternative to treating densification as an inherent benefit is to take the opposite approach: adopt a larger form factor server chassis, such as 2U instead of 1U, as standard. A higher profile chassis can accommodate larger and/or more fans and offers space for larger heat sinks. Crucially, it allows for better airflow management and less pre-heating of air for downstream components, such as memory modules or optical transceivers.
Larger, more efficient fans can consume dramatically less power than their fast-rotating small-profile counterparts. Uptime Intelligence recently spoke to several IT vendors, and they all agreed that there are substantial gains from opting for a 2U form factor instead of 1U, wherever possible, because fan power consumption scales with the cube of rotational speed.
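The cubic relationship is worth making concrete. The sketch below applies the fan affinity law (power proportional to the cube of speed) to a hypothetical reference fan; the 18 W and 20,000 rpm figures are illustrative assumptions, not vendor data:

```python
# Fan affinity law: power scales with the cube of rotational speed.
def fan_power(p_ref: float, rpm_ref: float, rpm: float) -> float:
    """Estimate fan power at a new speed from a reference operating point."""
    return p_ref * (rpm / rpm_ref) ** 3

# Hypothetical reference point: a small 1U fan drawing 18 W at 20,000 rpm.
# A larger 2U fan moving the same air at half the speed would, all else
# being equal, draw one-eighth the power.
p_half = fan_power(18.0, 20_000, 10_000)
print(p_half)  # 2.25 W
```

In practice "all else" is rarely equal — larger fans also move more air per revolution — but the cube law explains why even modest reductions in required fan speed translate into outsized power savings.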
Importantly, the difference in power consumption between 1U and 2U systems is apparent not only at very high loads, but can be significant across the load curve. Data from the Standard Performance Evaluation Corporation's (SPEC) Power benchmark database illustrates this point (see Figure 1). Submissions from HPE indicate that there can be a 30-60 W power gap between two essentially identically configured (in terms of CPU, memory, disks and software stack) single-processor servers in 1U and 2U form factors.
Figure 1 Power consumption example: 1U and 2U systems (same configurations) 
At low utilization, the energy efficiency penalty becomes more pronounced in this example: the 1U system uses about 30% more power to perform the same job as the 2U system. Somewhat counterintuitively, the absolute gap is greatest not at high utilization but at low loads, peaking at around 70 W at idle. This indicates that modern high-performance 1U fans, often double-deep units (e.g., 4056-type fans) with counter-rotating motors, use considerably more power even at their lowest settings than the fans in previous-generation 1U systems. Other server configurations will behave differently, with some showing a much larger energy penalty for 1U at medium to high loads. In other cases, the power difference between 1U and 2U systems using the same computer hardware will be negligible due to specific choices made by the server manufacturers.
The underlying potential to reduce fan power, however, is there. As general guidance, by opting for 2U form factor chassis as standard rather than 1U, IT operators can cut system fan power substantially. The size of the saving (if any) will depend on configuration and fan speed settings.
An added benefit of adopting 2U form factor (or shifting from 2U to 3U/4U systems in the future) is that IT buyers can extend the practical viability of their air-cooling infrastructure even as IT thermal design power continues to rise. The larger chassis provides space for larger heat sinks and more airflow.
Today, 350-500 W CPUs paired with memory banks exceeding 100 W are becoming increasingly common, and the near future is expected to bring 600 W CPUs and 200 W memory banks, which will push future dual-processor servers closer to 2 kW when running resource-intensive workloads. Air cooling will cope. The question is at what cost in terms of the power consumption of high-performance fans. Larger chassis will help.
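A back-of-the-envelope budget shows how these component figures add up to roughly 2 kW. The CPU and memory numbers come from the projections above; the allowance for everything else (drives, NICs, voltage-regulator losses and the fans themselves) is an assumption for illustration:

```python
# Near-future dual-socket server power budget, in watts.
cpus   = 2 * 600   # two 600 W CPUs (projection cited in the text)
memory = 2 * 200   # two 200 W memory banks, one per socket
other  = 300       # assumed: drives, NICs, VR losses, fans

total = cpus + memory + other
print(total)  # 1900 W — approaching the ~2 kW mark under heavy load
```

Note that the fan share of that "other" allowance is exactly what a larger chassis can shrink, which is why the form factor decision grows more consequential as CPU and memory power rise.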