Performant cooling requires a full-system approach to eliminate thermal bottlenecks. Extreme silicon TDPs and highly efficient cooling do not have to be mutually exclusive if data center operators and chip vendors work together.
Uptime’s 2025 cooling survey found that fewer respondents than in earlier surveys cited sustainability as a primary driver for direct liquid cooling (DLC). The gradual advancement of DLC plans may be refining operators’ understanding of its incentives.
Currently, the most straightforward way to support DLC loads in many data centers is to use existing air-cooling infrastructure combined with air-cooled coolant distribution units (CDUs).
Most operators do not trust AI-based systems to control equipment in the data center — this has implications for software products that are already available, as well as those in development.
In Northern Virginia and Ireland, simultaneous responses by data centers to fluctuations on the grid have come close to causing a blackout. Transmission system operators are responding with new requirements on large demand loads.
Against a backdrop of higher densities and the push toward liquid cooling, air remains the dominant choice for cooling IT hardware. As long as air cooling works, many see no reason to change — and a growing number believe it remains viable even at high densities.
Real-time computational fluid dynamics (CFD) analysis is nearing reality, with GPUs now capable of producing high-fidelity simulations in under 10 minutes. However, many operators may be skeptical that this capability is necessary.
Direct liquid cooling adoption remains slow, but rising rack densities and the cost of maintaining air cooling systems may drive change. Barriers to integration include a lack of industry standards and concerns about potential system failures.
The data center industry is on the cusp of the hyperscale AI supercomputing era, where systems will be more powerful and denser than the cutting-edge exascale systems of today. But will this transformation really materialize?
Liquid cooling contained within the server chassis lets operators cool high-density hardware without modifying existing infrastructure. However, this type of cooling has limitations in terms of performance and energy efficiency.
Direct liquid cooling challenges the common “line of demarcation” for responsibilities between facilities and IT teams. Operators lack a consensus on a single replacement model — and this fragmentation may persist for several years.
Many operators expect GPUs to be highly utilized, but examples of real-world deployments paint a different picture. Why are expensive compute resources being wasted — and what effect does this have on data center power consumption?
As new capacity is concentrated in super-sized data centers and legacy facilities continue to operate in large numbers, market trends become more difficult to read. This report looks at how size affects the age distribution of capacity.
AI training clusters can show rapid and large swings in power consumption. This behavior is likely driven by the combined characteristics of modern compute silicon and AI training software — and may be difficult to manage at scale.
Generative AI models have brought about an influx of high-density cabinets. There has been much focus on how best to manage thermal issues, but the weight of power distribution equipment is a potentially overlooked concern.