In Northern Virginia and Ireland, simultaneous responses by data centers to grid fluctuations have come close to causing blackouts. Transmission system operators are responding with new requirements for large loads.
Against a backdrop of rising densities and the push toward liquid cooling, air remains the dominant choice for cooling IT hardware. As long as air cooling works, many operators see no reason to change, and a growing number consider it viable even at high densities.
Real-time computational fluid dynamics (CFD) analysis is nearing reality, with GPUs now capable of producing high-fidelity simulations in under 10 minutes. Many operators, however, may question whether such speed is necessary.
Direct liquid cooling adoption remains slow, but rising rack densities and the cost of maintaining air cooling systems may drive change. Barriers to integration include a lack of industry standards and concerns about potential system failures.
The data center industry is on the cusp of the hyperscale AI supercomputing era, where systems will be more powerful and denser than the cutting-edge exascale systems of today. But will this transformation really materialize?
Liquid cooling contained within the server chassis lets operators cool high-density hardware without modifying existing infrastructure. However, this approach has limits in both performance and energy efficiency.
Direct liquid cooling challenges the common “line of demarcation” for responsibilities between facilities and IT teams. Operators lack a consensus on a single replacement model—and this fragmentation may persist for several years.
Many operators expect GPUs to be highly utilized, but examples of real-world deployments paint a different picture. Why are expensive compute resources being wasted — and what effect does this have on data center power consumption?
As new capacity is concentrated in super-sized data centers and legacy facilities continue to operate in large numbers, market trends become more difficult to read. This report looks at how size affects the age distribution of capacity.
AI training clusters can show rapid and large swings in power consumption. This behavior is likely driven by a combination of properties of both modern compute silicon and AI training software — and may be difficult to manage at scale.
Generative AI models brought about an influx of high-density cabinets. There has been much focus on how to best manage thermal issues, but the weight of power distribution equipment is a potentially overlooked concern.
Densification is — once again — high on the agenda, with runaway expectations largely due to compute power requirements of generative AI workloads. Will this time be different? Uptime’s 2024 global survey of data center managers offers some clues.
Uptime Institute's 2024 Data Center Maintenance Survey benchmarks maintenance practices among data center operators and aims to help organizations improve maintenance performance using in-house employees, third parties, and/or…
Adoption of direct liquid cooling (DLC) continues to grow slowly, according to recent Uptime research. However, more operators are considering it for future use due to mounting thermal management and sustainability challenges.
Results from the Uptime Institute Supply Chain Survey 2023 reveal that data center owners and operators are continuing to suffer supply chain delays, but these appear to be less frequent and severe than in 2022.