Training large transformer models is different from all other workloads — data center operators need to reconsider their approach to both capacity planning and safety margins across their infrastructure.
The European Commission has used a non-representative dataset to propose minimum performance standard (MPS) values for PUE, WUE and REF, due to take effect in 2030. These proposals risk mandating the rebuilding of 30% to 40% of operating data center space within an unrealistic four-year timeframe.
Although the data center industry is broadly aware of the EU’s Energy Efficiency Directive, Uptime Intelligence’s research suggests widespread confusion about specific components of the directive — and about its implementation status.
A report by Uptime's Sustainability and Energy Research Director Jay Dietrich merits close attention; it outlines a way to calculate data center IT work relative to energy consumption. The work is supported by Uptime Institute and The Green Grid.
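As a hedged illustration only — the function name and the simple ratio below are assumptions for this sketch, not the report's actual methodology — a work-per-energy indicator might relate useful IT work delivered to facility energy consumed:

```python
# Illustrative sketch: relate delivered IT work to facility energy.
# The metric name and formula are assumptions for illustration; the
# report's methodology may define "work" and "energy" differently.

def work_per_energy(it_work_units: float, facility_energy_kwh: float) -> float:
    """Useful IT work delivered per kWh of facility energy consumed."""
    if facility_energy_kwh <= 0:
        raise ValueError("facility energy must be positive")
    return it_work_units / facility_energy_kwh

# Example: 1.2 million transactions processed on 400 kWh of facility energy
print(work_per_energy(1_200_000, 400))  # 3000.0 transactions per kWh
```

The appeal of such a ratio is that it captures output, not just overhead: unlike PUE, it would reward operators for raising the useful work extracted from each kilowatt-hour, not only for trimming non-IT losses.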
Direct liquid cooling challenges the common “line of demarcation” for responsibilities between facilities and IT teams. Operators lack a consensus on a single replacement model — and this fragmentation may persist for several years.
The European Commission has proposed a data center rating scheme and label that is broadly scoped and highly detailed. The label needs to focus on a few key indicators, including three meaningful IT infrastructure metrics.
Although the share of processing handled by the corporate or enterprise sector has declined over the years, it has never disappeared. But there are signs that it may reclaim a more central role.
For the past 15 years, the case for moving workloads out of enterprise data centers and into the cloud and colocation has been strong. Power availability and demand for high-density capacity may change that.
Human error is a weakness increasingly exploited by cyberattackers, leading to data center security breaches and greater risk for enterprises and operators.
To meet the demands of unprecedented rack power densities, driven by AI workloads, data center cooling systems need to evolve and accommodate a growing mix of air and liquid cooling technologies.
Today, GPU designers pursue outright performance over power efficiency. This is a challenge for inference workloads that prize efficient token generation. GPU power management features can help, but require more attention.
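The efficiency measure at stake here is token generation per unit of energy. As a hedged sketch — the operating points below are hypothetical figures for illustration, not measured GPU data — capping power often trades some peak throughput for better energy efficiency:

```python
# Hypothetical illustration: inference efficiency as tokens per joule.
# Power capping (a common GPU power management feature) typically lowers
# peak throughput but can raise efficiency. All numbers below are
# invented for illustration, not benchmark results.

def tokens_per_joule(tokens_per_second: float, power_watts: float) -> float:
    """Token generation efficiency: tokens produced per joule consumed."""
    return tokens_per_second / power_watts  # 1 watt = 1 joule per second

# Hypothetical operating points for the same accelerator:
uncapped = tokens_per_joule(tokens_per_second=1000, power_watts=700)
capped = tokens_per_joule(tokens_per_second=850, power_watts=500)
print(round(uncapped, 2), round(capped, 2))  # 1.43 1.7
```

In this invented example, the capped configuration gives up 15% of throughput but generates roughly 19% more tokens per joule — the kind of trade-off that matters when efficient token generation, not raw speed, drives inference economics.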
The past year warrants a revision of generative AI power estimates, as GPU shipments have skyrocketed, despite some offsetting factors. However, uncertainty remains high, with no clarity on the viability of these spending levels.
As AI workloads surge, managing cloud costs is becoming more vital than ever, requiring organizations to balance scalability with cost control. This is crucial to prevent runaway spend and ensure AI projects remain viable and profitable.
The trend towards regulating and controlling data center energy use, efficiency and sustainability continues to grow globally, with the appearance of utility rate management regulations and the propagation of policies influenced by the EU’s EED.
Tensions between team members of different ranks or departments can inhibit effective communication in a data center, putting uptime at risk. This can be avoided by adopting proven communication protocols from other mission-critical industries.