Current geopolitical tensions are eroding some European organizations’ confidence in the security of hyperscalers; however, moving away from them entirely is not practically feasible.
Data center operators are increasingly aware that their operational technology (OT) systems are vulnerable to cyberattack. Recent incident reports show a rise in ransomware attacks, which pose significant risks to data centers.
Training large transformer models is unlike any other workload: data center operators need to reconsider their approach to both capacity planning and safety margins across their infrastructure.
The EU Commission has used a non-representative dataset to propose minimum performance standards (MPS) for PUE, WUE and REF, effective in 2030. The mandate risks forcing the rebuilding of 30% to 40% of data center space within four years.
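For context, the three metrics named in the proposed standards are simple ratios. The sketch below computes them from their widely used industry definitions (PUE and REF per ISO/IEC 30134, WUE per The Green Grid); the input figures are hypothetical example values, not data from the Commission's dataset.

```python
# Illustrative calculation of the three efficiency metrics named in the
# proposed minimum performance standards. All input values are hypothetical.

def pue(total_facility_energy_kwh: float, it_energy_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT energy."""
    return total_facility_energy_kwh / it_energy_kwh

def wue(site_water_liters: float, it_energy_kwh: float) -> float:
    """Water Usage Effectiveness: site water use (L) / IT energy (kWh)."""
    return site_water_liters / it_energy_kwh

def ref(renewable_energy_kwh: float, total_facility_energy_kwh: float) -> float:
    """Renewable Energy Factor: renewable energy supplied / total energy."""
    return renewable_energy_kwh / total_facility_energy_kwh

print(round(pue(10_000_000, 7_500_000), 2))   # -> 1.33
print(round(wue(13_500_000, 7_500_000), 2))   # -> 1.8
print(round(ref(4_000_000, 10_000_000), 2))   # -> 0.4
```

A facility that cannot bring these ratios under whatever thresholds the MPS finally sets would face retrofit or rebuild, which is the source of the 30% to 40% exposure estimate above.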
Although the data center industry is mostly aware of the EU’s Energy Efficiency Directive, Uptime Intelligence’s research suggests widespread confusion on specific components of the directive — and its implementation status.
A report by Uptime's Sustainability and Energy Research Director Jay Dietrich merits close attention; it outlines a way to calculate data center IT work relative to energy consumption. The work is supported by Uptime Institute and The Green Grid.
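The report's specific methodology is not reproduced here; purely as a hypothetical illustration of the idea of relating IT work to energy consumption, such a ratio might take the following shape. The function name, work units and figures are all assumptions for demonstration, not the report's actual formula.

```python
# Hypothetical sketch of a work-per-energy ratio for data center IT.
# The actual methodology in the Uptime Institute / The Green Grid report
# may differ; units and values here are illustrative assumptions.

def work_per_energy(work_units_completed: float, energy_kwh: float) -> float:
    """Units of IT work delivered per kWh of energy consumed."""
    return work_units_completed / energy_kwh

# e.g., 120 million transactions processed while consuming 40,000 kWh
print(work_per_energy(120_000_000, 40_000))  # -> 3000.0
```

The difficulty the report addresses is defining "work units" consistently across heterogeneous workloads, which is what makes a standardized metric valuable.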
Direct liquid cooling challenges the common “line of demarcation” for responsibilities between facilities and IT teams. Operators lack a consensus on a single replacement model—and this fragmentation may persist for several years.
The European Commission has proposed a data center rating label that is broadly scoped and highly detailed. To be effective, the label should focus on a few key indicators, including three meaningful IT infrastructure metrics.
Although the share of processing handled by the corporate or enterprise sector has declined over the years, it has never disappeared, and there are signs that it may reclaim a more central role.
For the past 15 years, the case for moving workloads out of enterprise data centers and into the cloud and colocation has been strong. Power availability and demand for high-density capacity may change that.
Cyberattackers increasingly exploit human error, leading to data center security breaches and greater risk for enterprises and operators.
To meet the demands of unprecedented rack power densities, driven by AI workloads, data center cooling systems need to evolve and accommodate a growing mix of air and liquid cooling technologies.
Today, GPU designers pursue outright performance over power efficiency. This is a challenge for inference workloads that prize efficient token generation. GPU power management features can help, but require more attention.
The past year warrants a revision of generative AI power estimates, as GPU shipments have skyrocketed, despite some offsetting factors. However, uncertainty remains high, with no clarity on the viability of these spending levels.
As AI workloads surge, managing cloud costs is becoming more important than ever, requiring organizations to balance scalability with cost control to prevent runaway spend and keep AI projects viable and profitable.