GPU shipments have skyrocketed over the past year, warranting a revision of generative AI power estimates despite some offsetting factors. Uncertainty remains high, however, with little clarity on whether these levels of spending are sustainable.
As AI workloads surge, managing cloud costs becomes more important than ever: organizations must balance scalability against cost control to prevent runaway spending and keep AI projects viable and profitable.
This briefing report identifies and describes several de facto standards and laws used in the field of data center sustainability and efficiency (for convenience, we use the term “standards” for all).
Data center sustainability standards grow globally
Error-proof emergency communications for facility teams
The two sides of a sustainability strategy
REPLAY | Inside the Uptime Network: Exclusive Insights and What's Next for Data Center Leaders
REPLAY | Annual Data Center Outage Analysis 2025
REPLAY | European Cybersecurity Regulation and its Impact on Digital Infrastructures
Cooling Options in Data Center White Spaces
CFM/kVA Question
Benchmarking - last call for EUL, ramping up "next"
Water is local: generalities do not apply
Density choices for AI training are increasingly complex
AI load and chiller systems: key considerations
Are data centers to blame for power quality issues?
Small modular reactors: building critical mass
The DeepSeek paradox: more efficiency, more infrastructure?
Calculating work capacity for server and storage products
OPINION | Data centers weather grid failures — but utilities want change
Cloud: when high availability hurts sustainability
Annual outage analysis 2025
Data center AI strategies are mixed in early 2025
DeepSeek bans: implications for data centers
Hardware for AI: options and directions
Uncertainty and doubt as US changes GPU export rules again
In the US, data center pushback is all about power
GPU utilization is a confusing metric