Power grids are under stress, struggling to meet future demand and increasingly prone to outages. More utilities will expect data centers to contribute power — and be more flexible in their use of power.
Cloud providers need to win AI use cases in the early stages of their development. If providers fail to attract customers, those AI applications may become locked in to rival platforms and harder to move, which can have serious repercussions.
Nvidia’s dominant position in the AI hardware market may be steering data center design in the wrong direction. This dominance will be harder to sustain as enterprises begin to understand AI and opt for cheaper, simpler hardware.
As operators expand their use of hybrid IT and cloud, optimizing the IT could help alleviate concerns over availability and efficiency. This report is part two of a four-part series on data center management software.
Visibility into costs remains a top priority for enterprises that are consuming cloud services. Improving the tagging of workloads and resources may help them to spot, and curb, rising costs.
The cost and complexity of deploying large-scale GPU clusters for generative AI training will drive many enterprises to the public cloud. Most enterprises will use pre-trained foundation models, to reduce computational overheads.
Generative AI is accelerating not only the adoption of liquid cooling but also its technical evolution. Driven partly by runaway silicon thermal power levels, this evolution has led to a convergence in technical development across vendors.
Not all generative AI applications will require large and dense infrastructure footprints. This complicates AI power consumption projections and data center planning.
Interviews and workshops by Uptime Intelligence suggest that enterprises are enthusiastic about AI, but this enthusiasm is tempered by caution. Most hope to avoid disruptive, expensive or careless investments.
The UNEP U4E initiative has proposed guidelines for data center design and operation and server and storage product efficiency requirements. These have far-reaching implications for data center operations in developing countries.
While FinOps aims to manage cloud costs alone, technology business management seeks to aggregate all costs of IT, including data centers, servers, software and labor, to identify savings and manage return on investment.
New augmented reality and virtual reality technologies can provide effective training capabilities for data center staff but are not yet a complete substitute for in-person training.
Many organizations still do not tap the power efficiency gains hidden in servers. Without an operational focus on extracting these gains, future server platforms may bring marginal, if any, energy performance improvements.
The number of proposals for new hyperscale-size data centers has reached new heights in 2024. Those that are built will require huge investment and resources, but many proposals will fail to move forward.
Enterprises have various options on how and where to deploy their AI training and inference workloads. This report explains how these different options balance cost, complexity and customization.