The growing enthusiasm for generative AI is fueled by frequent large language model (LLM) releases by OpenAI and its closely watched rivals, such as Google, Anthropic and Meta. In this competition, each new flagship model must be bigger than the last: using more training data, featuring more model parameters, and requiring exorbitant amounts of compute capacity for both training and inference.
This evolution has elevated expectations for power density and capacity among data center operators planning to support the adoption of generative AI in the enterprise. The consensus is that the infrastructure requirements of AI will keep escalating, and that the only way to meet demand is to continue deploying ever more power-hungry hardware. This trend is expected to force the adoption of direct liquid cooling, high-capacity busways and, potentially, medium-voltage power distribution.