UII UPDATE 309 | DECEMBER 2024
Intelligence Update

Most AI models will be trained in the cloud

The rapid rise of generative AI has changed the landscape of AI infrastructure requirements. Training generative AI models, particularly large language models (LLMs), requires massive processing power, delivered primarily by GPU server clusters. GPUs are essential to this task because they accelerate the matrix multiplication calculations that underpin the neural network architectures behind generative AI (see How generative AI learns and creates using GPUs).
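To illustrate why matrix multiplication dominates, the sketch below runs a toy two-layer neural network forward pass in NumPy. The layer sizes are illustrative assumptions, not figures from this report; on a GPU, the same `@` (matmul) operations are what get parallelized across thousands of cores.

```python
import numpy as np

# Toy two-layer forward pass. Nearly all the arithmetic is in the two
# matrix multiplications, which is why GPU matmul throughput governs
# training speed. Shapes below are illustrative only.
rng = np.random.default_rng(0)

batch, d_in, d_hidden, d_out = 32, 512, 1024, 256
x = rng.standard_normal((batch, d_in))        # input activations
w1 = rng.standard_normal((d_in, d_hidden))    # first layer weights
w2 = rng.standard_normal((d_hidden, d_out))   # second layer weights

h = np.maximum(x @ w1, 0.0)  # matmul followed by ReLU nonlinearity
y = h @ w2                   # matmul producing the output

print(y.shape)  # -> (32, 256)
```

Training repeats passes like this (plus their gradients) billions of times over a large corpus, which is where the cluster-scale GPU requirement comes from.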

GPU clusters can be difficult to procure, expensive to purchase and complex to implement. Cloud providers offer access to GPU resources and AI development platforms on a pay-as-you-go basis.
