UII UPDATE 347 | MARCH 2025
Intelligence Update

Density choices for AI training are increasingly complex

When it comes to generative AI models, the belief that bigger is better is an enduring one — shaping a wide array of high-stakes decisions from corporate investments to energy and permitting policy to technology export controls.

A defining technical feature in the pursuit of ever-bigger models is denser compute hardware. This makes sense: as problems demand more compute, the incentive grows to compress the hardware footprint to maximize performance. When many processors (and their memory banks) are tightly coupled using a latency-optimized interconnect topology, control messages and data pass through fewer hops (switching silicon) and travel shorter distances. All those nanoseconds add up, and when the hardware costs millions of dollars, the performance lost to added latency matters.
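To see how nanoseconds compound at training scale, the short sketch below multiplies an assumed per-hop latency by assumed counts of extra hops, latency-bound collectives, training steps and accelerators. It is a rough illustration only; none of the constants come from this report, and every value is a placeholder assumption.

```python
# A minimal back-of-envelope sketch (illustrative only, not data from this
# report): how per-hop latency can accumulate over a training run when a
# sparser interconnect topology adds extra switch hops. Every constant below
# is an assumed placeholder value.

HOP_LATENCY_NS = 300          # assumed latency added per extra switch hop
EXTRA_HOPS = 2                # assumed extra hops in a less dense, multi-tier fabric
SYNCS_PER_STEP = 200          # assumed latency-bound collectives per training step
TRAINING_STEPS = 1_000_000    # assumed optimizer steps in the run
ACCELERATORS = 50_000         # assumed accelerators stalled during each collective

extra_ns_per_step = HOP_LATENCY_NS * EXTRA_HOPS * SYNCS_PER_STEP
wall_clock_s = extra_ns_per_step * TRAINING_STEPS / 1e9
idle_accelerator_hours = wall_clock_s * ACCELERATORS / 3600

print(f"Extra latency per step: {extra_ns_per_step / 1e3:.1f} µs")
print(f"Added wall-clock time over the run: {wall_clock_s / 60:.1f} minutes")
print(f"Accelerator-hours spent waiting: {idle_accelerator_hours:,.0f}")
```

Under these assumptions, roughly two minutes of added wall-clock waiting translates into more than 1,600 accelerator-hours of idle, expensive hardware; denser packaging and flatter topologies are aimed at exactly this term.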
