The history of immersion cooling in IT dates back to 1985 and the debut of the Cray-2 supercomputer, which used a two-phase liquid made by 3M. For most of the four decades since, immersion cooling was largely absent from data centers until it began a comeback in the 2010s. Today, the vast majority of immersion cooling capacity is accounted for by cryptocurrency mining operations (anecdotally several hundred megawatts in total), followed by academic and technical high-performance computing. However, it has yet to break into the mainstream of enterprise and colocation data center facilities, according to a recent Uptime Institute survey (see Uptime Institute Cooling Systems Survey 2024: Direct liquid cooling).
This is likely to change in the coming years. A rapid escalation in server silicon power (driven by semiconductor physics and the pursuit of compute performance) and the accompanying rise in rack densities have recently prompted more enterprise operators and colocation providers to consider immersion cooling for their densified compute systems. This almost exclusively means single-phase immersion: mainstream interest in two-phase fluids remains nascent, largely because of 3M's exit from the segment over environmental and health concerns around per- and polyfluoroalkyl substances (PFAS).