UII UPDATE 482 | MARCH 2026
If there is a surprising element in the sale of CoolIT Systems, a pioneer in direct liquid cooling (DLC) for servers, it is that it did not happen sooner. The acquirer is Ecolab, a US-based industrial conglomerate, which has agreed to pay $4.75 billion in cash for the Canadian DLC specialist. Ecolab's bet is that its existing relationships in the data center industry will give CoolIT's products an advantage by offering complete life cycle services around them. The sellers are majority stakeholder KKR, a major private equity player, and Mubadala, Abu Dhabi's sovereign investment vehicle. KKR and Mubadala invested in CoolIT Systems in 2023 at a reported valuation of under $300 million.
The acquisition of CoolIT is the second largest deal in the liquid cooling space, behind Eaton's purchase of Boyd Thermal in November 2025, valued at $9.5 billion (see Investments signal a heated liquid cooling race). A bout of activity in data center thermal management saw 12 transactions (major investments, mergers, acquisitions) in 2025 on the strength of global spending on AI compute, with another four in the first quarter of 2026, including this latest deal.
CoolIT is a major supplier of cold plates and coolant distribution units (CDUs), and its products are often integrated into and sold with systems from major server vendors. CoolIT products also have a long-established presence in high-performance computing, including the world's largest scientific supercomputers, as well as in Nvidia's own rack-scale system designs.
Key notes on the transaction:
- Still no sign of a slowdown. The price offered for CoolIT shows no slowdown in demand for high-density data center infrastructure, even if the durability of the current AI investment supercycle remains an open question. At 8.6 times forward revenue, Ecolab's valuation is even more ambitious than the 5.6 forward revenue multiple Eaton paid for Boyd Thermal. Notably, Nvidia is guiding the data center market to a much higher pace of GPU shipments for 2026 and 2027 than seen in 2025; most of these GPUs will use cold plates due to the high thermal power and low chassis profile of GPU compute nodes.
- PG25 water cold plate systems are the benchmark to beat. Despite the trade-offs of lower thermal performance and higher pumping energy from their (typically) 25% propylene glycol content, closed-loop cooling systems using PG25 coolants dominate DLC shipments. This is due to their high technical and commercial maturity, particularly in material compatibility and in operational practices for monitoring and maintaining coolant quality. Alternative approaches, such as dielectric and refrigerant-based cold plates, as well as immersion systems, will be expected to demonstrate clear technical or business advantages over water cold plate systems before they can gain share.
- Expect water cold plate systems to grow in scale. Traditionally, cold plates targeted only high-power components, such as the CPUs, GPUs or other compute chips in the server, capturing between 60% and 80% of the heat load. As GPU servers become denser, cold plate heat load coverage increases. Nvidia's next-generation rack-scale systems will introduce near-100% heat load capture, which some supercomputers have already demonstrated. This will shift a larger share of cooling capacity, and thus spending, toward liquid cooling systems in data centers hosting extreme-density compute systems (currently up to 150 kW, soon above 200 kW per compute rack).
- Expect water cold plate systems to keep up with thermal design power (TDP) values. For now, there is no end in sight to the escalation of the TDP of high-performance compute chips. Server CPUs are currently at 500 W with a path toward 1 kW per multi-chip package (on-chip power delivery networks are already capable of that), and next-generation GPUs from AMD and Nvidia will surpass 2 kW per module. Water cold plates will be able to keep pace: higher power alone will not challenge their current market dominance because, for now, the limiting factor is heat transfer from the chips to the surface of the package, not from the package into the cold plate or coolant. Future chip designs with extreme heat flux (density of thermal power) in hot spots on the silicon may change that view, but this is unlikely within the next five years.
- Maintenance services will become large, highly complex projects. As the size and complexity of DLC systems grow with the large-scale installation of dense compute, concomitant risks increase. Monitoring, maintenance and remediation of the fluid networks, both in facility and technology loops, will require more specialist skills to minimize any downtime and damage.
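The valuation gap between the two headline deals can be made concrete by backing out the forward revenue each multiple implies. This is illustrative arithmetic only, using the deal values and multiples cited above; it is not disclosed company guidance.

```python
# Back out the forward revenue implied by a deal price and a revenue
# multiple. Deal figures are those cited in this update.

def implied_forward_revenue(deal_value_usd: float, revenue_multiple: float) -> float:
    """Forward revenue implied by price = revenue x multiple."""
    return deal_value_usd / revenue_multiple

coolit = implied_forward_revenue(4.75e9, 8.6)  # Ecolab-CoolIT at 8.6x
boyd = implied_forward_revenue(9.5e9, 5.6)     # Eaton-Boyd Thermal at 5.6x

print(f"CoolIT implied forward revenue: ${coolit / 1e6:,.0f}M")  # ~$552M
print(f"Boyd implied forward revenue:   ${boyd / 1e6:,.0f}M")    # ~$1,696M
```

In other words, Ecolab is paying roughly half of what Eaton paid for a business with roughly a third of the implied forward revenue, which is what makes the multiple stand out.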
With DLC, any interruption to cooling capacity, including coolant quality degradation or leaks, will have a near-immediate effect on IT, with the potential to disrupt several megawatts of load. Even in facility loops, IT's ability to tolerate the loss of chilled water flow will be measured in tens of seconds at most, not minutes. This places greater emphasis on monitoring and maintenance, and encourages the use of more thermal energy storage, typically in the form of large water tanks. Future fluid networks for cold plate systems may also use larger coolant reservoirs for thermal buffering to improve temperature stability and resilience.
Consequently, data centers will require more specialists in hydraulics and valve controls to keep up with the requirements of high-density IT. Alternative DLC techniques could potentially demonstrate a tangible advantage to infrastructure operations in this regard, even if they are unable to match raw thermal performance. Until then, water-based technology fluid networks will continue to grow — as will the associated maintenance work to support them.
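A rough sensible-heat calculation shows why ride-through is measured in seconds and why storage tanks help. The sketch below uses hypothetical figures (a 50 m³ tank, a 10 K usable temperature rise, a 10 MW hall) purely for illustration; real designs depend on loop hydraulics and approach temperatures.

```python
# Estimate how long a chilled-water tank can buffer an IT load, using
# the sensible heat of water: energy = mass x specific heat x delta-T.
# All example figures are hypothetical illustrations, not vendor data.

WATER_DENSITY = 1000.0        # kg/m^3
WATER_SPECIFIC_HEAT = 4186.0  # J/(kg*K)

def ride_through_seconds(tank_volume_m3: float, usable_delta_t_k: float,
                         it_load_w: float) -> float:
    """Seconds of cooling a water tank can buffer for a given IT load,
    assuming the full usable temperature rise is available."""
    stored_joules = (tank_volume_m3 * WATER_DENSITY *
                     WATER_SPECIFIC_HEAT * usable_delta_t_k)
    return stored_joules / it_load_w

# Hypothetical: 50 m^3 tank, 10 K usable rise, 10 MW data hall.
print(f"{ride_through_seconds(50, 10, 10e6):.0f} s")  # ~209 s
```

Even a sizable tank buys only minutes at multi-megawatt loads, which is why operators pair storage with tight monitoring rather than relying on either alone.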
Table 1 is an overview of transactions in the data center cooling segment since November 2024, in reverse chronological order. The Ecolab-CoolIT tie-up is the fifth billion-dollar deal in the past 12 months.
Table 1 Cooling M&A and investment activity (November 2024 to March 2026)

About the Author
Over the past 15 years, Daniel has covered the business and technology of enterprise IT and infrastructure in various roles, including industry analyst and advisor. His research includes sustainability, operations, and energy efficiency within the data center, on topics like emerging battery technologies, thermal operation guidelines, and processor chip technology.
dbizo@uptimeinstitute.com