Posted by: Erin Watkins
data center cooling, data center design, IBM
The IT world has had a decades-long love triangle with air- and water-cooling. Air-cooling takes IT to the prom, but now water-cooling is holding up a boom box outside IT’s window to win it back.
IBM has already made headlines with Sequoia, the “world’s fastest” supercomputer. But it also made waves by introducing a new commercial supercomputer, the SuperMUC, which boasts direct hot-water cooling and superb energy efficiency – using 40% less energy than air-cooling, according to IBM.
The PR video from the Leibniz Supercomputing Centre says the SuperMUC’s cooling system is modeled on the human circulatory system – a fun medicine/technology crossover. Cool water flows directly to the processors, absorbs their heat, and carries it out to a heat exchanger, which then helps heat the facility.
Apparently, the facility housing SuperMUC has successfully eliminated CRACs from the equation and is saving Leibniz a million euros a year. IBM used to cool mainframes with water, but increased processor density and cheaper air conditioning drove data centers to adopt air-cooling. Now that energy costs are rising and going green is a priority, companies are once again looking to liquids to cool their machines. Plus, according to Robert McFarlane, principal at Shen Milsom & Wilke, it’s hard to argue with the fact that “water is approximately 3,500 times more efficient than air.”
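That “3,500 times” figure lines up with a quick back-of-envelope comparison of volumetric heat capacity – how much heat a given volume of fluid can carry per degree of temperature rise. A minimal sketch, assuming typical room-temperature property values for water and air:

```python
# Back-of-envelope check on the water-vs.-air claim: compare the
# volumetric heat capacity (specific heat * density) of each fluid.

cp_water = 4186.0    # specific heat of water, J/(kg*K)
rho_water = 1000.0   # density of water, kg/m^3

cp_air = 1005.0      # specific heat of air, J/(kg*K)
rho_air = 1.204      # density of air at ~20 C, kg/m^3

ratio = (cp_water * rho_water) / (cp_air * rho_air)
print(f"Water carries ~{ratio:,.0f}x more heat per unit volume than air")
# -> roughly 3,460x, right in the ballpark of McFarlane's 3,500 figure
```

The exact number shifts a bit with temperature and pressure, but the order of magnitude is what matters: moving heat with water simply takes far less pumped volume than moving it with air.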
The hurdle for many facilities is infrastructure. Liquids require pipes. Even SuperMUC wouldn’t be able to use that capillary-inspired cooling system without the supporting infrastructure.
Internap, a data center hosting provider with facilities in various U.S. cities, has built its newest expansions with underfloor piping infrastructure to deliver glycol directly to the servers. Older parts of the facility use hot/cold-aisle air-cooling, with the underfloor space used only for air.
Then there’s Google, which built a wastewater processing facility to supply water for cooling, easing some of the strain on the community.
But both of those examples are new builds. It will be interesting to see how invasive and disruptive adding water-cooling infrastructure would be to an existing data center.
Do you think more facilities are going to pony up the infrastructure cost and switch (back?) to water-cooling, or is the relative comfort of air-cooling enough to keep data centers happy?