Posted by: Mark Fontecchio
Container Data Center, data center cooling
A couple of weeks ago I got the chance to spend a morning with Paul Brenner, who works in the high-performance computing department at the University of Notre Dame. Brenner is spearheading a project to build a containerized data center next to a local municipal greenhouse so that, during the winter months, waste heat from the servers can be piped into the greenhouse to warm it. Check out the Notre Dame greenhouse data center story (there’s a cool video).
Another thing I learned while hanging out with Brenner is that he is an engineering officer in the U.S. Air Force Reserve, and he returned from an overseas deployment in Afghanistan just a few weeks ago. While there, Brenner helped build data centers.
Obviously it’s not an ideal place to build a data center, and Brenner had to do a lot of improvising. A few things complicated his mission. First, because it’s the military, much of the information is siloed, accessible only to select people. Not only do the different branches of the military want their own data centers and their own servers, but divisions within each branch want close control of their IT assets. As a result, many of the data centers there are small hodgepodges: a rack here, a rack there.
Brenner mentioned that some of the major IT vendors, such as IBM, HP and Sun Microsystems, have pitched their containerized data centers as a suitable option for military operations. But Brenner said that even under ideal conditions, deployment time is measured in months. In many cases, he didn’t have that much time.
So he made do. Oftentimes he would take a bunch of household air-conditioning units and daisy-chain them together, which he said actually provided a good deal of cooling redundancy. It’s all about adjusting to conditions, and when you’re overseas serving your country in a barren desert land, you do whatever you can to keep the computers running.