Posted by: Mark Fontecchio
data center cooling, DataCenter
Two common metrics for determining cooling capacity in data centers are kW/rack, or kilowatts per rack, and CFM/kW, or cubic feet per minute of airflow per kilowatt. By figuring out how much power your server cabinet is drawing, the logic goes, you'll then know how much cooling you need.
From looking around, the consensus seems to be that you need anywhere from 80 to 160 CFM per kW of power load. First off, that's a wide range: the top end calls for twice as much cooling as the bottom end. Figuring out just the right amount, so you're neither putting equipment at risk of overheating nor overprovisioning cooling resources and wasting money, is a tricky task. Especially if you've got a whole row of racks, and a bunch of these rows, and their heat loads and cooling needs all affect each other.
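Where does that 80-to-160 range come from? It tracks directly with how much temperature rise you assume across the equipment. Using the standard sensible-heat relation for air (Q in BTU/hr ≈ 1.08 × CFM × ΔT°F, and 1 kW = 3,412 BTU/hr), a rough back-of-the-envelope sketch — the ΔT values below are illustrative assumptions, not vendor numbers:

```python
# Back-of-the-envelope airflow estimate.
# Sensible-heat relation for air near sea level:
#   Q (BTU/hr) = 1.08 * CFM * delta_T (deg F)
# With 1 kW = 3412 BTU/hr, the airflow needed per kW is:
#   CFM/kW = 3412 / (1.08 * delta_T)

def cfm_per_kw(delta_t_f: float) -> float:
    """CFM of supply air needed per kW of IT load, given the
    temperature rise (deg F) of air passing through the equipment."""
    return 3412.0 / (1.08 * delta_t_f)

# A 40 F rise needs roughly 79 CFM/kW; a 20 F rise needs roughly 158.
for dt in (20, 25, 40):
    print(f"dT = {dt} F -> {cfm_per_kw(dt):.0f} CFM/kW")
```

Assume hot exhaust air 40 degrees warmer than the intake and you land near the bottom of the range; assume a gentler 20-degree rise and you're at the top. The spread in the rule of thumb is really a spread in assumptions about your equipment's ΔT.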
But still, let’s keep it simple. Let’s say you have only one rack of servers, and you need to determine how much cooling the rack requires. The answer is that it depends on how you align your IT equipment. Take a look:
This graph comes from Future Facilities, a computational fluid dynamics (CFD) software company that analyzes airflow patterns in data centers. In the picture on the left, blade servers are positioned above three 1U servers. On the right, the blades are on the bottom. Same power load (kW), same cooling resources (CFM). As you can see, the blades run a lot hotter when they're on top — about 81 degrees Fahrenheit compared to about 65 degrees.
“The two cabinet configurations contain the same units of equipment. The only difference is stacking order,” Sherman Ikemoto, the general manager of Future Facilities for North America, wrote in an email, adding later that in this case, kW/rack “is not enough information to know if the cooling requirement for the equipment will be met.”
So now is the time to just throw your hands in the air and shrug, right? Either that, or overprovision cooling: if the power bill runs a little high, you can take that heat from management. But if the equipment shuts down because there isn't enough cooling, that's your job on the line.
But there are ways to figure it out. You could hire a consultant to use the fancy-shmancy software that Future Facilities sells. Most of you would hire a consultant for this, because using the software yourself is complicated, and besides that, it's very expensive. Or you could put temperature sensors along the back of your racks to measure how hot it is at different heights, and adjust your server stacking configuration accordingly for new server cabinets. I'm sure there are other ways to do this, and if you have any ideas, leave a comment below.
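The sensor approach can be as simple as logging exhaust temperatures at a few heights and flagging anything running hot. A hypothetical sketch — the sensor positions and the 80°F threshold here are illustrative assumptions, not a recommendation:

```python
# Given exhaust-side temperature readings (deg F) at several heights
# on a rack, flag the positions running hotter than a chosen threshold.
# Positions and threshold are hypothetical; pick values that match
# your own intake temperature plus an acceptable rise.

def hot_spots(readings: dict, threshold_f: float = 80.0) -> list:
    """Return the sensor positions whose reading exceeds threshold_f."""
    return [pos for pos, temp in readings.items() if temp > threshold_f]

# Example readings echoing the stacking-order problem above:
rack_rear = {"bottom": 68.0, "middle": 74.5, "top": 86.0}
print(hot_spots(rack_rear))  # prints ['top']
```

If the top of the rack keeps showing up in the hot list, that's a hint to move the densest gear (the blades, in Future Facilities' example) toward the bottom when you build out the next cabinet.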
Ikemoto said it’s all about thinking of the entire data center — and not just individual cabinets — as the ecosystem to focus on when it comes to power and cooling.
“To me, it’s an expression of Moore’s Law in a form that is relevant to the data center level (as opposed to chip level) and thus expresses a connection across the entire supply chain from the chip to the room,” he wrote.