There are different tiers of affordability out there, however. The cheapest I’ve seen is CoolSim, which, at its most basic level, costs $7,500 a year. The other two major data center CFD vendors — TileFlow and Future Facilities’ 6Sigma — go for about twice and four times that, respectively.
Another unique aspect of CoolSim is its client-server, services-based delivery model. You build your data center model in a basic desktop application, then export it to a file and send the file to CoolSim, which crunches the numbers and sends back a report. With TileFlow and Future Facilities, you do the crunching in-house.
“We deliver it as a SaaS model,” said Paul Bemis, president of the Concord, N.H.-based company. “Since it’s client server, you only pay for what you need.”
But the most important aspect of CFD modeling is accuracy, and with a product like CoolSim, there’s a question around whether you get what you pay for. Pete Sacco, president of engineering and consulting firm PTS Data Center Solutions, has said that cheaper CFD modeling tools such as CoolSim simply aren’t as accurate as the Future Facilities product, which his company uses.
Bemis acknowledges that the Future Facilities software produces more detailed results, but says that a lot of data centers don’t really need or want that much detail. And he naturally disputes claims that CoolSim results could be inaccurate. But as with any product out there, it’s all about caveat emptor. If you do your due diligence, you can quickly find out for yourself which product is right for you at the right price.
Hot/cold aisle design is the concept of aligning IT equipment racks in rows so that the fronts face the fronts and the backs face the backs. That way, cool air can come up through a raised floor or down from overhead into the cold aisle, enter the front of the servers to cool them, and exhaust as hot air into the hot aisle. It helps prevent the mixing of hot and cold air, which leads to wasted cooling.
But letting that cold and hot air roam free in the hot and cold aisles leads to unpredictability, said Carl Cottuli, a VP of product development for Wright Line. Like a toddler who needs discipline, air in a data center needs better direction so it doesn’t go off and do whatever it feels like. That’s chaos air distribution.
“The real problem all along was not the arrangement of racks, but reliance on chaos air distribution,” Cottuli said at the AFCOM Data Center World conference in Las Vegas this week.
Cottuli said the solution is to contain your hot or cold aisles to guide the air directly to the servers or out of the room. If you can contain both, all the better.
Now, Cottuli does have a specific interest in this. Wright Line builds and sells server cabinets, ceiling plenums and cold aisle containment products that fight this so-called chaos air distribution. But the idea of hot/cold aisle containment is not Wright Line’s alone — a lot of vendors are selling aisle containment products, and a lot of users have bought these products or built their own custom aisle containment systems.
Assume a data center has 100 CRAC units: 80 of them are needed to meet the load, and 20 are there for redundancy. That amounts to 25% redundancy, which is typical for most data centers. Let’s also assume that the load stays constant from now until forever, meaning part-load conditions are off the table.
Case 1: 80 units running at 100% speed consume 80/100 = 80% of the possible fan energy use.
Case 2: 100 units running at 80% speed consume 80/100 x 80/100 x 80/100 = 51% of the possible fan energy use, because fan power scales roughly with the cube of fan speed (the fan affinity laws).
Compared to normal operation, running all of the available CRAC units, redundant ones included, at variable speed (regardless of whether that variable speed is achieved with EC motors or VFDs) consumes 100% - (51%/80%) = 36% less fan energy than running the load-required complement of CRAC units at full speed. That’s not a small amount of energy!
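For anyone who wants to check the arithmetic, here’s a minimal Python sketch of the calculation above. It assumes the idealized cube-law relationship between fan speed and fan power; real CRAC fans deviate somewhat from the ideal affinity laws, and the unit counts are just the hypothetical figures from the example.

```python
# Fan affinity law approximation: fan power scales with the cube of fan speed.
# Unit counts are the hypothetical example from the text: 100 CRAC units
# installed, 80 required to meet a constant cooling load.

TOTAL_UNITS = 100    # installed CRAC units
REQUIRED_UNITS = 80  # units needed to meet the load

def fan_energy_fraction(units_running: int, speed_fraction: float) -> float:
    """Fraction of maximum possible fan energy (all units at full speed)."""
    per_unit_power = speed_fraction ** 3  # cube-law approximation
    return (units_running / TOTAL_UNITS) * per_unit_power

case1 = fan_energy_fraction(REQUIRED_UNITS, 1.0)  # 80 units at 100% speed
case2 = fan_energy_fraction(TOTAL_UNITS, REQUIRED_UNITS / TOTAL_UNITS)  # 100 units at 80% speed

print(f"Case 1: {case1:.0%} of maximum fan energy")           # 80%
print(f"Case 2: {case2:.0%} of maximum fan energy")           # 51%
print(f"Case 2 savings vs. Case 1: {1 - case2 / case1:.0%}")  # 36%
```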
ASHRAE TC 9.9 now recommends data center temperatures as high as about 80 degrees Fahrenheit. But that is to be measured at the server inlet. How about on the other side, in the hot aisle? The difference between cold and hot aisles, often referred to as Delta T or just ΔT, can be as much as 50 degrees Fahrenheit. That means hot aisle temperatures could approach 130 degrees Fahrenheit, and if the equipment is live, that’s 130-degree air blowing in your face. Not exactly ideal working conditions.
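The back-of-the-envelope version is simple enough to script. This little Python sketch just adds the worst-case ΔT from above to the recommended inlet temperature; the 50-degree figure is the upper bound cited here, not a property of any particular server.

```python
# Hot-aisle temperature = server inlet temperature + temperature rise (delta-T)
# across the server. Figures are the worst-case numbers from the text.

inlet_f = 80.0    # ASHRAE TC 9.9 upper recommended inlet temperature (F)
delta_t_f = 50.0  # worst-case rise across the server (F)

hot_aisle_f = inlet_f + delta_t_f
hot_aisle_c = (hot_aisle_f - 32) * 5 / 9

print(f"Hot aisle: {hot_aisle_f:.0f} F ({hot_aisle_c:.0f} C)")  # 130 F (54 C)
```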
So what to do, ASHRAE members wondered. Some think that server manufacturers need to start redesigning their boxes so they can be accessed and maintained from the front in addition to, or instead of, the back. That way data center staff could work in the more tolerable cold aisle, where heat stroke is less likely.
Another option is simply to pull up a solid tile where you’re going to work in the hot aisle and swap in a perforated one. That way you can get a nice chilly gust of cold air blasting up from your feet to counteract the furnace blowing in your face. Sure, putting perforated tiles in the hot aisle is considered a severe no-no in well-designed hot/cold aisle data center configurations.
But if it’s temporary, and it can prevent the need to have an IV bag of fluids on site just in case of severe dehydration and overheating of employees, well, then it might be worth it.
Oh, and if you’re not working in a raised-floor environment, you might be out of luck. Maybe you can invest in a couple of oscillating fans.
There has been a lot of talk recently about raising data center temperatures to improve energy efficiency, as the air conditioners don’t have to work as hard to cool the room. ASHRAE TC 9.9 recently changed its recommended upper data center temperature from 77 degrees Fahrenheit (25 degrees Celsius) to 80.6 degrees Fahrenheit (27 degrees Celsius). Munther Salim, a mechanical engineer at HP EYP Mission Critical Facilities, said raising the set points in CRAC units is the “number one thing you can do to save money.”
Google is raising data center temperatures. So are Microsoft and Intel. But Michael Patterson, a thermal engineer at Intel, warned that raising the data center temperature could have an effect on “acoustical noise levels.”
“Servers with (variable frequency drive) fans on servers — the increase in power comes mostly from the increase in fan power after 25 degrees Celsius,” he said. “Servers in 27 degrees Celsius may have higher acoustics due to higher fan speed.”
Patterson showed the following graph:
As you can see, fan power (the orange-reddish line) rises steeply after about 25 degrees Celsius, because fan speeds ramp up to keep the server components cool enough in a warmer environment. According to an ASHRAE document on the extended environmental envelope, “it is not unreasonable to expect to see increases in the range of 3-5 decibels” if the ambient temperature increases from 25 to 27 degrees Celsius.
“Data center managers and owners should therefore weigh the trade-offs between the potential energy efficiencies with the proposed new operating environment and the potential increases in noise levels,” the document states.
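The classic fan laws make it easy to see why a small temperature bump has an outsized effect. Here’s a rough Python sketch using the textbook approximations (fan power scaling with the cube of fan speed, and sound power rising by roughly 50 times the log of the speed ratio); the 15% speed increase below is purely an illustrative assumption, not Intel’s measured data.

```python
import math

# Textbook fan-law approximations:
#   fan power change  ~ (speed ratio)^3
#   sound power change ~ +50 * log10(speed ratio) dB

def fan_power_ratio(speed_ratio: float) -> float:
    """Relative fan power for a given speed ratio (cube law)."""
    return speed_ratio ** 3

def noise_increase_db(speed_ratio: float) -> float:
    """Approximate sound power increase in dB for a given speed ratio."""
    return 50 * math.log10(speed_ratio)

speed_ratio = 1.15  # hypothetical 15% fan speed-up at the warmer inlet temperature

print(f"Fan power: {fan_power_ratio(speed_ratio):.2f}x")            # ~1.52x
print(f"Noise increase: +{noise_increase_db(speed_ratio):.1f} dB")  # ~+3.0 dB
```

Even a modest 15% speed-up cubes out to roughly 50% more fan power and lands at the bottom of ASHRAE’s 3-5 dB range, which is why the fan-power curve turns so sharply above 25 degrees Celsius.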
Are there any other solutions? Some suggest reversing server manufacturers’ trend toward miniaturization. Just a few years ago, a 1U server might have held a single single-core processor; now it might hold multiple quad-core chips. That means dissipating more heat in a smaller space. If servers were made bigger, the fans wouldn’t have to work as hard to move air through such a tight area.
But that might not be feasible for some users. According to a SearchDataCenter.com survey earlier this year, 32% of users say that lack of space is the biggest limit on their data center growth. Making servers bigger won’t help that.
I’ll be on hand to report from the data center-focused sessions. The ASHRAE meeting is typically a great place to see what data center cooling experts are talking about, and technical papers often get published there for the first time. This year, industry leaders such as Roger Schmidt of IBM, Christian Belady of Microsoft and William Tschudi of Lawrence Berkeley National Laboratory will be speaking at the meeting. Here’s some of what ASHRAE TC 9.9 will be presenting:
The companies are not merging, but have penned a deal to combine forces in a push to become the leader in data center prototyping and design. The biggest part of the deal is probably the integration of Future Facilities’ 6SigmaDC computational fluid dynamics software with Aperture’s Vista software. It will allow Future Facilities’ CFD airflow analyses to take advantage of Aperture’s large database of IT equipment inventory.
Liebert is still a big part of the deal, however, as its customer base is far larger than that of either of the other two companies.
The full press release spells out a few more details. The companies announced the deal in the midst of AFCOM’s Data Center World conference in Orlando. Check out all of our Data Center World coverage.
As you can see, the air coming out of the back of a server cabinet in 2003 traveled just a few feet. Last year it traveled more than 12 feet, and Future Facilities estimates it will reach around 25 feet in another five years. Not only that, but in five years the air will still be hot (shown in red) a full 12 feet out.
“In 2003, you didn’t have to worry about airflow in the data center,” said Ikemoto. “Now you have to watch out for the air going into the back of the cabinet across the aisle.”
The result? You might have to make your hot aisles bigger than they once were. And that cramps your data center’s space.