At this week’s Gartner Data Center Conference in Las Vegas, research VP John Phelps asked users a series of questions about their data center facilities technologies. Here are some of the stats:
What type of data center fire suppression system do you have?
-Water sprinkler only 21%
-Water plus clean agent 50%
-Clean agent only 21%
-Don’t know 8%
“For those of you with water-only systems, if those sprinklers go off you will be in a world of hurt,” Phelps said. “The goal of a sprinkler system is to protect your building; a clean agent is supposed to protect your equipment.”
If you use a clean agent for data center fire suppression, what type?
-CO2 or inert gas 12%
-FM200 or other fluorine-based suppressant 64%
-Powdered aerosol 1%
-Don’t know 11%
In which budget does data center power reside?
-Facilities but will probably move to IT 16%
-Budget has recently moved to IT 8%
-Always part of IT 12%
-Don’t know 6%
Do you measure PUE and if so, what is it?
-Do not measure PUE 49%
-Measure PUE, but don’t know what it is 11%
-Measure PUE and it is greater than 2.5 3%
-Measure PUE and it is between 2.0-2.5 9%
-Measure PUE and it is between 1.75-2 10%
-Measure PUE and it is between 1.5-1.75 12%
-Measure PUE and it is between 1.3-1.5 6%
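For readers who fall into the "measure PUE, but don't know what it is" camp: Power Usage Effectiveness is simply total facility power divided by IT equipment power. A minimal sketch (the kW figures below are hypothetical, not from the survey):

```python
# Power Usage Effectiveness: total facility power divided by IT equipment
# power. A PUE of 2.0 means the facility draws one watt of overhead
# (cooling, power distribution, lighting) for every watt delivered to IT.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Return PUE; values approach 1.0 as facility overhead shrinks."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical site: 1,800 kW total draw against a 1,000 kW IT load.
print(round(pue(1800, 1000), 2))  # 1.8 -- within the 1.75-2.0 survey band
```

By that yardstick, only about 18% of respondents who measure PUE land below 1.75.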
What computer room cooling system do you use?
-Room cooling 46%
-In-row cooling 4%
-In-rack cooling 1%
-In-row and in-rack cooling 2%
-Room and in-rack cooling 20%
-Room and in-row cooling 15%
-Room, in-rack and in-row cooling 10%
-Don’t know 2%
Do you use hot or cold aisle containment?
-Yes, cold aisle containment 22%
-Yes, hot aisle containment 15%
-Not currently, but plan to use cold aisle in 2 years 22%
-Not currently, but plan to use hot aisle in 2 years 13%
-No plans 23%
-Don’t know 5%
Do you make use of free cooling?
-Air side economizers 13%
-Water side economizers 12%
-Nope and no plans 48%
-No, but plan to install in two years 17%
-Don’t know 4%
Would you build a data center in five years without a raised floor?
-No way, you have to have a raised floor 18%
-Might consider, but not convinced 35%
-Yes, considering it 35%
-Yes, I am going to build a data center without a raised floor 6%
-Yes, already built a data center without a raised floor 6%
Phelps said that by 2015, more than 50% of new data center builds will not use a raised floor.
Categories include “Audacious Idea,” “Innovation in a Smaller Data Center,” and “Joint IT and Facilities Innovation.” Uptime is expanding the field to 10 categories next year, up from eight in past awards. The 2010 winners included Microsoft, whose “Audacious Idea” was a containerized data center in Chicago that reduced PUE from 1.6 to 1.15, cut building costs by 33% and cut time to deployment by 30%.
You’ve got a bit of time to get the applications in – they’re not due until February 18, 2011. You can learn more about the application process and the awards right here.
Here is a list of the top data center facilities stories and tips from SearchDataCenter.com in 2010:
NFPA drops data center EPO requirement
The National Fire Protection Association (NFPA) worked with data center industry leaders to come up with an alternative to the data center emergency power off (EPO) button.
Uptime Institute rolls out data center Operational Sustainability standard
The Uptime Institute unveiled a new data center standard, Data Center Operational Sustainability, which will help data center pros benchmark their team’s staffing, processes and more.
Getting started with hot or cold-aisle containment
Is containment right for you? Should you do hot-aisle containment or cold-aisle containment? Should you do it yourself or buy vendor products? What about fire code issues? How do you measure whether containment actually worked as hoped? Find out in this article.
Taking a look at Kyoto Cooling
The Kyoto wheel seems to have solved the challenges of air-side free cooling in the data center. Although it has been well proven since its introduction in 2007, the innovations that make the Kyoto wheel such an energy-efficient cooling device have not been well understood.
California faces data center consolidation challenges
When IT pros talk about California’s planned data center consolidation, they often use the word aggressive to describe the timeline. Indeed, some think the timeline is so aggressive that it could render the consolidation project impossible.
Biodiesel causing data center backup problems
Some states require the use of biodiesel over traditional petro-diesel — mandates designed to wean states off petroleum dependence and move toward more environmentally sustainable fuels. But these alternatives pose risks. Derived from vegetable oils or animal fat instead of petroleum, biofuel blends can increase water and biological contaminants in fuel supplies.
Where to turn for data center design standards?
The Uptime Institute, BICSI and TIA all have produced data center design reference standards, and in this tip our expert parses out how each standard can work for your data center.
Will a transformerless UPS work for your data center?
Over the past five to seven years, the transformerless UPS has come to dominate the smaller three-phase (30 kVA and under) marketplace. These units are much smaller, lighter and lower in cost than the previous generation of transformer-based units. What are the tradeoffs?
Dr. Bob Sullivan on the future of data center cooling
In this Q&A, Dr. Bob, the father of hot-aisle cold-aisle, weighs in on containment, variable frequency drive fans, and other data center cooling issues.
Microsoft vs. Facebook: Data center container smack down
Microsoft data center mastermind Christian Belady debates now-Facebook data center manager Chuck Goolsbee over containerized versus brick-and-mortar data center design.
Are we missing any top data center facilities stories? Respond via Twitter @DatacenterTT.
A murmuration of starlings brought down a police and fire department data center in Millville, New Jersey. Public officials suspect a pair of the invasive, gregarious birds were perched on the wires and touched wings, shorting out a transformer in front of City Hall and knocking out power.
This is not the first time starlings have caused problems for U.S. businesses. The birds were introduced to the U.S. in the 1890s and are a costly invasive species, causing air traffic problems and agricultural destruction.
According to a recent International Data Corporation (IDC) report, sales of data center infrastructure equipment, such as PDUs, UPSes and cooling units, jumped 6.9% in revenue and 3.6% in shipments over the prior quarter.
Does this opening of IT admins’ wallets mean that data centers are fully out from the throes of the recession? It’s more of a cautious optimism at this point, said one analyst.
“IDC is noticing conflicting signals from the market,” said Katherine Broderick, Senior Research Analyst, Servers and Data centers at IDC. “We’ve spoken with many enterprises that are consolidating their many data centers into fewer, larger, state-of-the-art facilities which could mean fewer brick and mortar buildings in the long term. However, data center wholesalers and the IT recovery point to data center growth. In addition, the move to modular-designed builds will mean a smoother future as data centers are built out in pods, containers or rooms rather than the larger, rarer, up-front investments that were made in the past.”
Broderick mentioned that the bulk of the investment is going toward traditional data center cooling methods but the research firm is hearing more about in-row and ambient cooling solutions being deployed in advanced data centers.
Winners on the vendor side include:
Emerson Network Power and its Liebert line led the cooling sector, with 47.9% of revenues in Q2 2010.
In the PDU market, APC by Schneider Electric was the leader with a 29.1% market share.
In data center racks, HP led the pack with 22.5% of revenue.
UPS-wise, Eaton was ahead of the competition with 30.2% of revenue.
IDC said a variety of methods were used in the research, including end-user surveys, vendor guidance, earnings statements from the infrastructure players and the IDC Quarterly Server Tracker.
A new data center cooling solution could give the Kyoto heat wheel a run for its money.
APC by Schneider Electric just announced the EcoBreeze, an evaporative and air-to-air heat exchange cooling solution that can switch between air-to-air and indirect evaporative heat exchange, with mechanical refrigeration built in. Bob McFarlane, Principal at Shen Milsom & Wilke, calls this automatic switching between the two cooling techniques a breakthrough.
“Designing the control systems to switch between, or to integrate, these two very different systems has been one of the most challenging aspects of air-side free cooling designs,” said McFarlane.
The Kyoto wheel, notes McFarlane, separates outside and inside air while still accomplishing efficient heat exchange with low outside-to-inside transfer, but it isn't designed to integrate or switch between evaporative and air-to-air cooling; it still needs control systems for the mechanical cooling plant. Enter EcoBreeze.
“This new APC device combines a built-in cooling compressor of the highest available efficiency with a unique air-to-air heat exchanger using evaporative cooling into a total, self-contained package that also incorporates all the necessary control systems and totally separates outside from inside air,” said McFarlane. “It would appear to be a well thought-out solution to the air-side free cooling challenge.”
Because of this “all-in-one” approach, McFarlane says the EcoBreeze could be a serious contender against the Kyoto wheel, depending on its cost.
APC’s EcoBreeze is modular, sold in 50 kW units that can be grouped into configurations of up to four or eight modules. The product will be available next year, and you can view full specs in the original press release.
Cooling directly at the server level is one of the latest energy-efficient trends being touted for the data center, but whether its chilly benefits will be reaped in the near future is another question.
Emerson Network Power just rolled out its Liebert XDS cooling system, which brings refrigerant-based cooling directly to the server level. The system boasts a standard IT rack with cold plate server cooling technology. Heat produced by the server is shuttled through heat risers to the server housing, through thermal interface material that lines the cover, and finally to a cooling plate, which uses refrigerant-filled tubing that absorbs the heat – air thus doesn’t have to be expelled from the rack into the data center.
Doug Washburn, an analyst with Forrester Research, says reducing cooling costs in the data center starts with basic best practices such as raising server inlet air temperature and cleaning up cabling under the raised floor. But when the opportunity arises for higher-impact cooling methods – like direct server cooling – it’s worth looking into.
“Direct cooling, or bringing cooling directly to the server where the heat is being created, like that offered by Liebert’s XDS, reduces cooling costs by removing server fans and data center facility cooling equipment,” Washburn said.
However, while cooling directly at the server level presents a host of benefits, it may not yet be at the top of IT managers’ Christmas lists.
“Despite the energy savings opportunity, the high majority of data center managers are primarily goaled on uptime and resiliency – not energy and cooling efficiencies,” Washburn was careful to point out. “While this is changing, it will take time for data center managers to get comfortable with modifications that they may perceive to negatively impact availability, such as removing server fans. Likewise, broad-scale removal of facility cooling equipment isn’t likely given that data center managers will continue to operate legacy IT systems and racks which can’t take advantage of server-level cooling.”
Washburn didn’t completely discount the merits of improving server cooling efficiency. He pointed out that the cost to power and cool a server over its lifespan may exceed the price tag of the equipment itself, and reducing cooling capacity can also avoid downtime. It’s just a matter of acclimation at this point for IT managers who may be hesitant to switch to the radical cooling technology and in the process switch their priorities from uptime to energy efficiency.
If you are considering server cooling efficiencies for your data center, you can check out specs of the XDS, available in 42U and 45U versions, at the Liebert website.
What’s new in raised floors for data center applications? I recently spoke with Daniel Kennedy, data center product manager at Jessup, MD-based Tate Access Floors, about some of the new products the company rolled out at AFCOM, and the future of raised floors in increasingly dense data center environments.
A lot of large enterprise data centers have made the transition to slab floors, moving away from raised floors and perforated tiles. Overhead air distribution is in vogue, and our columnist Chuck Goolsbee laid out the case for slab floors back in ’07: Data center raised floor vs. solid debate.
But Kennedy said companies that are building data centers on slab floors typically have a very specific hardware deployment pattern in place. “They have a set model that will last ten years, and then they build another data center. They stay right on the bleeding edge,” Kennedy said. “But raised floor shines from a flexibility standpoint — you’re able to reconfigure a site on a long-term basis. You think you know what your data center will look like next year, but if you need it to last longer, you need to be able to make changes.
“If your IT density increases too much for under-floor cooling, the raised floor makes an excellent place to run chilled water. People couldn’t have predicted row based cooling systems or chilled water doors, but obviously we’ve got those in the market now. Who knows what the next 15 years will hold? If you’re trying to get the most life out of a building that you can, the raised floor will give you flexibility.”
But what about equipment density outweighing the floor rating? Kennedy said it’s a myth. The solid tiles are rated to 3,000 pounds, and a typical rack takes up two panels, so a blade chassis would need to go near 6,000 pounds before it stressed the raised floor.
Kennedy said Tate has never run into a user that hasn’t been able to put data center equipment on the raised floor because it was too heavy. He said it’s difficult to find a rack that weighs 3,000 pounds.
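Kennedy's arithmetic is easy to sanity-check. Using the figures from the article (3,000-pound tile rating, a rack footprint spanning two panels) and a hypothetical 2,200-pound loaded rack:

```python
# Sanity check on the floor-loading math cited in the article:
# solid tiles rated to 3,000 lb each, a typical rack spanning two panels.
TILE_RATING_LB = 3000
PANELS_PER_RACK = 2

rack_limit_lb = TILE_RATING_LB * PANELS_PER_RACK  # 6,000 lb before stress

# A hypothetical, heavily loaded blade rack at 2,200 lb sits well under
# the limit, using only about 37% of the rated capacity.
rack_weight_lb = 2200
utilization = rack_weight_lb / rack_limit_lb
print(rack_limit_lb, round(utilization, 2))  # 6000 0.37
```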
The three new Tate raised floor products include:
-A directional airflow grate that can angle airflow directly into the rack face, instead of blowing air 90-degrees straight up in a vertical column. Kennedy said around 50% of the air in a standard grate bypasses the rack altogether. “You spend a lot of money moving that air around, better to put it into the IT equipment if you can.”
-Tate also rolled out a damper system called SmartAire, which helps balance static pressure under the floor. In a heterogeneous data center, racks require varying amounts of airflow, and that demand can shift throughout the day. The dampers can restrict airflow to lower-density racks and increase it in higher-density spots, using electronic sensors at the rack.
-PowerAire is the third product, a variable speed fan that throttles up and down based on temperatures at the face of the rack. The product is meant to deal with varying cooling requirements throughout the day, and to make sure that high density deployments, like blade servers, get the airflow they need.
Kennedy said these kinds of products have been in commercial office spaces for thirty years. “Office cooling loads are variable, people go out to lunch and open windows,” he said. “You didn’t have that much variability on the data center side until more recently. The commercial office space went through this change in the 70s and 80s. The data center is still catching up from a technological standpoint.”
Check out our tip on keeping under the raised floor clean.
Are raised floors going away any time soon? Weigh in on Twitter by replying @DataCenterTT.
That kind-of-a-big website Facebook yesterday released some green strategies it has been implementing in its data centers, and the announcement probably couldn’t have come at a better time for the company.
This could be construed as damage control (or clever timing) on the social networking giant’s part: Greenpeace jumped down the company’s throat early this year when it announced it was building a Prineville, Ore., data center in a part of Oregon served by PacifiCorp coal power.
The report, posted on the network yesterday by Jay Park, Director of Data Center Engineering at Facebook, says the company has learned a thing or two about data center energy efficiency as its footprint has rapidly expanded, and its data centers are now reaping the benefits. Facebook claims that one of its data centers saves about 2.5 million kilowatt-hours annually, translating to an annual cost savings of $230,000. On the environmental side, Facebook says it has reduced greenhouse gas emissions by 967 metric tons annually.
To garner the savings, using this particular data center as a model, Facebook said it improved airflow distribution by rolling out cold-aisle containment. The company also reduced cooling load by slowing server fans that were spinning at unnecessarily high speeds while still keeping temperatures within the recommended data center range, and it shut down 15 CRAH units after discovering they were not needed. Finally, Facebook increased energy efficiency by raising the set point temperature of the CRAH units while maintaining uniform temperatures in the cold aisles. The company also raised the chilled water temperature from 44°F to 52°F, which reduced the chilled water system load by 171 tons per hour.
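Facebook's headline numbers hang together. Dividing them out gives an implied electricity rate and grid emissions factor; both are inferences from the published figures, not numbers Facebook disclosed:

```python
# Back-of-envelope check on Facebook's reported savings: 2.5M kWh,
# $230,000, and 967 metric tons of CO2 per year. The implied rates are
# inferred here, not published by Facebook.
annual_kwh_saved = 2_500_000
annual_dollars_saved = 230_000
annual_co2_tonnes = 967

implied_rate_usd = annual_dollars_saved / annual_kwh_saved
implied_co2_kg_per_kwh = annual_co2_tonnes * 1000 / annual_kwh_saved
print(f"${implied_rate_usd:.3f}/kWh, {implied_co2_kg_per_kwh:.2f} kg CO2/kWh")
# roughly $0.092/kWh and 0.39 kg CO2/kWh -- plausible for a
# coal-heavy grid circa 2010
```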
While Facebook mentioned all of these energy efficiency strategies of this particular data center, it wasn’t clear if this was in reference to its Prineville data center. A recent interview with two Facebook execs, however, did tackle the coal concerns in its Oregon location, and one exec mentioned that the company would be working with PacifiCorp on becoming more reliant on renewable energy. It certainly is noteworthy, though, that Facebook released its data center energy efficiency strategy yesterday and didn’t use it as an opportunity to discuss renewable forms of energy and tackle the Greenpeace concerns head on. So for the moment, the company still seems to be betting on coal with its Oregon data center.
The full report on Facebook’s data center energy efficiency can be found here. Facebook has also set up a fan page of its new Prineville data center, which is set to open in the first quarter of 2011. Sound off @datacenterTT on whether you think Facebook is tackling energy efficiency effectively enough.
APC/Schneider Electric’s PowerChute Business Edition 9.0
APC rolled out new UPS software to provide energy reporting for IT equipment.
PowerChute 9.0 works with the APC Smart-UPS device series and calculates the Smart-UPS’ power usage in kWh, enabling the user to see, via the LCD screen of the Smart-UPS, how the energy use of the protected equipment is affecting data center costs. In other words, on the UPS, the admin can see the energy being consumed, the cost of that energy and the CO2 being produced by the server equipment.
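The reporting itself boils down to metered kWh multiplied by a cost rate and an emissions factor. A rough sketch of that calculation; the rate and factor below are placeholder assumptions, not APC defaults:

```python
# Energy reporting in the PowerChute style: metered kWh times a utility
# rate gives cost, times a grid emissions factor gives CO2. Both constants
# below are assumed values for illustration.
KWH_RATE_USD = 0.10      # assumed utility rate, $/kWh
CO2_KG_PER_KWH = 0.5     # assumed grid emissions factor

def energy_report(kwh: float) -> tuple[float, float]:
    """Return (cost in USD, CO2 in kg) for a metered energy figure."""
    return kwh * KWH_RATE_USD, kwh * CO2_KG_PER_KWH

# e.g., 1,200 kWh consumed by protected equipment over a month
cost, co2 = energy_report(1200.0)
print(f"${cost:.2f}, {co2:.0f} kg CO2")  # $120.00, 600 kg CO2
```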
When PowerChute is used with the Smart-UPS, admins can also configure exact sequences of equipment shutdowns and restarts so the critical servers can be online longer, allowing for greater flexibility. IT managers can also power down non-critical equipment to conserve runtime.
PowerChute Business Edition 9.0 currently ships with the Smart-UPS 5kVA and you can also download it.
Eaton’s BladeUPS Preassembled System
Eaton Corporation just launched a UPS solution designed for consolidating standalone UPS units.
The BladeUPS Preassembled System combines UPSes to simplify device management and power capacity planning, and can be ordered with two to six Eaton BladeUPS units installed. The preassembled system comes with UPS modules and batteries installed in 42U of space, along with the necessary internal system wiring and communication cards. The BladeUPS modules within expand power protection from 12 kW to 60 kW in a 19-inch rack. In addition to the energy and cooling cost savings it provides, Eaton says that since all UPS modules are preinstalled in the system enclosure, admins can save up to seven percent on installation costs compared to purchasing separate components.