Data center facilities pro


February 4, 2009  2:33 PM

Tidbits from The Green Grid Technical Forum

Mark Fontecchio

The Green Grid Technical Forum continues today in San Jose. Yesterday was for members only; today it’s open to anyone. I spoke yesterday to two board members, John Tuccillo from APC and Mark Monroe from Sun Microsystems. They updated me on some of the things going on with The Green Grid, which was formed a couple of years ago to address the growing issue of data center power consumption. Here’s a rundown on a few of the details.

Data center design guide

This year, The Green Grid will begin putting together a data center design guide, something it’s calling Data Center 2.0. It aims to release the first draft of the design guide in February 2010, a year from now. The guide is billed as a top-to-bottom look at how to build an efficient data center, covering everything from design to construction to operations.

Data center Second Life

Tuccillo and Monroe also told me about plans for The Green Grid Academy, which the group sees as a way to more easily disseminate the information, metrics and tools it has created. As Tuccillo said, “we need to embed deeper learning so the material can become second nature.” The concept: A user goes to The Green Grid Academy, creates an avatar, and is then walked through a virtual world set up as a vendor-neutral data center. The user can then choose to take classes on various elements of operating a data center, such as Green Grid metrics. It will be free. Right now it is available only to members, as the group wants them to vet it and find any bugs. They hope to have it ready for everyone in the first half of this year.

“This will give us a platform for end user type education,” Monroe said.

Making data center end user progress

When The Green Grid formed more than two years ago, it was mostly a conglomeration of vendors, and it took some heat because of that. Even now, the board of directors all come from vendor companies — AMD, APC, Dell, EMC, HP, IBM, Intel, Microsoft, and Sun.

But it has made some progress in bringing more end users into the fold. Last fall it formed an end-user advisory council, a 10-member group whose members include data center users from AT&T, British Telecom and eBay. Their job is to guide the board of directors so the focus of The Green Grid doesn’t lean too far toward the IT supplier side. They’re involved in the data center design guide mentioned above, for example. And The Green Grid has been strict about who gets on the advisory council. A company like Microsoft could conceivably have a representative on there, since it’s one of the largest builders of data centers in the world. But The Green Grid determined that no IT vendor could have representation on the end-user council.

In addition, Tuccillo and Monroe said they’re growing their end-user ranks. The Green Grid has 200 members now, with about 18%, or 36 members, being solely end users.

February 2, 2009  7:04 PM

Data center fan efficiency hubbub from ASHRAE

Mark Fontecchio

The following is a guest post from Vali Sorell, an associate partner in the critical facilities division at the Syska Hennessy Group. Sorell was a speaker at the ASHRAE Technical Committee (TC) 9.9 sessions in Chicago last week, and had some comments about our story on data center air-conditioning fans.


We all agree that reducing the fan speed saves energy, regardless of whether that fan is driven through a VFD (variable frequency drive) or if that fan is driven by an EC (electronically commutated) motor. The person from 365 Main stated that “it’s not worth the extra cost to have fans run at a lower speed for such a short time.” That misses the point – the fact is that it IS possible to save energy at all times, and that lower speed does not occur for a short time. That lower speed occurs forever! This is best explained by an example.

Assume a data center has 100 CRAC units, 80 of them are needed to meet the loads, and 20 of them are needed for redundancy. This amounts to 25% redundancy, which is very typical for most data centers. Let’s also assume that the load is constant from now till forever (meaning that the part load conditions are not considered, i.e. they are history).

Case 1: 80 units running at 100% speed consume 80/100 = 80% of the possible fan energy use.
Case 2: 100 units running at 80% speed consume 80/100 x 80/100 x 80/100 = 51% of the possible fan energy use (per the fan affinity laws, fan power varies roughly with the cube of fan speed).

Compared to normal operation, using all of the available redundant CRAC units at variable speed (regardless of whether that variable speed is achieved by EC motors or VFDs) consumes about 36% less fan energy (1 - 51/80 ≈ 0.36) than running only the load-required complement of CRAC units at full speed. That’s not a small amount of energy!
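To make the arithmetic concrete, here is a minimal sketch of the same calculation in Python (not from the guest post), using the fan affinity assumption that per-fan power scales roughly with the cube of fan speed; the unit counts and speeds are the hypothetical ones from the example above.

```python
def relative_fan_energy(units_running, total_units, speed_fraction):
    """Fan energy relative to all units running at full speed.

    Per-fan power is assumed to scale with the cube of fan speed
    (the fan affinity law).
    """
    return (units_running / total_units) * speed_fraction ** 3

# Case 1: 80 of 100 CRAC units at 100% speed
case1 = relative_fan_energy(80, 100, 1.0)    # 0.80

# Case 2: all 100 CRAC units at 80% speed
case2 = relative_fan_energy(100, 100, 0.8)   # 0.512

savings = 1 - case2 / case1                  # about 0.36, i.e. ~36% less fan energy
print(f"Case 1: {case1:.0%}, Case 2: {case2:.1%}, savings: {savings:.0%}")
```

Running it reproduces the 80% and 51% figures above and the roughly 36% savings.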


January 28, 2009  5:39 PM

More on ASHRAE’s expanded recommended data center temperature, humidity ranges

Mark Fontecchio

In a story earlier this week, I reported on the American Society of Heating, Refrigerating and Air-conditioning Engineers (ASHRAE) TC 9.9 expanding its recommended guidelines for data center temperature and humidity ranges. One person who wanted that range to expand beyond what it did was Christian Belady, the principal power and cooling architect at Microsoft. Belady gave me the following image to make his point (click the picture for a much larger version):

This is a typical psychrometric chart that looks at dry bulb temperature, wet bulb temperature, dew point, relative humidity and other factors. Because there are several variables involved, you often get these odd shapes when plotting ranges.

As the key in the upper left shows, the solid red block was ASHRAE’s previous recommended range. The red outline is its current recommended range. The blue block is ASHRAE’s allowable range, which ASHRAE defines as an environment in which IT equipment can still run, but with a greater chance of equipment breaking down than in the recommended range.

Then comes the yellow block, which is the typical vendor specification. Belady’s argument is that ASHRAE should be pushing beyond that yellow block, not cozying up within it. If the vendors will warranty equipment within that yellow block, why should ASHRAE have a range inside it?
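For readers who want to play with the ranges, here is a rough sketch of how such an envelope check might look in code. The 18-27 degrees Celsius dry-bulb limits match the 2008 recommended range discussed in these posts; the dew-point and relative-humidity limits, and the Magnus formula constants, are assumptions for illustration, not ASHRAE’s exact published envelope.

```python
import math

def saturation_vapor_pressure(t_c):
    """Magnus approximation for saturation vapor pressure (hPa) at t_c degrees C."""
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def relative_humidity(dry_bulb_c, dew_point_c):
    """Relative humidity (%) from dry-bulb and dew-point temperatures."""
    return 100.0 * saturation_vapor_pressure(dew_point_c) / saturation_vapor_pressure(dry_bulb_c)

def in_recommended_envelope(dry_bulb_c, dew_point_c):
    """Check a point against an assumed recommended envelope:
    18-27 C dry bulb, dew point between 5.5 C and 15 C, RH no higher than 60%."""
    rh = relative_humidity(dry_bulb_c, dew_point_c)
    return (18.0 <= dry_bulb_c <= 27.0
            and 5.5 <= dew_point_c <= 15.0
            and rh <= 60.0)

print(in_recommended_envelope(25.0, 10.0))  # True: comfortably inside the box
print(in_recommended_envelope(28.0, 10.0))  # False: dry bulb above the 27 C limit
```

The point of the sketch is simply that the odd shapes on the chart come from overlaying limits on several different variables at once.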


January 27, 2009  10:31 PM

Perforated tiles can be your friend — even in the hot aisle

Mark Fontecchio

With the American Society of Heating, Refrigerating and Air-conditioning Engineers expanding its recommended data center temperature range, you have to wonder: Should you start wearing a bathing suit when doing data center maintenance?

ASHRAE TC 9.9 now recommends data center temperatures as high as about 80 degrees Fahrenheit. But that is to be measured at the location of the server inlet. How about on the other side, in the hot aisle? The difference between cold and hot aisles, often referred to as Delta T or just ΔT, can be as much as 50 degrees Fahrenheit. Which means hot aisle temperatures could approach 130 degrees Fahrenheit, and if the equipment is live, that means 130-degree air blowing in your face. Not exactly ideal working conditions.

So what to do, ASHRAE members wondered? Some think that server manufacturers need to start redesigning their boxes so they can be accessed and maintained from the front in addition to, or instead of, from the back. That way data center staff could work in the more-tolerable cold aisle where heat stroke is less likely.

Another option is simply to pull up a tile where you’re going to work in the hot aisle and replace it with a perforated tile. That way you get a nice chilly gust of cold air blasting up from beneath your feet to counteract the furnace blowing in your face. Sure, putting perforated tiles in the hot aisle is considered a severe no-no in well-designed hot/cold-aisle data center configurations.

But if it’s temporary, and it can prevent the need to have an IV bag of fluids on site just in case of severe dehydration and overheating of employees, well, then it might be worth it.

Oh, and if you’re not working in a raised-floor environment, you might be out of luck. Maybe you can invest in a couple of oscillating fans.


January 26, 2009  10:17 PM

Data center corrosion: Bring it on?

Mark Fontecchio

CHICAGO — Joseph Prisco, an IBM engineer, yesterday told ASHRAE TC 9.9 members that it might be a good idea to start monitoring data center pollution — particulate and gaseous contamination that could cause IT equipment corrosion.

In particular, Prisco singled out ionic chemical compounds such as sulfur and chlorine salts, as well as outside gases such as sulfur dioxide and hydrogen sulfide that can make their way into the data center. He said all data centers should start looking to prevent this kind of data center pollution, and that facilities using airside economizers especially should.

William Tschudi of the Lawrence Berkeley National Laboratory had some questions about that. LBNL did a study about two years ago that found no discernible damage to IT equipment from using outside air to run a data center. He asked Prisco yesterday whether corrosion happens quickly enough to force equipment replacement ahead of the normal server refresh cycle. Prisco said it depends on the environment.

Another angle: Who cares?

Christian Belady, the principal power and cooling architect at Microsoft, said he’s not that concerned about equipment corrosion. He said a better way to look at it is to expect the refresh rate to be short. That way you can replace the equipment, which tends to get more energy efficient with each iteration. So in this way, Belady was almost encouraging the data center pollution. He was saying that equipment corrosion could help overall data center efficiency.

It may be good to keep those comments in context. Microsoft buys tens of thousands of servers every year. Most of them are x86 servers. They’re relatively cheap and therefore disposable. In the case of more-expensive Unix and mainframe servers, it may be more prudent to keep an eye on the corrosive dusts and gases that can find their way into the equipment.


January 26, 2009  9:37 PM

Raise the data center temperature, but bring your earplugs

Mark Fontecchio

CHICAGO – The American Society of Heating, Refrigerating and Air-conditioning Engineers (ASHRAE) has expanded its recommended guidelines for ambient data center temperatures, but it warns that the new range could lead to higher data center noise levels.

There has been a lot of talk recently about raising data center temperatures to improve energy efficiency, as the air conditioners don’t have to work as hard to cool the room. ASHRAE TC 9.9 recently changed its recommended upper data center temperature from 77 degrees Fahrenheit (25 degrees Celsius) to 80.6 degrees Fahrenheit (27 degrees Celsius). Munther Salim, a mechanical engineer at HP EYP Mission Critical Facilities, said raising the set points in CRAC units is the “number one thing you can do to save money.”

Google is raising data center temperatures. So are Microsoft and Intel. But Michael Patterson, a thermal engineer at Intel, warned that raising the data center temperature could have an effect on “acoustical noise levels.”

“Servers with (variable frequency drive) fans on servers — the increase in power comes mostly from the increase in fan power after 25 degrees Celsius,” he said. “Servers in 27 degrees Celsius may have higher acoustics due to higher fan speed.”

Patterson showed the following graph:

As you can see, fan power (the orange-reddish line) rises sharply after about 25 degrees Celsius, as fan speeds increase to keep server components cool enough in a warmer environment. According to an ASHRAE document on the extended environmental envelope, “it is not unreasonable to expect to see increases in the range of 3-5 decibels” if the ambient temperature increases from 25 to 27 degrees Celsius.

“Data center managers and owners should therefore weigh the trade-offs between the potential energy efficiencies with the proposed new operating environment and the potential increases in noise levels,” the document states.
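For a rough sense of where a 3-5 decibel figure can come from, here is a sketch using two common fan-law approximations: fan power scales roughly with the cube of fan speed, and fan sound power rises by roughly 50·log10 of the speed ratio. The fan-speed increases below are hypothetical, not figures from Intel or ASHRAE.

```python
import math

def fan_power_ratio(speed_ratio):
    """Fan affinity law: power scales roughly with the cube of fan speed."""
    return speed_ratio ** 3

def noise_increase_db(speed_ratio):
    """Common fan-law approximation: sound power rises ~50*log10(speed ratio) dB."""
    return 50.0 * math.log10(speed_ratio)

# Hypothetical fan speed increases as server inlet temperature rises past 25 C
for pct in (10, 15, 20, 25):
    ratio = 1 + pct / 100
    print(f"+{pct}% fan speed -> {fan_power_ratio(ratio):.2f}x fan power, "
          f"+{noise_increase_db(ratio):.1f} dB")
```

Under those approximations, a 15-25% increase in fan speed lands in the 3-5 dB range the ASHRAE document describes, alongside a substantial jump in fan power.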

Are there any other solutions? Some suggest reversing server manufacturers’ trend toward miniaturizing components. Just a few years ago, a 1U server might have had one single-core processor. Now it might have multiple quad-core chips, which means dissipating more heat in a smaller space. If servers were made bigger, the fans wouldn’t have to work as hard to move air through such a tight area.

But that might not be feasible for some users. According to a SearchDataCenter.com survey earlier this year, 32% of users say that a lack of space is the factor most limiting their data center growth. Making servers bigger won’t help that.


January 26, 2009  4:13 PM

Sun Microsystems completes green data center in Colorado

Mark Fontecchio

Sun Microsystems has completed a new data center in Broomfield, Colo., built with efficiencies that the company says will save $1 million a year in electricity costs.

The data center features overhead cooling using Liebert XDs, airside economizing and flywheel uninterruptible power supplies (UPSes) from Active Power.

The project came about when Sun acquired StorageTek back in 2005, so it has been in the works for a few years now. Both companies had data centers in Broomfield that sat on opposite sides of Route 36, a major road, and Sun decided to consolidate the two into one. It was able to condense 496,000 square feet of data center space at the old StorageTek campus into 126,000 square feet in the new location, a move that is saving 1 million kWh per month.

The move is also cutting the amount of raised floor space from 165,000 square feet to just 700 square feet — enough to support a mainframe and an old Sun E25K box for testing. The elimination of that much raised floor, including the construction needed to brace it to support such heavy IT equipment, is saving Sun $4 million, according to Mark Monroe, Sun’s director of sustainable computing for the Broomfield campus.

The overhead Liebert XD data center cooling units feature variable speed drive (VSD) fans that allow the cooling supplied to range from 8 kW up to 30 kW per rack. The Active Power flywheel UPSes eliminate the need for a whole room dedicated to housing UPS batteries.

“Flywheels are usually 95% to 97% efficient,” Monroe said. “Battery systems are usually in the low 90s, high 80s.”
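To put those efficiency figures in perspective, here is a small sketch comparing annual UPS losses at the quoted efficiencies. The 1 MW IT load and the electricity price are hypothetical assumptions for illustration, not Sun’s numbers.

```python
# Rough comparison of UPS losses at the efficiencies quoted above.
# The IT load and electricity price are hypothetical, for illustration only.
IT_LOAD_KW = 1000          # assumed 1 MW of IT load carried by the UPS
PRICE_PER_KWH = 0.08       # assumed $0.08/kWh
HOURS_PER_YEAR = 8760

def annual_ups_loss_kwh(efficiency):
    """Energy lost in the UPS per year for a constant IT load."""
    input_kw = IT_LOAD_KW / efficiency
    return (input_kw - IT_LOAD_KW) * HOURS_PER_YEAR

flywheel_loss = annual_ups_loss_kwh(0.96)   # midpoint of the quoted 95% to 97%
battery_loss = annual_ups_loss_kwh(0.90)    # roughly "low 90s, high 80s"

print(f"Flywheel UPS loss: {flywheel_loss:,.0f} kWh/year")
print(f"Battery UPS loss:  {battery_loss:,.0f} kWh/year")
print(f"Difference: ~${(battery_loss - flywheel_loss) * PRICE_PER_KWH:,.0f}/year")
```

Even a few points of UPS efficiency add up to hundreds of thousands of kilowatt-hours per year at that scale.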

Finally, Sun is using a chemical-free, electromagnetic method to treat its chilled water system. The new method allows Sun to reuse the water in onsite irrigation systems and to flush out the water less often. It will save about 675,000 gallons of water and $25,000 per year.

In total, the company will be cutting its carbon dioxide emissions by 11,000 metric tons per year, largely because Broomfield gets so much of its power from coal-fired power plants.
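As a rough sanity check, the 11,000-metric-ton figure is consistent with the 1 million kWh per month quoted earlier if you assume an emission factor typical of coal-heavy generation; the factor below is an assumption for illustration, not a number from Sun.

```python
# Rough consistency check: relate the 1 million kWh/month savings quoted above
# to annual CO2 avoided. The emission factor is an assumed, roughly typical
# value for coal-heavy generation, not a figure from Sun.
KWH_SAVED_PER_MONTH = 1_000_000
EMISSION_FACTOR_KG_PER_KWH = 0.9   # assumption for a coal-heavy grid

annual_kwh = KWH_SAVED_PER_MONTH * 12
annual_co2_tonnes = annual_kwh * EMISSION_FACTOR_KG_PER_KWH / 1000

print(f"~{annual_co2_tonnes:,.0f} metric tons of CO2 avoided per year")
# ~10,800 -- in the same ballpark as the 11,000 metric tons cited
```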


January 23, 2009  3:35 PM

ASHRAE TC 9.9 to tackle data center infrastructure at winter meeting

Mark Fontecchio

Next week is the annual winter meeting of the American Society of Heating, Refrigerating and Air-conditioning Engineers (ASHRAE), to be held in Chicago. The group’s Technical Committee 9.9 focuses on data centers and has become an industry leader on the subject of data center infrastructure and data center cooling. ASHRAE TC 9.9 has written books on topics such as data center liquid cooling, facility vibrations, and data center thermal guidelines.

I’ll be on hand to report from the data center-focused sessions. The ASHRAE meeting is typically a great place to see what data center cooling experts are talking about, and they often publish technical papers here for the first time. This year industry leaders such as Roger Schmidt from IBM, Christian Belady from Microsoft and William Tschudi from the Lawrence Berkeley National Laboratory will be speaking at the meeting. Here’s some of what ASHRAE TC 9.9 will be presenting:

  • “High Density Cooling Updates” will discuss particulate and gaseous contamination effects on computer reliability, data center air management metrics, and airside economizers.
  • “Application of ASHRAE Standards and Guidelines to Improve the Sustainability of Data Centers” will include discussion on the 2008 ASHRAE thermal guidelines, including its impact on server vendors, architects, engineers and end users. There will also be talk on data center rating systems for improved energy efficiency.
  • “Liquid Cooling Issues Update” will look at ongoing work to get liquid cooling closer to the heat source, and will include discussion of real-time data center energy efficiency at the Pacific Northwest National Laboratory.
  • “Innovations in Airflow Management within Rack Enclosures” will discuss cable management for equipment with side-to-side airflow, return air containment, hot- and cold-aisle isolation, and enhancing airflow efficiency with variable-speed drive (VSD) fans.


January 9, 2009  9:41 PM

Data Center Pulse uses online tools to build User Group 2.0

Matt Stansberry

Data Center Pulse, a DIY data center user group that started in November 2008, is poised to become one of the most influential organizations in IT. Founded by Dean Nelson, Senior Director of Global Lab & Data Center Design Services at Sun Microsystems, and Mark Thiele, Director of Business Operations R&D at VMware, the organization has grown to nearly 500 members in just a few months. The group is made up of 70% IT workers and 30% data center facilities managers, and spans from CIOs to CRAC unit technicians.

(Video: http://www.youtube.com/v/cBAmc_9Nxhw)

The group got its start on LinkedIn, a career-based social networking site, and has expanded to its own Data Center Pulse blog, a YouTube channel and, starting next month, face-to-face interaction with a meeting in the auditorium at Sun’s Santa Clara campus, February 17-18, 2009.

Data Center Pulse’s tough love for vendors

The data center user group landscape is a pretty crowded field, with AFCOM, The Uptime Institute, 7×24 Exchange, and others, but Nelson and Thiele were frustrated with traditional user groups’ inability to push vendors to change and improve their products.

“7×24 or AFCOM, both of those groups are highly influenced by the vendors,” Thiele said. “Good information is shared, but it isn’t used to influence the vendor community.”

Instead of conferences packed with marketing and business development staff from vendors, Thiele and Nelson sought to create an exclusive group of data center owners and operators only. Nelson and Thiele use LinkedIn’s career information to screen every candidate, rejecting applicants who don’t meet their end-user criteria (including me when I applied last month).

“I don’t want to bash the 7×24, AFCOM, or Uptime, but there seem to be less and less users at those conferences,” Nelson said. “We wanted to get a group together to talk about what we care about, and to influence the industry from the end user’s point of view.”

You’re probably thinking, “Non-vendor data center user group, created by two guys who work for large data center vendors?” But Nelson and Thiele, both data center managers themselves, say the group is not a platform for Sun or VMware at all. I can’t verify that at this point, since I haven’t figured out a way to breach the Data Center Pulse inner sanctum, but I’m willing to give them the benefit of the doubt.

At the upcoming summit, Data Center Pulse plans to hash out and publish its top ten demands of the data center industry, and Nelson expects it to be a controversial manifesto. “Vendor marketing is driving the end user, instead of giving the end users what they’re asking for. The end users have information that isn’t getting back to vendors.”

Web 2.0 brings the data center community closer together

In these days of slashed corporate travel budgets and eco-awareness, it’s getting harder for companies to justify putting the data center team on a flight to Orlando or Vegas every few months. Instead, Data Center Pulse is using social networking, blogging and other online tools to create an interactive online community.

“With LinkedIn, everybody can touch everybody. You can publish a discussion that reaches 452 people that have jobs similar to yours in real-time,” Nelson said. “LinkedIn is how I stay in contact with everybody.”

Nelson also plans to use Google Apps to handle email, calendars and collaboration tools for the group, as well as WebEx and Skype for video conferencing. “Utilizing that technology to allow us to collaborate is very important. We’re using all the resources we can,” Nelson said.

These social networking tools have enabled Data Center Pulse to reach so many people so quickly.

“Response from the community so far has been really positive,” Nelson said. “There are no hidden agendas in this — it’s all about trying to drive the industry.”

For more on the evolution and importance of data center user groups, check out our Data Center Advisory Panel discussion.


December 19, 2008  8:38 PM

Biggest data center stories of 2008

Matt Stansberry

What were the biggest data center stories of 2008? Here is our top 10:

1. Hot-Cold Aisle Containment: Physically separating the hot and cold aisles in the data center is on every data center manager’s to-do list this year. The top articles on this topic include:

  • Hot-aisle/cold-aisle containment and plenum strategies go big-time
  • Yahoo turns to wireless data center monitors, cold-aisle containment to lower PUE
  • Hot-aisle/cold-aisle containment stokes fire-code issues
  • ADC data center aiming for 1.1 PUE, LEED Platinum
  • Companies reuse data center waste heat to improve energy efficiency

2. IBM mainframe news: When IBM rolls out a new mainframe platform, you can bet it will be big news with the data center readership. Here are our top mainframe stories.

  • IBM pushes System z10 mainframe as consolidation savior
  • IBM welcomes z10 mainframe’s new sibling to the family
  • Mainframe virtualization improves total cost of ownership on IBM z10
  • Mainframers go for a jog at Share user group conference
  • Should you deploy a Linux-only mainframe?

3. Windows Server 2008: With the release of Windows Server 2008 on Feb. 27, we wanted to know what the system’s new features meant for users. What are the additional hardware requirements? How does Windows Server 2008 stack up against Linux? And what are the roadblocks for Microsoft’s virtualization offering Hyper-V?

  • Windows Server 2008: What’s in it for users?
  • Microsoft Windows Server 2008 features prolong server hardware life
  • Windows Server 2008: PowerShell basics and top commands

4. Purchasing Intentions Survey: The Data Center Decisions 2008 Purchasing Intentions survey outlined trends in server and software purchasing, data center infrastructure and more. This year’s survey also included salary and career information. For findings and analysis on data center spending and technology adoption trends, see the contents of this special report below:

  • Data center purchasing trends overview
  • Server purchasing decisions in 2008
  • Virtualization goes mainstream, warts and all
  • CMDBs gain favor in data center budgets
  • Data center energy a concern, but metrics lacking
  • Is Linux growing at Windows’ or Unix’s expense?

5. Widespread adoption of economizers: SearchDataCenter.com has been following the evolution of economizer cooling technology. What was once seen as impractical for data centers is now widely accepted as a best practice, even in climates like Atlanta, Georgia.

  • United Parcel Service’s Tier 4 data center goes green with economizers
  • Data center cooling: Air-side and water-side economizers tutorial
  • Data center cooling economizers save energy costs, says Equinix

6. Intel and AMD battle continues: There hasn’t been much change in the x86 space this year. HP is still the favorite x86 server vendor, a good portion of data centers still favor rack servers over blades, and the vast majority use Microsoft Windows as their primary operating system. But users are still looking for ways to decide among commodity server platforms. Here are the top x86 stories of 2008.

  • Server memory stalling performance, energy-efficiency gains
  • Virtual machines per server: A viable metric for hardware selection?
  • Virtualization savings often spent on hardware upgrades, users say
  • AMD releases 45-nm Shanghai Opteron processor on schedule

7. Data Center Manager of the Year: In January 2008, SearchDataCenter.com put out a call for entries for our first Data Center Manager of the Year competition. This award recognizes excellence in data center project management. Nominations poured in from colleagues, supporting vendors and the contestants themselves. The data center projects included consolidations, platform migrations, data center moves/renovations, certification projects, and new construction. These projects took place during the 2007 calendar year.

  • The Planet’s Jeff Lowenberg named Data Center Manager of the Year
  • Logistics firm coordinates multisite data center consolidation
  • Terremark data center manager puts all skills to work
  • Data center saves $700K renovating DR site and severing SunGard contract

8. Microsoft’s Manos and Belady shake up the industry: Earlier this year, I predicted ASHRAE TC 9.9 would be the organization that would drive new thinking in greening the data center. If you had told me that Microsoft was going to be the organization shaking up the industry, I’d have laughed. But it looks like the joke’s on me.

  • Microsoft spills the beans on its data center strategy at AFCOM
  • Microsoft rolls out container data center strategy for cloud computing
  • Microsoft shows off Scry, Chicago data center video

9. CFD modeling: Computational fluid dynamics tools for the data center have advanced and gained wider adoption in 2008:

  • Can CFD modeling save your data center?
  • Do you use kw/rack and cfm/kw to determine cooling capacity? Beware
  • Economizer performance: Applying CFD modeling to the data center’s exterior
  • When best practices aren’t: CFD analysis forces data center cooling redesign

10. Economic Crisis: How will data centers weather the current economic downturn? It is the No. 1 question facing data center operators going into 2009.

  • With economic downturn, Dell, HP cut U.S. server prices
  • AFCOM keynote: Will data center budgets survive economic woes?
  • How will data centers weather the economic downturn?

Are we missing any major developments? Let us know in the comments!

