Data center facilities pro

ARCHIVED. Please visit our new blog at: http://itknowledgeexchange.techtarget.com/data-center/


September 25, 2008  6:04 PM

Colo wagers on high-density



Posted by: Mark Fontecchio
Data center colocation, DataCenter, High density data center

The idea for Power Loft, according to its president Jim Coakley, happened back in 2005, when he and EYP Mission Critical Facilities head Peter Gross were having some drinks. The idea was this: Focus a colocation company on high-density data center space.

That’s what Power Loft is doing today, aiming to scale its facilities up to 300 watts per square foot, which works out to about 10 kW per rack, according to Coakley.

Now the company is building a 200,000-square-foot facility in Manassas, Va. Coakley is hoping that construction, which started last year, will be done in the first quarter of 2009.

“We have not yet leased,” he said. “We’re still in the second phase of construction and activity looks good. I’m curious to see if we do get the high-density crowd like we were hoping. I’m pretty confident that we can operate more efficiently than anything I’ve ever been involved in.”

What’s the benefit for customers? Well, some want that much density to accommodate high-density equipment of their own, such as blade servers. Others simply want the ability to start at 150 watts per square foot and scale up from there without having to rent more floor space. That is something Power Loft is offering.

EYP, which Coakley has worked with since about 2002, is assisting Power Loft with the design and construction, and will continue to do so for future facilities. EYP is now a division of Hewlett-Packard, which acquired the firm last year. Power Loft is one of the dozens of customers HP is touting for its new data center services business.

For its Manassas facility, all of the power and cooling infrastructure is on the first floor, and all the data center space is on the second floor. The company is also building a “power transmission backbone” that can be expanded with either AC or DC power. DC power has become a point of interest for some data centers because of the possibility of saving energy by eliminating some of the conversions between AC and DC and back again.

Power Loft has combined eight 20-ton CRACs into a single, massive, 160-ton air handler in the hopes of operating the cooling load more efficiently. The design is LEED-certified and includes a green roof — what Coakley called a “Chia Pet roof.” It also has vines on the outside of the building and will use waterside economizers to help save on power costs in the cooler months.

Coakley said that after Manassas, Power Loft hopes to expand to San Antonio, Colorado and Atlanta, and then possibly tackle the overseas market. Power Loft is using Total Site Solutions, a Columbia, Md.-based company, to maintain the data center facilities once they’re built.

September 19, 2008  3:47 PM

Do you use kw/rack and cfm/kw to determine cooling capacity? Beware



Posted by: Mark Fontecchio
data center cooling, DataCenter

Two common metrics for determining cooling capacity in data centers are kw/rack, or kilowatts per rack, and cfm/kw, or cubic feet per minute per kilowatt. The logic goes that by figuring out how much power your server cabinet is drawing, you’ll then know how much cooling you need.

From looking around, it seems the consensus is that you need anywhere from 80 to 160 cfm per kw of power load. First off, that’s a wide range: the top end calls for twice as much airflow as the bottom end. Figuring out just the right amount — so you’re not putting equipment at risk of overheating or overprovisioning cooling resources and therefore wasting money — is a tricky task, especially if you’ve got a whole row of racks, and a bunch of these rows, and their heat loads and cooling needs all affect each other.
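For a rough sense of where that 80-to-160 range comes from, here’s a minimal sketch using the standard sensible-heat approximation for air (heat in BTU/hr is roughly 1.08 x CFM x the temperature rise in °F). The 10 kW rack load and the temperature-rise values are illustrative assumptions, not figures from any vendor:

```python
# Back-of-the-envelope airflow estimate: how much air a rack needs to move
# for a given IT load and an assumed temperature rise across the servers.
# Uses the common sensible-heat relation Q(BTU/hr) ~= 1.08 * CFM * dT(F).

def cfm_required(load_kw: float, delta_t_f: float) -> float:
    """Airflow (CFM) needed to carry away load_kw at a delta_t_f degree rise."""
    btu_per_hr = load_kw * 3412.0            # 1 kW is about 3,412 BTU/hr
    return btu_per_hr / (1.08 * delta_t_f)

if __name__ == "__main__":
    rack_kw = 10.0                           # hypothetical high-density rack
    for delta_t in (20.0, 30.0, 40.0):       # assumed server temperature rise, in F
        cfm = cfm_required(rack_kw, delta_t)
        print(f"dT={delta_t:>2.0f} F -> {cfm:6.0f} CFM total, "
              f"{cfm / rack_kw:4.0f} CFM per kW")
```

A 20-degree rise lands near the top of the range (about 160 cfm/kw) and a 40-degree rise near the bottom (about 80 cfm/kw), which is exactly why a single kw/rack number can’t tell you how much airflow a given cabinet really needs.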

But still, let’s keep it simple. Let’s say you have only one rack of servers, and you need to determine how much cooling the rack requires. The answer is that it depends on how you align your IT equipment. Take a look:

[Image: Future Facilities rendering comparing two stacking configurations of the same equipment in one cabinet]

This image comes from Future Facilities, a computational fluid dynamics (CFD) software company that analyzes airflow patterns in data centers. In the picture on the left, blade servers are positioned above three 1U servers. On the right, the blades are on the bottom. Same power load (kw), same cooling resources (cfm). As you can see, the blades run a lot hotter when they’re on top — about 81 degrees Fahrenheit compared to about 65 degrees.

“The two cabinet configurations contain the same units of equipment. The only difference is stacking order,” Sherman Ikemoto, the general manager of Future Facilities for North America, wrote in an email, adding later that in this case, kw/rack “is not enough information to know if the cooling requirement for the equipment will be met.”

So now is the time to just throw your hands in the air and shrug your shoulders, right? Either that, or overprovision cooling, because if your power bill is a little more expensive, you can deal with that heat from management. But if the equipment shuts down because there isn’t enough cooling, it’s your job on the line.

But there are ways to figure it out. You could hire a consultant to run the fancy-shmancy software that Future Facilities sells; most of you would go that route, because the software is complicated to use yourself and very expensive besides. Or you could put temperature sensors along the back of your racks to measure how hot it gets at different heights, and adjust the stacking order accordingly for new server cabinets. I’m sure there are other ways to do this, and if you have any ideas, leave a comment below.
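If you go the sensor route, even a very simple script can flag the hot spots. A minimal sketch, where the rack names, readings and the 80-degree alarm level are all made-up placeholders (not Future Facilities data):

```python
# Flag hot spots from exhaust-temperature sensors mounted at a few heights
# on the back of each rack. All names, readings and the 80 F alarm level
# are illustrative placeholders.

ALARM_F = 80.0  # example exhaust-temperature alarm threshold

# Hypothetical exhaust temperatures (F) by rack and sensor height.
readings = {
    "rack-01": {"bottom": 68.0, "middle": 74.0, "top": 83.5},
    "rack-02": {"bottom": 66.0, "middle": 71.0, "top": 76.0},
}

for rack, levels in readings.items():
    hot = {pos: temp for pos, temp in levels.items() if temp > ALARM_F}
    if hot:
        print(f"{rack}: hot spots {hot} -- consider restacking, "
              f"e.g. moving the blades lower in the cabinet")
    else:
        print(f"{rack}: all exhaust temps within range")
```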

Ikemoto said it’s all about thinking of the entire data center — and not just individual cabinets — as the ecosystem to focus on when it comes to power and cooling.

“To me, it’s an expression of Moore’s Law in a form that is relevant to the data center level (as opposed to chip level) and thus expresses a connection across the entire supply chain from the chip to the room,” he wrote.


September 17, 2008  5:12 PM

Air velocity out the back of servers



Posted by: Mark Fontecchio
Data Center airflow, data center cooling, DataCenter

Last week I spoke to Sherman Ikemoto from Future Facilities, which sells computational fluid dynamics software for analyzing airflow in data centers. I wrote about some of the issues Ikemoto found among clients doing hot/cold aisle containment. But another slide Ikemoto brought up piqued my interest: the increasing air velocity out the back of server cabinets over the years. Take a look at this graph:

[Image: Future Facilities graph of how far exhaust air travels behind server cabinets, from 2003 through a five-year projection]

As you can see, in 2003 the air coming out the back of a server cabinet traveled only a few feet. Last year it was more than 12 feet, and Future Facilities estimates it will be around 25 feet in another five years. Not only that, but hot air (shown in red) will extend 12 feet out the back in five years.

“In 2003, you didn’t have to worry about airflow in the data center,” said Ikemoto. “Now you have to watch out for the air going into the back of the cabinet across the aisle.”

The result? You might have to make your hot aisles bigger than they once were. And that cramps your data center’s space.


September 11, 2008  8:42 PM

The Google pirates, corporate tax, and ice-side economizers



Posted by: Mark Fontecchio
data center cooling, Google data center

Early last year, Google filed a patent application for a water-based data center. It would include a data center floating in the ocean, somewhere between 3 and 7 miles offshore, getting its power from Pelamis machines that produce energy from crashing waves. The documents just hit the U.S. Patent Office’s Web site this week, and Slashdot caught onto it. Other news organizations followed. What hasn’t gotten much discussion are issues such as uptime, power capacity, corporate taxes, data center pirates, and the impact on the environment.

Uptime and power capacity

Miles Kelly from 365 Main, a data center colocation company, raised an interesting point about uptime and power capacity for a water-based data center.

“I think it’s an innovative idea, though generating 10 to 20 megawatts of power needed for a large modern data center is not a simple undertaking,” he wrote in an email to me. “I’m not a wave-power expert, but I’m certain one will need more than wave power to run servers on the ship (assuming there are more than a handful of servers).”

Uptime is another issue:

“Google may not be planning for essential computing load on the ships, meaning it’s okay if the servers on the ship go offline for one reason or another.  It’s possible they’d view the floating computing capacity as temporary or best effort, but would have other data centers available to pick up the slack if needed.”

Corporate taxes

Many of the stories around the Google patent speculated that building data centers offshore would allow the company to place itself outside U.S. jurisdiction, thus ensuring that everyone’s privacy is secured. But there is another side effect. If it is outside U.S. jurisdiction — and indeed, every country’s jurisdiction — Google may be able to avoid the real estate and other corporate taxes it would otherwise have to pay whether it builds alongside the Columbia River in Oregon or in the middle of Iowa.

If Google were to avoid corporate taxes in the U.S., that would be pretty amusing. Imagine the company using the U.S. government’s patent office to protect intellectual property that could then be used to avoid U.S. taxes.

Unsurprisingly, Google doesn’t look at it this way. Its justification for a water-based data center is as follows:

…it can be beneficial to distribute computing power closer to users. As such, data centers may be moved closer to users, with relevant content sent from a central facility out to regional data centers only once, and further transmissions occurring over shorter regional links. As a result, every request from a user need not result in a transmission cross-country and through the internet backbone–network activity may be more evenly balanced and confined to local areas. Also, transient needs for computing power may arise in a particular area. For example, a military presence may be needed in an area, a natural disaster may bring a need for computing or telecommunication presence in an area until the natural infrastructure can be repaired or rebuilt, and certain events may draw thousands of people who may put a load on the local computing infrastructure. Often, such transient events occur near water, such as a river or an ocean. However, it can be expensive to build and locate data centers, and it is not always easy to find access to necessary (and inexpensive) electrical power, high-bandwidth data connections, and cooling water for such data centers.

Kelly added that “any breaks from operating in international waters would be offset by the costs of operating so remotely. Plus, the incentives offered by aggressive regions in the US are already quite handsome (north/central Washington for example, another place where Google operates).”

This is a good point, but I wouldn’t overlook the tax benefits of this plan for Google. The company has already shown that it will figure out a way – including by bullying local and state governments – to pay as little tax as possible, all the while employing far fewer people than an aluminum smelter would.

Data center pirates

So Google will become the new pirates of the ocean, their data centers spread across the seas, Sergey Brin and Larry Page standing at the helm, complete with their eyepatches, swords, single earrings, and maybe hooks for hands and parrots on their shoulders to top things off.

What’s next, ice-side economizers?

So Google has already tapped the Columbia River for energy. Now it wants to colonize the ocean with data centers. Let’s take a look at this plan.

To the right is a picture of a Pelamis Wave Energy Converter, which is what Google proposes to use to power its water-based data center. Each Pelamis machine is almost 400 feet long — longer than a football field — and as thick as some redwood trees, and each produces 750 kW of power. In Google’s patent application, the company presented a scenario of about 40 of these machines spread over a square kilometer and producing 30 megawatts of power. Not exactly unobtrusive.
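The arithmetic behind those numbers is straightforward. A quick sketch, where the nameplate rating and the ~40-unit, 30 MW array come from the patent coverage above, and the capacity-factor figure is purely my own assumption about how much of that rating real waves might deliver:

```python
# How many Pelamis units a floating data center would need. The 750 kW
# nameplate rating and the 30 MW / ~40-unit array come from the patent
# coverage above; the 30% capacity factor is an illustrative guess.
import math

PELAMIS_KW = 750.0

def units_needed(load_mw: float, capacity_factor: float = 1.0) -> int:
    """Pelamis machines needed to cover load_mw at a given capacity factor."""
    return math.ceil(load_mw * 1000.0 / (PELAMIS_KW * capacity_factor))

print(units_needed(30.0))         # about 40 units at nameplate, as in the patent
print(units_needed(15.0, 0.3))    # far more if waves deliver only ~30% on average
```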

Perhaps Google will next take its engineering prowess up to Antarctica, where it will figure out a way to harness ice shelves to cool servers. In the next couple years, I wouldn’t be surprised if a new Google patent application popped up for ice-side economizers. But hey, as long as I can continue making super-fast Internet searches, who cares, right?


September 11, 2008  5:41 PM

Green data center roadmap doesn’t have to be confusing



Posted by: Matt Stansberry
Green data center

I just spotted a press release on a series of new analyst reports from Gartner on Green IT. Gartner says users should focus investments on areas that will provide the most bang for the buck. Unfortunately, their recommendations are vague. Gartner suggests pursuing tactics like “Changing people’s behaviors” and “Modern data center facilities design concepts”.

Does your CIO actually have undefined directives like this up on a whiteboard? Are your execs researching carbon offsets and investing in videoconferencing infrastructure for remote workers instead of dealing directly with the data center? The answer is probably yes.

There is still a huge disconnect between facilities and IT. In a recent SearchDataCenter.com survey, one-third of respondents said they didn’t even know how their 2008 power bills compared with their 2007 power bills.

Here is a green data center roadmap based on expert interviews and advice from SearchDataCenter.com.

Step 1, Get a mandate from the C-level execs: Chances are, your execs are talking about going green but have no idea how huge the data center’s energy footprint is. Put together a 60-second presentation on who pays the data center electric bill. What does it cost to keep the servers running per month? What are the cost trends for data center power over the past 24 months? How much additional capacity do you expect to need in the coming period? And how much would data center expansion cost? Don’t talk about kWh. Instead, explain that you are spending X dollars to provide this service today, and that you think you can reduce that spend while providing the same level of service.
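As a starting point for that 60-second pitch, here’s a minimal sketch of the dollars-not-kilowatt-hours math. The facility load and electric rate are placeholder assumptions; plug in your own meter readings and utility rate:

```python
# Turn a facility power draw and an electric rate into a monthly dollar
# figure for the execs. The 1,000 kW load and $0.08/kWh rate are
# placeholder assumptions, not numbers from this post.

def monthly_power_cost(facility_kw: float, rate_per_kwh: float) -> float:
    """Rough monthly utility spend for a roughly constant facility load."""
    hours_per_month = 24 * 30
    return facility_kw * hours_per_month * rate_per_kwh

cost = monthly_power_cost(facility_kw=1000.0, rate_per_kwh=0.08)
print(f"Roughly ${cost:,.0f} a month to keep the servers (and their cooling) running")
```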

Step 2, Get started with PUE or whatever metric you like: Set up a simple metric like Power Usage Effectiveness to get a baseline of how much power is going to the servers and how much is being lost on cooling and infrastructure. Set goals to improve the ratio. Measure it in the same way consistently, over time. Keep an eye on ASHRAE for more specifics on green metrics best practices.
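PUE itself is just a ratio, so tracking it takes very little tooling. A minimal sketch, with invented meter readings standing in for real measurements:

```python
# Power Usage Effectiveness: total facility power divided by the power
# that actually reaches the IT equipment. 1.0 is the (unreachable) ideal.
# The meter readings below are invented for illustration.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    return total_facility_kw / it_equipment_kw

baseline = pue(total_facility_kw=1200.0, it_equipment_kw=600.0)       # 2.00
after_fixes = pue(total_facility_kw=1020.0, it_equipment_kw=600.0)    # 1.70
print(f"baseline PUE {baseline:.2f} -> after airflow fixes {after_fixes:.2f}")
```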

Step 3, Get rid of your dead and dying servers: The Uptime Institute estimated about a third of the servers in your data center are dead weight. Audit your hardware, decommission the non-functional machines and get them off the grid. Too often, companies pull servers out of production only to put them somewhere else (test and development sandboxes) or, worse, leave them up and running where they are. Decommissioned servers get lost in the shuffle. So how do you get rid of them? It takes brute force (a tracking sketch follows the list below):

  • Round up the legacy servers in your data center and figure out what they are supposed to be doing.

  • For the unknowns, take them through all the lines of business and ask “Whose servers are these?”
  • If you end up with 30 orphans, give it 90 days and then pull the plug. If nobody screams, they’re not being used.
  • If someone does scream, they need the application, not the server. You’ve already shut that five-year-old boat anchor off — port the application over to a more energy-efficient Sun T2000 or a new Dell server and speed the whole thing up.
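The tracking itself can be as simple as a spreadsheet, or a few lines of script. A minimal sketch of the 90-day sweep, with made-up hostnames, owners and dates:

```python
# Track unclaimed servers and list the ones whose 90-day grace period has
# expired. Hostnames, owners and dates are illustrative placeholders.
from datetime import date, timedelta

GRACE = timedelta(days=90)
today = date(2008, 9, 11)   # pinned so the example is repeatable

# owner=None means no line of business has claimed the machine yet.
inventory = [
    {"host": "legacy-db-03", "owner": None,      "flagged": date(2008, 5, 1)},
    {"host": "web-old-17",   "owner": None,      "flagged": date(2008, 8, 20)},
    {"host": "erp-prod-01",  "owner": "Finance", "flagged": None},
]

for box in inventory:
    unclaimed = box["owner"] is None and box["flagged"] is not None
    if unclaimed and today - box["flagged"] >= GRACE:
        print(f"{box['host']}: unclaimed for 90+ days -- schedule power-off")
```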

As a next step, look to virtualize and consolidate workloads, turn on power-saving features and buy energy efficient hardware whenever possible.

Step 4, Tune up that hot-cold aisle configuration: Over the past several years, hot-cold aisle has become the de facto data center standard. But as server density has increased, efficiency gains have eroded. Hot aisle/cold aisle looks neat in drawings, but in practice, up to 40% of the air doesn’t do any work: it goes over the tops of racks and around the rows, or comes up through holes in the floor where it isn’t needed (a rough sense of what that wastes is sketched after the list below). Data center pros can make huge gains by sealing up the floor and right-sizing the CRAC units to the new configuration.

  • Eighteen inches is the minimum recommended raised floor height — 24-30 inches is better, but not realistic for buildings without high ceilings.

  • Keep it clean! Get rid of the clutter under your raised floor, like unused cables or pipes. Hire a service to clean it periodically. Dust and debris can impede airflow.
  • Seal off cable cutouts under cabinets, spaces between floor tiles and the walls, or between poorly aligned floor tiles. Replace missing tiles or superfluous perforated tiles.
  • Use a raised floor system with rubber gaskets under each tile, which allows tiles to fit more snugly onto the frame, minimizing air leakage.
  • There are several products on the market to help data center pros seal up raised floors, including brush grommets, specialized caulking and other widgets.
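To see why that sealing work matters, here is the rough math referenced above. The supply volume and bypass fractions are illustrative assumptions, not measurements from any facility:

```python
# If a share of CRAC airflow bypasses the IT intakes (over the racks,
# around the rows, up through unneeded floor openings), that fraction of
# fan capacity is wasted. Numbers below are illustrative.

supply_cfm = 40_000.0   # hypothetical total CRAC airflow

for bypass in (0.10, 0.25, 0.40):
    useful = supply_cfm * (1.0 - bypass)
    print(f"bypass {bypass:.0%}: {useful:,.0f} CFM reaches the IT intakes, "
          f"{supply_cfm - useful:,.0f} CFM is wasted")
```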

For more info on blocking holes in your raised floor, check out Bob McFarlane’s tip on SearchDataCenter.com.

Taking this a step further, some data centers are containing the hot and cold aisles with plastic sheeting and plenum systems and achieving huge energy savings. When making these adjustments, data center managers need to carefully check how changes to their systems are affecting temperatures and energy usage across the data center. Check out this review of CFD analysis for more info on the tools for this job.

None of these suggestions are revolutionary, high-tech or even particularly confusing. The best advice for IT execs struggling with “Green IT” is to take a true accounting of the energy costs in the data center, and then sit down with the facility engineers and IT teams in the same room and hash out a roadmap.


September 10, 2008  5:09 PM

Another drawback to hot/cold aisle containment



Posted by: Mark Fontecchio
data center cooling, Data center fire prevention, DataCenter

Last week I wrote about the fire code issues around containing your hot and/or cold aisles for energy efficiency benefits. And while I wrote pretty extensively on the issues that can arise, I didn’t touch on the financial implications of fixing those issues.

Take the issue around fire sprinklers, for example. I wrote about how one company, Advanced Data Centers (ADC), plans to use ductwork at the back of the IT equipment cabinets to isolate the hot aisle. Because of this, ADC is also planning to put a separate sprinkler head inside each contained hot aisle.

Other data centers have installed vinyl curtains between the hot and cold aisles that have fasteners that will melt at 130 degrees, thus causing the curtains to drop and ensuring the existing sprinkler heads can reach everywhere. But an engineer at The Uptime Institute recommended adding sprinkler heads anyway, just in case the curtains and fasteners malfunction.

Now, if you’re building a new data center and design all these sprinklers into the system, it’s not too bad. But what if you’re looking to retrofit?

That’s what one client of Future Facilities is dealing with. I just talked to Sherman Ikemoto from Future Facilities, which sells computational fluid dynamics (CFD) software that does complex mathematical modeling of airflow in data centers. Ikemoto said the client estimated that fixing the sprinkler system in its 7,000-square-foot data center would set it back $150,000.

“He wanted to do plastic curtains for containment,” Ikemoto said. “But to maintain compliance with fire code, he would have to change the sprinkler system in the room.”

I’m hoping to do a full case study on this particular client of Future Facilities. In the meantime, this is another factor to consider if you’re thinking about hot/cold aisle containment. There will also be some interesting tidbits from my call with Ikemoto that I’ll publish here soon.


September 3, 2008  4:32 PM

ASHRAE and Green Grid team up to standardize data center energy monitoring protocols



Posted by: Matt Stansberry
data center cooling, Data Center Metrics, Green data center

I recently spoke with Roger Schmidt, distinguished IBM technologist and past chairman of the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) Technical Committee 9.9, about ASHRAE’s newly official partnership with The Green Grid. The new agreement will open up collaboration between the two groups, allowing them to write white papers and books together.

In fact, they’re collaborating on a book right now: Real-time power consumption measurements for data centers. Tahir Cader, a technical director for ISR Inc., is leading that effort. The plan is to publish it through ASHRAE as a joint ASHRAE and Green Grid publication. You can expect the book to be available in the first quarter of 2009.

“This is a really important book for data center operators,” Schmidt said. “It will help them understand where energy is being used in their data centers. What are the tools, where are the measurement points?” This book will explain how to calculate PUE and how to measure the various elements in the data center: pumps, chillers, cooling towers, and UPS systems. It will also explain how to measure humidity, lighting, and IT equipment load.

Other ASHRAE publications on the horizon
ASHRAE T.C. 9.9’s eighth book is in the final stages of the review process. It is called Particulate and gaseous contamination in datacom environments. Joe Prisco of IBM is leading that publication. TC 9.9 hopes to have it available for the book store at the ASHRAE winter conference Jan 24-28, 2009.

This book covers the nitty-gritty of gases and particulates that can contaminate data center equipment and cause failures. “What are the filters I can deploy? How do we get a handle on what is the acceptable limit of various gaseous contamination? Bromine, Chlorine, the sulfur contaminants — we don’t know what levels cause equipment failures,” Schmidt said. “We’ve had failure in the field because we’ve had gases in clients’ data centers. It does ruin equipment. You’ve got to be really careful.”

Two other publications are in the works: The extreme green data center and Green tips for datacom equipment centers. These won’t be ready until the summer 2009 conference.

ASHRAE T.C. 9.9 and a new DOE data center workshop
ASHRAE T.C. 9.9 is also partnering with the Department of Energy to combine the DOE’s data center energy efficiency workshop with ASHRAE’s workshop in New York State, to create a national workshop by the end of the year, according to Schmidt. “It will be a one or two day event, sponsored by the DOE and ASHRAE. We would provide one of our datacom books for the workshop. This would be a great advertisement for TC 9.9.”

ASHRAE to integrate the new thermal envelope
The newly expanded environmental envelope, approved in the last month for data center equipment, will be part of an update to ASHRAE TC 9.9’s first book: Thermal Guidelines for Data Processing Environments. “That new envelope will allow people to use a wider temperature and moisture range for datacom equipment — it’s really an important document,” Schmidt said.


September 3, 2008  4:25 PM

New certification from ASHRAE, but does it matter?



Posted by: Mark Fontecchio
Data center certification, Data Center Jobs, data center staffing, DataCenter

The American Society of Heating, Refrigerating and Air-conditioning Engineers, better known in the industry as ASHRAE, has a new certification program focusing on facility operations and performance, something of which data center operators can take advantage.

But should they take advantage of it, and does it matter?

In a survey this year of SearchDataCenter.com readers, we found that almost half (47%) of the 579 data center IT and facility employees we questioned had “no certifications to date.” Furthermore, more than two-thirds said that certification has not been a factor in hiring, promotion, or a salary increase or bonus.

So then the question becomes: Why bother with certification?

Those that offer it – such as ASHRAE, Marist College’s Institute for Data Center Professionals, and APC’s Data Center University – claim that the certifications help data center pros keep up-to-date on what’s going on in the industry. Here’s a blurb from the IDCP site:

…the mission of the IDCP is to support the professionals responsible for and working in data centers by providing a variety of credit-bearing and non-credit classes appropriate for employee development and training.

And from Data Center University:

The changing nature of data centers, and the technology that impacts them, makes it even more critical that employees remain up to date on the current theories and best practices for issues around topics of power, cooling, management, security, and planning.

There is no question that there is a knowledge gap when it comes to finding well-rounded data center pros. Pete Sacco, president of PTS Data Center Solutions, told me that he has a difficult time finding people to hire with breadth of knowledge in both IT and facilities management. It usually requires education – either in academia or on the job – in computer technology and engineering, and he said that not too many people out there have it. And even though going to data center conferences and events put on by groups like AFCOM, The Uptime Institute, The Green Grid and Gartner can help, they may not provide the level of detailed education you need to solve problems in the field.

Many of the data center managers out there started as general facilities managers and are now taking on the task of handling energy-hungry data centers, which are a completely different animal from the HVAC in your typical office environment.

I think that although certification isn’t showing up as important now, it may in the future. And even if it doesn’t, the knowledge required to run these data centers is and will continue to be important, especially if that knowledge becomes even rarer than it already is.


August 28, 2008  1:26 PM

Powering down your servers



Posted by: Mark Fontecchio
Data center power management, DataCenter

I didn’t know this, but I guess yesterday was “Power IT Down Day.” Hewlett-Packard, Citrix and Intel joined forces to ask PC users to install power management software that can decrease their machines’ power use when they’re not in use.

I found out about “Power IT Down Day” this morning from Ken Oestreich, the director of product marketing at Cassatt Corp., who wrote about it in his Fountainhead blog yesterday. Oestreich wrote that, although it’s a good idea, Power IT Down Day is also “missing the boat” because its sponsors stress PC power management but not server power management. He goes on:

At the time of this writing, the official website at HP showed over 2,700 participants, saving an estimated 35,000 KWh. But here’s a sobering statistic: At a recent Silicon Valley Leadership Group Energy Summit, Cassatt piloted Server power management software. The organization using the software operated a number of its own data centers — and the case study findings showed that if this software were used enterprise-wide, the annual savings could be 9,100KWh for this enterprise alone.

You’ll never guess what Cassatt does. That’s right! It makes server power management software.

But the fact remains that Oestreich has a point. In a report to Congress last year, the federal Environmental Protection Agency recommended server power management as one way to reduce data center energy levels. Other industry groups like The Green Grid and The Uptime Institute recommend the same.

The good news is that data centers are listening. In our own purchasing intentions survey last year, only 18% said they were using power-down features on their servers, with another 13% saying they planned to do so sometime that year. Those numbers have since jumped. According to our new survey results from this year, 31% have implemented power-down features, and another 22% say they plan to sometime this year.
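If you’re still making the case for power-down features, the back-of-the-envelope math is simple. A sketch with placeholder assumptions (fleet size, idle draw, idle hours and electric rate are all mine, not figures from the survey or from Cassatt’s case study):

```python
# Rough annual savings from powering servers down during idle hours.
# Every input below is a placeholder assumption.

servers = 200            # machines that can safely power down off-hours
idle_watts = 150         # draw avoided per server while powered down
idle_hours_per_day = 10
days_per_year = 365
rate_per_kwh = 0.10

kwh_saved = servers * idle_watts * idle_hours_per_day * days_per_year / 1000.0
print(f"~{kwh_saved:,.0f} kWh a year avoided, about ${kwh_saved * rate_per_kwh:,.0f}")
```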

So I guess data center managers have gotten the hint. Maybe “Power IT Down Day” doesn’t have to be extended after all…


August 20, 2008  7:02 PM

Microsoft expected to announce Des Moines data center location tomorrow



Posted by: Mark Fontecchio
Data Center Jobs, data center staffing

According to the Des Moines Register, Microsoft is expected to officially announce tomorrow its plan to build its newest data center in the area, specifically in West Des Moines.

Microsoft had already said it was looking in the area for a suitable data center location. The facility is expected to be comparable in size to one Microsoft is building in San Antonio, Texas, and to employ about 75 high-tech workers at about $70,000 a pop.

Iowa has become a popular place lately for data center facilities, with Google already there building in Council Bluffs, which is right next to Omaha, Neb., about 120 miles west of Des Moines (which in turn is about 300 miles west of Chicago, for those unfamiliar with the Midwest). Why is it popular?

  1. Cheap land
  2. Cheap electricity
  3. (Relatively) few natural disasters
  4. Plenty of colleges from which to draw a workforce.

It also doesn’t hurt that Iowa is thirsty for data centers to the point of offering financial incentives to two of the largest companies in the country (Microsoft and Google) to go there. Do those incentives benefit both sides? Not everyone agrees.

