Data center facilities pro


August 5, 2010  6:36 PM

HP’s pre-fab Butterfly data center design claims cost savings

Matt Stansberry

HP’s new prefab, housing-style data center design might help companies roll out data center real estate quickly and cost-effectively — if you buy into HP’s Flexible DC calculations.

HP compared construction and operations costs for a hypothetical Uptime Institute Tier III, 3.2-megawatt data center in Charlotte, N.C., against a comparable HP Flexible Data Center, and said the custom next-gen data center would cost $58 million to build, versus a cool $26 million for one of HP’s designs.
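
For a rough sense of scale, here’s a quick back-of-the-envelope comparison per megawatt using only the figures HP cites; the math below is mine, not part of HP’s Flexible DC calculations.

```python
# Back-of-the-envelope comparison using the figures HP cites for a
# hypothetical 3.2 MW Tier III build; the per-MW math is illustrative.
CAPACITY_MW = 3.2

custom_build_cost = 58_000_000   # traditional custom data center
flexible_dc_cost = 26_000_000    # HP Flexible Data Center ("Butterfly")

for label, cost in [("Custom build", custom_build_cost),
                    ("HP Flexible DC", flexible_dc_cost)]:
    print(f"{label}: ${cost / CAPACITY_MW / 1e6:.1f}M per MW of capacity")

savings = custom_build_cost - flexible_dc_cost
print(f"Claimed construction savings: ${savings / 1e6:.0f}M "
      f"({savings / custom_build_cost:.0%})")
```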

The HP Butterfly is based on a design that features a central core where IT admins work, with four server rooms forming the wings of the butterfly, and cooling and backup power infrastructure around the perimeter.

The concept lets data center planners pick the features they want from a limited menu of options. While one size won’t fit all needs, the offering can help companies that want a data center fast and aren’t in the business of data center design.

July 22, 2010  4:49 PM

SynapSense and Future Facilities go back to the future with latest energy-efficiency monitoring tool

Ryan Arsenault

Last week, data center solution providers SynapSense and Future Facilities Limited announced a jointly developed tool, available in July, that gives IT admins a glimpse into the past, present and future states of their data centers for better capacity planning and energy management.

The solution is an integration of SynapSense’s Adaptive Control cooling capacity tool and Future Facilities’ Virtual Facility data center modeling environment. The SynapSense tool feeds power consumption readings into the Virtual Facility tool, which handles capacity planning by calculating the environmental response to any future changes. Together, the integration is a time machine of sorts: IT admins get a complete picture of their data center’s performance and environmental data, spanning past trends, real-time readings and calculated future results.
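
Neither vendor has published an API for the integration, but the past/present/future data flow described above can be sketched roughly like this; the class and method names are hypothetical stand-ins, not SynapSense or Future Facilities interfaces.

```python
# Hypothetical sketch of the past/present/future data flow described above.
# Class and method names are illustrative stand-ins, not vendor APIs.
from dataclasses import dataclass, field
from statistics import mean


@dataclass
class SensorFeed:
    """Stands in for live power readings (the 'present') plus history (the 'past')."""
    history_kw: list = field(default_factory=list)

    def record(self, kw_reading: float) -> None:
        self.history_kw.append(kw_reading)


@dataclass
class CapacityModel:
    """Stands in for a modeling environment that projects the 'future' state."""
    rack_limit_kw: float

    def project(self, feed: SensorFeed, added_load_kw: float) -> str:
        projected = mean(feed.history_kw) + added_load_kw
        headroom = self.rack_limit_kw - projected
        return f"Projected load {projected:.1f} kW, headroom {headroom:.1f} kW"


feed = SensorFeed()
for reading in (3.8, 4.1, 4.0):          # live per-rack readings, in kW
    feed.record(reading)

model = CapacityModel(rack_limit_kw=6.0)
print(model.project(feed, added_load_kw=1.5))   # what-if: add 1.5 kW of gear
```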

Read the full release.


July 12, 2010  8:21 PM

Delta Dental data center manager on flywheel technology

Ryan Arsenault

Delta Dental of Michigan opened its state-of-the-art data center in July 2009 and earned the distinction of being one of only 19 facilities worldwide with a Tier III design rating from the Uptime Institute. The company has also built plenty of “green” technology into the data center, including outside-air economizers and overhead cabling that keeps the airflow path clear for more efficient cooling. I recently spoke with Christian Briggs, Delta Dental’s data center manager, about why the company decided to deploy flywheels as part of its backup power chain.

 

When did Delta Dental actually implement flywheels in its data center?  How long had it been in the works?

Flywheels were part of the initial opening of the data center.  They were in the design of the building [construction lasted approx. 13 months, with design at about seven months].

 

Could you describe the decision-making process in using flywheels?

This was something that we looked at from the beginning. We had had bad experiences with batteries in the past, even though we were keeping up with our maintenance.

 

In a recent case study on Delta Dental’s data center, you mentioned that batteries are front-ended with the flywheel in your data center. Did you phase out batteries entirely as a standalone UPS option, or are they still a component of your data center in their own right?

Our UPS system will supply power to the load side using power from the flywheels first, then transition over to the batteries.  This configuration is often referred to as a battery-hardening configuration.  It is designed to pick up the load for short power outage situations.  It also allows for a more gradual transfer of power to the batteries rather than a sudden hard drop.  The slower transition is less harsh on the batteries and will extend their life span.
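
A simplified way to picture that ordering (flywheel first, batteries only if the outage outlasts it) is sketched below; the ride-through durations are placeholders for illustration, not Delta Dental’s actual figures.

```python
# Simplified sketch of a battery-hardening sequence: the flywheel absorbs
# short outages, and the batteries pick up only if the outage runs longer.
# Ride-through durations below are illustrative placeholders.
FLYWHEEL_RIDE_THROUGH_S = 15        # assumed flywheel ride-through
BATTERY_RIDE_THROUGH_S = 40 * 60    # assumed battery runtime

def backup_source(outage_seconds: float) -> str:
    """Return which backup source is carrying the load at a given moment."""
    if outage_seconds <= FLYWHEEL_RIDE_THROUGH_S:
        return "flywheel (batteries never cycled)"
    if outage_seconds <= FLYWHEEL_RIDE_THROUGH_S + BATTERY_RIDE_THROUGH_S:
        return "batteries (after a gradual handoff from the flywheel)"
    return "generators, or an orderly shutdown"

for t in (5, 30, 3_000):
    print(f"{t:>5} s into an outage -> {backup_source(t)}")
```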

 

You also mentioned that you didn’t purchase the flywheels specifically for their green aspects.  What was the main reason you decided to use the technology in your data center?  Were there any disadvantages or tradeoffs in using flywheels compared to what you had in place before?

Many of the “green aspects” didn’t apply since we were also going to purchase and install batteries. The main reason was to harden the battery system. The main disadvantage with the flywheel is that systems currently on the market can only store enough kinetic energy to last for about a minute. That isn’t enough time to coordinate an orderly shutdown, but it is enough time to transfer on to generators. We have developed, and continue to develop, a very rigorous testing program for our emergency power supply chain [Delta Dental utilizes switch gear, paralleling gear and generators], but even with that, there is always the chance that the breaker that worked just fine in a test the day before trips on the day that you need it. When that happens, the data center has about 40 minutes to perform an orderly shutdown of the most critical portions of our business.
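
Putting Briggs’ numbers together: the flywheel’s roughly one minute of ride-through covers the generator transfer, and the batteries’ roughly 40 minutes are the budget for an orderly shutdown if the generators never pick up. The shutdown step durations in the sketch below are hypothetical, added only to show how that budget might be spent.

```python
# Rough timing check of the worst case Briggs describes: generators fail to
# pick up, so the batteries' ~40 minutes become the shutdown budget.
FLYWHEEL_RIDE_THROUGH_MIN = 1    # "about a minute" of kinetic energy
BATTERY_RUNTIME_MIN = 40         # "about 40 minutes" on batteries

# Hypothetical breakdown of an orderly shutdown; step times are illustrative.
shutdown_steps_min = {
    "quiesce critical applications": 15,
    "shut down databases cleanly": 10,
    "power off remaining critical servers": 10,
}
total_shutdown_min = sum(shutdown_steps_min.values())

print(f"Flywheel alone: ~{FLYWHEEL_RIDE_THROUGH_MIN} min "
      "(enough to transfer to generators, not to shut down)")
print(f"Battery budget if generators fail: ~{BATTERY_RUNTIME_MIN} min; "
      f"an orderly shutdown taking ~{total_shutdown_min} min leaves "
      f"{BATTERY_RUNTIME_MIN - total_shutdown_min} min of margin")
```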


May 6, 2010  1:27 PM

Uptime Institute makes change to data center availability tiers

Mark Fontecchio

Last year the Uptime Institute said it was working on changes to its famous data center availability tiers, thanks largely to an end-user advisory committee working on the issue. Well, it now has its first change.

Uptime’s availability rating system, which consists of four tiers with increasing uptime expectations, has been the de facto standard for data center availability in the industry. But some have criticized the tier system, saying it isn’t as flexible as it should be and needs to be updated.

The change is a new minimum requirement of 12 hours of fuel storage for the backup generator. The change applies to all tiers, 1 through 4, and went into effect May 1. Data centers that have already been Uptime certified or started the process of certification before May 1 are grandfathered in without the new requirement.
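
To get a feel for what a 12-hour minimum means in practice, here’s a rough tank-sizing calculation; the generator rating and burn rate are assumptions I picked for illustration, not numbers from Uptime’s requirement.

```python
# Rough illustration of what a 12-hour on-site fuel minimum implies.
# The generator rating and full-load burn rate are assumed for illustration;
# they are not part of the Uptime Institute requirement.
REQUIRED_RUNTIME_H = 12

generator_rating_kw = 2_000     # assumed standby generator size
burn_rate_gal_per_h = 140       # assumed full-load diesel consumption

min_onsite_fuel_gal = burn_rate_gal_per_h * REQUIRED_RUNTIME_H
print(f"A {generator_rating_kw} kW generator burning an assumed "
      f"{burn_rate_gal_per_h} gal/h needs roughly "
      f"{min_onsite_fuel_gal:,} gallons on site to run for "
      f"{REQUIRED_RUNTIME_H} hours.")
```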

The change was decided by the Uptime Institute’s Owners Advisory Committee, a group of 29 data center owners working on modifications to the data center availability tier program. The committee also held votes that didn’t result in changes to the tiers; for example, it voted against mandating a minimum ride-through time for a data center’s uninterruptible power supply (UPS).

It’s no surprise that the OAC decided to tackle the Tier 1 rating first, as that was its stated goal from the outset. A Tier 1 data center is typically considered bare bones, but the committee still wanted to distinguish it from a server closet or just a server rack in an office. Requiring a minimum amount of backup generator fuel storage works to that end.


April 12, 2010  7:52 PM

Data center leaders: ASHRAE standard flawed

Mark Fontecchio

A group of six leaders in the data center industry say that a new standard by the American Society of Heating, Refrigerating and Air-conditioning Engineers is too “prescriptive” in its attempt to promote data center cooling efficiency, in part because of its support of airside economizers for all data centers, rather than just for those where the technology fits best.

The data center users involved include:

  • Chris Crosby, Senior Vice President, Digital Realty Trust
  • Hossein Fateh, President and Chief Executive Officer, Dupont Fabros Technology
  • James Hamilton, Vice President and Distinguished Engineer, Amazon
  • Urs Hoelzle, Senior Vice President, Operations and Google Fellow, Google
  • Mike Manos, Vice President, Service Operations, Nokia
  • Kevin Timmons, General Manager, Datacenter Services, Microsoft

Recently, ASHRAE added data centers to its building efficiency standard. The standard defines energy efficiency for most types of buildings in the United States, and is “often incorporated into building codes across the country,” according to a statement on setting efficiency goals for data centers released by the group of six. Here is what they had to say specifically regarding airside economizers:

In many cases, economizers are a great way to cool a data center (in fact, many of our companies’ data centers use them extensively), but simply requiring their use doesn’t guarantee an efficient system, and they may not be the best choice. Future cooling methods may achieve the same or better results without the use of economizers altogether. An efficiency standard should not prohibit such innovation.

Hamilton, on his blog, expounds on the issue, saying that ASHRAE is tackling the right problem but with the wrong approach. He gives ASHRAE credit for going after data center efficiency, and suggests only that its efficiency standard be performance-based rather than prescriptive. A former auto mechanic, Hamilton gave an analogy for how performance-based requirements can work wonders:

Ironically, as emission standards forced more precise engine management, both fuel economy and power density has improved as well. Initially both suffered as did drivability but competition brought many innovations to market and we ended up seeing emissions compliance to increasingly strict standards at the same time that both power density and fuel economy improved.

I have asked for a response from the ASHRAE Technical Committee 9.9 chairman, Fred Stack, but haven’t heard back yet. As soon as I hear from him, I’ll post his response. Hat tip to Hamilton for bringing the statement to my attention via his blog.


March 30, 2010  2:17 PM

Data center colo/IBX Equinix opens its 50th, yes 50th, data center

Mark Fontecchio

Equinix’s LD5 data center

Data center colocation and International Business Exchange (IBX) provider Equinix has opened its 50th data center, an almost 300,000-square-foot facility in London. It was only 10 years ago that Equinix opened its first data center. Now it’s on its 50th. That is some growth.

The new data center, dubbed LD5, sits on the same campus as another Equinix data center, LD4, which is at 90% occupancy. The two are connected by more than 1,000 dark fiber links. The new facility will be built in phases, with the initial stage delivering about 75,000 square feet of data center space. It has energy-efficiency features such as fresh-air cooling for its UPS and plant equipment, along with variable-speed secondary chilled-water pumps and high-efficiency fans in the CRAC units. During cooler months, outside air can also be used to cool the data center itself.

Once built out, LD5 will also be Equinix’s largest facility in the U.K. and its second-largest in Europe, according to Russell Poole, Equinix’s U.K. general manager. The anchor tenant at LD5 is T-Systems, the outsourcing division of Deutsche Telekom.


March 24, 2010  6:54 PM

Wikipedia outage caused by overheating data center

Mark Fontecchio

A cooling problem in one of Wikipedia’s European data centers caused overheating that cascaded and eventually led to global outages of the site today, the Wikimedia Foundation reported in a statement.

“As this impacted all Wikipedia and other projects access from European users, we were forced to move all user traffic to our Florida cluster, for which we have a standard quick failover procedure in place, that changes our DNS entries,” according to the statement. It continued:

However, shortly after we did this failover switch, it turned out that this failover mechanism was now broken, causing the DNS resolution of Wikimedia sites to stop working globally. This problem was quickly resolved, but unfortunately it may take up to an hour before access is restored for everyone, due to caching effects.

We apologize for the inconvenience this has caused.


March 24, 2010  1:59 PM

One of world’s largest data centers opens in South Wales

Mark Fontecchio

A company called Next Generation Data in Wales has built one of the largest data centers in the world — 750,000 square feet. That amounts to about 17 acres of space in the facility, which used to be a semiconductor plant.

The data center is a three-story facility, with half of it being technical space capable of holding about 17,000 cabinets of IT equipment. Its size puts it on par with the world’s largest data centers. The only one I could find of equivalent size was the NAP of the Americas, a colo facility run by Terremark in Miami.
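
Incidentally, the acreage figure follows straight from the square footage (an acre is 43,560 square feet):

```python
# Quick check of the acreage figure: one acre is 43,560 square feet.
SQ_FT_PER_ACRE = 43_560
floor_area_sq_ft = 750_000

print(f"{floor_area_sq_ft:,} sq ft = "
      f"{floor_area_sq_ft / SQ_FT_PER_ACRE:.1f} acres")
```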


March 24, 2010  12:44 PM

Putting together one of Microsoft’s “ITPAC” data centers

Mark Fontecchio

In a blog post yesterday morning, Microsoft General Manager of Datacenter Services Kevin Timmons described the company’s new modular data centers. Microsoft calls them ITPACs, short for IT Pre-Assembled Components. The video is below:


The WMV file, if you don’t have Silverlight or don’t feel like downloading it, is here. The video basically shows how the ITPAC is put together like a Lego structure, and it goes like this: base, frame, flooring, server racks, transformer, control panel, partitions, fans, exhaust dampers, bus bars, servers, lights, mixing dampers, air washer drain pans, air washer supports, air washers, intake louvers, filters, skin. The unit is built to take advantage of outside air cooling as much as possible, with server intake air ranging from about 50 to 90 degrees Fahrenheit.

“Using fans to create a negative pressure for cooling, ambient air can be pulled through on one side, run through the servers and exhausted out the other, with some of the air re-circulated to even the overall temperature of the unit,” Timmons wrote.

He added that because the units can forgo mechanical cooling entirely, their power usage effectiveness (PUE) will range from 1.15 to 1.38, which is top of the line. Timmons continued:

Our development team is considering a number of different sizes of ITPACs in order to make the units easily shippable, and they could contain approximately 400 to 2,500 compute servers and draw 200 to 600 kilowatts depending on the server mix between compute and storage. Another exciting development is that in our research and development, we found that with automation a single person could build one of these units in only four days.
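
Taking the quoted figures at face value (and pairing the upper ends of both ranges, which may not correspond to the same server mix), here’s what the PUE range and power draw imply; this is my arithmetic on the published numbers, not Microsoft data.

```python
# What the quoted ITPAC figures imply, taken at face value. PUE is total
# facility power divided by IT power, so overhead = (PUE - 1) * IT load.
# Treating the quoted 600 kW as IT load and pairing it with the 2,500-server
# figure is an assumption for illustration.
pue_range = (1.15, 1.38)
it_load_kw = 600
servers = 2_500

for pue in pue_range:
    overhead_kw = (pue - 1) * it_load_kw
    print(f"PUE {pue:.2f}: ~{overhead_kw:.0f} kW of cooling/power overhead "
          f"on a {it_load_kw} kW IT load")

print(f"At {it_load_kw} kW across {servers:,} servers, that works out to "
      f"roughly {it_load_kw * 1000 / servers:.0f} W per server.")
```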


March 17, 2010  1:38 PM

Saving copper in the data center

Mark Fontecchio
Photo of a copper spool, courtesy of Jack Hess at Flickr

Sometimes saving millions of dollars in a data center design is as simple as figuring out a way to use less copper. That’s how it worked for Australian data center colocation company Polaris.

Mike Andrea, director of the strategic directions group at the company, explained at AFCOM’s Data Center World show in Nashville last week how his company saved about $30 million in copper costs when building its new 65,000-square-foot facility.

“We just went multistory rather than one story,” he said, describing how the $200 million colo facility has five floors of IT and data center facility space. “We did it because of the cost of copper.”

The facility officially opened in February, with Andrea saying it was 89% leased by the time construction was completed. He also described the redundancy and modularity Polaris built into the facility from day one.

See some of our other coverage from Data Center World:

