Data center facilities pro

September 7, 2010  6:49 PM

Digital Realty Trust claims 1.6 PUE on Northern Calif. data centers

Ryan Arsenault

Data center provider and real estate company Digital Realty Trust (DRT) recently conducted an energy-efficiency analysis of its data center properties in the San Francisco and Silicon Valley areas, and found some surprising results.

DRT audited its turnkey facilities and found an average power usage effectiveness (PUE) of 1.6 – pretty stellar, given that industry data puts the average data center's PUE at 2.5.

DRT credited variable frequency drives on fans, pumps and chillers; outside-air economizers; hot- and cold-aisle containment; and its PowerVU monitoring software, among other technologies, for the 1.6 PUE rating.

The PowerVU software was developed by DRT to give its customers a real-time dashboard for monitoring data.

Digital Realty Trust CTO Jim Smith estimates that DRT saves about 10,000 kW – $6 million to $10 million annually – by running 11.2 MW of IT load at a PUE of 1.6 rather than 2.5. Putting the two scenarios side by side allowed the company to calculate the average energy and cost savings. Smith said such savings wouldn't be possible without IT admins keeping abreast of the data on a consistent basis.
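The arithmetic behind Smith's estimate can be sketched in a few lines. The 11.2 MW IT load and the two PUE values come from the post; the $0.10/kWh utility rate is an assumption for illustration, not a figure from DRT.

```python
IT_LOAD_KW = 11_200  # 11.2 MW of IT load, per the post

def total_facility_kw(it_load_kw: float, pue: float) -> float:
    """PUE = total facility power / IT power, so total power = IT load * PUE."""
    return it_load_kw * pue

# Power saved by running the same IT load at PUE 1.6 instead of 2.5
saved_kw = total_facility_kw(IT_LOAD_KW, 2.5) - total_facility_kw(IT_LOAD_KW, 1.6)
print(f"Power saved: {saved_kw:,.0f} kW")  # ~10,080 kW, close to the ~10,000 kW cited

# Annual energy and cost at an assumed $0.10/kWh utility rate
HOURS_PER_YEAR = 8760
saved_kwh = saved_kw * HOURS_PER_YEAR
print(f"Energy saved: {saved_kwh / 1e6:,.1f} GWh/year")
print(f"Cost saved:   ${saved_kwh * 0.10:,.0f}/year")
```

At that assumed rate the annual figure lands around $8.8 million, comfortably inside the $6 million to $10 million range Smith quoted.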

“When you have access to data that lets you see and monitor energy efficiency, you can make improvements and try things that bring your PUE down,” Smith said. “That’s true whether you operate a lot of facilities like us or whether you just have a single corporate data center. Being able to measure makes it all possible.”


September 7, 2010  4:18 PM

Eaton’s plans and ambitions after Wright Line acquisition

Matt Stansberry

In early August, Cleveland-based Eaton Corporation purchased custom data center rack and cooling enclosure vendor Wright Line.

“The short-term value to us is that we’re broadening our portfolio. We have a robust UPS business and a nice portfolio of power distribution. But we want to grow our portfolio to be a bigger solutions player,” said Ed Komoski, president of power quality at Eaton.

The Wright Line acquisition could be the next step for Eaton to compete toe-to-toe with data center giants such as Emerson (Liebert) and Schneider Electric (APC), offering a full spectrum of infrastructure.

“The customized racks and close coupled cooling will evolve over the next decade and grow,” Komoski said. “We want to build upon the thermal management that Wright Line brings us.”

Komoski said Eaton will likely keep Wright Line as the product family name, but Eaton will be the overarching brand.

Prior to the acquisition, Eaton was using Wright Line airflow management products in its own data centers.

August 25, 2010  7:13 PM

Google exec compares colocation cost to cloud computing, critics say apples to sausage

Matt Stansberry

Earlier this month, Vijay Gill, a senior network and systems architect at Google, published a blog post comparing the cost of Amazon Web Services to colocation. On Monday, Rich Miller at Data Center Knowledge directed readers’ attention to the cost analysis, and Gill’s post drew criticism in the comments for inaccuracies and assumptions.

Antonio Piraino, a research VP with Tier 1 Research, called the comparison apples to sausage. “There is a point where this is a very good exercise, but the way it was undertaken was grossly inaccurate,” Piraino said. “It’s quite risky to put something like this up publicly. I’m not sure why someone at Google would do this.”

In a report from Tier 1, Piraino wrote: “Users would have to add their internal costs to begin making the expenditure comparison – something that few enterprises are able to accurately break out from their overall IT expenditure. Beyond this fuzzy cost versus price issue, there is no evidence of operating system, hypervisor virtualization, instance monitoring, images, billing, load balancing, storage, IP addresses (each AWS instance has its own public IP address), security and management console costs thrown on top of the colocation pricing if it were to compare with an IaaS that encompasses all of these components.”

Piraino pointed out other inaccuracies. “He took Amazon Web Services at list price. If someone has a commitment, they reduce the price by 50%. Suddenly Amazon looks a lot better,” Piraino said.
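Piraino's list-price point is easy to illustrate: the same instance looks very different once a commitment discount is applied. In the sketch below, the $0.50/hour rate and 730 hours/month are hypothetical figures chosen for illustration; only the roughly 50% committed-use discount comes from Piraino's comment.

```python
def monthly_cost(hourly_list_price: float, hours: float = 730,
                 committed_discount: float = 0.0) -> float:
    """Monthly instance cost after applying any committed-use discount."""
    return hourly_list_price * hours * (1 - committed_discount)

on_demand = monthly_cost(0.50)                          # at list price
committed = monthly_cost(0.50, committed_discount=0.5)  # ~50% off with a commitment
print(f"List price: ${on_demand:,.2f}/month")
print(f"Committed:  ${committed:,.2f}/month")  # half the list figure
```

Any colo-versus-cloud comparison built on the list-price line alone overstates the cloud side by roughly a factor of two for committed customers, which is Piraino's objection in miniature.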

A more complete exercise would certainly be worthwhile in the long term (i.e., the next three to five years) once the capex-to-opex, interoperability and regulatory issues have been addressed, and many of the missing components added, Piraino wrote.

Is an apples-to-apples cost comparison of cloud to colo possible? Weigh in in the comments or @datacenterTT on Twitter.

August 24, 2010  6:21 PM

Emerson rolls out new Liebert UPS, packing more power in small spaces

Ryan Arsenault

Emerson Network Power today rolled out a new Liebert UPS designed to squeeze more utilization out of pre-existing data center space.


Emerson’s Liebert GXT3 uninterruptible power supply (UPS) is available in 5 kVA to 10 kVA models.  The UPS can be installed in rack or tower configurations, and features an adjustable, rotating LED display panel.  Pushing the small-space theme further, the 5 kVA and 6 kVA models require only 4U of space, while the 8 kVA and 10 kVA models use 6U.  The UPS also provides Simple Network Management Protocol (SNMP) support for Web-based monitoring, along with a program that allows parameters to be adjusted and system tests to be scheduled.  In addition, Emerson notes that built-in signals alert the admin when the battery is low or when other operating issues arise.  The user-replaceable, hot-swappable internal batteries provide four minutes of runtime at full load.


The system is available now, according to the company.  Check out the GXT3 UPS specs.


August 20, 2010  8:41 PM

How cleaning up diesel fuel affects the data center

Matt Stansberry

The EPA and state regulators have cleaned up diesel engine emissions, reducing the allowable sulfur in fuel to less than 15 parts per million as of 2008. This week, The Uptime Institute published a technical paper on how biofuels are affecting data center operators.

In the new paper, the Institute also points out the problems reduced sulfur content is causing for data center backup generators. According to Uptime, as the sulfur content in diesel is reduced:

- Biological growth in the fuel accelerates (sulfur is a biocide).
- The fuel holds more water (in PPM).
- The oxidative stability of the fuel is reduced.
- The fuel loses lubricity.

The upshot? When using a biodiesel blend, the old McDonald’s fryer oil adds lubricity back into the fuel.

These are some of the tradeoffs data center operators are making for clean air. Bob Doherty wrote a tip about ultra-low sulfur diesel’s effect on backup generators in 2009.

Data center pros sound off on biodiesel

Data center manager Chuck Goolsbee wrote on Twitter: I run Datacenters, and run BioDiesel (in my cars) but I’d hesitate to run my datacenter on BioDiesel. At least right now.

In a follow-up conversation, Lamonte Fortune, data center engineer at United Healthcare, wrote: “Just to be clear, I don’t think biodiesel is an evil (“Soybean-based fuels are fouling up the best-laid backup plans of some data center pros.”), just a fuel variation that people need to understand better to provide reliable operation for their generators. Misunderstanding petro-diesel usage can also get an operator in trouble. Biodiesel, however, does not need to be a bad word in the fuel oil vernacular.”

What do you think of biodiesel — better for Willie Nelson than backup generators? Weigh in in the comments, or on Twitter @datacenterTT.

August 18, 2010  7:19 PM

Brocade opens an extremely efficient data center

Ryan Arsenault

Today, network solutions provider Brocade opened its new data center in California, cooled by a highly efficient system.

According to Custom Mechanical Systems (CMS) and Critchfield Mechanical (CMI), the providers of the cooling technology, the data center’s in-row cooling method uses a whopping 75% less energy than similar designs.  The data center is also now the largest in the world to use this cooling method.  The in-row cooling system is paired with a water-side economizer, by way of a chilled water plant, that reduces fan usage and energy usage – about half a megawatt at full build-out.  The system includes ECM motors, fan redundancy and low-profile hinged filter doors.  According to CMS and CMI, the power usage effectiveness (PUE) of data centers of this size is usually greater than 1.5; this one’s system checks in at less than 1.3.  Here’s an image courtesy of CMS and CMI.

August 16, 2010  7:19 PM

APC rolls out new refrigerant cooling system

Ryan Arsenault

Here’s another energy-efficient option for your data center – this time in the cooling department.  American Power Conversion (APC) just announced its InRow OA and Refrigerant Distribution Unit (RDU) pumped refrigerant cooling system.

APC says that the overhead cooling solution is best for medium- and large-sized data centers.  The unit can capture up to 27 kW of hot exhaust air, which it then turns into cool air to pump into the IT environment.  The OA’s thermal containment eliminates the mixing of hot and cold air, improving the cooling efficiency of the product.  Its overhead design also requires no white space, since it can be mounted or suspended from a ceiling above the hot aisle.  In addition, APC notes that the system is benign in the case of a leak – its R134a refrigerant is non-toxic and won’t damage a data center’s equipment.

Some of the benefits APC cites for the new product include 30-50% greater efficiency than standard raised-floor cooling, maximized floor space for IT equipment, elimination of hot spots and easy deployment and installation.  According to APC, the InRow OA and RDU units are available now.  More product specs can be found soon on APC’s website.

August 16, 2010  5:41 PM

Firehost builds data center instead of paying for colo inefficiency

Matt Stansberry

Chris Drake, CEO of secure managed hosting provider Firehost, is working on building out a new data center. The company entered the market two years ago, focusing on hosting sensitive data and websites for customers like ABC, Fossil and DHL.

Firehost is moving into an old Nortel Networks building in Dallas. Drake said he expects to begin construction in two months, opening in the middle of 2011. Right now Firehost can use 1,500-2,000 square feet of data center space, and plans a modular build-out to 2,600 sq. ft. in the new facility.

Firehost decided to build its own data center because Drake was sick of getting charged for colo operators’ inefficiencies.

“I’ve walked 15 data centers in the last few months, and the only people deploying energy efficient data centers are enterprises hosting their own stuff,” Drake said. “Colocation data center providers are passing costs onto their customers.”

Drake said he is paying for power inefficiencies in Firehost’s current data center, where the provider is using two 1.5 kW Liebert CRAC units to cool the servers. “It’s a waste of power to move all of that air,” Drake said. “We’re looking at in-row cooling from APC by Schneider Electric. It only moves air four inches, and we’re doing hot-air containment on the backside of the cabinets. It’s one-tenth the power requirements.”

The other reason Drake decided to build out instead of renting data center space was access. “We’re adding servers every week, as we grow, hopefully that will happen every day. Our engineers will need to access the data center,” Drake said. “We’re only three miles from our colocation data center. Even three miles has been a pain at night, or when things are busy.”

Check out this Q&A for more on the data center build-vs.-buy debate.

August 9, 2010  6:31 PM

Former Google data center exec weighs in on build-vs.-buy debate

Ryan Arsenault

In this economic climate, should IT managers build or buy data center space? What are the metrics execs should consider when weighing their options?  Simon Tusha, former Google data center exec and CTO of Overland Park, Kan.-based colocation firm Quality Technology Services (QTS), and QTS COO Brian Johnston weigh in on data center outsourcing trends in this Q&A.

First off, the most obvious question deals with the build vs. buy debate.  There are probably a lot of factors that go into each option, but what are the metrics IT managers should use when deciding whether to build or rent a data center?
Some of the metrics include:
1. What is the business’ cost of capital?
2. What are the business’ ROI targets?
3. Does the IT capital budget meet or exceed the company’s ROI targets?
4. IT managers need to put on a CFO cap and protect the company’s liquidity.
5. Is data center operation revenue-generating for the business, or simply a service?

The primary means of evaluating any outsourced IT arrangement, including data centers, is twofold: does the company have the necessary skill set and can it afford the capital outlay to undertake its own build-out? Basically, considerations of capital expense – the balance sheet – as well as core competencies are critical.

First, consider the economics. It costs at least $10-15 million per megawatt to build a state-of-the-art data center. And for some constructions, it is significantly more than that. But when you get up to 30 MW or more, the cost of each additional megawatt drops to about $4 million. So the largest data centers are extraordinarily efficient and therefore enjoy a big edge in economics over their smaller counterparts. With access to capital and liquidity being more of an issue for companies than ever before, it doesn’t make very good business sense to consume financial resources in building a small data center that will rapidly be outgrown as your business scales or is rendered obsolete as power density continues to grow. Quite simply, efficiency and economies of scale benefit larger megawatt deployments. Major players in this market see this, and that’s why they are moving to outsourced facilities if they are not consuming at least 20-30 MW of power. Additionally, building a new data center from the ground up requires huge capital outlay. From a balance sheet perspective, most companies would rather shift from a major capital outlay that affects the balance sheet to a month-to-month operating expense to outsource the data center that has far less impact on the company’s bottom line.
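The economies of scale described above can be sketched as a simple cost model. The $10-15 million per MW for smaller builds and the roughly $4 million per additional MW at 30 MW or more come from the interview; the midpoint value and the hard 30 MW cutoff are my simplifying assumptions.

```python
def build_cost_millions(total_mw: float) -> float:
    """Estimated total build cost in millions of dollars for a data center."""
    # Large builds (30+ MW): ~$4M per MW. Small builds: $10M-$15M per MW;
    # the 12.5 midpoint is an assumption for illustration.
    per_mw = 4.0 if total_mw >= 30 else 12.5
    return per_mw * total_mw

for mw in (1, 5, 30, 50):
    total = build_cost_millions(mw)
    print(f"{mw:>2} MW build: ~${total:,.0f}M total (${total / mw:,.1f}M per MW)")
```

Even with this crude two-tier model, a 30 MW facility's per-MW cost is roughly a third of a small build's, which is the edge Tusha argues large operators enjoy.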

Second, consider the issue of core competency. Customers increasingly understand the importance of outsourcing any business elements that are not core competencies — technology, and data centers in particular, is a niche area, and outsourcing it has grown rapidly. When customers consider the 24/7, 365 management of people, facilities, connectivity, power, cooling and security, it quickly becomes evident that outsourcing such a facility and amortizing such costs over a wider variety of tenants in a multi-tenant facility is more economical, as well as more secure, resilient, redundant and reliable. So outsourced data center customers get better service at a more economical rate.

What are some of the stats on building and buying? Are there mixed building/leasing situations?
According to Frost & Sullivan, 56% of the data center market is owned versus 44% leased in 2010. The gap between owned and leased space is narrowing: in 2009, the split was 60% owned versus 40% leased. By 2012, it is estimated that owned and leased space will be equal.

Is colocation a “launching pad” of sorts – do companies start from third-party space and work their way up to constructing their own data centers, or do you find that even huge companies have a need for leased space?
I do not think colocation is necessarily a launching pad of sorts — even large multi-national corporations may evaluate their economic needs and core competencies, and decide that outsourcing is the most efficient use of money, making it their primary means of infrastructure hosting. It really comes down to each company’s initial evaluation of what is most important to their business, what the tolerance for outsourcing non-core competencies is, and what’s most economically sound.

For smaller implementations that require less than 20-25 MW of power, regardless of the size of the company requiring the installation, colocation is typically the most economical choice. For data center installations requiring more than 25 MW, most companies will consider their own builds.

Has the economy forced a lot of people to colocate when they otherwise would have built? Do you see colocation growth in decline over the next decade with an economic rebound?
While the economic decline certainly helped data center businesses as companies were forced to do more with less and optimize their resources as a result, what’s nice about the data center market is that it is somewhat recession-resilient. With corporate technological needs outpacing the growth of data center capacity, the increasing acceptance of IT outsourcing industry-wide, and the clear business efficiencies that outsourcing the data center provides – whether for custom data centers, colocation or cloud computing – the data center market is becoming more and more relevant to customers. In hard economic times, businesses look to the data center to help cut costs by outsourcing non-core business competencies. In stronger economic times, businesses often experience growth that requires additional infrastructure, and infrastructure may also be added to support product differentiation or new corporate offerings. So regardless of economic conditions, there are strong value propositions to outsourcing within a highly secure and scalable outsourced data center.

August 6, 2010  4:04 PM

APC’s new rack PDUs boast new security, monitoring features

Ryan Arsenault

Data center energy efficiency is being touted heavily again, this time by American Power Conversion (APC), which this week rolled out its Next Generation Metered and Switched Rack Power Distribution Units (PDUs), designed to help IT managers effectively manage power capacity and allow for optimal energy efficiency in their data centers.

Cool features in both the Metered AP8800 and the Switched AP8900 series include:

- Real-time monitoring of connected loads, with an alarm system that can warn an IT admin of possible circuit overloads.
- An interactive LCD display that enhances real-time load-balance monitoring and provides optional temperature and humidity monitoring.
- Space-saving hydraulic-magnetic breakers.
- Remote on/off switching control of individual outlets for power cycling and sequencing.
- Improved security features that let IT admins turn off unused outlets to stop unauthorized access.
- A cord retention method that eliminates the need for cable management brackets.

According to APC, the metered rack PDUs are out now; you’ll have to wait until the third quarter of this year for the switched series.  You can read the full announcement on APC’s website.
