Data center facilities pro


December 16, 2008  2:04 PM

eBay to build data center in Utah, pay workers 50% over median wage

Mark Fontecchio

Rich Miller at Data Center Knowledge has this post about eBay deciding on Utah for their next data center. The Deseret News writes that about 50 new jobs will be created and are expected to pay 50% above the median wage in Salt Lake County.

Some other quick details: The Utah Governor’s Office of Economic Development (GOED) gave eBay a $27.3 million tax break; new state wages from the project over 10 years are projected at $23.7 million, and new state revenue for the same period is estimated at $109.1 million.

The part about 50% above median wage was what interested me. What was the Salt Lake County median wage? I wondered. The Deseret News story didn’t mention it, and as of this morning, I hadn’t yet read the Salt Lake Tribune story on eBay’s data center plans.

What I did find was a document on the GOED’s website. It’s a PDF document of the minutes of a GOED meeting in May 2007. It mentions another project by Air Liquide, a producer of industrial and medical gases. The project is bringing in 43 new jobs paying an average of $64,083, “which is 224% of the Salt Lake County median wage.”

From that I determined that the median wage in Salt Lake County is about $28,600, and that 50% above that — what eBay says will be the average for their jobs there — is about $42,900. That coincides with what the Salt Lake Tribune story said.

Just out of curiosity, I looked on the GOED site for anything regarding eBay and found another PDF of the minutes of a GOED meeting from June of this year. That document said the average salary for the eBay workers would be “over 175%” of the Salt Lake County median wage, which would be about $50,000. That’s considerably higher than $42,900.
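
For anyone who wants to check my math, here is the back-of-the-envelope arithmetic as a quick Python sketch. The dollar figures come straight from the GOED minutes; the rounding is mine:

    # The Air Liquide minutes give $64,083 as 224% of the Salt Lake County
    # median wage, which lets us back out the median and then the eBay figures.
    air_liquide_avg = 64_083                  # average salary, per the May 2007 GOED minutes
    median_wage = air_liquide_avg / 2.24

    ebay_at_150pct = median_wage * 1.50       # "50% above the median wage"
    ebay_at_175pct = median_wage * 1.75       # "over 175%" per the June 2008 minutes

    print(f"Estimated median wage: ${median_wage:,.0f}")    # about $28,600
    print(f"eBay jobs at 150%:     ${ebay_at_150pct:,.0f}") # about $42,900
    print(f"eBay jobs at 175%:     ${ebay_at_175pct:,.0f}") # about $50,000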

So what gives? Data center employees are now only worth a little more than $40,000 a year?

December 8, 2008  4:20 PM

CRG West preps data centers for cloud computing customers

Matt Stansberry

John Savageau was appointed CTO of the colocation firm CRG West back in September. We recently caught up with Savageau to discuss how the job is going so far and what trends he’s seeing in the data center market.

Tell me about your new role at CRG West so far.
Savageau: Since we’ve been rapidly growing our footprint, I spend half my time talking to customers and half talking to our operations staff. As we continue to build out our data centers, we have a lot more flexibility to design for the 200-watt-per-square-foot world we’re dealing with today for utility compute farms. The cloud computing community is rapidly growing. We’re building facilities to meet those requirements.

What role is cloud computing playing in your planning?
Savageau: I’ve been a grid fanatic ever since SETI@home, bringing large numbers of distributed CPUs together to solve problems. I think it’s important to encourage the growth of cloud computing into our data centers. Cloud computing is a marriage of grid computing and SaaS, and we had better be thinking about attracting cloud computing companies (or even doing it ourselves in the future).

The ability to have elastic computing as close to the zero-latency points as possible is important.

What’s the relationship between cloud computing and latency issues?
Savageau: For example, in New York or Chicago, we’d like to have the ability to create a zero-latency cross-connection point, a cloud environment where trading companies can do business at an exchange point. For traders, latency means lost transactions, and lost transactions mean lost money. You have to talk about latency in fractions of milliseconds.

Or in the entertainment industry, we’re talking about video on demand. The fewer hops that occur and the less delay you put between the origination point and the end user’s eyeballs, the better the experience. Lowering latency is going to be critical in a digital world.

How is cloud computing changing your data center infrastructure strategy?
Savageau: The mechanical side is easier to deal with than the switching side. It’s mostly a matter of watts per square foot. If we look at what server deployments look like for cloud companies, we’re talking about putting in 25 racks of Verari blade servers, which will require us to have 250-amp, three-phase power.

We’ve learned a lot over the past few years in regard to deploying high-density rooms. Now we understand that 100 amps of 208V three-phase power is the mechanical design that meets most customer requirements. But building high density is build-to-suit.

Believe it or not, build-to-suit gives companies a really great opportunity to start thinking green. If you’re not thinking green, you’re hemorrhaging money and having a negative impact on the environment. Thinking green on deployment and building for the best efficiency is a religion with us now.

On the data center deployment side, strategies like cold aisle containment and extracting heat into sealed plenums can be a huge factor in how much it costs you and your customers.
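
As an aside, here is the rough three-phase power arithmetic behind the figures Savageau cites above, sketched in Python. The 208 V line-to-line voltage for the 250-amp feed and the unity power factor are my assumptions; the interview does not specify either:

    import math

    def three_phase_kw(volts_line_to_line, amps, power_factor=1.0):
        # Real power of a balanced three-phase feed, in kilowatts
        return math.sqrt(3) * volts_line_to_line * amps * power_factor / 1000.0

    # The 100 A of 208 V three-phase design point quoted above
    print(round(three_phase_kw(208, 100), 1))   # about 36 kW

    # A 250 A feed for a high-density blade deployment (208 V assumed)
    print(round(three_phase_kw(208, 250), 1))   # about 90 kW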

How is cold aisle containment working out in your data centers?
Savageau: We’re doing cold aisle containment in all of our new deployments. We looked at hot aisle containment, but we prefer to spend our energy cooling the intake side of the servers. The primary consideration is providing cool intake air to the servers.

When you walk into the data centers, the cold air is completely sealed off. The server doesn’t care how much heat is at the back end. If you concentrate your efforts on the cold air, you’re going to have a much happier server. Cold aisle containment can reduce the electrical draw of cooling systems by up to 25%. That makes the customer really happy.


December 4, 2008  5:55 PM

ViaWest data centers go green, which makes dollars and cents

Leah Rosin

Photo: Data center cabinets in a ViaWest facility (courtesy of ViaWest)
We all know how big business operates: Each and every quarter, the bottom line must be met. This pressure creates short-term thinking and investing, less risk taking, and enormous scrutiny of capital expenditures. All of this makes many companies hesitant to make big changes, even when “going green” is all the rage.

Recently I was heartened to find a company that isn’t as beholden to the short-term motivations of publicly traded companies, and for which going green is a meaningful proposition. ViaWest operates 16 data centers in five states (Colorado, Nevada, Oregon, Texas, and Utah), and as a private company, it can afford to invest in the long-term benefits of going green in data centers.

On a recent visit to the company’s Hillsboro, Ore., data center (formerly Fortix, acquired in 2006), I learned how passionate its staff is about sustainable operations. Jim Linkous, the vice president and general manager for the company’s Oregon data center, and Casey Vanderbeek, senior sales engineer, were happy to tell me about some of the company’s achievements in this area, as well as some future plans. To start with, the Hillsboro facility has earned Portland General Electric’s Gold status for using wind power, but ViaWest has a comprehensive green initiatives plan that goes beyond buying sustainable power.

  • Utilizing clean, renewable wind power energy programs (Colo. – Windsource; Ore. – CleanWind; Utah – Blue Sky)
  • Deployment of high-efficiency cooling units, which have yielded in excess of 50% in total energy savings
  • Preference for clean chilled water cooling systems over direct expansion systems
  • Maximizing the use of free cooling/ambient air opportunities through geographic planning
  • Regular thermal analysis and reviews to ensure effective cooling throughout the data center facilities
  • Specialized hot/cold aisle management and ducted air return to optimize airflow and reduce the need for additional cooling units
  • Strict recycling programs at all ViaWest facilities – approximately one ton of cardboard is recycled every 90 days
  • Utilization of high-efficiency lighting throughout all ViaWest facilities
  • New construction and expansion projects that promote the use of recycled building materials
  • Server virtualization strategy to enable long-term efficiency and decrease the average deployment size by 20%

For lack of a better term, they have a holistic approach to sustainability that others should take note of. The company’s green strategy isn’t a gimmick; it is part of their overall business strategy, and it’s paying off.

Founded in 1999, the company has state-of-the-art managed hosting and colocation facilities and continues to grow. In 2006 the company received a $31 million infusion of debt financing, which allowed it to acquire Fortix, among others. The investment has paid off, with increases in outsourcing to data centers over the past few years. ViaWest exceeded industry market-growth predictions in 2007, and Linkous expressed confidence that the growth will continue in the coming year despite the economic downturn. In fact, the company has already leased a nearby building at the Hillsboro campus that is being prepared with power supplies for future expansion to meet current and future customer needs, spurred by the increased demand for managed hosting and colocation throughout the industry.

Along with successful growth, the company continues to focus on implementing sustainable initiatives that may cost more at the outset but are better for the environment and the bottom line over the long term. As facilities and equipment age out, the company is upgrading its cooling systems with newer, more efficient technologies. Senior Vice President of Sales Operations Steve Prather shared an example in which cooling units in the company’s Cornell, Colo., facility were replaced with cooling towers that are 53% more efficient. The company has also taken advantage of free cooling where it is geographically appropriate.

Prather emphasized that ViaWest does not take a one-size-fits-all approach in its green initiatives throughout 16 data centers. Rather it evaluates facilities individually and optimizes efficiencies based on a location’s unique characteristics.

This stands in contrast to the company’s otherwise streamlined approach to facility management. Linkous emphasized that in large part the data center operations side of the house has been successful because of the leadership of its COO and cofounder, Nancy Phillips, who has firmly established company “best practices.” Linkous and Vanderbeek shared that these guidelines enable the facilities to run similarly smoothly, regardless of locale. Their evidence that this model is successful is the relatively low annual churn rate of ViaWest customers (less than 1%).

Whether it’s recycling cardboard (1 ton per 90 days), using wind power, taking advantage of virtualization technologies, or conducting regular thermal analysis in existing facilities, if it saves energy and resources, ViaWest is willing to spend the money to make it happen, knowing it will pay off over time.

I think that data center managers should take a look and consider if going green still really seems so hard. And if you’re at one of the “big guys” that is focused on quarterly profits and you just want to scream, make some noise, use ViaWest as an example of what you could be doing differently. At SearchDataCenter.com we like to point out these examples of green initiatives done right, because we know you might need some ammunition when you go before your board or CIO and request capital expenditures to make improvements. As always, email us with your green data center success stories.

While explaining a few of the facility’s features, Vanderbeek said, “A lot of going green is so simple: placing chillers properly, doing hot-aisle and cold-aisle containment right.” It really can be that simple. And it really can pay off.


December 3, 2008  12:03 AM

Microsoft rolls out container data center strategy for cloud computing

Matt Stansberry

Say goodbye to chillers and CRAC units; say goodbye to raised floors and traditional disaster recovery. And say hello to the new paradigm, courtesy of Microsoft’s data center team.

Microsoft’s goal in 2008 was to shake up the data center community in a big way, from Mike Manos’ announcement at AFCOM that Microsoft would be deploying containerized data centers to Christian Belady’s “data center in a tent” experiment with a PUE of 1.0. Mission accomplished.

These guys are pushing the envelope like no one else in the industry — rabble rousing at ASHRAE TC 9.9 meetings, calling out vendors, and blogging about it every chance they get. They’re scaring people who have built their reputations and businesses on traditional data center design — and I don’t just mean the people selling chillers and raised flooring. These engineers are mad scientists, thumbing their noses at decades of conventional wisdom.

You can read Microsoft’s proposal for yourself at Mike Manos’ blog, but the basic concept is this: data center trailers with a minimal building envelope, using unconditioned outside air to cool servers. The servers will run on outside air at temperatures ranging from 10 to 35 degrees Celsius and 20-80% relative humidity. “For this class of infrastructure, we eliminate generators, chillers, UPSs,” Belady wrote in the blog.

Here is a video: Microsoft Generation 4.0 Data Center Vision

The applications these servers support have built-in failover, which Microsoft calls “geo-redundancy.” If a server (or servers) dies, the application automatically shifts over to another batch of servers, and Microsoft technicians replace the dead hardware on a maintenance schedule.

For applications that demand higher redundancies, Microsoft will build more robust infrastructure. But thanks to its chargeback program, Microsoft’s business units will be less likely to pay for the more expensive, higher-redundancy configurations once the bare-bones approach proves itself.

Microsoft doesn’t want you to put your data center in a tent. If you want to run big iron, have redundant components, pay big bucks for people to babysit your servers and keep them cool, that’s your business.

But they do want you to know that they plan to run all of their data centers at 1.125 PUE by 2012.
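
For reference, PUE is simply total facility power divided by the power delivered to the IT gear, so a PUE of 1.125 means roughly 11% of the facility’s power goes to overhead (cooling, power distribution losses and so on). A quick sketch of that arithmetic:

    def overhead_fraction(pue):
        # Share of total facility power that never reaches the IT equipment
        return 1 - 1 / pue

    for pue in (1.0, 1.125):
        print(f"PUE {pue}: {overhead_fraction(pue):.1%} overhead")
    # PUE 1.0:   0.0% overhead
    # PUE 1.125: 11.1% overhead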


December 1, 2008  2:53 PM

Data center location and the cost of power

Mark Fontecchio

What costs more in your data center, server hardware or power? Well, it depends on your zip code.

James Hamilton, an architect at Microsoft, recently wrote about the cost of power compared to servers in large-scale data centers. He concluded that servers cost more than power and cooling — in Hamilton’s scenario, the server hardware costs three times more than power. Take a look:

Data center costs

Hamilton made available the spreadsheet on overall data center costs that he created to make the calculations.

Two assumptions caught my eye: the cost of power, which Hamilton put at $0.07 per kilowatt-hour (kWh), and the power usage effectiveness (PUE), which he set at 1.7.

According to the Uptime Institute, the average PUE is 2.5. This average was calculated using data from Uptime’s data center user members, many of which are large companies running large data centers. So I plugged that PUE number in instead. A worse (higher) PUE number leads to higher power costs, because there is more power being wasted.

On the cost of power, I wasn’t sure whether $0.07/kWh was cheap or not. It seemed cheap to me, but I’m not that familiar with power rates. I went to a website run by the Energy Information Administration (EIA), a division of the U.S. Department of Energy. The EIA publishes a regular newsletter called Electric Power Monthly that has these rates. The data I looked at was in a spreadsheet called Average Retail Price of Electricity to Ultimate Customers by End-Use Sector, by State.

As it turns out, location matters a lot. The cheapest location for power is West Virginia, where the industrial rate is 4.34 cents per kWh. If you build a data center there with a PUE of 2.5 and use Hamilton’s spreadsheet, server hardware costs three times more than power. If you build it in Hawaii, where power costs 29.51 cents per kWh, then power costs more than twice as much as server hardware.

The national industrial rate in August was 7.61 cents per kWh. Using that rate and a PUE of 2.5, server hardware costs about 80% more than power.
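
To make those comparisons concrete, here is a minimal sketch of the power-cost side of the math. The 1 MW IT load is a hypothetical figure of my choosing, and Hamilton’s spreadsheet also amortizes server and facility capital costs, which this sketch ignores:

    HOURS_PER_YEAR = 8760

    def annual_power_cost(it_load_kw, pue, rate_per_kwh):
        # IT load grossed up by PUE, times hours in a year, times the utility rate
        return it_load_kw * pue * HOURS_PER_YEAR * rate_per_kwh

    it_load_kw = 1000  # hypothetical 1 MW of IT load
    for state, rate in [("West Virginia", 0.0434), ("U.S. average (August)", 0.0761), ("Hawaii", 0.2951)]:
        print(f"{state}: ${annual_power_cost(it_load_kw, 2.5, rate):,.0f} per year")
    # West Virginia: $950,460 per year
    # U.S. average (August): $1,666,590 per year
    # Hawaii: $6,462,690 per year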


November 26, 2008  9:05 PM

Eco-efficient IT Research Director discusses conservation tactics

Suzanne Wheeler

At the recent 451 Group Third Annual Client Conference in Boston, I spoke with 451 Group Eco-efficient IT Research Director Andy Lawrence about the energy-efficiency politics, tools and financial considerations at play in the enterprise now. Data centers may soon outdo the airline industry in contributing to global warming. To learn how to approach energy conservation in the data center, check out a video of the discussion.


November 21, 2008  12:29 AM

Green data center site selection: Cost versus sustainability

Matt Stansberry

Over the past two years, SearchDataCenter.com has published articles on the cheapest places in the U.S. to operate a data center. The data center site selection reports from 2006 and 2007 consistently rank states in the Midwest and the South as low-cost options for data center locations.

A reader recently pointed out that we write a lot about green data centers, and that these lists of data center locations are easy on the finances, but hard on the planet.

So I cross-referenced our cheapest data center locations against the power-generation profiles of each state.

It’s important to note that all forms of electricity generation have some level of environmental impact. Even “eco-friendly” hydropower, which is fueling the current data center boom in the Pacific Northwest, is driving Pacific salmon to extinction in the lower 48.

That said, global climate change is a much broader crisis. And with a carbon cap-and-trade plan increasingly likely to be implemented in the U.S. under the Obama administration, greenhouse gas emissions are the most pertinent criterion for weighing the environmental and financial impacts of a data center location.

The top electricity fuel sources in the U.S. listed in order of largest to smallest carbon footprint are:

Coal: Around 50% of the electricity generated in the U.S. comes from coal, which is an ecological nightmare to mine (often by removing mountaintops and filling Appalachian streams with toxic sludge). When coal is burned, carbon dioxide, sulfur dioxide, nitrogen oxides, and mercury compounds are released.

Natural gas: Burning natural gas produces far lower carbon emissions than coal, though methane, a nasty greenhouse gas, can be emitted into the air when natural gas is not burned completely or as a result of leaks and losses during transportation. Natural gas wells and pipelines can be ecologically disruptive, but they have nowhere near the potential for environmental devastation that coal mining does.

Nuclear power: Accounting for nearly 20% of our electricity generation, nuclear power is practically greenhouse gas-free. If only we could figure out what to do with the spent uranium! A recent NY Times article pointed out that back in 1980, the U.S. Department of Energy promised utility companies that it would begin accepting nuclear waste in 1998 at Yucca Mountain in Nevada. The government began accepting payments from utilities at one-tenth of a cent per kilowatt-hour generated at their reactors. But now the DOE predicts that a waste repository will open by 2020 at the earliest, and clearing the backlog could take many decades. Because of the delay, the government will owe the utilities damages of $11 billion or more. Oh yeah, and thanks to radioactive waste’s 10,000-year shelf life and the seismic volatility of the region, Yucca Mountain may never open. So much for cheap, clean power.

Hydropower: A green, mean salmon-extinction machine, accounting for around 6% of the nation’s electricity. It’s renewable and cost-effective, but many dam operators in the West are facing hefty equipment upgrades to make their operations more fish-friendly, and some high-profile dams, like those on the Klamath River in California, are being removed instead of upgraded with modern fish-ladder technologies.

The image below outlines the fuel percentages nationally for electricity generation and is from the EPA’s Clean Energy Site.

Fuel sources for electricity generation in the U.S.

Let’s look at the states. The information below was retrieved from the Energy Information Administration.

South Dakota
Sioux Falls, S.D., cleaned house in 2006 and 2007 as the cheapest city in which to locate your data center, according to The Boyd Co.’s research. Surprisingly, South Dakotans get over half their energy from hydropower. Though the majority of the other half comes from coal, the state is certainly a bright spot on this list.

Florida
The Sunshine State, on the other hand, does not fare quite so well. Florida didn’t make a showing on the 2006 list, but it sure cleaned house in 2007, taking six of the top ten spots. Not only is the state a hurricane-prone, air-conditioning hog, it is also a huge fossil-fuel consumer. Natural gas and coal are the leading fuels for electricity production, and each typically accounts for about one-third of net generation. Nuclear and petroleum-fired power plants account for much of the remaining electricity production.

Texas
A recent Newsweek article said Texas produces more carbon emissions than most countries, but the state government and business community don’t seem too concerned. The reader who prompted me to examine this issue had a litany of complaints about coal-fired power plants in Texas, some of which you can read at StopTheCoalPlant.org. The Lone Star State gets half its electricity from coal and the other half from natural gas. It also boasts the largest number of wind turbines in the U.S., but renewables aren’t making much of a dent.

Indiana
Indiana isn’t poised to win any environmental awards. A recent article in the Indiana Economic Digest pointed out that Indiana ranks first among the states for the percentage of carbon dioxide emissions from coal and is the seventh-largest emitter of CO2 overall.

Alabama
Birmingham scored high on both the 2006 and 2007 reports as a cheap data center location. Over half of Alabama’s power is generated by coal, but it’s also one of the biggest hydroelectric producers east of the Rocky Mountains. The state also relies heavily on nuclear power.

North Carolina
Coal is King in the financial industry hub city of Charlotte. Coal-fired power plants typically account for more than three-fifths of North Carolina’s electricity generation, and nuclear power typically accounts for about one-third. Hydroelectric and natural gas-fired power plants produce most of the remainder.

So basically, every time we check a bank statement online or our Facebook page, we’re killing the planet a little bit. Or even blogging for that matter…


November 19, 2008  4:52 PM

EU launches Code of Conduct for Data Centers

Bridget Botelho

The Sustainable Development and Energy Innovation unit of the U.K.’s Department for Environment, Food and Rural Affairs has challenged the IT industry to prevent further climate change with the official launch of the EU Code of Conduct for Data Centers on November 19.

The Code of Conduct was created in response to increasing energy consumption in data centers and the need to reduce the related environmental, economic and energy supply impacts. It was developed with collaboration from the British Computer Society, AMD, APC, Dell, Fujitsu, Gartner, HP, IBM, Intel, and many others.

Those who choose to abide by the voluntary Code of Conduct will have to implement energy efficiency best practices, meet minimum procurement standards, and report energy consumption every year.

The U.K. is also the first country in the world to approve legally binding climate change laws to reduce greenhouse gas emissions; data centers in the U.K. are responsible for about 3% of electricity use, and the goal is an 80% reduction in greenhouse gases by 2050 (source: EffectsofGlobalWarming.com).

America is far behind Europe with climate change policies, but it looks like it might finally be getting its act together in terms of protecting the planet. Climate change legislation and carbon emission regulations promise to become a reality under President-elect Barack Obama, who has pledged to enact global-warming legislation.

Unfortunately, the legislation would impose a cap-and-trade system on utility companies that could raise the price of power an estimated 20% across the board, so getting as efficient as possible before the legislation takes effect would be a wise move.

To that end, vendors have come up with highly efficient servers and lower-wattage CPUs that perform just as well as their higher-power predecessors. There is also software to control power consumption and cap server power usage, and finally reliable virtualization software to increase server utilization, so there really are no excuses for running underutilized systems these days (and if there are excuses, I’d love to hear them).


November 17, 2008  10:31 PM

Cloud computing versus colocation: What’s the right fit?

Suzanne Wheeler

What can the cloud do for you? That depends on your field of vision. At the 451 Group’s 3rd Annual Client Conference this week, I spoke with Antonio Piraino, a senior analyst of managed hosting at Tier1 Research, about the opportunities and disadvantages of cloud computing, managed hosting and data center colocation. Piraino said that companies’ view of these models is partly a function of size.  

Larger companies are wary of the cloud for its vagueness, Piraino said, while companies with limited resources are more receptive to its possibilities. If your company has a generous supply of IT funding — an increasingly less likely possibility in this economic downturn — the cloud may be “good for your enterprise to play around with, but nothing more,” according to Dan Golding, also of Tier1 Research. Managed hosting appeals to companies that know which services they want and desire to receive them as directly as possible, Piraino said.

Those with less funding and more flexibility, however, have more to gain from the evolving status of cloud computing. “The cloud is a developer’s dream,” said Piraino. “They can come up with any new application that a company might need, and then get Salesforce.com, Amazon EC2 [Elastic Compute Cloud] or Google App Engine  to host it. Everyone’s hoping that theirs will be the next application to gain mass popularity in the enterprise.” 

Colocation, Piraino said, suits companies that need to expand their physical hardware volume without losing their current level of administrative security. It involves a management company running the company’s software securely from anywhere in the world.  

Piraino ultimately doused the cloud exuberance with a reality check for companies of all sizes: “If you look at computing services as a car, cloud computing is like the rental car you pick up at the airport – not a good option for long-term use,” he said. “Managed hosting is better in that regard, more like a leased car. You know what you’re getting, and how much it will cost.”


November 6, 2008  7:45 PM

27 tips for good data center design

Mark Fontecchio

Last month, TechTarget held its Data Center Decisions conference in Chicago, and the second-day keynote was given by Ken Brill, the executive director and founder of The Uptime Institute. One of the things Brill said was that there are 27 points to a good hot/cold aisle design and that most data centers implement only a handful of them.

So that got me thinking: What are those 27 points? I got in touch with Robert “Dr. Bob” Sullivan, a staff scientist at Uptime who came up with the hot/cold aisle design back in the early 1990s. Earlier this year he wrote a paper on good data center design that includes 27 points. Not every point directly involves hot/cold aisle design, but they’re all worth checking out. Aside from one general point, I’ve separated them into five groups: raised floor and overhead space, hot/cold aisle, power and cooling equipment, perforated tiles, and cabling.

Hopefully this can serve as a practical checklist for those users out there designing a new data center or retrofitting an old one.

It’s important to note that a lot of these points refer to a subfloor cooling environment, rather than overhead cooling. Here is the first general point, followed by the five groups:

1) Monitor and manage the critical parameters associated with equipment installation, by area of the computer room (no more than two building bays); a simple tracking sketch follows this list:

  • Space: number of cabinets and rack unit space available vs. utilized
  • Power: PDU output available vs. utilized
  • Breaker positions: available vs. utilized
  • Sensible/redundant cooling capacity available vs. utilized
  • Floor loading: acceptable weight vs. installed cabinet and equipment weight plus dead load of floor and cables, plus live load of people working in area. Compare the actual floor load with the subfloor structural strength.
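
As a rough illustration of what tracking those available-vs.-utilized figures might look like in practice, here is a small Python sketch. This is my own example, not something Uptime prescribes, and the numbers are made up:

    from dataclasses import dataclass

    @dataclass
    class ZoneCapacity:
        # Available vs. utilized figures for one computer-room zone (no more than two building bays)
        name: str
        rack_units_available: int
        rack_units_used: int
        pdu_kw_available: float
        pdu_kw_used: float
        breaker_positions_available: int
        breaker_positions_used: int

        def utilization(self):
            return {
                "space": self.rack_units_used / self.rack_units_available,
                "power": self.pdu_kw_used / self.pdu_kw_available,
                "breakers": self.breaker_positions_used / self.breaker_positions_available,
            }

    zone = ZoneCapacity("Bays 1-2", 1680, 1210, 400.0, 310.0, 168, 140)
    print(zone.utilization())   # roughly {'space': 0.72, 'power': 0.78, 'breakers': 0.83}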

Raised floor and overhead space

2) Create a raised floor master plan

3) Establish minimum raised-floor height

  • 24″ if the cabling is overhead, with no chilled water or condenser water pipes under the floor blocking the airflow
  • Recommend 30-36″ if there are airflow blockages

4) Establish a minimum clearance of 3′ from the top of the cabinets to the ceiling

5) Seal all penetrations in the subfloor and perimeter walls under the raised floor and above the dropped ceiling

Hot/cold aisle

6) Install computer and infrastructure equipment cabinets in the cold aisle/hot aisle arrangement

  • 14′ cold aisle to cold aisle separation with cabinets 42″ deep or less
  • 16′ cold aisle to cold aisle separation with cabinets > 42″ to 48″ deep

7) Utilize proper spacing of the cold aisle

  • 48″ wide with two full rows of tiles which can be removed
  • All perforated tiles are only located in the cold aisle

8) Utilize proper spacing of the hot aisles

  • Minimum 36″ with at least one row of tiles able to be removed
  • Do not place perforated tiles in the hot aisles

9) Ensure cabinets are installed with the front face of the frame set on a tile seam in the cold aisle

10) Require cabinet door faces to have a minimum of 50% open perforation – 65% is better

11) Prevent internal hot air recirculation by sealing the front of cabinets with blanking plates, including empty areas in the equipment-mounting surface, between the mounting rails, and the edges of the cabinets (if necessary)

Power and cooling equipment

12) Put PDUs and remote power panels in line with computer equipment cabinet rows, occupying cabinet positions

13) Place cooling units at the end of the equipment rows, aligned with hot aisles where possible

14) Face cooling units in the same direction — no “circling the wagons,” a.k.a. uniformly distributed cooling

15) Limit maximum cooling unit throw distance to 50′

16) Create appropriate cooling capacity, with redundancy, in each zone of the room (zone maximum is one to two building bays)

  • Install minimum of two cooling units even if only one is needed
  • Install one-in-six to one-in-eight redundant cooling units in larger areas

17) Use only sensible cooling at 72°F/45% RH when calculating the capacity of cooling units

18) Place chilled or condenser water pipes in suppressed utility trenches if the computer room is built on grade

19) Ensure all cooling units are functioning properly

  • Set points and sensitivities are consistent
  • Return air sensors are in calibration – calibrate the calibrator
  • Airflow volume is at a specific level
  • Unit is functioning properly at return air conditions
  • Unit produces at least 15 degree delta T at 100% capacity

20) Be sure the cooling unit’s blower motor is turned off if the throttling valve sticks (chilled water type units) or if a compressor fails (air conditioning type unit)

21) Adjust chilled water temperature to eliminate latent cooling

Perforated tiles

22) The maximum number of perforated tiles to be installed is the total cooling unit airflow divided by 750 cfm (a worked example follows below)

  • Install only the number of perforated tiles necessary to cool the load being dissipated in the cabinet/rack in the area immediately adjacent to the perf tile
  • Turn off cooling units that are not required by the heat load (except for redundant units)

23) Do not use perforated tile airflow dampers, and remove all existing dampers from the bottom of perforated tiles (dampers reduce maximum airflow by one-third, they often close unexplainably, and they can potentially produce zinc whiskers)
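
Here is the worked example promised in point 22, using hypothetical numbers of my own (six cooling units moving 12,000 cfm each; the 750 cfm per tile figure comes from the point above):

    # Point 22: maximum perforated tiles = total cooling unit airflow / 750 cfm
    cooling_units = 6               # hypothetical unit count
    airflow_per_unit_cfm = 12_000   # hypothetical airflow per unit
    total_airflow_cfm = cooling_units * airflow_per_unit_cfm
    max_perforated_tiles = total_airflow_cfm // 750
    print(max_perforated_tiles)     # 96 tiles at roughly 750 cfm apiece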

Cabling

24) Seal all cable cutouts and other openings in the raised floor with closures

25) Spread power cables out on the subfloor, preferably under the cold aisle to minimize airflow restrictions

26) If overhead cable racks are used, the racks should run parallel to the rows of racks. Crossover points between rows of racks should be located as far from the cooling units serving the area as practical

27) Place data cables in trays at the stringer level in the hot aisle

