PTS Data Center Solutions, Inc., a New Jersey-based engineering firm, will offer classes on data center facilities issues starting later this year.
The company, run by Pete Sacco, a data center engineer well known in the industry, will hold three seminars this year, in New York, Chicago and Dallas. The seminars will run three days and include classes on data center cooling, physical security, fire protection, and cabling, among others. The first is scheduled to take place in New York in September.
In a new whitepaper, flywheel uninterruptible power supply (UPS) company Active Power makes the case for, you guessed it, flywheel UPS.
The paper points to two studies, one by the Lawrence Berkeley National Laboratory and the other by the Silicon Valley Leadership Group, showing that flywheel UPS systems are more efficient than traditional battery-based UPS. The company estimates that if the entire data center industry switched to flywheel UPS, it could save $180 million a year.
The major concern users have with flywheels is the ride-through time, which is about 15 seconds. With batteries, the ride-through time depends on the size of the battery string. Active Power responds that with good generator maintenance, 15 seconds is plenty of ride-through time before the generators kick in.
Jack Pouchet, the director of energy initiatives for Emerson Network Power, is calling for a new metric measuring a data center’s water use as compared to its productivity (measured in Emerson’s own Compute Units per Second).
In a column in Environmental Leader, Pouchet writes that water could be “the next oil,” meaning that water is hard to come by for many people in the world, especially clean drinking water in developing nations. Pouchet suggests that the data center industry should be more cognizant of the water it’s using to cool IT equipment, and that adding a water use metric to the current Power Usage Effectiveness (PUE) metric — which compares facility energy use to IT energy use — is a good start.
From Pouchet’s column:
It is time for the data center industry to formulate a Water Systems Productivity metric (WSP). Take useful work or even a proxy for useful work, such as the proposed Compute Units Per Second, and divide that by the amount of water used during the period. Water may be measured in units, with 1 unit equal to an acre-foot. However, gallons/liters is also acceptable.
This WSP metric would ideally be reported monthly with your other metrics. Once we start to measure and report water utilization, we will quickly realize that simply flowing more cooling water in order to “economize” may not always be the best answer. Now we will be able to have a meaningful tool to determine the ideal mix between dry-coolers, CW plants and evaporative cooling towers compared to the increased energy used with alternative solutions.
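Pouchet’s proposed metric is simple division: useful work (or a proxy for it) over water consumed in the period. A minimal sketch of how a facility might compute WSP alongside PUE follows; the function names and all sample numbers are invented for illustration, and “useful work units” stands in for the proposed Compute Units Per Second proxy.

```python
# Hypothetical sketch of Pouchet's proposed Water Systems Productivity
# (WSP) metric, reported alongside PUE. All numbers are made up.

GALLONS_PER_ACRE_FOOT = 325_851  # 1 "unit" of water = 1 acre-foot

def pue(total_facility_kwh: float, it_kwh: float) -> float:
    """PUE = total facility energy / IT energy (lower is better)."""
    return total_facility_kwh / it_kwh

def wsp(useful_work_units: float, water_gallons: float) -> float:
    """Useful work per acre-foot of water used during the period."""
    return useful_work_units / (water_gallons / GALLONS_PER_ACRE_FOOT)

# Example monthly report with illustrative numbers:
print(f"PUE: {pue(1_800_000, 1_000_000):.2f}")            # 1.80
print(f"WSP: {wsp(5.0e9, 651_702):.3g} work units/acre-foot")
```

Reported monthly, a falling WSP would flag exactly the situation Pouchet warns about: “economizing” by flowing more cooling water without a corresponding gain in useful work.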
Major data center colocation company Equinix scored an energy award from the Silicon Valley Leadership Group this week for the energy-efficient design of its data centers.
Equinix has received almost $1 million in rebates from the city of Santa Clara for its energy efficiency measures, which include using airside economizers and variable frequency drive (VFD) fans in its data center infrastructure equipment.
The company estimates that the airside economizers save $300,000 per year and the variable speed fans save $51,000 per year. See “Data center air-conditioning fans blow cost savings your way” and “Data center cooling: airside and waterside economizers” for more information on these cost-saving data center technologies.
Last week President Barack Obama signed a war spending bill that, among other things, gets the ball rolling on a massive National Security Agency data center that is estimated to cost $1.6 billion over the next four years.
The data center would be built at Camp Williams, a military base just south of Salt Lake City. The 200-acre data center site would have 65 megawatts of total electric load, which is about the same amount of power used by all the homes in Salt Lake City. About 30 megawatts of that total would be for the IT load, according to military documents, with room to expand up to 65 megawatts. According to government records:
Installed infrastructure will support 65MW technical load data center capacity for future expandability. The design is to be capable of Tier 3 reliability. Power density will be appropriate for current state-of-the-art high-performance computing devices and associated hardware architecture.
Congress last week passed and Obama signed the bill authorizing the start of the NSA data center construction, with initial spending approval for $169.5 million. The agency hopes to begin construction of the facility in September and finish by next May. Some other details from the project:
REQUIREMENT: This project is required to provide a 30MW technical load data center and infrastructure for 65MW technical load data center capacity to support mission. The project will include the following:
- Facility design goal will be to the highest LEED standard attainable within available resources and will include: sustainable site characteristics, water and energy efficiency, materials and resources criteria, and indoor environmental quality.
- Mechanical and electrical plants are to be housed in separate structures to prevent transfer of noise and vibrations to the data centers
- Data center technical load of 30 MW distributed across raised floor are the design parameters for the facility.
- The infrastructure support and administrative areas will be designed to support state-of-the-art high-performance computing devices and associated hardware architecture.
- Enhancements to the building for IT and security include construction as a sensitive compartmented information facility (SCIF), as well as, requirements related to Antiterrorism Force Protection (ATFP).
- Seismic considerations are to be made in the facility design.
- Data center areas are to have depressed concrete slab construction with a load bearing capacity of 1500 pounds per square foot (PSF).
- Facility command and control contained in a central modular office component.
- Facility will have a loading dock with vehicle bays, three (3) of which are to be equipped with dock levelers sized to handle tractor trailers.
- Technical load capacity is 30 MW with loads distributed evenly across the data center areas.
- Supervisory Control and Data Acquisition (SCADA) to either PDU level or distribution panel level if required
- Dedicated substation for each critical UPS.
- UPS and generator backup for facility systems.
- Generators will include Selective Catalytic Reduction (SCR) pollution control equipment, chemical storage tanks and feed system.
- Chilled water system to support both air and water cooled equipment.
- Each data center area is to have air cooled and water cooled equipment with Computer Room Air Handlers (CRAHs) located external to the raised floor area. The piping headers / systems are to be designed to accommodate future expansion.
- Back-up capability for mechanical equipment.
- Cooling Towers
- Air distribution redundancy for CRAHs.
- Fire Protection – Double interlocked pre-action fire protection system for all electrical and mechanical support spaces.
- Wet pipe for administrative and raised floor areas per DOD standards.
- Video surveillance
- Intrusion detection
- Access control system
The designs for a new Yahoo! data center in western New York have changed – by 10 degrees.
The engineering firm for the future 180,000-square-foot facility told local planning officials in Lockport, N.Y., that it was rotating its design by 10 degrees to improve cooling, according to Orest P. Ciolko of the Wendel Duchscherer engineering firm.
“That has to do with the prevailing winds during the months we need cooling,” Ciolko told The Buffalo News. Ciolko said that computer modeling done by Yahoo!’s designers “realigned the pods so the prevailing winds will blow directly into louvers on the sides of the buildings,” according to the story.
According to the reports, the Apple facility will be built in Maiden, N.C., a small town of about 3,300 people in the western part of the state, about an hour north of Charlotte. The $1 billion price tag is supposedly what Apple’s investment will be over the course of nine years, and the facility is expected to employ at least 50 full-time workers. The North Carolina Department of Commerce estimates that the data center will create more than 3,000 jobs total, many of them related to facility construction.
Microsoft recently hired Kevin Timmons to lead Microsoft Global Foundation Services (GFS), the company’s data center services organization. From Microsoft’s data center blog:
Kevin brings a wealth of knowledge and passion in this space, most recently serving as vice president of Operations at Yahoo!, where he led the build-out of their data centers and infrastructure. Before that he was a director of Operations at GeoCities, and prior to that he served as a senior software engineer at Marconi Dynamics.
Kevin is known as a hands-on leader with a great grasp on the issues in his field and a keen interest in increasing energy efficiency. One of the key ways he has approached that challenge was by closely measuring efficiency at each data center and using PUE (Power Usage Effectiveness) as a key metric—a strategy that helped build more efficient data centers.
Timmons was hired to replace Mike Manos, who left Microsoft earlier this year to join data center real estate company Digital Realty Trust.
Data center colocation provider CRG West changed its name to CoreSite — a Carlyle Company this week. CoreSite Senior Vice President David Dunn said the company changed the name to reflect its more unified culture and to better identify with its parent company. Dunn said the company plans to continue building efficient data centers that meet the needs of the majority of its customers, primarily Uptime Tier 3 facilities designed for concurrent maintainability.
The New York Times Magazine this weekend will have a story on the data center industry. The story, “Data Center Overload,” is already online.
Major sources in the article include Michael Manos, the former data center pro at Microsoft who is now at Digital Realty Trust, as well as Ken Brill from The Uptime Institute. The author also spoke to Chris Crosby from Digital Realty Trust; Jonathan Koomey, the Lawrence Berkeley National Laboratory scientist who wrote the study a couple of years ago on data center energy consumption; and other sources from Microsoft.
My only issue with the story is that the author seems to equate the cloud with all data center infrastructure, which isn’t the case. Other than that, it’s a pretty good overview of the industry.