Data Center Apparatus

A SearchDataCenter.com blog covering the latest data center news and trends.


April 5, 2012  7:30 AM

Futuristic data centers: Computing the origins of the universe



Posted by: Erin Watkins

Remember CERN’s data conundrum? In a nutshell, they’ve got 15 petabytes of data a year to deal with and have run out of storage. As big a problem as that is, a new worldwide scientific project called the Square Kilometre Array (SKA) is trying to one-up them in terms of data output. Well, 100-up them, to be accurate.

The new astronomical project is a radio telescope designed to see into the universe’s distant past to help answer questions about the Big Bang. To do this, it will pull in an exabyte – that’s 1 billion gigabytes – of data every day. As you can imagine, this presents a bit of a computing challenge.
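For a sense of scale, here’s a quick back-of-the-envelope calculation, a minimal Python sketch assuming decimal units (1 EB = 10^18 bytes), of what an exabyte a day means as a sustained data rate:

```python
# Back-of-the-envelope: an exabyte a day expressed as a sustained rate.
# Decimal units assumed: 1 EB = 10**18 bytes.
EXABYTE = 10**18          # bytes
SECONDS_PER_DAY = 86_400

rate = EXABYTE / SECONDS_PER_DAY      # bytes per second
print(f"Sustained intake: {rate / 10**12:.1f} TB/s")
# -> Sustained intake: 11.6 TB/s, around the clock
```

That’s roughly 11.6 TB pouring in every second, nonstop, which puts “a bit of a computing challenge” in perspective.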

Exploring the mysteries of the universe is no doubt going to take an epic data center project and, as it turns out, potentially a new way of stacking chips in a server. According to its website, the SKA will require on the order of 100 petaflops of processing power, which is beyond current computing technology. Luckily, IBM and ASTRON, the Netherlands Institute for Radio Astronomy, have created the DOME project to bring about this new world order of high-performance research computing. In other words, they’re building the future.

SKA is still in its infancy and won’t be fully operational until 2024, but it sure will be interesting to watch as it grows.

Video: http://www.youtube.com/v/dvSnPhxe-8U

April 2, 2012  11:40 AM

Round ‘em up and move ‘em out



Posted by: Erin Watkins

The Uptime Institute has come up with a fun way to encourage companies to cut down on wasted server resources. Known as the Server Roundup, the contest features lots of cowboy imagery, a silly video and a swanky belt buckle for the winners. This year’s winner, AOL, dumped 9,484 servers for a savings of more than $5 million in upkeep costs.

Video: http://www.youtube.com/v/l2jieZTASII

The Uptime blog states, “Decommissioning a single 1U rack server can result in $500 per year in energy savings, an additional $500 in operating system licenses, and $1,500 in hardware maintenance costs.” When you remove several thousand servers, you can see how it all adds up.
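To see the math, here’s a minimal sketch using the Uptime Institute’s per-server figures quoted above. Treat the totals as upper bounds, since not every decommissioned box carries all three cost lines (AOL’s reported savings of more than $5 million for 9,484 servers comes in well below the maximum):

```python
# Annual savings from decommissioning 1U rack servers, using the
# Uptime Institute's per-server estimates quoted above.
ENERGY = 500         # $/year in energy
LICENSES = 500       # $/year in operating system licenses
MAINTENANCE = 1_500  # $/year in hardware maintenance

def annual_savings(servers: int) -> int:
    """Upper-bound yearly savings if every server carries all three costs."""
    return servers * (ENERGY + LICENSES + MAINTENANCE)

print(f"${annual_savings(1_000):,}")   # -> $2,500,000 for 1,000 servers
print(f"${annual_savings(9_484):,}")   # -> $23,710,000 upper bound for AOL's haul
```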

Coming in second was NBC with 284 decommissioned servers.

The question of when to replace or add more servers is an important one every enterprise needs to address.

New technology is making it possible to do more with less, so maybe we’ll see a closer competition in next year’s roundup.


March 28, 2012  9:55 AM

Cloud computing? How about Earth computing?



Posted by: Alex Barrett
cloud computing, data center

Whenever I look up and see a beautiful sky with puffy clouds I think, “That’s where my iTunes are.”

I read this tweet from filmmaker Albert Brooks the other day, and it made me laugh, because, really, what could be further from the truth? Anyone who’s been where the iTunes really are — the data center — knows that there’s nothing puffy or ethereal about it. Data centers are grounded here on Earth, made of earth and the grimy, heavy, dirty stuff inside it.

In reality, that iTunes song lives in some data center in the Pacific Northwest, parsed into ones and zeroes, sitting on a spinning platter made of aluminum alloy and coated with ferromagnetic material. Actually, make that a bunch of spinning disks, plus several solid-state memory caches made up of capacitors and transistors built from dust and sand, i.e., silicon.

The road between that data center and the iPod is long and paved with copper and glass (network cables), and it passes through countless relays (servers) made of steel, aluminum, silicon and petroleum, fueled by electricity created by burning coal buried deep under the Earth’s crust.

Eventually, that “cloud-based” iTunes song will make it to the iPod, but there’s nothing lofty or airy about its voyage (well, unless you count traveling over radio waves during a Wi-Fi sync). But like a dancer who floats through the air like a feather, data center managers are tasked with making everything look easy and light (i.e., “cloud-like”). The dancer’s graceful arabesque betrays no sign of the countless hours of practice, sore muscles and bruised and bandaged feet.

Likewise, that cloud-based iTunes song plays at the click of a button or tap of a screen but betrays nothing of the countless hours data center professionals spent stringing cable, racking servers, monitoring packets and optimizing HVAC systems.

They talk about the cloud, but we know better.


March 28, 2012  8:00 AM

We’re doing lots of science, but we can’t store the results



Posted by: Erin Watkins
data center outsourcing

Last year’s flooding in Thailand was a terrible reminder of what nature can do. Besides affecting millions of lives, it destroyed businesses and factories and left the rest of the world with an unprecedented hard drive shortage. Though this is great news for Seagate, whose factories were unaffected, it’s not so great news for the rest of us, since the shortage caused prices to skyrocket. Luckily, Forbes thinks that might turn around soon.

Wooden dome at CERN. Image by Paul Downey, Creative Commons License 2.0.

It can’t happen soon enough for CERN, since the short supply of hard drives is limiting its ability to store work. If, like me, you think the Large Hadron Collider may change the way we view matter, then limiting scientific research is almost criminal. We, and hopefully CERN, can learn from this unfortunate dilemma.

Apparently, the 15 petabyte (PB)-per-year computing load is spread out over what CERN calls the LHC Computing Grid, but even the grid doesn’t have enough storage. Capacity planning is an important part of data center design, and the servers at CERN handle its processing and network needs with room to spare. It’s storage the lab lacks, and outsourcing to a third party might be a good option until the hard disk shortage is gone.
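The capacity-planning piece is mostly arithmetic. Here’s a minimal sketch; the free-capacity number is purely hypothetical (CERN’s actual headroom isn’t public in this post), but it shows how quickly a 15 PB-per-year data stream eats whatever disk is left:

```python
# Toy storage-runway estimate. The free-capacity figure is hypothetical;
# 15 PB/year is the LHC's widely cited annual data output.
FREE_CAPACITY_PB = 3.0      # assumed unused storage, in petabytes
GROWTH_PB_PER_YEAR = 15.0   # new data per year

months = FREE_CAPACITY_PB / GROWTH_PB_PER_YEAR * 12
print(f"Runway: {months:.1f} months until the disks are full")
# -> Runway: 2.4 months: buy drives or outsource
```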

There are, of course, concerns with outsourcing, especially if you’re talking about cloud providers. But the fact is that if CERN’s current Computing Grid can’t handle the load, the lab has to find somewhere that can, whether that means outsourcing the storage or outsourcing something else to free up space in its own facilities.

After all, there’s science to be done! Do you think cloud storage is the answer? Outsourcing some of its data center load to a third party? Or are the logistics of such an undertaking not worth the trouble? Weigh in in the comments below.


March 26, 2012  9:03 AM

Google cools data center with waste water



Posted by: Erin Watkins
data center cooling, data center facilities, free cooling

When you flush the toilet at your home, you expect a certain chain of events to occur. Your plumbing takes it to the sewer, the sewer takes it to a water treatment plant and finally a magical filtering process makes the water safe for the environment. Right? Well, Google has turned that on its head with the latest innovation in free cooling.

Taking its lead from Google Finland’s ingenuity, a Google facility in western Georgia is now diverting waste water to its own treatment facility and using that water to cool its data center. Wired Enterprise has a more in-depth look at how it was done.

Not just for windowsills. Image by Andy Melton, Creative Commons License 2.0.

As we know, using the environment to cool your data center can help cut costs, but Google says that’s not its main motivation. The benefits also include easing the strain on local waste treatment facilities and ensuring enough drinkable water during times of drought.

Of course, Google isn’t the only company building creatively. Facebook’s Prineville, Ore., facility uses air from the surrounding area to cool its data center. Free cooling is all about creativity and putting your data center’s natural surroundings to good use. But this story about Google and wastewater brings up another good point: Creatively cooled data centers might not just save money; they might actually help the environment!

How can you adopt free cooling in your data center? Location is important, but as this latest innovation proves, it’s not everything. It’s definitely something to ponder as energy costs rise and regulations get tighter.


March 21, 2012  9:01 AM

Hack the memeframe, er, mainframe



Posted by: TomWalat
mainframe, mainframers

The shrinking number of workers who can program and administer a mainframe, a platform sometimes dismissed as the “red-headed stepchild of IT,” are members of an exclusive club with a language and sense of humor all their own.

We’ve compiled a few of our favorite mainframe-related jokes and catchphrases from the Internet. Be forewarned that some are groaners, but if you’re a mainframer — or even if you’re not — you might just chuckle.

Old mainframers never die. They just take a dump and re-IPL.

Elvis and Jesus are editing their programs on TSO when the system crashes. Who lost his files?
Elvis, because Jesus saves.

It might be a mainframe:

- If you can fit a bed and mini fridge inside.

- If the power supply is bigger than your car.

- If the only “mouse” it has is the one living inside it.

- If you need earth-moving equipment to relocate it.

You might be a mainframer:

- If the phrase “green card” doesn’t make you think of immigration.

- If you think all computers belong on a raised floor.

- If your definition of “bursting” has nothing to do with too much turkey on Thanksgiving.

A systems programmer is at lunch with some data processing auditors. The waitress asks the sysprog what he wants to order. “Pork chops,” he says. “What about the vegetables?” she asks. The sysprog says, “Oh, they can order for themselves.”

Now it’s your turn. Mainframers who want to ride the crest of the latest meme wave can send us jokes and other, er, “sh*t mainframers say” in the comments on this post.

Send longer anecdotes or links to videos about mainframes and the folks who love them to SearchDataCenter.com site editor Tom Walat at twalat@techtarget.com so we can compile and share them with the masses.


February 23, 2012  4:16 PM

Is IBM stealing HP server market share?



Posted by: Beth Pariseau

HP reported its fiscal first quarter 2012 earnings last night, and the results weren’t pretty.

Overall revenue of $30 billion was down 7% year over year, and overall profit declined 44%. HP’s Enterprise Servers, Storage and Networking (ESSN) revenue declined 10%.

“Industry standard servers revenue was down in a highly competitive environment that was compounded by the hard disk shortage,” said HP CEO Meg Whitman on the company’s earnings conference call. “Business critical system revenues also declined as we continued to address the Oracle Itanium situation.”

Adding insult to injury, IBM issued a note to the press claiming it has taken substantial server business away from HP and Oracle/Sun Microsystems. IBM said it recorded nearly 2,400 competitive displacements in 2011 for its servers and storage systems. Almost 40% of the displacements came from HP and more than 25% came from Oracle/Sun, according to an IBM press release issued Wednesday.

Meanwhile, Dell also reported earnings this week. Like HP, Dell experienced a hard drive shortage during the fourth quarter of 2011, due to flooding in Thailand.

The flooding “forced us to sell lower or less configured, lower-end systems and prevented us from accessing higher-margin, more highly configured systems,” CFO Brian T. Gladden told analysts on the earnings call.

Still, Dell reported that its server and networking revenues grew 6% year-over-year.


February 1, 2012  11:21 AM

Property of “disgraced ex-media tycoon” becomes data center



Posted by: Beth Pariseau
data center

A property holdings company in the UK has an interesting hook to attract attention to its new £35 million ($56 million) data center — the property once belonged to a notorious celebrity.

PMB Holdings, a commercial real estate company founded by property developer Peter Beckwith, has demolished Maxwell House to make way for a new data center. Maxwell House was once the property of the late publishing executive and former Member of Parliament (MP) Robert Maxwell. Maxwell owned an extensive publishing empire; his death in 1991 revealed wide-scale fraud.



December 5, 2011  3:55 PM

Data center infrastructure management on the move



Posted by: Alex Barrett
Alex Barrett, data center, data center infrastructure management, DCIM

With the Gartner Data Center Conference in full swing this week in Las Vegas, several players in the data center infrastructure management (DCIM) space took the opportunity to announce new versions of their wares. Here’s a roundup of recent DCIM news.

IO offers “data center OS” as stand-alone software
IO has released the DCIM software it uses in its proprietary modular data centers as a stand-alone product. The IO OS “data center operating system” gathers mechanical, power, cooling and electrical usage data in real time, maintaining that data and integrating it with ticketing systems and audit trail processes. IO OS can display data center assets from a number of perspectives – physical, logical and infrastructure – and includes views of supporting systems such as generators, switchgear, paralleling systems and chillers. With this information, IO OS provides a single pane of glass from which data center operators can establish and maintain quality of service while optimizing data center utilization and operating costs, the company said.
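None of these vendors publish their internals, but the “single pane of glass” idea reduces to a familiar pattern: poll many disparate feeds and normalize them into one model. Here’s a purely illustrative Python sketch; every class and field name is hypothetical, not IO’s actual API:

```python
# Illustrative only: the core DCIM pattern of collecting readings from
# disparate feeds (power, cooling, IT load) and rolling them up into a
# single view. Names are hypothetical, not any vendor's real interface.
from dataclasses import dataclass

@dataclass
class Reading:
    asset: str     # e.g., "rack-12", "chiller-2", "generator-1"
    kind: str      # "power", "cooling", "it_load", ...
    value: float   # watts, degrees C, etc.

def single_pane(readings: list) -> dict:
    """Group raw readings by kind, then by asset, for one unified view."""
    view = {}
    for r in readings:
        view.setdefault(r.kind, {})[r.asset] = r.value
    return view

feeds = [
    Reading("rack-12", "power", 4200.0),
    Reading("chiller-2", "cooling", 18.5),
    Reading("rack-12", "it_load", 3100.0),
]
print(single_pane(feeds))
# -> {'power': {'rack-12': 4200.0}, 'cooling': {'chiller-2': 18.5},
#     'it_load': {'rack-12': 3100.0}}
```

A real product layers alarms, ticketing hooks and historical trending on top of this kind of normalized model.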

Sentilla adds business planning and analytics functions
At Sentilla Corp., the holy grail of DCIM is not so much collecting information about the data center as doing something with it. As such, the latest Sentilla 4.0 includes financial and infrastructure planning modules, plus new asset analysis capabilities. The new version can also manage multiple data centers from one interface, and its asset database features improved importing and discovery capabilities. Sentilla continues to add modeling information for systems from Dell, HP, IBM, NetApp and Sun/Oracle to its database, and offers improved support for facility infrastructure from Eaton, Emerson, APC and Schneider Electric, plus management software from BMC and HP.

iTRACS ties in with Intel Data Center Manager
iTRACS, another DCIM player, is working with Intel to integrate Intel Data Center Manager software with its Converged Physical Infrastructure Management (CPIM) suite, improving its collection, management and analysis of CPU power, temperature and environmental information. As a result, iTRACS will be better able to perform capacity planning, improve rack densities, identify inefficient IT assets, pinpoint cooling issues, optimize IT equipment lifecycles and prevent outages.

Let us know what you think about the story; email Alex Barrett, Executive Editor, at abarrett@techtarget.com, or follow @aebarrett on Twitter.


October 19, 2011  3:56 PM

Risks of data center maturity



Posted by: SteveBige01
data center, SearchDataCenter.com, Stephen Bigelow

Of all the mandates faced by an enterprise data center, the mandate of “maturity” is perhaps the most treacherous and self-defeating. The early life cycle of a data center is typically based on functional stability, making the investment in operational basics needed to keep the shop open and deliver essential IT services. These so-called operational basics include infrastructure, such as servers, storage and networks; security planning, such as Active Directory configuration and anti-malware support; and core application support, such as Exchange Server.

But data centers mature and grow over time. It’s not enough for IT managers to deploy Web and email servers, check for alerts and sip their afternoon coffee. Every data center must “mature” to some extent so that it becomes a business partner or collaborator rather than just a cost center.

The problem is that the path to data center maturity is clouded by technologies, strategies and initiatives that wind up getting in the way of everyday things that data centers and IT staff do well. As businesses refocus their attention on things like enterprise architecture, business intelligence and project management, there is a disturbing tendency for the business (and IT) to lose focus on the underlying infrastructure and operational aspects that got IT a seat at the executive table in the first place.

The result is that IT maturity unexpectedly sputters and stops – usually at a point just before it emerges as a real differentiator for the business. Consider the high-end systems management framework that takes 12 months to configure before it’s able to provide any useful insight, only to be obsolete and worthless three months later because it costs too much and takes too long for the IT team to keep the framework updated. Or the new enterprise architecture project that bogs down or stalls because the necessary infrastructure documentation is lacking. Sound familiar?

Yes, every tie-wearing, desk-wielding CIO longs to reach the mountaintop – the day when the IT department can become some sort of mystical “transformational force” in the business. And yes, this type of lofty goal will demand a substantial level of IT maturity (and a substantial financial investment to match). But it’s unwise for any business to pursue the maturity path strictly for its own sake. It’s more important for IT to provide value within the roles the business wants and needs – and to stay focused on the basics that will facilitate future growth when a meaningful opportunity to mature finally arrives.

