Data Center Apparatus


March 28, 2012  9:55 AM

Cloud computing? How about Earth computing?

Alex Barrett

Whenever I look up and see a beautiful sky with puffy clouds I think, “That’s where my iTunes are.”

I read this tweet from filmmaker Albert Brooks the other day, and it made me laugh, because, really, what could be further from the truth? Anyone who’s been where the iTunes really are — the data center — knows that there’s nothing puffy or ethereal about it. Data centers are grounded here on Earth, made of earth and the grimy, heavy, dirty stuff inside it.

In reality, that iTunes song is sitting in some data center in the Pacific Northwest, parsed into ones and zeroes on a spinning platter made of aluminum alloy and coated with ferromagnetic material. Actually, make that a bunch of spinning disks, plus several solid-state memory caches built from capacitors and transistors made of dust and sand, i.e., silicon.

The road between that data center and the iPod is long and paved with copper and glass (network cables), and it passes through countless relays (servers) made of steel, aluminum, silicon and petroleum, fueled by electricity created by burning coal dug from deep in the Earth’s crust.

Eventually, that “cloud-based” iTunes song will make it to the iPod, but there’s nothing lofty or airy about its voyage (well, unless you count traveling over radio waves during a Wi-Fi sync). But like a dancer who floats through the air like a feather, data center managers are tasked with making everything look easy and light (i.e., “cloud-like”). The dancer’s graceful arabesque betrays no sign of the countless hours of practice, sore muscles and bruised and bandaged feet.

Likewise, that cloud-based iTunes song plays at the click of a button or tap of a screen but betrays nothing of the countless hours data center professionals spent stringing cable, racking servers, monitoring packets and optimizing HVAC systems.

They talk about the cloud, but we know better.

March 28, 2012  8:00 AM

We’re doing lots of science, but we can’t store the results

Erin Watkins

Last year’s flooding in Thailand was a terrible reminder of what nature can do. Besides affecting millions of lives, it destroyed businesses and factories and left the rest of the world with an unprecedented hard drive shortage. Though this is great news for Seagate, whose factories were unaffected, it’s not so great news for the rest of us since the shortages caused prices to skyrocket. Luckily, Forbes thinks that might turn around soon.

Wooden dome at CERN. Image by Paul Downey, Creative Commons License 2.0.

It can’t happen soon enough for CERN, since the short supply of hard drives is currently limiting its ability to store work. If, like me, you think the Large Hadron Collider may change the way we view matter, then limiting scientific research is almost criminal. We, and hopefully CERN, can learn from this unfortunate dilemma.

Apparently, the 15 petabyte (PB) computing load is spread out over what CERN calls the LHC Computing Grid, but that still isn’t enough storage. Capacity planning is an important part of data center design, and the servers at CERN can handle the processing and network needs with flying colors. It’s storage that CERN lacks, and outsourcing to a third party might be a good option until the hard disk shortage is gone.
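Just to put 15 PB in perspective, here is a quick back-of-the-envelope sketch of how many disks that kind of load eats in a year. The drive size, replication factor and overhead below are my own assumptions for illustration, not CERN's actual provisioning numbers:

# Rough capacity-planning sketch: how many drives does 15 PB of new data
# per year consume? Drive size, replication factor and filesystem overhead
# are illustrative assumptions, not CERN's real figures.

ANNUAL_DATA_PB = 15          # new data per year, per the post
DRIVE_TB = 2                 # typical nearline drive circa 2012 (assumed)
REPLICAS = 2                 # keep two copies of everything (assumed)
USABLE_FRACTION = 0.9        # filesystem/RAID overhead (assumed)

raw_tb_needed = ANNUAL_DATA_PB * 1000 * REPLICAS / USABLE_FRACTION
drives_needed = raw_tb_needed / DRIVE_TB

print(f"Raw capacity needed: {raw_tb_needed:,.0f} TB")
print(f"Drives needed at {DRIVE_TB} TB each: {drives_needed:,.0f}")
# Roughly 16,700 drives a year on these assumptions -- which is why a
# supply shock in Thailand shows up on CERN's radar.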

There are, of course, concerns with outsourcing, especially if you’re talking about cloud providers. But the fact is that if CERN’s current Computing Grid can’t handle the load, it has to find somewhere that can, whether that means outsourcing the storage or outsourcing something else to free up space in its own facilities.

After all, there’s science to be done! Do you think cloud storage is the answer? Outsourcing some of its data center load to a third party? Or are the logistics of such an undertaking not worth the trouble? Weigh in in the comments below.


March 26, 2012  9:03 AM

Google cools data center with waste water

Erin Watkins

When you flush the toilet at your home, you expect a certain chain of events to occur. Your plumbing takes it to the sewer, the sewer takes it to a water treatment plant, and finally a magical filtering process makes the water safe for the environment. Right? Well, Google has turned that on its head with the latest innovation in free cooling.

Taking its lead from Google Finland’s ingenuity, a Google facility in western Georgia is now diverting waste water to its own treatment facility and using that water to cool its data center. Wired Enterprise has a more in-depth look at how it was done.

Not just for windowsills. Image by Andy Melton, Creative Commons License 2.0.

As we know, using the environment to cool your data center can help cut costs, but Google says that’s not its main motivation. The benefits also include alleviating some strain on the local waste treatment facilities and ensuring enough drinkable water during times of drought.
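To see why free cooling is worth the plumbing, here is a rough sketch of the math. Every figure in it (IT load, cooling overheads, power price) is an assumption for illustration, not a number from Google:

# Back-of-the-envelope sketch of why free cooling saves money. All figures
# (IT load, chiller and economizer overhead, power price) are assumptions
# for illustration only.

IT_LOAD_KW = 1000            # assumed IT load
HOURS_PER_YEAR = 8760
PRICE_PER_KWH = 0.07         # assumed industrial rate, USD

# Cooling overhead expressed as a fraction of IT load (assumed):
CHILLER_OVERHEAD = 0.5       # mechanical chillers
FREE_COOLING_OVERHEAD = 0.15 # evaporative cooling with treated waste water

def annual_cooling_cost(overhead):
    return IT_LOAD_KW * overhead * HOURS_PER_YEAR * PRICE_PER_KWH

savings = annual_cooling_cost(CHILLER_OVERHEAD) - annual_cooling_cost(FREE_COOLING_OVERHEAD)
print(f"Estimated annual cooling savings: ${savings:,.0f}")
# Roughly $215,000 a year on these assumptions -- before you even count the
# environmental benefits the post describes.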

Of course, Google isn’t the only company building creatively. Facebook’s Prineville, Ore., facility uses air from the surrounding area to cool its data center. Free cooling is all about creativity and putting your data center’s natural surroundings to good use. But this story about Google and wastewater brings up another good point: Creatively cooled data centers might not just save money; they might actually help the environment!

How can you adapt free cooling to your data center? Location is important, but as this latest innovation proves, it’s not everything. It’s definitely something to ponder as energy costs rise and regulations get tighter.


March 21, 2012  9:01 AM

Hack the memeframe, er, mainframe

Tom Walat

The shrinking ranks of workers who can program and administer a mainframe, sometimes referred to as the “red-headed stepchild of IT,” form an exclusive club with a language and sense of humor all their own.

 We’ve compiled a few of our favorite mainframe-related jokes and catchphrases from the Internet. Be forewarned that some are groaners, but if you’re a mainframer — or even if you’re not — you might just chuckle.

Old mainframers never die. They just take a dump and re-IPL.

 Elvis and Jesus are editing their programs on TSO when the system crashes. Who lost his files?
Elvis, because Jesus saves.

 It might be a mainframe:

· If you can fit a bed and mini fridge inside.

· If the power supply is bigger than your car.

· If the only “mouse” it has is the one living inside it.

· If you need earth-moving equipment to relocate it.

 You might be a mainframer:

· If the phrase “green card” doesn’t make you think of immigration.

· If you think all computers belong on a raised floor.

· If your definition of “bursting” has nothing to do with too much turkey on Thanksgiving.

 A systems programmer is at lunch with some data processing auditors. The waitress asks the sysprog what he wants to order. “Pork chops,” he says. “What about the vegetables?” she asks. The sysprog says, “Oh, they can order for themselves.”

 Now it’s your turn. Mainframers who want to ride the crest of the latest meme wave, send us your jokes and other, er, “sh*t mainframers say” by adding them via the comments button on this post.

 Send longer anecdotes or links to videos relating to mainframes and the folks who love them to SearchDataCenter.com site editor Tom Walat at twalat@techtarget.com so we can compile and share them with the masses.


February 23, 2012  4:16 PM

Is IBM stealing HP server market share?

Beth Pariseau

HP reported its fiscal first quarter 2012 earnings last night, and the results weren’t pretty.

Overall revenue of $30 billion was down 7% year over year, and overall profit declined 44%. Hewlett-Packard’s (HP) Enterprise Servers, Storage and Networking (ESSN) revenue declined 10%.

“Industry standard servers revenue was down in a highly competitive environment that was compounded by the hard disk shortage,” said HP CEO Meg Whitman on the company’s earnings conference call. “Business critical system revenues also declined as we continued to address the Oracle Itanium situation.”

Adding insult to injury, IBM issued a note to the press claiming it has taken substantial server business away from HP and Oracle/Sun Microsystems. IBM said it recorded nearly 2,400 competitive displacements in 2011 for its servers and storage systems. Almost 40% of the displacements came from HP and more than 25% came from Oracle/Sun, according to an IBM press release issued Wednesday.

Meanwhile, Dell also reported earnings this week. Like HP, Dell experienced a hard drive shortage during the fourth quarter of 2011, due to flooding in Thailand.

The flooding “forced us to sell lower or less configured, lower-end systems and prevented us from accessing higher-margin, more highly configured systems,” CFO Brian T. Gladden told analysts on the earnings call.

Still, Dell reported that its server and networking revenues grew 6% year-over-year.


February 1, 2012  11:21 AM

Property of “disgraced ex-media tycoon” becomes data center

Beth Pariseau

A property holdings company in the UK has an interesting hook to attract attention to its new £35 million ($56 million) data center — the property once belonged to a notorious celebrity.

PMB Holdings, a commercial real estate company founded by property developer Peter Beckwith, has demolished Maxwell House to make way for a new data center. Maxwell House was once the property of the late publishing executive and former Member of Parliament (MP) Robert Maxwell. Maxwell owned an extensive publishing empire, and his death in 1991 revealed wide-scale fraud.



December 5, 2011  3:55 PM

Data center infrastructure management on the move

Alex Barrett

With the Gartner Data Center Conference in full swing this week in Las Vegas, several players in the data center infrastructure management (DCIM) space took the opportunity to announce new versions of their wares. Here’s a roundup of recent DCIM news.

IO offers “data center OS” as stand-alone software
IO has released DCIM software it uses in its proprietary modular data centers as stand-alone software. The IO OS “data center operating system” gathers mechanical, power, cooling and electrical usage data in real time, maintaining that data and integrating it with ticketing systems and audit trail processes. IO OS can display data center assets according to a number of perspectives – physical, logical and infrastructure – and includes views of supporting systems such as generators, switchgear, paralleling systems and chillers. With this information, IO OS provides a single pane of glass from which data center operators can establish and maintain quality of service, while optimizing data center utilization and operating costs, the company said.
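Conceptually, that “single pane of glass” is a roll-up: take raw facility and IT power readings and boil them down to a few operational numbers. Here is a minimal sketch of that kind of roll-up; the readings and their names are hypothetical and have nothing to do with IO OS's actual data model:

# Minimal sketch of the roll-up a DCIM "single pane of glass" performs:
# reduce raw facility and IT power readings to an operational metric such
# as PUE. The readings dict and its keys are hypothetical.

from statistics import mean

# Hypothetical one-minute samples, in kW
readings = {
    "it_load":      [812, 808, 815, 820],   # servers, storage, network
    "cooling":      [310, 305, 312, 318],   # chillers, CRAHs
    "power_losses": [55, 54, 56, 55],       # UPS, switchgear, distribution
}

it_load = mean(readings["it_load"])
total_facility = it_load + mean(readings["cooling"]) + mean(readings["power_losses"])

pue = total_facility / it_load
print(f"IT load: {it_load:.0f} kW, total facility: {total_facility:.0f} kW")
print(f"PUE: {pue:.2f}")   # lower is better; 1.0 would be a perfectly efficient site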

Sentilla adds business planning and analytics functions
At Sentilla Corp., the holy grail of DCIM is not so much to collect information about the data center as to do something with it. As such, the latest Sentilla 4.0 includes financial and infrastructure planning modules, plus new asset analysis capabilities. The new version also adds the ability to manage multiple data centers from one interface, and its asset database features improved importing and discovery capabilities. Sentilla continues to add modeling information for systems from Dell, HP, IBM, NetApp and Sun/Oracle to its database, and offers improved support for facility infrastructure from Eaton, Emerson, APC and Schneider Electric, plus management software from BMC and HP.

iTRACS ties in with Intel Data Center Manager
iTRACS, another DCIM player, is working with Intel to integrate Intel Data Center Manager software with its Converged Physical Infrastructure Management (CPIM) suite, improving its collection, management, and analysis of CPU power, temperature, and environmental information. As a result, iTRACS will be better able to perform capacity planning, improve rack densities, identify inefficient IT assets, pinpoint cooling issues, optimize IT equipment lifecycle and prevent outages.
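“Identifying inefficient IT assets” generally boils down to comparing how busy a box is against how much power it draws. Here is a toy sketch of that comparison, with an invented inventory rather than anything pulled from iTRACS or Intel Data Center Manager:

# Toy sketch: flag servers doing little useful work relative to the power
# they draw. The inventory is invented; real DCIM suites pull these numbers
# from instrumented PDUs and from the servers themselves.

servers = [
    {"name": "web-01",   "avg_cpu_util": 0.62, "avg_power_w": 280},
    {"name": "web-02",   "avg_cpu_util": 0.05, "avg_power_w": 250},
    {"name": "db-01",    "avg_cpu_util": 0.48, "avg_power_w": 410},
    {"name": "batch-07", "avg_cpu_util": 0.03, "avg_power_w": 300},
]

UTIL_THRESHOLD = 0.10   # below this, the box is mostly idling (assumed cutoff)

for s in servers:
    if s["avg_cpu_util"] < UTIL_THRESHOLD:
        print(f"{s['name']}: {s['avg_cpu_util']:.0%} utilized, "
              f"{s['avg_power_w']} W -- candidate for consolidation")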

Let us know what you think about the story; email Alex Barrett, Executive Editor at abarrett@techtarget.com, or follow @aebarrett on twitter.



October 19, 2011  3:56 PM

Risks of data center maturity

Stephen Bigelow

Of all the mandates faced by an enterprise data center, the mandate of “maturity” is perhaps the most treacherous and self-defeating. The early life cycle of a data center is typically based on functional stability, making the investment in operational basics needed to keep the shop open and deliver essential IT services. These so-called operational basics include infrastructure, such as servers, storage and networks; security planning, such as Active Directory configuration and malware protection; and core application support, such as Exchange Server.

But data centers mature and grow over time. It’s not enough for IT managers to deploy Web and email servers, check for alerts and sip their afternoon coffee. Every data center must “mature” to some extent so that it becomes a business partner or collaborator rather than just a cost center.

The problem is that the path to data center maturity is clouded by technologies, strategies and initiatives that wind up getting in the way of everyday things that data centers and IT staff do well. As businesses refocus their attention on things like enterprise architecture, business intelligence and project management, there is a disturbing tendency for the business (and IT) to lose focus on the underlying infrastructure and operational aspects that got IT a seat at the executive table in the first place.

The result is that IT maturity unexpectedly sputters and stops – usually at a point just before it emerges as a real differentiator for the business. Consider the high-end systems management framework that takes 12 months to configure before it’s able to provide any useful insight, only to be obsolete and worthless three months later because it costs too much and takes too long for the IT team to keep the framework updated. Or, the new enterprise architecture project that bogs down or stalls because the necessary infrastructure documentation is lacking. Sound familiar?

Yes, every tie-wearing, desk-wielding CIO longs to reach the mountaintop – the day when their IT department can become some sort of mystical “transformational force” in the business. And yes, this type of lofty goal will demand a substantial level of IT maturity (and a substantial financial investment to match). But it’s unwise for any business to pursue the maturity path strictly for its own sake. It’s more important for IT to provide value within the roles that the business wants and needs – and stay focused on the basics that will facilitate future growth when a meaningful opportunity to mature finally arrives.


October 18, 2011  1:38 AM

Cisco Fabric Extender finally available for HP BladeSystem

Alex Barrett

HP c-Class BladeSystem shops will finally be able to connect their systems to a Cisco Unified Fabric, using the new Cisco Nexus B22 Fabric Extender (FEX) for HP announced on Friday.

This is not a corner case. Despite stiff competition, HP BladeSystem still leads the market for blade servers, and Cisco leads the enterprise 10Gb Ethernet switching market with its Nexus 5000 and 7000 switches. But competition between the two vendors over the past couple of years has shut these two worlds off from one another, as Cisco hoped to push its network customers to its Unified Computing System (UCS) and HP tried to lure its BladeSystem users to its 3Com gear.

Now, the two companies appear to have reached a détente. With the Nexus B22 FEX in place, HP BladeSystem users can consolidate multiple 1Gb Ethernet links onto a single 10Gb link, reducing cabling, NICs, power consumption and operating expenses. The FEX also enables support for Fibre Channel over Ethernet (FCoE) and provides a single point of management into Nexus 5000 and 7000 series switches.
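The cabling arithmetic behind that consolidation claim is straightforward. Here is a quick sketch, with blade and NIC counts that are my own assumptions rather than figures from Cisco or HP:

# Quick sketch of the cabling reduction a fabric extender enables.
# Blade counts, per-blade NIC counts and uplink counts are illustrative
# assumptions, not vendor specifications.

blades_per_chassis = 16
gig_links_per_blade = 4          # assumed: four 1GbE NICs per blade
uplinks_per_fex = 8              # assumed: 10GbE fabric uplinks per FEX pair

before = blades_per_chassis * gig_links_per_blade   # discrete 1GbE cables
after = uplinks_per_fex                              # consolidated 10GbE uplinks

print(f"1GbE cables without FEX: {before}")
print(f"10GbE uplinks with FEX:  {after}")
print(f"Bandwidth before: {before * 1} Gb, after: {after * 10} Gb")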

The Nexus B22 FEX is currently available from HP and its partners for a starting price of $9,799.


October 17, 2011  12:40 PM

IT innovation by rethinking applications

Stephen Bigelow

As a technologist, it’s easy to focus on technology – the servers, storage, networks and other hardware that make IT work. The problem that technologists face is that technology simply isn’t enough. The drive to manage an ever-increasing number of systems with fewer staff and tighter budgets has a point of diminishing returns.

Vendors tout tools like systems management and automation as vehicles to handle this burden, and it’s the right end game. But, unfortunately, it’s not enough by itself. Systems management, automation and other IT tools are just too complicated. Just consider how long it took your organization to select, deploy, configure and use that last software investment productively. It’s not uncommon for a software framework to take six to 12 months before an organization can use it productively. And even then, inevitable changes and reconfiguration (even patches and updates) can prove disruptive, leaving the IT organization vulnerable.

There are many innovations on the horizon for IT, but few hold as much promise as “people-centric software design.” IT management software designers need to take a page from the mobile application industry and focus their efforts on context-sensitive computing. I’m not talking about a fancy new user interface. I mean software designers must rethink the way they approach design and create a new generation of management tools with the high-level intelligence that can multiply an IT administrator’s efficiency.

We see this in the direction commercial applications are headed. Take a picture of a gadget on a store shelf with your mobile phone, and you can quickly see the specs and reviews for that device, and then (based on your inquiries) receive coupons or links to other devices. There are countless other examples where application designers are developing software that makes decisions based on factors like location, user activity patterns or search habits, and even gathers information from social media sources.

Consider an administrator responsible for 1,000 servers across three data centers. A new reporting application might look at the administrator’s location and present status information on the systems in the closest or current facility. That administrator might run performance analyses much of the time, so the new app might also present performance data on those local servers, identifying poor performers and suggesting potential fixes without being asked. Quick links to server manufacturers’ forums or social media outlets might then allow the administrator to share concerns or ask questions of the user community.
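As a thought experiment, that behavior (pick the facility the administrator is closest to, then flag the laggards without being asked) reduces to a few lines. Every site, coordinate and metric below is invented purely for illustration:

# Thought-experiment sketch of a "context-aware" reporting app: pick the
# data center closest to the administrator, then surface its worst
# performers without being asked. All sites, coordinates and metrics
# are invented for illustration.

from math import hypot

SITES = {
    "east":    {"coords": (40.7, -74.0)},
    "central": {"coords": (41.9, -87.6)},
    "west":    {"coords": (37.8, -122.4)},
}

# Hypothetical per-server metrics keyed by site: (name, performance score)
SERVERS = {
    "east":    [("srv-101", 0.91), ("srv-102", 0.42)],
    "central": [("srv-201", 0.88)],
    "west":    [("srv-301", 0.35), ("srv-302", 0.97)],
}

def nearest_site(admin_coords):
    """Crude flat-earth distance; good enough for picking a home site."""
    return min(SITES, key=lambda s: hypot(SITES[s]["coords"][0] - admin_coords[0],
                                          SITES[s]["coords"][1] - admin_coords[1]))

def report(admin_coords, threshold=0.5):
    site = nearest_site(admin_coords)
    laggards = [name for name, score in SERVERS[site] if score < threshold]
    return site, laggards

site, laggards = report((40.6, -73.9))      # admin is near the east coast site
print(f"Home site: {site}, underperformers: {laggards}")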

Now, I’m certainly not suggesting that IT administrators start managing their global server farms through Facebook. But IT administrators must manage a spiraling amount of infrastructure using a greater diversity of devices. Software makers absolutely must re-imagine their IT management products to make them simpler and smarter, and to let busier administrators handle more information using a greater array of mobile and tablet devices.

