Data Center Apparatus

September 7, 2011  2:49 PM

RISC servers for data center efficiency

Nicole Harding


If you think RISC servers are taking a back seat to specialized mainframe transaction processors and inexpensive, general-purpose x86 processors, think again.


RISC microprocessors specialize in handling a limited, specific set of instructions and use fewer transistors, making them cheaper to produce, more energy efficient and well suited to fast performance. The chips are most widely deployed in printers, mobile phones, video game consoles, hard drives and routers, but data centers are now paying greater attention to servers stuffed with Tilera, Intel Atom and other low-power processors.


July 22, 2011  11:49 AM

Data center dilemma: To build or box?

Nick Martin

Modular data centers have been around for a few years now, but they have changed considerably in that time. Blackbox was the mysterious-sounding name that Sun Microsystems gave its first containerized offering in 2006. Today, you’re much more likely to see a polished marketing phrase, such as the HP EcoPOD, used to describe a company’s containerized data center product. Modular data centers have gone from a few racks packed into a corrugated shipping container to all-in-one, custom-built proprietary modules. Hewlett-Packard and the other vendors that have recently entered the modular data center market are betting that containerized data centers will become more mainstream, and there are good reasons why they might be right. Improved energy efficiency and better access for technicians servicing components in modular designs are catching the attention of companies that once gave modular data centers no more than a passing glance.

But increased interest in modular data centers is also being driven by strained data center capacity. A recent Uptime Institute survey showed that 36% of data centers will run out of space or cooling capacity in the next year. Unfortunately, data center facilities have proven over the years to be largely incapable of keeping up with changing technology and growing computing needs. Higher densities are stressing the cooling infrastructure of many data centers, and there’s no guarantee that improvements made today will be enough to support future needs. By the time a state-of-the-art data center is designed, built and brought online, it will likely already have fallen behind the rapid pace of changing technology and design standards. This makes it more difficult to justify spending many millions of dollars on a new data center build, especially when manufacturers of containerized data centers claim their products are more energy efficient than a custom-built facility. A modular data center can add capacity to an existing environment in a matter of weeks, instead of the several years it would take to design and build an addition or a new facility.

Today, many companies’ views on containerized data centers can be compared to public school officials’ perceptions of modular classrooms: shortsighted stopgap measures that waste money when compared with new builds. The difference is that the basic needs of students will remain virtually unchanged for the next 10 years, while the cooling and power needs of servers will likely be much different a decade from now.

However, a containerized approach may not be right for everyone. Even though containers are easily portable, data center owners can’t just drop one in a vacant parking lot and expect it to meet their needs and remain secure. Support can also be a problem for companies where the computing platform offered by the module vendor is unfamiliar to IT staff.

In the near future, companies such as HP can’t expect to solve every problem with containerized data centers or to change the industry perception that they are short-term solutions, but they can try to soften the prejudice against them. Branding a container as energy efficient gives it an appeal that many traditional data centers don’t have. As more companies look for solutions to their growing demands, the energy efficiency and simplicity of containerized data centers will look more appealing. The uncomfortable truth is that we don’t know what the needs of a data center will be 10 to 20 years from now, which makes the flexibility and scalability of containers an attractive option for many companies. When a modular data center needs a refresh, it can simply be replaced at the end of the lease.

Already, some industries are beginning to look favorably at containers. Internet-based companies that sometimes see explosive growth in computing needs can turn to modular data centers to keep up with rapidly changing capacity needs. Companies like Amazon are using containers to support cloud computing platforms.

Modular design will undoubtedly have a place in future data centers. The question is whether future developments in containerized design and technology will emerge to address the real and perceived disadvantages that are holding it back today.

June 10, 2011  9:34 AM

Raised floor resiliency

Nick Martin

Industry experts long ago predicted the demise of raised floor cooling. Today, there are viable cooling alternatives, but raised floor cooling continues to keep its hold on the data center. Just as we have watched skeptically as doomsayers unsuccessfully prophesied the end of the world, we’re still waiting to see raised floors go out of style.

To be fair, data center experts who suggested raised floors would not be the cooling solution of tomorrow had much better information to back up their prediction than the man on the street corner waving an apocalypse sign. Increasing computing needs, denser racks and an increased focus on energy efficiency all seemed to signal the end of raised flooring.

The problem with raised floors is that directing cool air beneath a raised floor isn’t always enough to meet the cooling demands of today’s dense server configurations. Opening more of the floor simply reduces underfloor air pressure, and adding more cooling capacity may not be a desirable (or cost-effective) answer. As point cooling and other containment tactics gain acceptance, raised floor cooling continues to be relevant in the data center.

There are now simple solutions for many of the inherent problems with raised floor cooling. Directional grates can angle chilled air at equipment to improve cooling efficiency. One complaint many data center managers have with raised floor cooling is the inability to adjust it to changing power use and hotspots; it is simply impractical to add or move vented tiles every time cooling needs change in the data center. However, there are products that attempt to address dynamic power use and hotspots, which were once the downfall of raised floor cooling. Electronically controlled dampers, such as Tate Access Floors Inc.’s SmartAire, can limit the movement of chilled air based on inlet air temperatures to make sure the chilled air isn’t “wasted” on equipment that doesn’t need it. Although they aren’t an ideal solution, fans that can throttle up based on changing needs can help cool hotspots. When implemented correctly, these solutions can go a long way toward improving energy efficiency, which has been seen as one of the chief drawbacks of raised floor cooling.
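
To make the damper idea concrete, here is a minimal sketch of inlet-temperature-driven airflow control. The target temperature, deadband and step size are illustrative assumptions, not SmartAire’s actual control logic, and the sensor and actuator plumbing is left out.

```python
# Minimal sketch of inlet-temperature-driven damper control. The setpoint,
# deadband and 10% step size are illustrative assumptions, not any vendor's
# real algorithm; a real product supplies its own sensor and actuator APIs.

TARGET_INLET_C = 24.0   # desired server inlet temperature
DEADBAND_C = 1.0        # avoid constant damper hunting around the setpoint

def damper_position(inlet_temp_c: float, current_pct: float) -> float:
    """Return a new damper opening (0-100%) based on measured inlet temperature."""
    if inlet_temp_c > TARGET_INLET_C + DEADBAND_C:
        return min(100.0, current_pct + 10.0)   # rack running warm: open up
    if inlet_temp_c < TARGET_INLET_C - DEADBAND_C:
        return max(10.0, current_pct - 10.0)    # overcooled: stop "wasting" chilled air
    return current_pct                          # within deadband: hold position

# Example: a rack drawing more load sees inlet temperatures creep up,
# so the damper steps open until temperatures settle.
print(damper_position(26.2, 40.0))  # -> 50.0
```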

In most cases, it’s more important to pay attention to what is happening to chilled air between the floor and ceiling. Improvements to raised floor cooling infrastructure show that administrators should spend less time looking at the type of floor they use and more time considering blanking panels and addressing overlooked problems with containment solutions.

Raised flooring isn’t the perfect cooling solution, but it certainly has a place in the modern data center. With more tools than ever allowing administrators to make the most out of the infrastructure they have, raised floors could be around for longer than anyone expects.

June 1, 2011  11:44 AM

The true costs of data center downtime

Nick Martin

IT professionals know that unplanned data center downtime is expensive, with the true costs often far exceeding the price of replacing faulty equipment. The time and effort staff spend remediating the problem is difficult to calculate. Worse yet, extended downtime can hurt a company’s reputation and lead to lost business opportunities with financial impacts that are nearly impossible to quantify. While the cost of downtime varies with the severity of the event, and even with the type of business experiencing the outage, a study by Emerson Network Power and the Ponemon Institute does its best to give IT professionals and corporate executives a peek into the true costs of data center downtime.


The study found that the average data center downtime event costs $505,500, with the average incident lasting 90 minutes. That number is staggering. In fact, in the heat of an outage, it’s probably best not to spend too much time dwelling on the fact that every minute the data center remains down, a company is effectively losing $5,600. The study, which was published earlier this year, took statistics from 41 U.S. data centers in a range of industries, including financial institutions, healthcare companies and colocation providers. 
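
For a quick back-of-the-envelope check, the per-minute figure follows directly from the averages the study reports; the short sketch below uses only those two numbers.

```python
# Back-of-the-envelope check on the Ponemon/Emerson averages: a $505,500
# average outage cost spread over a 90-minute average incident works out
# to roughly $5,600 per minute of downtime.

avg_outage_cost = 505_500   # average cost per downtime event (USD)
avg_duration_min = 90       # average incident length (minutes)

cost_per_minute = avg_outage_cost / avg_duration_min
print(f"${cost_per_minute:,.0f} per minute")  # -> $5,617 per minute
```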


The survey reinforces what many IT pros likely already knew – that the majority of downtime costs don’t come from simply replacing equipment. About 62% of downtime costs reported in the study were attributed to indirect sources, such as reduced end-user productivity and lost business opportunities.

Uninterruptible power supply (UPS) system failure was the leading root cause of downtime, accounting for 29% of the outages recorded in the study. An additional 20% of the outages were related to inadequate cooling systems. Were these IT departments careless in building redundant power systems? Did they ignore the cooling capacity of their facility? Or were they challenged by growing computing needs while also being constrained by tightening IT budgets?


It is easy to propose cuts to the utilities line of a large IT budget. It is a far different matter to follow through on those budget reductions without adversely affecting downtime prevention and preparedness. A portion of the survey that gauged employees’ thoughts on downtime preparedness said it best. While 75% of senior-level employees felt their companies’ senior management fully supports efforts to prevent and manage unplanned outages, only 31% of supervisor-level employees agreed.

May 19, 2011  7:54 AM

Soft is hard, hard is easy

Stephen Bigelow


So you want to be a CTO or CIO someday? Have you ever wondered what it takes to climb that IT career ladder successfully? Maybe I have an answer for you.

I recently listened to a panel of CEOs talk about the work force of the future, and among the various bits of wisdom (or just plain old wishful thinking), one panelist remarked that the mantra of a successful CTO is “soft is hard, hard is easy.”

No, this isn’t some arcane riddle that you can waste time trying to figure out between provisioning some more LUNs or patching another batch of servers. It means that successful, upward-moving IT professionals need to master some aspects of the organization that just aren’t taught in any IT curriculum.

There’s no question that an IT career takes knowledge – a LOT of knowledge. Schooling only gets your foot in the door, and the learning never stops as new technologies and products are assimilated into the business. Then there’s the full schedule of challenging projects that wedge an IT pro firmly between a tight budget and a tight timeline. It can seem like you’re caught between the devil and the deep blue sea.

But in the overall scheme of things, answers to all of those technological challenges are well within reach. There are tangible solutions to all of the hardware and application problems that you face as a technician, an administrator or a manager. It’s hard, but it’s also the easiest part of your career — hard is easy.

You see, it’s the array of other subtle “softer” challenges that can stunt your climb up the corporate ladder. Success usually comes down to a mastery of people, processes and politics.

Managing people can be more demanding than any new technology deployment. This is particularly true when it comes to managing today’s younger workers — a demographic whose proclivity for learning is matched only by their fierce disdain for traditional management structures. Identifying, developing and retaining those quality employees are not simple tricks.

Processes play an enormous role in business operations, and the ability to develop and refine processes while maintaining the support of important stakeholders within the organization can make or break a business.

Of course, the endless struggles and agendas of corporate politics remain a harsh reality – you’re not the only one trying to make it to the top.

Skills with people, processes and politics are all “soft skills,” often existing in tandem with professional capital, like your reputation and your credibility. These are also the most difficult skills for IT folks to master, since they are noted for their organized, systematic and logical minds — soft is hard.

With economic conditions slowly improving, and companies looking to increase their investment in technology, IT professionals may soon see more opportunities for advancement. If you have your eye on a corner office, take stock of your skillset and remember that it might not be the hard stuff that’s holding you back. Soft is hard, hard is easy.

May 19, 2011  7:48 AM

Questions left unanswered

Stephen Bigelow

I always welcome a fresh perspective, especially when it comes from professionals who know more about a topic than I do. But sometimes a new perspective can be challenging, and it can raise uncomfortable questions that might be painful to consider.

So to get my latest dose of perspective, I spent my Wednesday in Cambridge attending the MIT Sloan CIO Symposium, listening as panels of CEOs shared their thoughts and visions of technology with the CTOs, CIOs and other technology professionals in attendance.

Many of the discussions carried common themes, often touching on migration to the cloud and the shifts needed to embrace a more mobile (in fact a more global) workforce. IT figured prominently in those discussions, and CEOs extolled the virtues of agility, efficiency, cost savings and service quality improvements that they expected. Ultimately, the perception of IT must be steered away from implementing systems and supporting applications. Instead, IT should focus on providing business services to employees and users faster, easier and cheaper.

At first blush, it makes a lot of sense. IT won’t serve a business well if it retains its traditional silos and separations. The move to cloud technologies requires a shift in attitudes, along with new IT skillsets and roles, such as “cloud architects.” Cloud migration affects everything from networks to servers to storage to applications to users. And there are also numerous problems with cloud technologies that still need to be overcome, including concerns about security, performance, regulatory compliance and privacy, and ways to manage an unfathomable ocean of unstructured data.

When I started thinking about the long-term implications for IT in the enterprise, I realized that some important questions were not addressed. With all of these changes – now and on the horizon – how can IT and its professionals preserve their relevance in a modern business environment? Can IT keep a place at the table, helping shape and direct the future success of the enterprise, or is IT relegated to a fate as a line item destined for perpetual budget-cutting, along with printing costs and corporate travel expenses?

IT and technology practitioners do have a meaningful role in tomorrow’s enterprise, but it’s not the traditional hardware/software deployment and support paradigm that we see today. Tomorrow’s IT must prove its value to the business by employing metrics. It might be a matter of measuring business growth attributable to IT, gauging improvements in customer/user satisfaction or some other yardstick.

But one of the most important ways that IT can remain relevant is by identifying new technologies that can enhance the business, performing the intensive reviews and due diligence needed to evaluate the suitability of new technology, and then shepherding the organization through the adoption and development of that new technology. Just consider how platforms like netbooks and smartphones are changing the way businesses operate today.

Okay, even the savviest CEO can’t precisely define the role and influence of tomorrow’s IT department. But one thing’s for sure — IT professionals won’t be sweating over adding disks to storage arrays or upgrading memory modules.

May 5, 2011  4:55 PM

Did Oracle go too far by dropping Itanium?

Alex Barrett

Oracle’s shenanigans in de-supporting Intel Itanium, and by extension HP-UX on Integrity, certainly haven’t earned it any friends, but experts are divided on whether the database giant has gone too far.

According to a recent survey of Oracle customers by Gabriel Consulting Group, 67% said the decision to desupport Itanium changed their opinion of the database giant for the worse, compared with 27% that said their opinion was unchanged or not negatively impacted. In fact, Oracle may be even more unpopular than those numbers suggest, since many of the 27% clarified that they had thought badly of Oracle to begin with.

Oracle has never been popular with its customers, but this time, the Gabriel Consulting Group survey suggests playing hardball with HP might be the last straw.

“It’s hard to say for certain, but my sense is that [Oracle’s] actions have gotten users’ attention, and are making them think,” said Dan Olds, Gabriel Consulting Group principal. “It’s not necessarily the straw that broke the camel’s back, but they’re looking at what’s out there.”

In the software space, the survey found deep pockets of frustration with the company, in particular among Oracle database customers, 39% of which reported they were migrating or actively evaluating other platforms.

Users of Oracle’s operating systems (Solaris, Solaris x64 and Oracle Linux) are even more likely to jump ship: 51% of respondents said they were actively looking at alternatives (25%) or definitely migrating (26%).

But as much animosity as Oracle has generated, it will likely be none the worse for wear – and will probably come out ahead, said Richard Fichera, principal analyst at Forrester Research.

“These are the actions of a company that thinks they can get away with it,” he said. In fact, Fichera said he wouldn’t be surprised if Oracle raised prices further on competitive platforms, to make the cost of its hardware relatively more attractive.

“This was a tough business move, but everyone in this business is pragmatic,” he added. “It’s tempting to say that everyone is going to punish them, but what is the cost of unraveling from Oracle? Is it worth $500,000 plus added risk? Probably not.”

April 6, 2011  10:11 AM

Dell experiments with mushrooms

Alex Barrett

We always knew that Michael Dell was a fungi, but caps off to him and his crew for adopting new eco-friendly packaging based on….mushrooms.

As part of Dell’s sustainable packaging strategy, the company will start shipping some of its equipment in mushroom cushioning, wrote Oliver Campbell, Dell’s procurement director, on the Direct2Dell blog.

Developed by the National Science Foundation, the US EPA and the USDA, mushroom cushioning is a unique packaging technology, Dell explained.

With it, “waste product like cotton hulls are placed in a mold which is then inoculated with mushroom spawn. Our cushions take 5 – 10 days to grow as the spawn, which become the root structure – or by the scientific name, mycelium – of the mushroom. All the energy needed to form the cushion is supplied by the carbohydrates and sugars in the ag waste. There’s no need for energy based on carbon or nuclear fuels.”

The new mushroom-based packaging is but Dell’s latest advance in earth-friendly packaging. The company already uses bamboo packaging, in which it ships select laptops and tablets. But mushroom-based packaging is better suited to heavier products like servers and desktops, the company said.

What will be the first product to ship swaddled in fungus? The PowerEdge R710 server. The company claimed it has tested the packaging extensively to confirm that it can keep shipments safe, “and it passed like a champ.”

March 25, 2011  11:53 AM

Big Data, big data center changes?

Alex Barrett

Big Data will transform your data center. Then again, maybe it won’t.

I went to GigaOm’s Structure Big Data show in New York City this week to see what’s new in the world of data analysis, and there I found that forward-looking data analysts are using open source software and commodity scale-out x86 servers to solve problems usually reserved for Big Iron.

Indeed, I was struck not only by the ubiquity of open source, x86, scale-out clusters and the like, but also by the marked absence of everything else: Oracle Exadata, the big Unix vendors, etc. At least as far as the avant-garde is concerned, those are the tools of the past.

More to the point, some of the presenters managed to make having any servers at all seem somewhat quaint. In a talk enticingly titled “Supercomputing on a minimum wage,” Pete Warden, founder of OpenHeatMap and a former Apple programmer, told a crowd of well-heeled venture capitalists how he crawled and derived meaningful analyses of 210 million public Facebook profiles (for which he was subsequently sued) for about $100 in Amazon EC2 machine time.

“Did you know you can hire 100 servers from Amazon for $10 an hour?” he told the crowd.

Let me repeat that: a 100-node supercomputer for $10 an hour.
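
For a rough sense of what that buys, here is a small sketch of the arithmetic, assuming the roughly $0.10-per-instance-hour pricing implied by Warden’s “100 servers for $10 an hour” remark; actual EC2 rates vary by instance type and have changed since 2011.

```python
# Rough sketch of the "supercomputing on a minimum wage" math. The per-hour
# price is an assumption derived from the quoted "100 servers for $10 an hour",
# not a current EC2 rate card.

price_per_instance_hour = 0.10   # USD, implied by the quote above
cluster_size = 100               # instances rented at once
budget = 100.00                  # roughly what Warden reportedly spent

cluster_hours = budget / (price_per_instance_hour * cluster_size)
instance_hours = cluster_hours * cluster_size

print(f"{cluster_hours:.0f} hours of a {cluster_size}-node cluster")  # -> 10 hours
print(f"{instance_hours:.0f} total instance-hours")                   # -> 1000 instance-hours
```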

What does this mean for data center managers? On the one hand, it means that data analysts and business intelligence folks are definitely influencing the architecture and design of future data centers. On the other, unless you work for a large cloud provider, the likelihood that you’ll have to design and manage those systems is relatively small.

March 24, 2011  3:24 PM

Test your backups before it’s too late

Nicole Harding


Check out this advice from expert Brien M. Posey on the importance of server backup.




Having reliable backups is critical to your organization’s health regardless of what type of fault-tolerant infrastructure you have in place. Fault-tolerant components such as failover clusters and redundant network hardware only protect you against hardware failure. They do nothing to guard against data loss. For example, if a user accidentally deletes a file, then your fault-tolerant hardware won’t do anything to help you get the file back.


Of course, just backing up your servers isn’t enough. You have to test the backups to make sure they work. I once saw a major organization lose all of the data from one of its Microsoft Exchange Servers because it had been backing up the server incorrectly and didn’t know it. Had the company tested its backups, it would have found the problem before it was too late to do anything about it.


It’s one thing to say that backups need to be tested, but quite another to actually test them. Create an isolated lab environment that you can use for backup testing. You don’t need to worry about using high-end servers or implementing any kind of redundancy. You just need to be able to restore your backups to some virtual machines to make sure they work.


At a minimum, your lab will most likely require a domain controller and DNS and DHCP servers that mimic those used in your production environment. That’s because so many Tier 1 applications depend on Active Directory. For example, suppose you want to verify the integrity of your Exchange Server backups. You couldn’t just restore Exchange to an isolated server and expect it to work. The mailbox databases would never mount because the required Active Directory infrastructure would not be present.

When I test my own backups, I like to start by restoring the most recent backup of my PDC emulator (which is also a DNS server) to a virtual server. This not only provides the infrastructure required to test other backups, but also gives me an easy way of verifying the integrity of a domain controller backup.
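
As one possible sanity check after that first restore, here is a minimal sketch that simply confirms the restored domain controller in the isolated lab is answering on DNS and the usual Active Directory ports. The lab address is a hypothetical placeholder, and passing these checks is no substitute for actually mounting databases and restoring application backups on top of the restored DC.

```python
# Minimal post-restore sanity check for the isolated backup-test lab.
# The address below is a hypothetical lab IP for the restored PDC emulator;
# this only verifies the DC is reachable on DNS and common AD ports.

import socket

RESTORED_DC = "192.168.56.10"   # hypothetical lab address of the restored domain controller
AD_PORTS = {53: "DNS", 88: "Kerberos", 389: "LDAP", 445: "SMB"}

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for port, service in AD_PORTS.items():
    status = "OK" if port_open(RESTORED_DC, port) else "FAILED"
    print(f"{service:8} (tcp/{port}): {status}")
```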
