Data Center Apparatus


December 5, 2011  3:55 PM

Data center infrastructure management on the move

Alex Barrett

With the Gartner Data Center Conference in full swing this week in Las Vegas, several players in the data center infrastructure management (DCIM) space took the opportunity to announce new versions of their wares. Here’s a roundup of recent DCIM news.

IO offers “data center OS” as stand-alone software
IO has released the DCIM software it uses in its proprietary modular data centers as a stand-alone product. The IO OS “data center operating system” gathers mechanical, power, cooling and electrical usage data in real time, maintaining that data and integrating it with ticketing systems and audit trail processes. IO OS can display data center assets from a number of perspectives – physical, logical and infrastructure – and includes views of supporting systems such as generators, switchgear, paralleling systems and chillers. With this information, IO OS gives data center operators a single pane of glass from which to establish and maintain quality of service while optimizing data center utilization and operating costs, the company said.

Sentilla adds business planning and analytics functions
At Sentilla Corp., the holy grail of DCIM is not so much collecting information about the data center as doing something with it. As such, the latest Sentilla 4.0 includes financial and infrastructure planning modules, plus new asset analysis capabilities. The new version can also manage multiple data centers from one interface, and its asset database features improved importing and discovery capabilities. Sentilla continues to add modeling information for systems from Dell, HP, IBM, NetApp and Sun/Oracle to its database, and offers improved support for facility infrastructure from Eaton, Emerson, APC and Schneider Electric, plus management software from BMC and HP.

iTRACS ties in with Intel Data Center Manager
iTRACS, another DCIM player, is working with Intel to integrate Intel Data Center Manager software with its Converged Physical Infrastructure Management (CPIM) suite, improving its collection, management, and analysis of CPU power, temperature, and environmental information. As a result, iTRACS will be better able to perform capacity planning, improve rack densities, identify inefficient IT assets, pinpoint cooling issues, optimize IT equipment lifecycle and prevent outages.

Let us know what you think about the story; email Alex Barrett, Executive Editor, at abarrett@techtarget.com, or follow @aebarrett on Twitter.

October 19, 2011  3:56 PM

Risks of data center maturity

Stephen Bigelow

Of all the mandates faced by an enterprise data center, the mandate of “maturity” is perhaps the most treacherous and self-defeating. The early life cycle of a data center is typically focused on functional stability: making the investment in the operational basics needed to keep the shop open and deliver essential IT services. These so-called operational basics include infrastructure, such as servers, storage and networks; security planning, such as Active Directory configuration and malware protection; and core application support, such as Exchange Server.

But data centers mature and grow over time. It’s not enough for IT managers to deploy Web and email servers, check for alerts and sip their afternoon coffee. Every data center must “mature” to some extent so that it becomes a business partner or collaborator rather than just a cost center.

The problem is that the path to data center maturity is clouded by technologies, strategies and initiatives that wind up getting in the way of everyday things that data centers and IT staff do well. As businesses refocus their attention on things like enterprise architecture, business intelligence and project management, there is a disturbing tendency for the business (and IT) to lose focus on the underlying infrastructure and operational aspects that got IT a seat at the executive table in the first place.

The result is that IT maturity unexpectedly sputters and stops–usually at a point just before it emerges as a real differentiator for the business. Consider the high-end systems management framework that takes 12 months to configure before it’s able to provide any useful insight, only to be obsolete and worthless three months later because it costs too much and takes too long for the IT team to keep the framework updated. Or, the new enterprise architecture project that bogs down or stalls because the necessary infrastructure documentation is lacking. Sound familiar?

Yes, every tie-wearing, desk-wielding CIO longs to reach the mountaintop–the day when their IT department can become some sort of mystical “transformational force” in the business. And yes, this type of lofty goal will demand a substantial level of IT maturity (and a substantial financial investment to match). But it’s unwise for any business to pursue a maturity path strictly for its own sake. It’s more important for IT to provide value within the roles that the business wants and needs–and stay focused on the basics that will facilitate future growth when a meaningful opportunity to mature finally arrives.


October 18, 2011  1:38 AM

Cisco Fabric Extender finally available for HP BladeSystem

Alex Barrett

HP c-Class BladeSystem shops will finally be able to connect their systems to a Cisco Unified Fabric, using the new Cisco Nexus B22 Fabric Extender (FEX) for HP announced on Friday.

This is not a corner case. Despite stiff competition, HP BladeSystem still leads the market for blade servers, just as Cisco leads the enterprise 10Gb Ethernet switching market with its Nexus 5000 and 7000 switches. But competition between the two vendors over the past couple of years has shut these two worlds off from one another, as Cisco hoped to push its network customers to its Unified Computing System (UCS) and HP tried to lure its BladeSystem users to its 3Com gear.

Now, the two companies appear to have reached a détente. With the Nexus B22 FEX in place, HP BladeSystem users can consolidate multiple 1Gb Ethernet links onto a single 10Gb link, reducing cabling, NICs, power consumption and operating expenses. The FEX also enables support for Fibre Channel over Ethernet (FCoE) and provides a single point of management through the upstream Nexus 5000 and 7000 series switches.
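
For a rough sense of what that kind of consolidation can mean, here is a back-of-the-envelope sketch in Python. The blade count, per-blade NIC count and uplink count are illustrative assumptions, not figures from HP or Cisco.

```python
# Hypothetical enclosure, used purely for illustration.
blades_per_enclosure = 16
legacy_1gbe_links_per_blade = 6    # assumed mix of data and management NICs
fex_10gbe_uplinks = 8              # assumed 10Gb uplinks from the fabric extender

legacy_cables = blades_per_enclosure * legacy_1gbe_links_per_blade
consolidated_cables = fex_10gbe_uplinks

print(f"Legacy 1GbE cabling:  {legacy_cables} cables leaving the enclosure")
print(f"With 10GbE uplinks:   {consolidated_cables} cables leaving the enclosure")
print(f"Cabling reduction:    {1 - consolidated_cables / legacy_cables:.0%}")
```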

The Nexus B22 FEX is currently available from HP and its partners for a starting price of $9,799.


October 17, 2011  12:40 PM

IT innovation by rethinking applications

Stephen Bigelow

As a technologist, it’s easy to focus on technology–the servers, storage, networks and other hardware that make IT work. The problem that technologists face is that technology simply isn’t enough. The drive to manage an ever-increasing number of systems with fewer staff and tighter budgets has a point of diminishing returns.

Vendors tout tools like systems management and automation as vehicles to handle this burden, and it’s the right end game. But, unfortunately, it’s not enough by itself. Systems management, automation and other IT tools are just too complicated. Just consider how long it took your organization to select, deploy, configure and use that last software investment productively. It’s not uncommon for a software framework to take six to 12 months before an organization can use it productively. And even then, inevitable changes and reconfiguration (even patches and updates) can prove disruptive, leaving the IT organization vulnerable.

There are many innovations on the horizon for IT, but few hold as much promise as “people-centric software design.” IT management software designers need to take a page from the mobile application industry and focus their efforts on context-sensitive computing. I’m not talking about a fancy new user interface. I mean software designers must rethink the way they approach design and create a new generation of management tools with the high-level intelligence to multiply an IT administrator’s efficiency.

We can already see this direction in commercial applications. Take a picture of a gadget on a store shelf with your mobile phone and you can quickly see the specs and reviews for that device, then (based on your inquiries) receive coupons or links to other devices. There are countless other examples where application designers are developing software that makes decisions based on factors like location, user activity patterns or search habits, and even gathers information from social media sources.

Consider an administrator responsible for 1,000 servers across three data centers. A new reporting application might look at the administrator’s location and present status information on the systems in the closest or current facility. That administrator might run performance analyses much of the time, so the new app might also present performance data on those local servers, identifying poor performers and suggesting potential fixes without being asked. Quick links to server manufacturers’ forums or social media outlets might then allow the administrator to share concerns or ask questions of the user community.
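
To make the idea concrete, here is a minimal sketch in Python of how such a context-aware reporting tool might behave. The inventory, site codes and busy threshold are entirely hypothetical; this illustrates the concept, not any shipping product.

```python
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    site: str        # data center the server lives in
    cpu_util: float  # recent CPU utilization, 0.0 to 1.0

# Hypothetical inventory spread across three sites.
INVENTORY = [
    Server("web-001", "nyc", 0.42),
    Server("db-007",  "nyc", 0.93),
    Server("app-104", "chi", 0.38),
    Server("web-220", "dal", 0.88),
]

def local_status(admin_site: str, busy_threshold: float = 0.85):
    """List servers in the administrator's current facility and flag
    poor performers without being asked -- the context-aware part.
    A real tool would derive admin_site from badge-in or GPS data."""
    local = [s for s in INVENTORY if s.site == admin_site]
    flagged = [s for s in local if s.cpu_util >= busy_threshold]
    return local, flagged

if __name__ == "__main__":
    local, flagged = local_status("nyc")
    print(f"{len(local)} servers at this site, {len(flagged)} need attention")
    for s in flagged:
        print(f"  {s.name}: CPU at {s.cpu_util:.0%} -- investigate")
```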

Now, I’m certainly not suggesting that IT administrators start managing their global server farms through Facebook. But IT administrators must manage a spiraling amount of infrastructure using a greater diversity of devices. Software makers absolutely must re-imagine their IT management products to simplify them, make them smarter and allow busy administrators to handle more information across a greater array of mobile and tablet devices.


September 28, 2011  1:07 PM

Avoiding disaster with IT vendors

Stephen Bigelow

The biggest impediment to successful IT operations is often the IT product vendors themselves. Modern hardware and software vendors seem to go out of their way to confuse buyers with outlandish claims and lock them in with draconian licensing, rather than assisting them with tangible guidance and supporting realistic business needs in a heterogeneous setting.

I can’t count the number of times that I’ve watched vendors–often large, world-class vendors–tout their wares as a universal solution to every business problem. “Just sign here,” they say with a smile. “Our new widget will make all your problems go away. We promise.” And IT, often overworked, understaffed, and fending off pressure from the C level, foregoes some portion of due diligence, assuaged by vendor claims of interoperability and support.

And after that sparkling new product is deployed, you find that it doesn’t work the way that the vendor claims, at least, not in your environment and not without additional capital-intensive upgrades. Assuming that you manage to discuss the matter with vendor support, the onus usually winds up on you to identify the root cause. Wasted time, wasted money, frustration…sound familiar?

The reality is that the business role of IT is more important than ever, but most IT departments no longer have the staff or funding to wrestle with vendors. IT departments also cannot deal with vendors’ deployment catastrophes after the fact. IT and C-level executives simply cannot afford to risk the server farm on a vendor’s promises.

There are two crucial messages here.

First, (pay attention all you C-levels out there) IT is not about “firefighting.” The true value of IT is in its ability to identify, integrate and manage technologies that allow your business to function and generate revenue. So for the love of God, step off and let IT do its job. IT is always busy, and it may take time to evaluate the best solutions for your business problems, but IT can find the best solutions for you if you support them in that goal. Bring IT into those executive planning meetings and solicit IT feedback and input on new company initiatives. Edicts from the top almost never translate well into a data center. C-level executives that play golf with a vendor and push their new widget on IT the next day are undermining IT efforts–and jeopardizing the entire business.

Second, IT administrators will also need to show more backbone. Folding under pressure to purchase products is a guarantee of sleepless nights and long weekends, usually spent listening to Muzak versions of “Muskrat Love” while waiting on hold for vendor support. Push back on vendors and talk business objectively with your own executives. Get those new products on extended loan and tackle the due diligence testing in lab and limited production environments. Compare similar products, see what works and what doesn’t, and put those vendors on the spot. Make the time up front–you won’t regret it.

Avoid the path of least resistance. It’s the only way that IT and the business will be successful.


September 19, 2011  6:38 AM

The importance of innovation in the data center

Nicole Harding

The constant evolution of data center technology makes it difficult for many business owners and IT managers to stay up to date with the latest industry trends. However, upgrading your systems and equipment is a valuable endeavor, and periodic technology refresh cycles cannot be avoided. 


Why staying current is essential

Continuing to use outdated equipment and software not only makes it harder to maintain a thriving business; it can also expose you to viruses and other security threats, lead to employee frustration and reduce overall productivity. Deploying systems with sufficient processing power, ample memory, appropriate backup devices and current software will pay dividends.

“In the IT world, being lax on up-to-date information creates a stagnant environment filled with inefficiencies. The rule is simple: Complacency in IT is extremely detrimental to both careers and data centers,” said Bill Kleyman, virtualization architect at MTM Technologies Inc., based in Stamford, Conn.

Although staying current with the latest trends is crucial, it isn’t necessary to deploy the most recent data center products on the market as they are released.  

“IT managers need to maintain full awareness of what the industry is offering so they can make prudent decisions when existing equipment limitations start to stifle business growth,” said Robert McFarlane, principal in charge of data center design at Shen Milsom & Wilke, based in New York.

It’s also important to remember that everything is interrelated—hardware, software, space, power and cooling. “The industry is rife with stories about IT managers who bought racks of new blade servers to save money, only to find that it cost several times their projected savings to upgrade the power and cooling systems to support them. Everything must be considered in moving forward, which requires up-to-date knowledge,” McFarlane said.

Balancing innovation and IT budget

But just wanting the latest technologies may not be enough. You may find that the current economic climate is a deterrent to new deployment plans. However, with proper planning, there are numerous ways that an existing budget can work in your favor, producing immediate and long-term savings.

“There are always great deals on servers and networking products. Smart purchases focusing on long-term savings can include buying things like a Cisco Unified Computing System blade environment to consolidate and virtualize an environment,” Kleyman said. “By removing a hardware footprint that includes old servers hosting legacy applications, you create savings in both hardware maintenance and energy consumption.”

Another important consideration that can affect budget is a product’s flexibility. Buying “cheap” products will only lead to premature replacements, because they won’t satisfy all of your business and system requirements, McFarlane added. “Flexibility is the watchword in today’s data centers, and flexibility is what will enable an operation to keep up with demands, while also staying within budget.”

Selecting the right products

Choosing the right products for your data center is crucial. Deploying the wrong products can quickly lead to lost revenue and decreased productivity, and might even harm your business. It’s also important to remember that no one product can fulfill all of your infrastructure needs.

“In most cases today, there is more than one way to address a need, so the ‘right’ decision may hinge as much on things like available service or existing familiarity as on technological superiority,” McFarlane said. “But the selected technology, whether for computing or infrastructure, still has to be appropriate for the job.”

When choosing a product, don’t choose it just because it’s the same make or type as what already exists in the data center, because it comes with a friendly salesman, or because it appears to have the lowest initial cost, McFarlane added.

“A company must conduct thorough research into a product before going with it. This includes, but certainly is not limited to, deep-dive, proof-of-concept projects, presales training, return on investment analysis, cost and benefit breakdown, integration methodology, and so on,” Kleyman agreed.

SearchDataCenter.com’s Product of the Year awards

Even grizzled data center gurus need help identifying the best products for their next technology refresh, and SearchDataCenter.com is here to help. Our annual Products of the Year (POY) awards evaluate and score countless new products, and select gold, silver and bronze winners in infrastructure, computing and systems management categories.

We expect the 2011 competition to be challenging. To win, a product must exceed expectations across judging criteria, such as performance, innovation, integration, functionality, user feedback and more. Each nomination will also be scored by a panel of independent industry judges, and we will announce the top products in January 2012.

Do you have a product that should be nominated for the 2011 POY? Check out the judging criteria, and then submit your online nomination form here.

Take a look at our SearchDataCenter.com Products of the Year 2010 award winners.


September 7, 2011  2:49 PM

RISC servers for data center efficiency

Nicole Harding

If you think RISC servers are taking a back seat to specialized mainframe transaction processors and inexpensive, general-purpose x86 processors, think again.

RISC microprocessors handle a limited, specific set of instructions and use fewer transistors, making them cheaper to run, more energy efficient and attractive where performance per watt matters. The chips are most widely deployed in printers, mobile phones, video game consoles, hard drives and routers, but data centers are now paying greater attention to servers stuffed with Tilera chips and other RISC-style processors, as well as low-power x86 parts such as the Intel Atom.



July 22, 2011  11:49 AM

Data center dilemma: To build or box?

Nick Martin

Modular data centers have been around for a few years now, but they are much different from their early incarnations. Blackbox was the mysterious-sounding name that Sun Microsystems gave its first containerized offering in 2006. Today, you’re much more likely to see a polished marketing phrase, such as the HP EcoPOD, used to describe a company’s containerized data center product. Modular data centers have gone from a few racks packed into a corrugated shipping container to all-in-one, custom-built proprietary modules. Hewlett-Packard and others that have recently entered the modular data center market are betting that containerized data centers will become more mainstream–and there are good reasons why they might be right. Improved energy efficiency and better access for technicians servicing components in modular designs are catching the attention of companies that once gave modular data centers no more than a passing glance.

But increased interest in modular data centers is also being driven by capacity constraints. A recent Uptime Institute survey showed that 36% of data centers will run out of space or cooling capacity in the next year. Unfortunately, data center facilities have proven over the years to be largely incapable of keeping up with changing technology and growing computing needs. Higher densities are stressing the cooling infrastructure of many data centers, and there’s no guarantee that improvements made today will be enough to support future needs. By the time a state-of-the-art data center is designed, built and brought online, it will likely already have fallen behind the rapid pace of changing technology and design standards. This makes it harder to justify spending many millions of dollars on a new data center build, especially when manufacturers of containerized data centers claim their products are more energy efficient than a custom-built facility. A modular data center can add capacity to an existing environment in a matter of weeks, instead of the years it would take to design and build an addition or a new facility.

Today, many companies’ views on containerized data centers can be compared to public school officials’ perceptions of modular classrooms: shortsighted stopgap measures that waste money compared with new builds. The difference is that the basic needs of students will remain virtually unchanged for the next 10 years, while the cooling and power needs of servers will likely be much different a decade from now.

However, a containerized approach may not be right for everyone. Even though a container is easily portable, data center owners can’t just drop one in a vacant parking lot and expect it to meet their needs and remain secure. Support can also be a problem for companies whose IT staff is unfamiliar with the computing platform offered by the container vendor.

In the near future, companies such as HP can’t expect to solve every problem with containerized data centers or erase the industry perception that they are short-term solutions, but they can try to soften the prejudice against them. Branding a container as energy efficient gives it appeal that many traditional data centers don’t have. As more companies look for ways to meet growing demands, the energy efficiency and simplicity of containerized data centers will look more appealing. The uncomfortable truth is that we don’t know what the needs of a data center will be 10 to 20 years from now, which makes the flexibility and scalability of containers an attractive option for many companies. When a modular data center needs a refresh, it can simply be replaced at the end of its lease.

Already, some industries are beginning to look favorably at containers. Internet-based companies that sometimes see explosive growth in computing needs can turn to modular data centers to keep up with rapidly changing capacity needs. Companies like Amazon are using containers to support cloud computing platforms.

Modular design will undoubtedly have a place in future data centers. The question is whether future developments in containerized design and technology will emerge to address the real and perceived disadvantages that are holding it back today.


June 10, 2011  9:34 AM

Raised floor resiliency

Nick Martin

Industry experts long ago predicted the demise of raised floor cooling. Today, there are viable cooling alternatives, but raised floor cooling continues to keep its hold on the data center. Just as we watched skeptically while some unsuccessfully tried to prophesy the end of the world, we’re still waiting to see raised floors go out of style.

To be fair, data center experts who suggested raised floors would not be the cooling solution of tomorrow had much better information to back up their prediction than the man on the street corner waving an apocalypse sign. Increasing computing needs, denser racks and an increased focus on energy efficiency all seemed to signal the end of raised flooring.

The problem with raised floors is that directing cool air beneath the floor isn’t always enough to meet the demands of today’s dense server configurations. Cutting more openings in the floor simply reduces underfloor air pressure, and adding more cooling capacity may not be a desirable (or cost-effective) answer. Instead, as point cooling and other containment tactics gain acceptance, they are helping raised floor cooling stay relevant in the data center.

There are now simple solutions for many of the inherent problems with raised floor cooling. Directional grates can angle chilled air at equipment to improve cooling efficiency. One complaint many data center managers cite with raised floor cooling is the inability to adjust cooling to match changing power use and hotspots. It is simply impractical to add or move vented tiles every time cooling needs change in the data center. However, there are products that attempt to address dynamic power use and hotspots, which were once the downfall of raised floor cooling. Electronically controlled dampers, such as Tate Access Floors Inc.’s SmartAire, can limit the movement of chilled air based on inlet air temperatures to make sure the chilled air isn’t “wasted” on equipment that doesn’t need it. Although they aren’t the ideal solution, fans that can throttle up based on changing needs can help cool hotspots. When implemented correctly, these solutions can go a long way toward improving energy efficiency – poor efficiency has long been seen as one of the chief drawbacks of raised floor cooling.
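
As a rough illustration of how temperature-driven damper control works, here is a minimal sketch in Python. The setpoints, damper positions and linear control curve are hypothetical choices for the example; real products such as SmartAire implement their own control logic.

```python
def damper_opening(inlet_temp_c: float,
                   low_setpoint: float = 18.0,
                   high_setpoint: float = 27.0) -> float:
    """Return a damper opening between 0.0 (closed) and 1.0 (fully open),
    scaled linearly between two illustrative inlet-temperature setpoints.
    Cool inlets get only a trickle of air so chilled air isn't wasted;
    hot inlets get the full supply."""
    if inlet_temp_c <= low_setpoint:
        return 0.1  # keep a small bleed of air even when the inlet is cool
    if inlet_temp_c >= high_setpoint:
        return 1.0
    span = high_setpoint - low_setpoint
    return 0.1 + 0.9 * (inlet_temp_c - low_setpoint) / span

# Example: as a rack's inlet temperature climbs with load, the floor
# tile's damper opens progressively in response.
for temp in (17.0, 22.0, 26.0, 29.0):
    print(f"inlet {temp:4.1f} C -> damper {damper_opening(temp):.0%} open")
```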

In most cases, it’s more important to pay attention to what is happening to chilled air between the floor and ceiling. Improvements to raised floor cooling infrastructure show that administrators should spend less time looking at the type of floor they use and more time considering blanking panels and addressing overlooked problems with containment solutions.

Raised flooring isn’t the perfect cooling solution, but it certainly has a place in the modern data center. With more tools than ever allowing administrators to make the most out of the infrastructure they have, raised floors could be around for longer than anyone expects.


June 1, 2011  11:44 AM

The true costs of data center downtime

Nick Martin

IT professionals know that unplanned data center downtime is expensive, with the true costs associated with downtime often far exceeding the price of replacing faulty equipment. The time and effort spent by staff to remediate the problem is often difficult to calculate. Worse yet, extended downtime can hurt a company’s reputation and lead to lost business opportunities with financial impacts that are nearly impossible to quantify. While the cost of downtime will vary by the severity of the event, and even with the type of business experiencing an outage, a study on understanding the costs of data center downtime by Emerson Network Power and the Ponemon Institute does its best to give IT professionals and corporate executives a peek into the true costs associated with data center downtime.

The study found that the average data center downtime event costs $505,500, with the average incident lasting 90 minutes. That number is staggering. In fact, in the heat of an outage, it’s probably best not to spend too much time dwelling on the fact that every minute the data center remains down, a company is effectively losing $5,600. The study, which was published earlier this year, took statistics from 41 U.S. data centers in a range of industries, including financial institutions, healthcare companies and colocation providers. 
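
As a quick sanity check, the per-minute figure follows directly from the study’s averages quoted above; this trivial Python snippet just makes the arithmetic explicit.

```python
# Averages quoted from the Emerson Network Power / Ponemon Institute study.
avg_cost_per_incident = 505_500  # dollars
avg_incident_minutes = 90

cost_per_minute = avg_cost_per_incident / avg_incident_minutes
print(f"${cost_per_minute:,.0f} per minute of downtime")  # ~$5,617, roughly the $5,600 cited
```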

The survey reinforces what many IT pros likely already knew – that the majority of downtime costs don’t come from simply replacing equipment. About 62% of downtime costs reported in the study were attributed to indirect sources, such as reduced end-user productivity and lost business opportunities.

Uninterruptible power supply (UPS) system failure was the leading root cause of downtime, accounting for 29% of the outages recorded in the study. An additional 20% of the outages were related to inadequate cooling systems. Were these IT departments careless in building redundant power systems? Did they ignore the cooling capacity of their facilities? Or were they challenged by growing computing needs while also being constrained by tightening IT budgets?

It is easy to propose cuts to the utilities line of a large IT budget. It is a far different matter to follow through on those budget reductions without adversely affecting downtime prevention and preparedness. A portion of the survey that gauged employees’ thoughts on downtime preparedness said it best. While 75% of senior-level employees felt their companies’ senior management fully supports efforts to prevent and manage unplanned outages, only 31% of supervisor-level employees agreed.

