As a technologist, it’s easy to focus on technology–the servers, storage, networks and other hardware that make IT work. The problem that technologists face is that technology simply isn’t enough. The drive to manage an ever-increasing number of systems with fewer staff and tighter budgets has a point of diminishing returns.
Vendors tout tools like systems management and automation as vehicles to handle this burden, and it’s the right end game. But, unfortunately, it’s not enough by itself. Systems management, automation and other IT tools are just too complicated. Just consider how long it took your organization to select, deploy, configure and use that last software investment productively. It’s not uncommon for a software framework to take six to 12 months to reach productive use. And even then, inevitable changes and reconfiguration (even patches and updates) can prove disruptive, leaving the IT organization vulnerable.
There are many innovations on the horizon for IT, but few hold as much promise as “people-centric software design.” IT management software designers need to take a page from the mobile intelligent application industry and focus their efforts on context-sensitive computing. I’m not talking about a fancy new user interface. I mean software designers must rethink the way they approach design and create a new generation of management tools with the high-level intelligence to multiply an IT administrator’s efficiency.
We see this in the future of commercial applications. Take a picture of a gadget on a store shelf with your mobile phone, quickly see the specs and reviews for that device, and then (based on your inquiries) receive coupons or links to related devices. There are countless other examples of application designers developing software that makes decisions based on factors like location, user activity patterns or search habits, and even gathers information from social media sources.
Consider an administrator responsible for 1,000 servers across three data centers. A new reporting application might look at the administrator’s location and present status information on the systems in the closest or current facility. That administrator might run performance analyses much of the time, so the new app might also present performance data on those local servers, identifying poor performers and suggesting potential fixes without being asked. Quick links to server manufacturers’ forums or social media outlets might then allow the administrator to share concerns or ask questions of the user community.
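The behavior described above boils down to filtering and flagging based on context. A minimal sketch, assuming hypothetical names throughout (there is no real `Server` class or `status_report` API in any management product; this just illustrates the idea):

```python
# Hypothetical sketch of a context-aware status report: show only servers in
# the administrator's current facility and flag likely poor performers.
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    datacenter: str
    cpu_util: float  # rolling-average CPU utilization, 0.0 to 1.0

def status_report(servers, admin_location, slow_threshold=0.9):
    """Filter to the admin's current facility, then flag busy machines."""
    local = [s for s in servers if s.datacenter == admin_location]
    flagged = [s for s in local if s.cpu_util >= slow_threshold]
    return local, flagged

servers = [
    Server("web-01", "Boston", 0.95),
    Server("db-01", "Boston", 0.40),
    Server("web-02", "Austin", 0.97),
]

# An admin walking into the Boston facility sees only Boston machines,
# with the overloaded one called out without being asked.
local, flagged = status_report(servers, admin_location="Boston")
print([s.name for s in local])    # → ['web-01', 'db-01']
print([s.name for s in flagged])  # → ['web-01']
```

The point isn’t the three-line filter; it’s that location and usage patterns become inputs to the tool rather than something the administrator has to specify on every query.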
Now, I’m certainly not suggesting that IT administrators start managing their global server farms through Facebook. But IT administrators must manage a spiraling amount of infrastructure using a greater diversity of devices. Software makers absolutely must re-imagine their IT management products to simplify them, make them smarter and allow busy administrators to handle more information across a growing array of mobile and tablet devices.
The biggest impediment to successful IT operations is often IT product vendors. Modern IT hardware and software vendors seem to go out of their way to confuse buyers with outlandish claims and lock them in with draconian licensing, rather than assist them with tangible guidance and support realistic business needs in a heterogeneous setting.
I can’t count the number of times that I’ve watched vendors–often large, world-class vendors–tout their wares as a universal solution to every business problem. “Just sign here,” they say with a smile. “Our new widget will make all your problems go away. We promise.” And IT, often overworked, understaffed, and fending off pressure from the C level, foregoes some portion of due diligence, assuaged by vendor claims of interoperability and support.
And after that sparkling new product is deployed, you find that it doesn’t work the way that the vendor claims, at least, not in your environment and not without additional capital-intensive upgrades. Assuming that you manage to discuss the matter with vendor support, the onus usually winds up on you to identify the root cause. Wasted time, wasted money, frustration…sound familiar?
The reality is that the business role of IT is more important than ever, but most IT departments no longer have the staff or funding to wrestle with vendors. IT departments also cannot deal with vendors’ deployment catastrophes after the fact. IT and C-level executives simply cannot afford to risk the server farm on a vendor’s promises.
There are two crucial messages here.
First, (pay attention all you C-levels out there) IT is not about “firefighting.” The true value of IT is in its ability to identify, integrate and manage technologies that allow your business to function and generate revenue. So for the love of God, step off and let IT do its job. IT is always busy, and it may take time to evaluate the best solutions for your business problems, but IT can find them if you support that goal. Bring IT into those executive planning meetings and solicit its feedback on new company initiatives. Edicts from the top almost never translate well into a data center. C-level executives who play golf with a vendor and push that vendor’s new widget on IT the next day are undermining IT efforts–and jeopardizing the entire business.
Second, IT administrators will also need to show more backbone. Folding under pressure to purchase products is a guarantee of sleepless nights and long weekends, usually spent listening to Muzak versions of “Muskrat Love” while waiting on hold for vendor support. Push back on vendors and talk business objectively with your own executives. Get those new products on extended loan and tackle the due diligence testing in lab and limited production environments. Compare similar products, see what works and what doesn’t, and put those vendors on the spot. Make the time up front–you won’t regret it.
Avoid the path of least resistance. It’s the only way that IT and the business will be successful.
The constant evolution of data center technology makes it difficult for many business owners and IT managers to stay up to date with the latest industry trends. However, upgrading your systems and equipment is a valuable endeavor, and periodic technology refresh cycles cannot be avoided.
Why staying current is essential
Continuing to use outdated equipment and software not only makes it harder to maintain a thriving business, it can also expose you to viruses and security threats, frustrate employees and reduce overall productivity. Deploying systems with sufficient processing power, ample memory, appropriate backup devices and current software will pay dividends.
“In the IT world, being lax on up-to-date information creates a stagnant environment filled with inefficiencies. The rule is simple: Complacency in IT is extremely detrimental to both careers and data centers,” said Bill Kleyman, virtualization architect at MTM Technologies Inc., based in Stamford, Conn.
Although staying current with the latest trends is crucial, it isn’t necessary to deploy the most recent data center products on the market as they are released.
“IT managers need to maintain full awareness of what the industry is offering so they can make prudent decisions when existing equipment limitations start to stifle business growth,” said Robert McFarlane, principal in charge of data center design at Shen Milsom & Wilke, based in New York.
It’s also important to remember that everything is interrelated—hardware, software, space, power and cooling. “The industry is rife with stories about IT managers who bought racks of new blade servers to save money, only to find that it cost several times their projected savings to upgrade the power and cooling systems to support them. Everything must be considered in moving forward, which requires up-to-date knowledge,” McFarlane said.
Balancing innovation and IT budget
But just wanting the latest technologies may not be enough. You may find that the current economic climate is a deterrent to new deployment plans. However, with proper planning, there are numerous ways that an existing budget can work in your favor, producing immediate and long-term savings.
“There are always great deals on servers and networking products. Smart purchases focusing on long-term savings can include buying things like a Cisco Unified Computing System blade environment to consolidate and virtualize an environment,” Kleyman said. “By removing a hardware footprint that includes old servers hosting legacy applications, you create savings in both hardware maintenance and energy consumption.”
Another important consideration that can affect budget is a product’s flexibility. Buying “cheap” products will only lead to premature replacements, because they won’t satisfy all of your business and system requirements, added McFarlane. “Flexibility is the watchword in today’s data centers, and flexibility is what will enable an operation to keep up with demands, while also staying within budget.”
Selecting the right products
Choosing the right products for your data center is crucial. Deploying the wrong products quickly leads to lost revenue and decreased productivity, and can do real damage to your business. It’s also important to remember that no one product can fulfill all of your infrastructure needs.
“In most cases today, there is more than one way to address a need, so the ‘right’ decision may hinge as much on things like available service or existing familiarity as on technological superiority,” McFarlane said. “But the selected technology, whether for computing or infrastructure, still has to be appropriate for the job.”
When choosing a product, don’t pick it simply because it’s the same make or type as equipment already in the data center, because it comes with a friendly salesperson, or because it has the apparent lowest initial cost, McFarlane added.
“A company must conduct thorough research into a product before going with it. This includes, but certainly is not limited to, deep-dive proof-of-concept projects, presales training, return-on-investment analysis, cost and benefit breakdown, integration methodology and so on,” Kleyman agreed.
SearchDataCenter.com’s Products of the Year awards
Even grizzled data center gurus need help identifying the best products for their next technology refresh, and SearchDataCenter.com is here to help. Our annual Products of the Year (POY) awards evaluate and score countless new products, and select gold, silver and bronze winners in infrastructure, computing and systems management categories.
We expect the 2011 competition to be challenging. To win, a product must exceed expectations across judging criteria, such as performance, innovation, integration, functionality, user feedback and more. Each nomination will also be scored by a panel of independent industry judges, and we will announce the top products in January 2012.
Do you have a product that should be nominated for the 2011 POY? Check out the judging criteria, and then submit your online nomination form here.
Take a look at our SearchDataCenter.com Products of the Year 2010 award winners.
If you think RISC servers are taking a back seat to specialized mainframe transaction processors and inexpensive, general-purpose x86 processors, think again.
RISC microprocessors specialize in handling a smaller, simpler set of instructions and use fewer transistors, making them cheaper to produce and more energy efficient. The chips are most widely deployed in printers, mobile phones, video game consoles, hard drives and routers, but data centers are now paying greater attention to servers packed with low-power processors, whether Tilera’s RISC-style chips or Intel’s x86-based Atom.
Modular data centers have been around for a few years now, but they have changed dramatically in that short time. Blackbox was the mysterious-sounding name that Sun Microsystems gave its first containerized offering in 2006. Today, you’re much more likely to see a polished marketing phrase, such as the HP EcoPOD, used to describe a company’s containerized data center product. Modular data centers have gone from a few racks packed into a corrugated shipping container to all-in-one, custom-built proprietary modules. Hewlett-Packard and the other companies that have recently entered the modular data center market are betting that containerized data centers will become more mainstream–and there are good reasons why they might be right. Improved energy efficiency and better access for technicians servicing components in modular designs are catching the attention of companies that once gave modular data centers no more than a passing glance.
But increased interest in modular data centers is also being driven by shrinking data center capacity. A recent Uptime Institute survey showed that 36% of data centers will run out of space or cooling capacity in the next year. Unfortunately, data center facilities have proven over the years to be largely incapable of keeping up with changing technology and growing computing needs. Higher densities are stressing the cooling infrastructure of many data centers, and there’s no guarantee that improvements made today will be enough to support future needs. By the time a state-of-the-art data center is designed, built and brought online, it will likely already have fallen behind the rapid pace of changing technology and design standards. That makes it difficult to justify many millions of dollars for a new data center build, especially when manufacturers of containerized data centers claim their products are more energy efficient than a custom-built facility. A modular data center can add capacity to an existing environment in a matter of weeks, instead of the years it would take to design and build an addition or a new facility.
Today, many companies’ views on containerized data centers mirror public school officials’ perceptions of modular classrooms–shortsighted stopgap measures that waste money compared with new builds. The difference is that the basic needs of students will remain virtually unchanged for the next 10 years, while the cooling and power needs of servers will likely be much different a decade from now.
However, a containerized approach may not be right for everyone. Even though a container is easily portable, data center owners can’t just drop one in a vacant parking lot and expect it to meet their needs and remain secure. Support can also be a problem for companies whose IT staff is unfamiliar with the computing platform offered by the container’s vendor.
In the near future, companies such as HP can’t expect to solve every problem with containerized data centers or erase the industry perception that they are short-term solutions, but they can try to soften the prejudice against them. Branding a container as energy efficient gives it an appeal that many traditional data centers don’t have. As more companies look for answers to their growing demands, the energy efficiency and simplicity of containerized data centers will look more appealing. The uncomfortable truth is that we don’t know what the needs of a data center will be 10 to 20 years from now, which makes the flexibility and scalability of containers an attractive option to many companies. When a modular data center is due for a refresh, it can simply be replaced at the end of the lease.
Already, some industries are beginning to look favorably at containers. Internet-based companies that sometimes see explosive growth in computing needs can turn to modular data centers to keep up with rapidly changing capacity needs. Companies like Amazon are using containers to support cloud computing platforms.
Modular design will undoubtedly have a place in future data centers. The question is whether future developments in containerized design and technology will emerge to address the real and perceived disadvantages that are holding it back today.
Industry experts long ago predicted the demise of raised floor cooling. Today there are viable cooling alternatives, but raised floor cooling continues to keep its hold in the data center. Just as we have watched skeptically as doomsayers unsuccessfully prophesied the end of the world, we’re still waiting to see raised floors go out of style.
To be fair, data center experts who suggested raised floors would not be the cooling solution of tomorrow had much better information to back up their prediction than the man on the street corner waving an apocalypse sign. Increasing computing needs, denser racks and an increased focus on energy efficiency all seemed to signal the end of raised flooring.
The problem with raised floors is that directing cool air beneath the floor isn’t always enough to meet the demands of today’s dense server designs. Opening more of the floor simply reduces the pressure of the cooled air, and adding more cooling capacity may not be a desirable (or cost-effective) answer. But as point cooling and other containment tactics gain acceptance as supplements, raised floor cooling remains relevant in the data center.
There are now simple solutions for many of the inherent problems with raised floor cooling. Directional grates can angle chilled air at equipment to improve cooling efficiency. One complaint many data center managers cite with raised floor cooling is the inability to adjust cooling needs to changing power use and hotspots. It is simply impractical to add or move vented tiles any time cooling needs change in the data center. However, there are products that attempt to address dynamic power use and hotspots, which were once the downfall of raised floor cooling. Electronically controlled dampers, such as Tate Access Floors Inc.’s SmartAire, can limit the movement of chilled air based on inlet air temperatures to make sure the chilled air isn’t “wasted” on equipment that doesn’t need it. Although they aren’t the ideal solution, fans that can throttle up based on changing needs can help cool hotspots. When implemented correctly, these solutions can go a long way toward improving energy efficiency, which has been seen as one of the chief drawbacks of raised floor cooling.
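The damper behavior described above is essentially a feedback loop on inlet temperature. Here is a minimal sketch of that idea as a clamped proportional controller; the function name, setpoint and gain are all illustrative assumptions, not Tate’s actual control logic:

```python
# Illustrative sketch (not any vendor's real algorithm): a floor damper that
# opens in proportion to how far the inlet air temperature is above a setpoint,
# so chilled air isn't "wasted" on equipment that doesn't need it.

def damper_position(inlet_temp_c, setpoint_c=22.0, gain=20.0):
    """Return damper opening as a percentage, clamped to the 0-100 range.

    At the setpoint the damper sits half open; cooler inlet air closes it
    down (saving chilled air), hotter inlet air opens it up.
    """
    error = inlet_temp_c - setpoint_c
    return max(0.0, min(100.0, 50.0 + gain * error))

print(damper_position(22.0))  # → 50.0  (at setpoint: half open)
print(damper_position(19.0))  # → 0.0   (cool inlet: closed, air saved)
print(damper_position(26.0))  # → 100.0 (hot inlet: fully open)
```

Real damper controllers add hysteresis and rate limits to avoid hunting, but even this simple loop shows how chilled air can follow hotspots instead of being distributed statically.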
In most cases, it’s more important to pay attention to what is happening to chilled air between the floor and ceiling. Improvements to raised floor cooling infrastructure show that administrators should spend less time looking at the type of floor they use and more time considering blanking panels and addressing overlooked problems with containment solutions.
Raised flooring isn’t the perfect cooling solution, but it certainly has a place in the modern data center. With more tools than ever allowing administrators to make the most out of the infrastructure they have, raised floors could be around for longer than anyone expects.
IT professionals know that unplanned data center downtime is expensive, with the true costs associated with downtime often far exceeding the price of replacing faulty equipment. The time and effort spent by staff to remediate the problem is often difficult to calculate. Worse yet, extended downtime can hurt a company’s reputation and lead to lost business opportunities with financial impacts that are nearly impossible to quantify. While the cost of downtime will vary by the severity of the event, and even with the type of business experiencing an outage, a study on understanding the costs of data center downtime by Emerson Network Power and the Ponemon Institute does its best to give IT professionals and corporate executives a peek into the true costs associated with data center downtime.
The study found that the average data center downtime event costs $505,500, with the average incident lasting 90 minutes. That number is staggering. In fact, in the heat of an outage, it’s probably best not to spend too much time dwelling on the fact that every minute the data center remains down, a company is effectively losing $5,600. The study, which was published earlier this year, took statistics from 41 U.S. data centers in a range of industries, including financial institutions, healthcare companies and colocation providers.
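The per-minute figure follows directly from the study’s two averages, as a quick back-of-the-envelope check shows:

```python
# Back-of-the-envelope check on the Emerson/Ponemon averages cited above.
avg_cost_usd = 505_500   # average cost per downtime event
avg_minutes = 90         # average incident duration

per_minute = avg_cost_usd / avg_minutes
print(round(per_minute))  # → 5617, i.e. roughly the $5,600/minute cited
```

The small gap between $5,617 and the study’s rounded $5,600 is just presentation; the order of magnitude is the point.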
The survey reinforces what many IT pros likely already knew – that the majority of downtime costs don’t come from simply replacing equipment. About 62% of downtime costs reported in the study were attributed to indirect sources, such as reduced end-user productivity and lost business opportunities.
Uninterruptable power supply (UPS) system failure was the leading root cause of downtime, accounting for 29% of the outages recorded in the study. An additional 20% of the outages were related to inadequate cooling systems. Were these IT departments careless in building redundant power systems? Did they ignore the cooling capacity of their facility? Or, were they challenged by growing computing needs while also being constrained by tightening IT budgets?
It is easy to propose cuts to the utilities line of a large IT budget. It is a far different matter to follow through on those budget reductions without adversely affecting downtime prevention and preparedness. A portion of the survey that gauged employees’ thoughts on downtime preparedness captured the disconnect: While 75% of senior-level employees felt their companies’ senior management fully supports efforts to prevent and manage unplanned outages, only 31% of supervisor-level employees agreed.
So you want to be a CTO or CIO someday? Have you ever wondered what it takes to climb that IT career ladder successfully? Maybe I have an answer for you.
I recently listened to a panel of CEOs talk about the work force of the future, and among the various bits of wisdom (or just plain old wishful thinking), one panelist remarked that the mantra of a successful CTO is “soft is hard, hard is easy.”
No, this isn’t some arcane riddle that you can waste time trying to figure out between provisioning some more LUNs or patching another batch of servers. It means that successful, upward-moving IT professionals need to master some aspects of the organization that just aren’t taught in any IT curriculum.
There’s no question that an IT career takes knowledge – a LOT of knowledge. Schooling only gets your foot in the door, and the learning never stops as new technologies and products are assimilated into the business. Then there’s the full schedule of challenging projects that wedge an IT pro firmly between a tight budget and a tight timeline. It can seem like you’re caught between the devil and the deep blue sea.
But in the overall scheme of things, answers to all of those technological challenges are well within reach. There are tangible solutions to all of the hardware and application problems that you face as a technician, an administrator or a manager. It’s hard, but it’s also the easiest part of your career — hard is easy.
You see, it’s the array of other subtle “softer” challenges that can stunt your climb up the corporate ladder. Success usually comes down to a mastery of people, processes and politics.
Managing people can be more demanding than any new technology deployment. This is particularly true when it comes to managing today’s younger workers — a demographic whose proclivity for learning is matched only by their fierce disdain for traditional management structures. Identifying, developing and retaining quality employees is no simple trick.
Processes play an enormous role in business operations, and the ability to develop and refine processes while maintaining the support of important stakeholders within the organization can make or break a business.
Of course, the endless struggles and agendas of corporate politics remain a harsh reality – you’re not the only one trying to make it to the top.
Skills with people, processes and politics are all “soft skills,” often existing in tandem with professional capital, like your reputation and your credibility. These are also the most difficult skills to master for IT folks, who are noted for their organized, systematic and logical minds — soft is hard.
With economic conditions slowly improving and companies looking to increase their investment in technology, IT professionals may soon see more opportunities for advancement. If you have your eye on a corner office, take stock of your skillset and remember that it might not be the hard stuff that’s holding you back. Soft is hard, hard is easy.
I always welcome a fresh perspective, especially when it comes from professionals who know more about a topic than I do. But sometimes a new perspective can be challenging, raising uncomfortable questions that are painful to consider.
So to get my latest dose of perspective, I spent my Wednesday in Cambridge attending the MIT Sloan CIO Symposium, listening as panels of CEOs shared their thoughts and visions of technology with the CTOs, CIOs and other technology professionals in attendance.
Many of the discussions carried common themes, often touching on migration to the cloud and the shifts needed to embrace a more mobile (in fact a more global) workforce. IT figured prominently in those discussions, and CEOs extolled the virtues of agility, efficiency, cost savings and service quality improvements that they expected. Ultimately, the perception of IT must be steered away from implementing systems and supporting applications. Instead, IT should focus on providing business services to employees and users faster, easier and cheaper.
At first blush, it makes a lot of sense. IT won’t serve a business well if it retains its traditional silos and separations. The move to cloud technologies requires a shift in attitudes, along with new IT skillsets and roles, such as “cloud architects.” Cloud migration affects everything from networks to servers to storage to applications to users. And there are also numerous problems with cloud technologies that still need to be overcome, including concerns about security, performance, regulatory compliance and privacy, and ways to manage an unfathomable ocean of unstructured data.
When I started thinking about the long-term implications for IT in the enterprise, I realized that some important questions were not addressed. With all of these changes – now and on the horizon – how can IT and its professionals preserve their relevance in a modern business environment? Can IT keep a place at the table, helping shape and direct the future success of the enterprise, or is IT relegated to a fate as a line item destined for perpetual budget-cutting, along with printing costs and corporate travel expenses?
IT and technology practitioners do have a meaningful role in tomorrow’s enterprise, but it’s not the traditional hardware/software deployment and support paradigm that we see today. Tomorrow’s IT must prove its value to the business by employing metrics. It might be a matter of measuring business growth attributable to IT, gauging improvements in customer/user satisfaction or some other yardstick.
But one of the most important ways that IT can remain relevant is by identifying new technologies that can enhance the business, performing the intensive reviews and due diligence needed to evaluate the suitability of new technology, and then shepherding the organization through the adoption and development of that new technology. Just consider how platforms like netbooks and smartphones are changing the way businesses operate today.
Okay, even the savviest CEO can’t precisely define the role and influence of tomorrow’s IT department. But one thing’s for sure — IT professionals won’t be sweating over adding disks to storage arrays or upgrading memory modules.
Oracle’s shenanigans in desupporting Intel Itanium – and, by extension, HP-UX on Integrity – certainly haven’t earned it any friends, but experts are divided on whether the database giant has gone too far.
According to a recent survey of Oracle customers by Gabriel Consulting Group, 67% said the decision to desupport Itanium changed their opinion of the database giant for the worse, compared with 27% who said their opinion was unchanged or not negatively affected. In fact, Oracle may be even more unpopular than those numbers suggest, since many of the 27% clarified that they had thought badly of Oracle to begin with.
Oracle has never been popular with its customers, but this time, the Gabriel Consulting Group survey suggests playing hardball with HP might be the last straw.
“It’s hard to say for certain, but my sense is that [Oracle’s] actions have gotten users’ attention, and are making them think,” said Dan Olds, Gabriel Consulting Group principal. “It’s not necessarily the straw that broke the camel’s back, but they’re looking at what’s out there.”
In the software space, the survey found deep pockets of frustration with the company, particularly among Oracle database customers, 39% of whom reported they were migrating or actively evaluating other platforms.
Users of Oracle’s operating systems (Solaris, Solaris x64 and Oracle Linux) are even more likely to jump ship: 51% of respondents said they were actively looking at alternatives (25%) or definitely migrating (26%).
But as much animosity as Oracle has generated, it will probably be none the worse for wear – and may well come out ahead, said Richard Fichera, principal analyst at Forrester Research.
“These are the actions of a company that thinks they can get away with it,” he said. In fact, Fichera said he wouldn’t be surprised if Oracle raised prices further on competitive platforms, to make the cost of its hardware relatively more attractive.
“This was a tough business move, but everyone in this business is pragmatic,” he added. “It’s tempting to say that everyone is going to punish them, but what is the cost of unraveling from Oracle? Is it worth $500,000 plus added risk? Probably not.”