Data Center Apparatus


May 19, 2011  7:54 AM

Soft is hard, hard is easy

Stephen Bigelow

 

So you want to be a CTO or CIO someday? Have you ever wondered what it takes to climb that IT career ladder successfully? Maybe I have an answer for you.

I recently listened to a panel of CEOs talk about the work force of the future, and among the various bits of wisdom (or just plain old wishful thinking), one panelist remarked that the mantra of a successful CTO is “soft is hard, hard is easy.”

No, this isn’t some arcane riddle that you can waste time trying to figure out between provisioning some more LUNs or patching another batch of servers. It means that successful, upward-moving IT professionals need to master some aspects of the organization that just aren’t taught in any IT curriculum.

There’s no question that an IT career takes knowledge – a LOT of knowledge. Schooling only gets your foot in the door, and the learning never stops as new technologies and products are assimilated into the business. Then there’s the full schedule of challenging projects that wedge an IT pro firmly between a tight budget and a tight timeline. It can seem like you’re caught between the devil and the deep blue sea.

But in the overall scheme of things, answers to all of those technological challenges are well within reach. There are tangible solutions to all of the hardware and application problems that you face as a technician, an administrator or a manager. It’s hard, but it’s also the easiest part of your career — hard is easy.

You see, it’s the array of other subtle “softer” challenges that can stunt your climb up the corporate ladder. Success usually comes down to a mastery of people, processes and politics.

Managing people can be more demanding than any new technology deployment. This is particularly true when it comes to managing today’s younger workers — a demographic whose proclivity for learning is matched only by its fierce disdain for traditional management structures. Identifying, developing and retaining those quality employees is no simple trick.

Processes play an enormous role in business operations, and the ability to develop and refine processes while maintaining the support of important stakeholders within the organization can make or break a business.

Of course, the endless struggles and agendas of corporate politics remain a harsh reality – you’re not the only one trying to make it to the top.

Skills with people, processes and politics are all “soft skills,” often existing in tandem with professional capital like your reputation and your credibility. They are also the most difficult skills to master for IT folks, who are noted for their organized, systematic and logical minds — soft is hard.

With economic conditions slowly improving, and companies looking to increase their investment in technology, IT professionals may soon see more opportunities for advancement. If you have your eye on a corner office, take stock of your skillset and remember that it might not be the hard stuff that’s holding you back. Soft is hard, hard is easy.

May 19, 2011  7:48 AM

Questions left unanswered

Stephen Bigelow

I always welcome a fresh perspective, especially when it comes from professionals who know more about a topic than I do. But sometimes a new perspective can be challenging, and it can raise uncomfortable questions that might be painful to consider.

So to get my latest dose of perspective, I spent my Wednesday in Cambridge attending the MIT Sloan CIO Symposium, listening as panels of CEOs shared their thoughts and visions of technology with the CTOs, CIOs and other technology professionals in attendance.

Many of the discussions carried common themes, often touching on migration to the cloud and the shifts needed to embrace a more mobile (in fact a more global) workforce. IT figured prominently in those discussions, and CEOs extolled the virtues of agility, efficiency, cost savings and service quality improvements that they expected. Ultimately, the perception of IT must be steered away from implementing systems and supporting applications. Instead, IT should focus on providing business services to employees and users faster, easier and cheaper.

At first blush, it makes a lot of sense. IT won’t serve a business well if it retains its traditional silos and separations. The move to cloud technologies requires a shift in attitudes, along with new IT skillsets and roles, such as “cloud architects.” Cloud migration affects everything from networks to servers to storage to applications to users. And there are also numerous problems with cloud technologies that still need to be overcome, including concerns about security, performance, regulatory compliance and privacy, and ways to manage an unfathomable ocean of unstructured data.

When I started thinking about the long-term implications for IT in the enterprise, I realized that some important questions were not addressed. With all of these changes – now and on the horizon – how can IT and its professionals preserve their relevance in a modern business environment? Can IT keep a place at the table, helping shape and direct the future success of the enterprise, or is IT relegated to a fate as a line item destined for perpetual budget-cutting, along with printing costs and corporate travel expenses?

IT and technology practitioners do have a meaningful role in tomorrow’s enterprise, but it’s not the traditional hardware/software deployment and support paradigm that we see today. Tomorrow’s IT must prove its value to the business by employing metrics. It might be a matter of measuring business growth attributable to IT, gauging improvements in customer/user satisfaction or some other yardstick.

But one of the most important ways that IT can remain relevant is by identifying new technologies that can enhance the business, performing the intensive reviews and due diligence needed to evaluate the suitability of new technology, and then shepherding the organization through the adoption and development of that new technology. Just consider how platforms like netbooks and smartphones are changing the way businesses operate today.

Okay, even the savviest CEO can’t precisely define the role and influence of tomorrow’s IT department. But one thing’s for sure — IT professionals won’t be sweating over adding disks to storage arrays or upgrading memory modules.


May 5, 2011  4:55 PM

Did Oracle go too far by dropping Itanium?

Alex Barrett

Oracle’s shenanigans de-supporting Intel Itanium – and, by extension, HP-UX on Integrity – certainly haven’t earned it any friends, but experts are divided on whether the database giant has gone too far.

According to a recent survey of Oracle customers by Gabriel Consulting Group, 67% said the decision to de-support Itanium changed their opinion of the database giant for the worse, compared with 27% who said their opinion was unchanged or not negatively affected. In fact, Oracle may be even more unpopular than those numbers suggest, since many of the 27% clarified that they had thought badly of Oracle to begin with.

Oracle has never been popular with its customers, but this time, the Gabriel Consulting Group survey suggests playing hardball with HP might be the last straw.

“It’s hard to say for certain, but my sense is that [Oracle’s] actions have gotten users’ attention, and are making them think,” said Dan Olds, Gabriel Consulting Group principal. “It’s not necessarily the straw that broke the camel’s back, but they’re looking at what’s out there.”

In the software space, the survey found deep pockets of frustration with the company, particularly among Oracle database customers, 39% of whom reported they were migrating to or actively evaluating other platforms.

Users of Oracle’s operating systems (Solaris, Solaris x64 and Oracle Linux) are even more likely to jump ship: 51% of respondents said they were actively looking at alternatives (25%) or definitely migrating (26%).

But for all the animosity Oracle has generated, it will probably be none the worse for wear – and may well come out ahead, said Richard Fichera, principal analyst at Forrester Research.

“These are the actions of a company that thinks they can get away with it,” he said. In fact, Fichera said he wouldn’t be surprised if Oracle raised prices further on competitive platforms, to make the cost of its hardware relatively more attractive.

“This was a tough business move, but everyone in this business is pragmatic,” he added. “It’s tempting to say that everyone is going to punish them, but what is the cost of unraveling from Oracle? Is it worth $500,000 plus added risk? Probably not.”


April 6, 2011  10:11 AM

Dell experiments with mushrooms

Alex Barrett

We always knew that Michael Dell was a fungi, but caps off to him and his crew for adopting new eco-friendly packaging based on….mushrooms.

As part of its sustainable packaging strategy, Dell will start shipping some of its equipment in mushroom cushioning, wrote Oliver Campbell, Dell’s procurement director, on the Direct2Dell blog.

Developed by the National Science Foundation, the US EPA, and the USDA, mushroom cushioning is a unique packaging technology, Dell explained.

With it, “waste product like cotton hulls are placed in a mold which is then inoculated with mushroom spawn. Our cushions take 5 – 10 days to grow as the spawn, which become the root structure – or by the scientific name, mycelium – of the mushroom. All the energy needed to form the cushion is supplied by the carbohydrates and sugars in the ag waste. There’s no need for energy based on carbon or nuclear fuels.”

The new mushroom-based packaging is but Dell’s latest advance in earth-friendly packaging. The company already uses bamboo packaging, in which it ships select laptops and tablets. But mushroom-based packaging is better suited to heavier products like servers and desktops, the company said.

What will be the first product to ship swaddled in fungus? The PowerEdge R710 server. The company claimed it has tested the packaging extensively to make sure it keeps shipments safe, “and it passed like a champ.”


March 25, 2011  11:53 AM

Big Data, big data center changes?

Alex Barrett

Big Data will transform your data center. Then again, maybe it won’t.

I went to GigaOm’s Structure Big Data show in New York City this week to see what’s new in the world of data analysis, and there I found that forward-looking data analysts are using open source software and commodity scale-out x86 servers to solve problems usually reserved for Big Iron.

Indeed, I was struck not only by the ubiquity of open source, x86, scale-out clusters and the like, but also by the marked absence of everything else: Oracle Exadata, the big Unix vendors, etc. At least as far as the avant-garde is concerned, those are the tools of the past.

More to the point, some of the presenters managed to make having any servers at all seem somewhat quaint. In a talk enticingly titled “Supercomputing on a minimum wage,” Pete Warden, founder of OpenHeatMap and a former Apple programmer, told a crowd of well-heeled venture capitalists how he crawled 210 million public Facebook profiles and derived meaningful analyses from them (for which he was subsequently sued) for about $100 in Amazon EC2 machine time.

“Did you know you can hire 100 servers from Amazon for $10 an hour?” he asked the crowd.

Let me repeat that: a 100-node supercomputer for $10 an hour.
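
To put that in perspective, here’s a quick back-of-the-envelope check in Python. The per-instance rate is an assumption drawn from Warden’s own figure (roughly $0.10 per instance-hour), not a published price list:

# Back-of-the-envelope EC2 cluster cost, using only the figures Warden cited.
nodes = 100                  # servers rented at once
rate_per_node_hour = 0.10    # assumed $/instance-hour, per the "$10 an hour" quote

cluster_cost_per_hour = nodes * rate_per_node_hour
print(f"{nodes}-node cluster: ${cluster_cost_per_hour:.2f} per hour")

# Warden's Facebook analysis reportedly cost about $100 in EC2 time --
# roughly ten hours of that 100-node cluster at this rate.
total_spend = 100.00
print(f"${total_spend:.0f} buys about {total_spend / cluster_cost_per_hour:.0f} cluster-hours")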

What does this mean for data center managers? On the one hand, it means that data analysts and business intelligence folks are definitely shaping the architecture and design of future data centers. On the other hand, unless you work for a large cloud provider, the likelihood that you’ll have to design and manage those systems is relatively small.


March 24, 2011  3:24 PM

Test your backups before it’s too late

Nicole Harding

 

Check out this advice from expert Brien M. Posey on the importance of server backup.

 

 

 

Having reliable backups is critical to your organization’s health regardless of what type of fault-tolerant infrastructure you have in place. Fault-tolerant components such as failover clusters and redundant network hardware only protect you against hardware failure. They do nothing to guard against data loss. For example, if a user accidentally deletes a file, then your fault-tolerant hardware won’t do anything to help you get the file back.

 

Of course, just backing up your servers isn’t enough. You have to test the backups to make sure they work. I once saw a major organization lose all of the data from one of its Microsoft Exchange Servers because it had been backing up the server incorrectly and didn’t know it. Had the company tested its backups, it would have found the problem before it was too late to do anything about it.

 

It’s one thing to say that backups need to be tested, but quite another to actually test them. Create an isolated lab environment that you can use for backup testing. You don’t need to worry about using high-end servers or implementing any kind of redundancy. You just need to be able to restore your backups to some virtual machines to make sure they work.

 

At a minimum, your lab will most likely require a domain controller and DNS and DHCP servers that mimic those used in your production environment. That’s because so many Tier 1 applications depend on Active Directory. For example, suppose you want to verify the integrity of your Exchange Server backups. You couldn’t just restore Exchange to an isolated server and expect it to work. The mailbox databases would never mount because the required Active Directory infrastructure would not be present.

When I test my own backups, I like to start by restoring the most recent backup of my PDC emulator, which is also a DNS server, to a virtual server. This provides the infrastructure required to test other backups, and it also gives me an easy way of verifying the integrity of a domain controller backup.
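
If you script any of this, restore order matters: the domain controller has to be answering DNS and LDAP requests before an Exchange restore has any hope of mounting its databases. Here is a minimal pre-flight check, sketched in Python; the lab address and port list are my assumptions for an isolated test network, not something prescribed above.

import socket

# Hypothetical address of the restored PDC emulator in the isolated lab.
RESTORED_DC = "10.0.0.10"

# Services the dependent restores rely on: DNS, Kerberos, LDAP, Global Catalog.
REQUIRED_PORTS = {53: "DNS", 88: "Kerberos", 389: "LDAP", 3268: "Global Catalog"}

def port_open(host, port, timeout=3):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def dc_ready(host=RESTORED_DC):
    """Check every required service before kicking off dependent restores."""
    ready = True
    for port, name in REQUIRED_PORTS.items():
        ok = port_open(host, port)
        print(f"{name:>14} (tcp/{port}): {'up' if ok else 'DOWN'}")
        ready = ready and ok
    return ready

if __name__ == "__main__":
    if dc_ready():
        print("Domain controller looks healthy -- safe to test the Exchange restore.")
    else:
        print("Fix the DC restore first; dependent backups will fail to mount.")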


February 22, 2011  3:44 PM

IBM “Watson” for the masses on tap

Alex Barrett

Fresh off Watson’s Jeopardy win last week, a “Watson”-style analytics cluster is within reach of the average enterprise, Ian Jarman, IBM’s manager for Power Systems, told SearchDataCenter.com.

“With Watson, one of the design goals was to create a system that could be readily used by other applications,” Jarman said, pointing to the medical and legal communities as examples. Thus, even though Watson runs sophisticated analytics software, it is based on hardware that is familiar to many data center managers: 90 of IBM’s own Power 750 servers, each with 32 POWER7 cores and between 128 GB and 512 GB of RAM. Those are the same Power systems frequently found in enterprise data centers running traditional OLTP applications like ERP systems.

“Some people have labeled Watson a supercomputer, but it really is not,” Jarman said. Rather, “each individual node is very efficient.”

That’s in contrast to previous IBM projects like Deep Blue, the 1997 supercomputer that beat world chess champion Garry Kasparov. That system was also based on POWER, but it contained hundreds of specially designed chess chips. With its highly customized hardware configuration, “it was a one-off,” Jarman said.

But while Watson is based on “commodity” parts, Jarman did emphasize the Power system’s prowess over x86-based clusters. Watson could run on x86-based systems, but the design team estimated it would have needed three to four times as many nodes to achieve similar performance on Intel-based hardware. “The fact is, it really was the case that the P7 and its massive on-chip memory bandwidth was critical to its performance,” he said.
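
Taken at face value, those figures are easy to tally. The quick Python sketch below just multiplies out the numbers Jarman cited and adds nothing beyond that arithmetic:

# Rough tally of the Watson cluster, using only the figures cited above.
nodes = 90
cores_per_node = 32
ram_low_gb, ram_high_gb = 128, 512   # per-node RAM range

print(f"Total POWER7 cores: {nodes * cores_per_node}")             # 2,880
print(f"Total RAM: {nodes * ram_low_gb / 1024:.1f} to "
      f"{nodes * ram_high_gb / 1024:.1f} TB")                      # ~11 to 45 TB

# IBM's estimate: an x86 cluster would need 3-4x as many nodes
# for similar performance.
print(f"Equivalent x86 cluster (per IBM's estimate): {nodes * 3} to {nodes * 4} nodes")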


February 3, 2011  3:46 PM

A leg-up on raised floors?

Stephen Bigelow

Are raised floors going the way of the Dodo? It just might happen. Data center technologies are continuing to evolve, and the traditional benefits of raised floors are quickly being overshadowed by newer and more efficient cooling techniques.

 

Sure, raised floors have been around for a long time – an elevated platform creates an enclosed plenum, providing important space for electrical and IT infrastructure, along with mechanical support for resources like chilled water and CRAC ductwork that cools the systems sitting on the actual floor panels.

 

However, raised floors are far from perfect. The plenum space is filthy, and working within tight, confined spaces can be a serious challenge for the claustrophobic among us. Structural problems like loose or poorly installed tiles can lead to collapses that damage equipment and injure personnel. But those nuisances pale in comparison to cooling limitations.

 

“As IT execs cram denser, more powerful servers into a 42U space, it becomes increasingly difficult to cool systems with under-floor forced air, even with hot-aisle/cold-aisle design best practices,” said Matt Stansberry, director of content and publications at Uptime Institute. Stansberry explained that CRAC vendors have turned to more energy-efficient cooling such as in-row and in-rack techniques; moving the cooling closer to the heat source is more effective than trying to cool the entire room and everything in it. Ultimately, the value of raised flooring is increasingly in question.

 

And Stansberry certainly isn’t alone. According to TechTarget’s 2010 Data Center Decisions survey, 59% of IT respondents use raised flooring in their current data center, but only 43% expect to use raised floors in a future data center. Slab floors are also falling out of favor, with 33% of respondents using them in the current data center but only 19% planning them for a future data center. In fact, 38% of IT professionals don’t know what kind of flooring they will use in the future.

 

“The best way for an owner/operator to make a decision is to bring the IT department into the design process to explain hardware deployment plans and roadmaps,” Stansberry said. But what’s your opinion of raised floors and data center architecture overall? How do you select the flooring approach that works best for you? Which tradeoffs and limitations matter most to you? S-


January 26, 2011  4:12 PM

Self-service in IT workload management: Is user control dangerous?

Ryan Arsenault

The concept of IT workload automation is nothing new – admins have been able to adjust and shift computing resources to optimize workloads for a while. But bringing self-service into the equation – whereby a non-IT business user can make workload processing changes for business transactions – is a whole new beast.

 

Solutions like BMC Software Inc.’s Control-M Self Service aim to do just that – give business users workload management capabilities so they don’t have to wait for the IT department to fulfill workload requests. But is this user empowerment dangerous?

 

“The question now becomes, ‘How much power do we want to give the end user and how much automation do we really want?’” said Bill Kleyman, director of technology at World Wide Fittings Inc., a manufacturer of steel hydraulic tube and pipe fittings. “There will be instances where giving a user control over a given workload may be in the best interest of the company.”

 

Examples include financial applications whose workloads may require attention several times a day from both the finance and IT departments, according to Kleyman. With a self-service solution, end users can modify the workloads themselves, minimizing the wait on IT specialists and allowing small changes to be made at their convenience – much like a self-service kiosk at a bank – saving the company money in the process.


Even here, though, monitoring and some control on the side of IT seems to be the secret sauce.

 

“Proper security settings need to be in place so that the user doesn’t ‘accidently’ break something,” continued Kleyman. “On that note, empowering the end-user is a good thing, but to a degree. Since a typical PC user doesn’t fully comprehend the intricate process of allocating computing resources, giving them the ability will have to be a per-case basis.”

 

If not, opening permissions for, say, a marketing person to make changes to finance workloads could spell disaster, according to Kleyman. In addition, policy-based restrictions need to be in place; with the wrong workload authorization, the door is open for users to accidentally click an incorrect button or perform an irreversible action. BMC’s offering has restrictions that allow users to perform only the actions for which they are authorized. And it’s these controls – a safeguard of sorts for the IT side – that give one customer confidence in self-service workload management.
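
The control Kleyman and BMC describe boils down to role-based authorization: before a self-service action runs, check whether the requester’s role permits that action on that job. Here is a bare-bones sketch of the idea in Python; the roles, job names and permission map are hypothetical illustrations, not BMC’s actual implementation or API.

# Minimal role-based authorization check for self-service job actions.
# The roles, jobs and permission map below are illustrative only.
ROLE_PERMISSIONS = {
    "finance_user": {"restart": {"invoice_batch", "ledger_close"},
                     "hold":    {"invoice_batch"}},
    "marketing_user": {"restart": {"campaign_export"}},
    "it_operator": {"restart": "*", "hold": "*", "create": "*"},  # full control stays with IT
}

def authorized(role, action, job):
    """Return True if the role may perform the action on the given job."""
    allowed = ROLE_PERMISSIONS.get(role, {}).get(action, set())
    return allowed == "*" or job in allowed

# A finance user may restart a pre-existing finance job...
print(authorized("finance_user", "restart", "invoice_batch"))    # True
# ...but can't touch another department's workload or define new jobs.
print(authorized("marketing_user", "restart", "invoice_batch"))  # False
print(authorized("finance_user", "create", "new_job"))           # False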

 

ConAgra Foods Inc., a BMC Control-M customer since 2005, has plans to implement the product’s self-service version in the next 12-18 months.

 

John Murphy, IT manager with ConAgra Foods, said the company would keep the creation of new jobs in-house and allow business users only to initiate pre-existing jobs. Beyond those internal mandates, Murphy said the product itself enforces control by being role-based — ConAgra would have the ability to tailor the job menu to each individual business user, potentially limiting any “fallout” that could occur from a mistake.

 

“We like the control to be able to say, ‘these options are low-impact,’” said Murphy, who noted that continuing education of the business users would also be vital, even with aids within Control-M Self Service that show non-IT users the linkages between jobs and provide a visual “map.”

 

Do you feel comfortable with non-IT users being left to workload management tasks, or do you think that with the controls mentioned, it could work? Sound off in the comments.

 


January 25, 2011  3:59 PM

Consolidation stressing you out?

Stephen Bigelow

Server consolidation is a great idea on paper – you use virtualization software to host multiple virtual machines on the same physical host, so you improve hardware utilization and buy fewer servers, right?

 

Well…not exactly.

 

The problem is that many organizations just vacuum up the computing resources that consolidation is supposed to save. Everybody is clamoring for more resources. Test and dev needs to prove out some new builds, accounting wants more resources for the year-end tax season, creative needs a new version of AutoCAD, and there are 20 new web servers that have to come online for the holidays. Just take a few minutes and provision the new servers; no problem.

 

But wait…before you know it, the servers are jammed with more VMs than anyone ever expected, and they never seem to go away. Sure, organizations can shell out cash for more servers, but the nasty little cycle just starts all over again. The space savings, the power savings, the capex savings; it all goes away. And it gets worse. All those VMs need management time and storage/backup/DR protection. I swear it’s enough to make me move to Nebraska and take up chicken farming.
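
One tactic that can help rein in sprawl: give every VM an owner and an expiration date at provisioning time, then sweep for expired machines on a schedule. Here is a rough sketch of that policy check in Python; the inventory is hypothetical, and a real tool would pull it from the virtualization platform’s API rather than a hard-coded list.

from datetime import date

# Hypothetical VM inventory; in practice this would come from the
# virtualization platform's API or a CMDB export.
inventory = [
    {"name": "tax-report-01",  "owner": "accounting", "expires": date(2011, 4, 30)},
    {"name": "build-test-07",  "owner": "dev",        "expires": date(2011, 2, 1)},
    {"name": "web-holiday-12", "owner": "ecommerce",  "expires": date(2011, 1, 15)},
]

def expired_vms(vms, today=None):
    """Return the VMs whose expiration date has passed."""
    today = today or date.today()
    return [vm for vm in vms if vm["expires"] < today]

for vm in expired_vms(inventory, today=date(2011, 3, 1)):
    # A real tool would notify the owner, then power off or archive the VM.
    print(f"{vm['name']} (owner: {vm['owner']}) expired on {vm['expires']} -- flag for reclamation")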

 

So how do you handle this dilemma? Is it just a matter of putting policies and procedures into place to justify new VMs? Do you use VM lifecycle management tools to handle processes automatically? Or do you just fork over the cash for more servers and hope for the best? What tactics are working for you, and which tactics are not?

 

S-

