Data Center Apparatus


March 25, 2011  11:53 AM

Big Data, big data center changes?

Alex Barrett

Big Data will transform your data center. Then again, maybe it won’t.

I went to GigaOm’s Structure Big Data show in New York City this week to see what’s new in the world of data analysis, and there I found that forward-looking data analysts are using open source software and commodity scale-out x86 servers to solve problems usually reserved for Big Iron.

Indeed, not only was I struck by the ubiquity of open source, x86, scale-out clusters, and the like, but also by the marked absence of everything else: Oracle Exadata, the big Unix vendors, etc. At least as far as the avant-garde is concerned, those are the tools of the past.

More to the point, some of the presenters managed to make having any servers at all seem somewhat quaint. In a talk enticingly titled “Supercomputing on a minimum wage,” Pete Warden, founder of OpenHeatMap and a former Apple programmer, told a crowd of well-heeled venture capitalists how he crawled 210 million public Facebook profiles (for which he was subsequently sued) and derived meaningful analyses from them for about $100 in Amazon EC2 machine time.

“Did you know you can hire 100 servers from Amazon for $10 an hour?” he told the crowd.

Let me repeat that: a 100-node supercomputer for $10 an hour.
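
If you want to sanity-check that math, it takes three lines of code. The sketch below assumes a hypothetical rate of $0.10 per instance-hour – roughly what a small EC2 instance cost at the time; Warden didn’t break out his exact pricing – and just multiplies it out:

```python
# Rough cost model for renting a scale-out cluster by the hour.
# The $0.10/hour per-instance rate is an assumption for illustration,
# not a quoted AWS price.

def cluster_cost(nodes, hours, rate_per_node_hour=0.10):
    """Total cost of running `nodes` instances for `hours` hours."""
    return nodes * hours * rate_per_node_hour

if __name__ == "__main__":
    # 100 nodes at $0.10/hour/node = $10/hour for the whole cluster.
    print(cluster_cost(nodes=100, hours=1))    # 10.0
    # Warden's roughly $100 budget buys about 10 hours of that 100-node cluster.
    print(cluster_cost(nodes=100, hours=10))   # 100.0
```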

What does this mean for data center managers? On the one hand, data analysts and business intelligence folks are definitely shaping the architecture and design of future data centers. On the other, unless you work for a large cloud provider, the likelihood that you’ll have to design and manage those systems yourself is relatively small.

March 24, 2011  3:24 PM

Test your backups before it’s too late

Nicole Harding

Check out this advice from expert Brien M. Posey on the importance of server backup.

Having reliable backups is critical to your organization’s health regardless of what type of fault-tolerant infrastructure you have in place. Fault-tolerant components such as failover clusters and redundant network hardware only protect you against hardware failure. They do nothing to guard against data loss. For example, if a user accidentally deletes a file, then your fault-tolerant hardware won’t do anything to help you get the file back.

Of course, just backing up your servers isn’t enough. You have to test the backups to make sure they work. I once saw a major organization lose all of the data from one of its Microsoft Exchange Servers because it had been backing up the server incorrectly and didn’t know it. Had the company tested its backups, it would have found the problem before it was too late to do anything about it.
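
Posey doesn’t prescribe a specific test harness, but the underlying principle is easy to automate in miniature: restore the backup somewhere disposable and compare what comes back against a known-good manifest. The Python sketch below is a generic illustration of that idea – the archive, manifest and scratch paths are hypothetical and not tied to any particular backup product:

```python
# Minimal backup-verification sketch: restore an archive to a scratch
# directory and confirm every file matches the checksums recorded at
# backup time. Paths and manifest format are hypothetical.
import hashlib
import json
import tarfile
from pathlib import Path

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backup(archive: str, manifest: str, scratch: str) -> bool:
    """Extract `archive` into `scratch` and check each file against `manifest`
    (a JSON map of relative path -> expected SHA-256)."""
    scratch_dir = Path(scratch)
    with tarfile.open(archive) as tar:
        tar.extractall(scratch_dir)
    expected = json.loads(Path(manifest).read_text())
    ok = True
    for rel_path, digest in expected.items():
        restored = scratch_dir / rel_path
        if not restored.exists() or sha256(restored) != digest:
            print(f"MISMATCH: {rel_path}")
            ok = False
    return ok

if __name__ == "__main__":
    # Hypothetical file names, for illustration only.
    print(verify_backup("nightly.tar.gz", "nightly.manifest.json", "/tmp/restore-test"))
```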

It’s one thing to say that backups need to be tested, but quite another to actually test them. Create an isolated lab environment that you can use for backup testing. You don’t need to worry about using high-end servers or implementing any kind of redundancy. You just need to be able to restore your backups to some virtual machines to make sure they work.

At a minimum, your lab will most likely require a domain controller, DNS and DHCP server that mimic those used in your production environment. That’s because so many Tier 1 applications are dependent on Active Directory. For example, suppose you want to verify the integrity of your Exchange Server backups. You couldn’t just restore Exchange to an isolated server and expect it to work. The mailbox databases would never mount because the required Active Directory infrastructure would not be present.

When I test my own backups, I like to start out by restoring the most recent backup of my PDC emulator (which is also a DNS server) to a virtual server. This provides me with the infrastructure required to test other backups, and it also gives me an easy way of verifying the integrity of a domain controller backup.
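
That ordering – domain controller and DNS first, then everything that depends on them – is really just a dependency graph, and a lab runbook can encode it explicitly. Here’s a hypothetical sketch of the idea; the server names are made up, and restore() would wrap whatever backup tool you actually use:

```python
# Hypothetical lab-restore runbook: restore infrastructure roles before
# the workloads that depend on them. The restore() placeholder would call
# whatever backup product is actually in use.
from graphlib import TopologicalSorter   # Python 3.9+

# Each key lists the servers that must be restored before it.
RESTORE_DEPENDENCIES = {
    "pdc-emulator-dns": set(),                # restore first: AD + DNS
    "dhcp-server": {"pdc-emulator-dns"},
    "exchange-mailbox": {"pdc-emulator-dns", "dhcp-server"},
    "file-server": {"pdc-emulator-dns"},
}

def restore(server_name: str) -> None:
    # Placeholder: kick off the real restore job for this server here.
    print(f"Restoring {server_name} to an isolated lab VM...")

if __name__ == "__main__":
    # static_order() yields servers so that dependencies always come first.
    for server in TopologicalSorter(RESTORE_DEPENDENCIES).static_order():
        restore(server)
```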


February 22, 2011  3:44 PM

IBM “Watson” for the masses on tap

Alex Barrett

Fresh off Watson’s win on Jeopardy last week, a “Watson”-style analytics cluster is within reach of the average enterprise, Ian Jarman, IBM manager for Power Systems, told SearchDataCenter.com.

“With Watson, one of the design goals was to create a system that could be readily used by other applications,” Jarman said – in the medical and legal communities, for instance. Thus, even though Watson runs sophisticated analytics software, it is based on hardware that is familiar to many data center managers: 90 of IBM’s own Power 750 servers, each with 32 POWER7 cores and between 128 GB and 512 GB of RAM. Those are the same Power systems frequently found in enterprise data centers running traditional OLTP applications like ERP systems.

“Some people have labeled Watson a supercomputer, but it really is not,” Jarman said. Rather, “each individual node is very efficient.”

That’s in contrast to previous IBM projects like Deep Blue, the 1997 supercomputer that beat chess champion Garry Kasparov. That system was also based on POWER, but it contained hundreds of specially designed chess chips as well. With its highly customized hardware configuration, “it was a one-off,” Jarman said.

But while Watson is based on “commodity” parts, Jarman did emphasize the Power system’s prowess over x86-based clusters. Watson could run on x86-based systems, but the design team estimated it would have needed three to four times as many nodes to achieve similar performance had it built Watson around Intel-based hardware. “The fact is, it really was the case that the P7 and its massive on-chip memory bandwidth was critical to its performance,” he said.
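
Taking Jarman’s figures at face value, the back-of-the-envelope totals are easy to run. The snippet below simply restates the numbers above – and the x86 comparison is IBM’s own three-to-four-times estimate, not an independent benchmark:

```python
# Aggregate figures for the Watson cluster as described above.
NODES = 90
CORES_PER_NODE = 32
RAM_PER_NODE_GB = (128, 512)   # reported range per node

total_cores = NODES * CORES_PER_NODE                                # 2,880 POWER7 cores
total_ram_tb = tuple(NODES * gb / 1024 for gb in RAM_PER_NODE_GB)   # roughly 11 TB to 45 TB

# IBM's estimate: an x86 cluster would need 3x to 4x as many nodes.
x86_nodes = (NODES * 3, NODES * 4)                                  # 270 to 360 nodes

print(total_cores, total_ram_tb, x86_nodes)
```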


February 3, 2011  3:46 PM

A leg-up on raised floors?

Stephen Bigelow

Are raised floors going the way of the Dodo? It just might happen. Data center technologies are continuing to evolve, and the traditional benefits of raised floors are quickly being overshadowed by newer and more efficient cooling techniques.

Sure, raised floors have been around for a long time – an elevated platform creates an enclosed plenum, providing important space for electrical and IT infrastructure, along with mechanical support for resources like chilled water and CRAC ductwork that cools the systems sitting on the actual floor panels.

However, raised floors are far from perfect. The plenum space is filthy, and working within tight, confined spaces can be a serious challenge for the claustrophobic among us. Structural problems like loose or poorly installed tiles can lead to collapsed panels that damage equipment and injure personnel. But those nuisances pale when compared to cooling limitations.

“As IT execs cram denser, more powerful servers into a 42U space, it becomes increasingly difficult to cool systems with under-floor forced air, even with hot-aisle/cold-aisle design best practices,” said Matt Stansberry, director of content and publications at Uptime Institute. Stansberry explained that CRAC vendors have turned to more energy-efficient cooling such as in-row and in-rack techniques; moving the cooling closer to the heat source is more effective than trying to cool the entire room and everything in it. Ultimately, the value of raised flooring is increasingly in question.

And Stansberry certainly isn’t alone. According to TechTarget’s 2010 Data Center Decisions survey, 59% of IT respondents use raised flooring in their current data center, but only 43% expect to use raised floors in a future data center. Slabbed floors are also losing favor: 33% of respondents use them in their current data center, but only 19% plan to use them in a future one. In fact, 38% of IT professionals don’t know what kind of flooring they will use in the future.

“The best way for an owner/operator to make a decision is to bring the IT department into the design process to explain hardware deployment plans and roadmaps,” Stansberry said. But what is your opinion of raised floors and data center architectures overall? How do you select the flooring approach that works best for you? What are the tradeoffs and limitations that matter to you? S-


January 26, 2011  4:12 PM

Self-service in IT workload management: Is user control dangerous?

Ryan Arsenault

The concept of IT workload automation management is nothing new – admins have been able to adjust and maneuver computing resources to optimize workloads for a while. But bringing self-service into the equation – whereby a non-IT business user can make workload processing changes for business transactions – is a whole new beast.

Solutions like BMC Software Inc.’s Control-M Self Service aim to do just that – empower business users with workload management capabilities so they can skip the wait for the IT department to fulfill workload requests. But is this user empowerment dangerous?

“The question now becomes, ‘How much power do we want to give the end user and how much automation do we really want?’” said Bill Kleyman, director of technology at World Wide Fittings Inc., a manufacturer of steel hydraulic tube and pipe fittings. “There will be instances where giving a user control over a given workload may be in the best interest of the company.”

Examples include financial applications whose workloads may require attention several times a day from both the finance and IT departments, according to Kleyman. With a self-service solution, end users can modify those workloads themselves – much the way a bank kiosk works – minimizing the wait on IT specialists, letting small changes be made at their own convenience and saving the company a good deal of money in the process.

Even here, though, monitoring and some control on the side of IT seems to be the secret sauce.

“Proper security settings need to be in place so that the user doesn’t ‘accidentally’ break something,” continued Kleyman. “On that note, empowering the end-user is a good thing, but to a degree. Since a typical PC user doesn’t fully comprehend the intricate process of allocating computing resources, giving them the ability will have to be a per-case basis.”

If not, opening permissions for, say, a marketing person to make changes to finance workloads could spell disaster, according to Kleyman. In addition, policy-based restrictions need to be in place: with an incorrect workload authorization, the door is open for users to accidentally click the wrong button or perform an irreversible action. BMC’s offering has restrictions that let users perform only the actions for which they are authorized. And it’s these controls – a safeguard of sorts for the IT side – that put at least one customer at ease about using self-service workload management.
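
BMC doesn’t detail Control-M Self Service’s internals here, but the control Kleyman describes is a familiar pattern: check the user’s role against an explicit list of permitted actions per workload before doing anything. The sketch below is a generic, hypothetical illustration of that idea – the roles, workloads and actions are invented, and it is not BMC’s implementation or API:

```python
# Hypothetical role-based check for self-service workload actions.
# Roles, workloads and actions are made up for illustration.
PERMISSIONS = {
    # role: {workload: {allowed actions}}
    "finance-user": {"quarterly-close": {"hold", "release", "rerun"}},
    "marketing-user": {"campaign-report": {"rerun"}},
    "it-operator": {"*": {"hold", "release", "rerun", "kill", "edit"}},
}

def is_authorized(role: str, workload: str, action: str) -> bool:
    """Return True only if the role is explicitly allowed to run `action`
    on `workload` (or on all workloads via the '*' wildcard)."""
    grants = PERMISSIONS.get(role, {})
    allowed = grants.get(workload, set()) | grants.get("*", set())
    return action in allowed

if __name__ == "__main__":
    print(is_authorized("finance-user", "quarterly-close", "rerun"))   # True
    print(is_authorized("marketing-user", "quarterly-close", "rerun")) # False: wrong department
    print(is_authorized("finance-user", "quarterly-close", "kill"))    # False: reserved for IT
```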

ConAgra Foods Inc., a BMC Control-M customer since 2005, plans to implement the product’s self-service version in the next 12 to 18 months.

John Murphy, an IT manager with ConAgra Foods, said the company would keep the creation of new jobs in-house and allow business users only to initiate pre-existing ones. Beyond those internal mandates, Murphy said the product itself provides control by being role-based – ConAgra would be able to tailor the job menu to the individual business user, limiting any “fallout” from a mistake.

“We like the control to be able to say, ‘these options are low-impact,’” said Murphy, who noted that continuous education of the business users would also be vital, even with aids within Control-M Self Service that show non-IT users the linkages between jobs and provide a visual “map.”

Do you feel comfortable with non-IT users being left to workload management tasks, or do you think that with the controls mentioned, it could work? Sound off in the comments.


January 25, 2011  3:59 PM

Consolidation stressing you out?

Stephen Bigelow

Server consolidation is a great idea on paper – you use virtualization software to host multiple virtual machines on the same physical host, so you improve hardware utilization and buy fewer servers, right?

Well…not exactly.

The problem is that many organizations just vacuum up the computing resources that consolidation is supposed to save. Everybody is clamoring for more resources. Test and dev needs to prove out some new builds, accounting wants more resources for the year-end tax season, creative needs a new version of AutoCAD, and there are 20 new web servers that have to come online for the holidays. Just take a few minutes and provision the new servers; no problem.

But wait…before you know it, the servers are jammed with more VMs than anyone ever expected, and they never seem to go away. Sure, organizations can shell out cash for more servers, but the nasty little cycle just starts all over again. The space savings, the power savings, the capex savings; it all goes away. And it gets worse. All those VMs need management time and storage/backup/DR protection. I swear it’s enough to make me move to Nebraska and take up chicken farming.

So how do you handle this dilemma? Is it just a matter of putting policies and procedures into place to justify new VMs? Do you use VM lifecycle management tools to handle processes automatically? Or do you just fork over the cash for more servers and hope for the best? What tactics are working for you, and which tactics are not?
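
One concrete version of the lifecycle-management option is to tag every VM with an owner and an expiration date at provisioning time, then sweep for stragglers. The sketch below is a hypothetical policy check against made-up inventory data – it isn’t tied to any particular hypervisor API:

```python
# Hypothetical VM-sprawl sweep: flag VMs whose expiration date has passed
# so someone has to justify (or reclaim) them. Inventory data is invented.
from datetime import date

INVENTORY = [
    {"name": "test-build-07", "owner": "dev", "expires": date(2011, 1, 1)},
    {"name": "tax-batch-02", "owner": "accounting", "expires": date(2011, 4, 30)},
    {"name": "holiday-web-14", "owner": "ecommerce", "expires": date(2010, 12, 31)},
]

def expired_vms(inventory, today=None):
    """Return VMs past their expiration date, oldest first."""
    today = today or date.today()
    stale = [vm for vm in inventory if vm["expires"] < today]
    return sorted(stale, key=lambda vm: vm["expires"])

if __name__ == "__main__":
    for vm in expired_vms(INVENTORY, today=date(2011, 1, 25)):
        print(f"{vm['name']} (owner: {vm['owner']}) expired {vm['expires']}")
```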

S-

