Server Farming

ARCHIVED. Please visit our new blog at: http://itknowledgeexchange.techtarget.com/data-center/


December 10, 2010  10:22 PM

Gartner: rookie virtualization mistakes to avoid



Posted by: Alex Barrett
Gartner Data Center Conference, mistakes, pitfalls, problems, Virtualization, VMware

What are the most common mistakes IT managers make when doing a server virtualization project? There are many, and they’re easily avoidable, said a gaggle of Gartner analysts at the Gartner Data Center Conference in Las Vegas this week.

Most virtualization goofs center around lack of planning and forethought, they said.

For example, on the storage front, perhaps the biggest mistake is to start a virtualization project without already having a SAN or shared storage in place, said Robert Passmore, a Gartner research vice president. “They get 75% of the way through a virtualization project and they realize they need a SAN, but by that point they don’t have the budget,” he said. “It’s a real disaster.”

Similarly, most virtualization newbies don’t anticipate the increase in demand that virtualization will place on their systems, nor the speed at which growth will occur.

“Virtualization introduces speed, and most processes aren’t ready for speed,” said Tom Bittman, another Gartner research VP. And because virtualization tends to remove artificial obstacles to provisioning new workloads, demand for services tends to double in highly virtualized environments, he said. To avoid being overwhelmed, newcomers to virtualization should consider implementing lifecycle management, as well as chargeback or showback.
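Showback, at its simplest, just attributes a cost to each group's virtual machine footprint and reports it back without actually billing anyone. Here is a minimal sketch of that idea, with entirely hypothetical rates and inventory:

```python
# Minimal showback sketch: attribute a monthly cost to each team's VMs using
# flat, hypothetical rates per vCPU and per GB of RAM, then report it back.
RATE_PER_VCPU = 25.0    # hypothetical $/vCPU/month
RATE_PER_GB_RAM = 10.0  # hypothetical $/GB of RAM/month

vms = [
    {"team": "marketing", "vcpus": 2, "ram_gb": 4},
    {"team": "marketing", "vcpus": 4, "ram_gb": 8},
    {"team": "finance",   "vcpus": 8, "ram_gb": 32},
]

costs = {}
for vm in vms:
    monthly = vm["vcpus"] * RATE_PER_VCPU + vm["ram_gb"] * RATE_PER_GB_RAM
    costs[vm["team"]] = costs.get(vm["team"], 0.0) + monthly

for team, total in sorted(costs.items()):
    print(f"{team}: ${total:,.2f}/month")
```

Chargeback works the same way, except the reported figure actually hits the group's budget.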

Other rookie mistakes involve failing to treat the virtualization layer with enough gravitas.

In their enthusiasm for VMware, many IT managers make the mistake of moving too quickly to the latest and greatest release of the platform, said Passmore, oftentimes before ecosystem components such as backup and management software are available.

And security of the virtualization layer is often overlooked, added Neil MacDonald, vice president and distinguished analyst. “These issues are overlooked because people say ‘nothing’s different,’ when really, a lot is different,” he said.

To avoid security problems, MacDonald suggested IT managers elevate virtualization to the same layer as the operating system. “Treat it like an OS,” he said, with all the attending hardening, patch management and compliance processes.

Finally, on the desktop front, an overarching mistake is failing to fundamentally rethink desktop support, said Mark Margevicius, vice president and research director. When it comes to the desktop, most organizations are built around a distributed device, he said, but “everything changes when you virtualize the desktop,” including support processes, refresh rates and budget allocation, to name a few. Without thinking desktop virtualization through up front, “you almost have more problems than the problems you solved with it.”

December 10, 2010  2:31 PM

Gartner: Converged infrastructure push will intensify, hide your wallet



Posted by: Matt Stansberry
converged data center infrastructure, DataCenter

This week at the Gartner Data Center conference in Las Vegas, analysts warned IT managers to avoid inadvertently backing into any vendor’s integrated stack.

For converged infrastructure platforms, “it’s too early, there are too many unknowns, and it will not serve you well over the long haul,” said Gartner VP David Cappuccio in the Monday morning keynote.

Gartner’s Jeff Hewitt dubbed the converged infrastructure players The Big Five: Cisco, Dell, HP, IBM and Oracle. These vendors already have a piece of your data center budget, and are looking for more, acquiring companies or partnering to create an integrated stack.

Some IT managers are calling these converged systems the new mainframes: servers, storage, networking, operating systems, hypervisor, management tools and middleware all pre-integrated, and each layer of the stack is optimized with the vendor’s secret sauce.

Examples include the Cisco UCS, Oracle’s Exadata and Exalogic machines, HP’s Bladesystem Matrix and VCE’s vBlock.

These machines can put up impressive performance numbers, and make vendor management simpler, but analysts and IT managers are wary of vendor lock-in.

“When you reduce your number of suppliers, you inherently increase risk and make it harder to switch providers, and you reduce your negotiation strength,” Hewitt said.

Gartner asked attendees: Which of the Big Five are you likely to buy from in 2011 that you’re not buying from today?

Cisco dominated with 33%, followed by HP at 18%, IBM at 17%, Dell at 9%, and Oracle at 8%. Also, 15% said they didn’t plan to buy anything new from the Big Five.

“Cisco offering its network customers UCS is as easy as McDonald’s offering fries with a hamburger,” Hewitt said.


December 9, 2010  6:30 PM

Data center cost reduction strategies from Gartner



Posted by: Matt Stansberry
cost analysis, DataCenter, Gartner

This week at the Gartner Data Center conference, at a session on reducing data center costs, IT managers filled both the primary and overflow rooms. Gartner analysts John Enck and Raymond Paquet laid out some tips for shrinking data center spending.

But you can’t manage what you don’t measure. According to an audience poll, about 20% of the audience had no IT cost accounting in place at all, and 32% only tracked costs on physical assets. Over half the attendees were basically flying blind on their IT budgets.

Gartner said data center managers need to start automating processes. Around 38% of IT costs are personnel. If you want to cut costs, you need to cut people, according to Gartner.

Data center managers should also take a hard look at IT asset management to see what costs can be eliminated. “This is not one of those tools you will spend money on and don’t see ROI, you should see cost savings,” Enck said.

Around 80% of the audience had some kind of asset management tool in place. The rest had no systems in place or did manual inventory.

Gartner also said IT shops should learn a few things from the cloud computing providers, organizations running IT on such small margins that they need to be as efficient with cash as possible.

-Storage will grow 800% in five years, so invest in the cheapest storage you can get away with.
-Buy the cheapest x86 machines you can find, exclusively rack-based, not blades: 1U, skinless x86 servers, last year’s model with the cost stripped out.
-Cloud providers primarily run Linux and open source systems management tools.
-Power and cooling infrastructure are extremely efficient.
-Delay purchasing the latest and greatest x86 processors. For the first six months, a new Intel or AMD server carries a premium price. Do you need that performance? Wait six months? Price drop. Another six months? Another drop.

A data center manager who works for a public entity in Southern California said his tactic for reducing data center operating costs is to put as much as possible into capital expenses. When he buys a server, he pays for five years of maintenance support up front, turning what would normally be considered Op-Ex into Cap-Ex.
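With hypothetical numbers, the arithmetic of that tactic looks something like this: prepaying the support contract folds what would otherwise be recurring Op-Ex into the capitalized purchase price, which is then depreciated over the server's life.

```python
# Hypothetical figures only; the point is where the money lands, not the amounts.
server_price = 5000.0        # purchase price
annual_maintenance = 800.0   # per-year support contract
years = 5

capex = server_price + annual_maintenance * years  # everything paid, and capitalized, up front
annual_depreciation = capex / years                # straight-line depreciation, for illustration

print(f"Capitalized cost (Cap-Ex): ${capex:,.0f}")
print(f"Annual depreciation expense: ${annual_depreciation:,.0f}")
print("Recurring maintenance Op-Ex: $0")
```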

For more on data center cost reduction, check out our recent article on IT Cost analysis tools.


November 24, 2010  2:56 PM

Fujitsu turns blade server on its head



Posted by: Alex Barrett
Blade servers, BX400, Fujitsu

[Image: Fujitsu blade server on wheels]

In a twist on blade server design, Fujitsu announced a new blade server enclosure today that can, in a pinch, be turned 90 degrees and rolled under a desk.

The new PRIMERGY BX400 is the little brother to the 10U BX900 and is designed for two scenarios. Like most blade enclosures, the BX400 can be deployed in a traditional data center. Alternatively, for organizations with limited space and a low tolerance for noise, the BX400 can be equipped with a floor stand kit that tips it on its side, and then placed in an office or retail location.

“It’s extremely easy to use in existing environments, with no power or cooling modifications,” said Manuel Martull, Fujitsu senior director of marketing.

Characteristics of the BX400 that make it suitable outside of the data center include low power consumption and low noise emissions (45 decibels) relative to other blades. The floor-standing variant also comes with a top-mount LCD module and a lockable front door.

In other respects, the BX400 is a relatively normal blade enclosure designed for shops that want to centralize and simplify their IT resources. The 6U enclosure can be equipped with a total of eight two-socket Intel Xeon 5500 and 5600 server blades and ten-drive SX960 or SX940 storage blades. In addition, customers can opt for a centralized shared storage option in PRIMERGY VSX, a version of NetApp’s Data ONTAP-v storage software that runs as a virtual appliance on a dedicated blade.

It’s that combination of centralized shared storage and suitability for non-data center environments that Martull said Fujitsu hopes will help it penetrate small to medium businesses. He said these so-called SMBs represent the biggest growth area for blades.

But if a tree falls in a forest and no one is around to hear it, does it make a noise? Fujitsu’s presence in North America, at least, is tiny, Martull admitted, with about 1% of market share.

“Our challenge in North America has been brand awareness, and competing against other strong American brands,” he said. In the coming year, look to Fujitsu to grow its North American sales team, and expand its midmarket channel presence through distributors like Tech Data and CDW.


November 10, 2010  10:41 PM

Cfengine rolls out Nova 2.0, pushes users toward commercial offering



Posted by: Matt Stansberry
cfengine, DataCenter, open source, Systems Management

The commercial entity behind Cfengine, a popular open source data center automation and configuration tool, recently rolled out a new version of its proprietary offering, Cfengine Nova 2.0.

Cfengine has been around since the early 1990s and boasts some large customers, including eBay and Google. The company is now trying to capitalize on that user base by converting users to the paid version.

Free open source systems management tools typically aren’t as automated or easy to understand as their commercial counterparts, so companies need to have fairly skilled systems admins on staff to make them work. Also, many of the proprietary versions of these tools include virtualization-specific features not found in the free versions.

Open source systems management vendors have a lot of levers they can use to convince a data center manager to buy the proprietary or supported version of their software. But you might not think geography would factor into that decision-making process.

That is, unless you’re James Genus, System Administrator at Bigelow Laboratory for Ocean Sciences, located in coastal Maine.

The lab’s primary research focus is biological productivity in the world’s oceans, and it supports genomic sequencing of single-cell organisms, which requires some hefty compute power.

“The amount of data the scientists are producing is staggering,” Genus said. “The systems run almost constantly. As this increases, we need to make sure the environment is as stable as possible.”

Genus has been with the lab nearly a decade, and has used the open source version of CFengine for eight years. “I inherited an array of IT platforms and it was a headache,” Genus said. “If I had not found CFengine, I would not be in IT. I want [the scientists] to be able to come in, sit down and work.”

Bigelow recently moved from the open source version of CFengine to its commercial counterpart, CFengine Nova. And according to Genus, the lab’s location was a big factor in that decision.

Researchers are drawn to remote locations like Maine’s Boothbay Harbor, to get away from society and bureaucracy, to get things done, Genus said. But operating in a very isolated location has its risks. Power can go out for three weeks after an ice storm, and self-sufficiency is important.

“There are not many IT resources in Maine, as there are in New York, Massachusetts or San Francisco. Nobody I’ve talked to in this state is using CFengine,” he said. “It baffled me, even going to Red Hat training sessions in Boston, I’d only met a few people who used it, a few people who understood it. Using CFengine Nova, it’s easier for people to wrap their heads around and put it into action.”

Bigelow is largely a Red Hat Linux shop, and Genus said his team uses some of the free open source tools like Satellite and Spacewalk, “but they weren’t up to par with CFengine. They don’t do the proactive fixing. We use CFengine to make sure services are up and running, make sure services are configured correctly. If something breaks, they recover automatically.”
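CFengine's own policy language is its own thing, but the convergent, self-repairing behavior Genus describes can be sketched in a few lines of Python (the service names and restart commands here are hypothetical examples, not Bigelow's configuration):

```python
import subprocess

# Desired state: these services should always be running.
DESIRED_SERVICES = ["sshd", "httpd"]

def is_running(service):
    # 'pgrep -x' exits 0 only if a process with exactly this name exists.
    return subprocess.run(["pgrep", "-x", service],
                          capture_output=True).returncode == 0

def converge():
    """Compare actual state to desired state and repair the difference."""
    for service in DESIRED_SERVICES:
        if not is_running(service):
            subprocess.run(["service", service, "start"])  # proactive fix

if __name__ == "__main__":
    converge()  # a real agent would run this from cron every few minutes
```

Run repeatedly, a loop like this converges the system back to its desired state after a failure, which is the property Genus is relying on.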

Genus said he’s heard about CFengine’s competitors in the open source space, Puppet and Chef, but has not looked at them in detail. He said he’s happy about the tool’s ability to scale well.

Which is a good thing, since the lab’s environment is about to get a lot bigger in the next couple of years. Genus said Bigelow has plans to build a new mobile data center pod: 22 racks at 50U each, which once virtualized will scale to 3,000 potential nodes.

Check out our Open Source Systems Management tool slideshow for more info.


November 3, 2010  3:52 PM

Gartner: Data storage growth is the top challenge for IT organizations



Posted by: NicoleH
Data Center, data center challenges, Gartner, Nicole Harding, SearchDataCenter.com

According to Gartner’s recent survey on data center infrastructure challenges and trends, the top three challenges for large enterprises are data growth, followed by system performance and scalability, and network congestion and connectivity architecture.

Continued »


October 28, 2010  3:33 PM

IT execs wave $50 billion budget purse to drive cloud computing standards



Posted by: Matt Stansberry
cloud computing, DataCenter, Virtualization

Intel and a group of high profile data center managers are banding together to drive cloud computing standards and interoperability. The organization, called the Open Data Center Alliance, boasts a handful of really big names, including BMW, Shell, JP Morgan Chase, and Lockheed Martin.

Andrew Feig, executive director in the infrastructure group at global financial services firm UBS, is on the new organization’s steering committee. He said his motivation to participate is to get better utilization out of IT infrastructure.

“Every six months we’re getting more powerful servers and without a cloud model these efficiency gains will go out the window,” Feig said. “If you have one app running on a four year old server, and move it to a brand new server, your utilization goes from 10% to 2% on the new machine. You have to virtualize, but that’s only the first step.

“The cloud is that next evolution, completely abstracting the physical hardware from what’s running on it,” Feig said. “Virtualization is an enabler of that, but you need the intelligence to get the most out of the compute state. Virtualization is a halfway point. Getting virtualization working wasn’t easy, and getting cloud working is even more complicated.”
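Feig’s utilization figures are straightforward arithmetic. A quick illustration, assuming (hypothetically) that the new server has about five times the capacity of the old one:

```python
old_capacity = 1.0                 # normalize the four-year-old server's capacity
new_capacity = 5.0 * old_capacity  # assume the new box is ~5x more powerful (hypothetical ratio)
workload = 0.10 * old_capacity     # one app consuming 10% of the old server

print(f"Utilization on the old server: {workload / old_capacity:.0%}")  # 10%
print(f"Utilization on the new server: {workload / new_capacity:.0%}")  # 2%
```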

Feig said UBS is currently building out its internal private cloud capabilities. “But we don’t want a big integration curve if we want to look outside,” he said. “We’re looking for more standards to allow us to consume [public cloud]. Right now it’s a difficult meal to eat. Early adopters get locked in, and there is big pain switching later.

“There are a lot of big financial companies already using public cloud. It’s not 20 years out,” Feig said. “If someone is doing a marketing launch or needs to rapidly scale up capacity for something on the Internet, why wouldn’t you do it on a public cloud?”

Cloud vendor interoperability is one of the major agenda items for the Open Data Center Alliance. Developing a standard to switch from one cloud vendor to another seamlessly isn’t going to be easy. But money talks.

“The buying power of the membership is front and center — $50 billion and growing,” Feig said. “This isn’t going to take seven years to get a standard ratified. You’re either compliant or you’re not. I want to have an erector set of choices.”

Intel is the organizing force behind the group as a non-voting technical advisor. The Open Data Center Alliance will offer its first public roadmap in Q1 2011.


October 20, 2010  3:12 PM

Converged infrastructure certifications come online



Posted by: badarrow
Barbara Darrow, Cisco, converged data center infrastructure, data center servers, Hewlett Packard, IT certification, training

With Hewlett-Packard, Cisco and others preaching the benefits of converged data center infrastructure (packaging up compute, networking and storage together), certifications for IT pros had to follow.

This week, HP announced what it claims is the first program dedicated to training IT pros specifically for running and maintaining converged infrastructure. The courses will be delivered at more than 150 learning centers, the company said. The HP ExpertONE converged infrastructure certification program requires “both business technology and process competencies and [is] the first architectural level certification for converged infrastructure,” said Rebekah Harvey, HP’s director of learning product management. Continued »


October 15, 2010  5:16 PM

BMC improves inline software upgrade process, rolls out Control-M v7



Posted by: Matt Stansberry
BMC, DataCenter, Systems Management

BMC recently released a new version of its job scheduler, Control-M 7, with new features to help manage IT workloads in the cloud. The software has come a long way from prioritizing and scheduling batch computing jobs on the mainframe a few decades ago.

Modern job scheduling tools run in the distributed, virtualized and now cloud computing environments. And vendors started rebranding job schedulers as “workload automation” to make it sound more exciting.

John Strege, director of capacity and enterprise software at the Chicago Board Options Exchange, said his organization has used Control-M for over a decade for mainframe batch processing. Today he’s using Control-M to start up and shut down the Unix boxes in CBOE’s QA certification tier, which is basically a mirror of the exchange’s production trading system.

Strege said CBOE is using Control-M version 6.4. He’s looked at Control-M 7, and said his support services group is anxious to use the new features. Strege expects to upgrade in the January timeframe. “We try to stay current with our software, but we don’t usually deploy something in the first few months it is released.”

BMC execs said that for this Control-M release, the company worked hard to streamline the inline install and upgrade process. According to BMC, users report that upgrade time and effort are reduced by about 90%.

“In simple terms, the new version pulls the data out of the old version without any downtime,” said Control-M senior product manager Saar Schwartz. “What this means is that users can continue to work on the old release while the upgrade is taking place, with little to no downtime.”
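BMC hasn’t spelled out the mechanics here, but the general pattern Schwartz describes, a new version pulling data out of the old one while the old one keeps running, can be sketched roughly as follows (job and calendar names are purely illustrative, not Control-M data structures):

```python
import copy

# Illustrative sketch of a side-by-side "pull" upgrade, not BMC's implementation:
# the new instance copies definitions out of the running old instance, so users
# keep working on the old release until cutover.
old_v64 = {
    "jobs":      {"nightly_backup": {"schedule": "02:00"}},
    "calendars": {"trading_days": ["Mon", "Tue", "Wed", "Thu", "Fri"]},
}
new_v7 = {"jobs": {}, "calendars": {}}

def pull_upgrade(old, new):
    """Read every section from the old instance; never stop or modify it."""
    for section, entries in old.items():
        new[section] = copy.deepcopy(entries)
    return new

new_v7 = pull_upgrade(old_v64, new_v7)
active = new_v7  # cutover happens only after the copy completes
```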

This is good news for users who have complained in the past about systems management software updates from the Big Four spanning months or even years, and requiring an army of professional services people to get the job done.

“IT management software is notoriously difficult to install,” said Michael Coté, an analyst with RedMonk. “No one wants to manage their management software.”

Strege said the simplified update process could be a driver for CBOE to upgrade sooner. But this update is going to be a bit more complex than previous projects.

“We’ve been running Control-M for Solaris 10 on SPARC servers, and we’ll probably go to Linux on x86 for the next update,” Strege said. “We’ve been planning to go with x86 servers for a lot of our applications for over a year now. And for some of them we’re sticking with Solaris on x86. But in the case of a lot of BMC products, they don’t support Solaris on x86, so Linux is our only choice for a lot of these packages.

“An OS change makes upgrade more difficult,” Strege continued. “We haven’t done it yet with BMC products, but we’ve done it with a couple of our other products. If the software migration tools are good, [it] shouldn’t make a big difference.”


October 6, 2010  5:40 PM

Data center services stocks take it on the chin



Posted by: badarrow
Akamai, Barbara Darrow, cloud computing, co-location, Data Center, Equinix, Savvis, Terremark

Comments from Equinix and Savvis spooked Wall Street late Tuesday and this morning, body-slamming those stocks as well as shares of companies selling similar services.

Continued »

