In this quick Q&A, analyst and SearchServerVirtualization.com blade server columnist Barb Goldworm offers her views on the news from vendors and users at last week’s Server Blade Summit, which she chaired.
SSV: How big a deterrent to buying blade servers is power and cooling, based on your observations at the Summit? What cool things are being done about it?
Goldworm: Power, cooling and space are issues for most users, even when they're just trying to expand their rack-n-stacks. Many attendees were there because they know they have to do SOMETHING; they can't go on as they are. Often there is a list of easy (and inexpensive) steps that can be taken before resorting to more drastic measures (like liquid cooling). Planning help is available from folks like Eaton and APC, as well as HP, IBM and others. Advances in hardware and software keep coming, with smarter power management, shutting down unneeded processors based on utilization, and so on. Processing power per watt continues to improve.
SSV: Were virtual desktops — via appliance virtualization, VDI (virtual desktop infrastructure) and other models — hotter than you thought, in terms of interest?
Goldworm: We expected virtual desktops to be a hot topic, and they were. As people get more comfortable with server virtualization and start looking at Vista on desktops, virtualization for the desktop and applications is becoming a serious topic. I view this area as a continuum, with different approaches offering benefits for different use cases (from VDI to Citrix to the new IBM workstation blade). I think we're just seeing the tip of the iceberg here.
It’s hot and users are struggling to understand how it all fits together.
SSV: Looking back, what are your overall impressions of the state of blades and virtualization coming out of the Summit?
Goldworm: People have been hearing more about blades for the past year or two, often with a lot of warnings. Many came to the Summit looking to get a better understanding of the benefits and the gotchas, and were pleasantly surprised by the progress made in the past year, particularly relative to virtualization. Many of the customers we spoke with were very excited about the benefits that blades and virtualization could bring them, and many seemed to be hearing up-to-date information for the first time (including from their own vendors, like IBM, HP and VMware).
As users and channel partners are getting more educated, we will see more and more of the marriage between blades and virtualization.
A nice forums fellow posted a whole bunch o' links to the VMware forums. Very nice. Go check it out and clicky. http://www.vmware.com/community/thread.jspa?threadID=81191
Why use blade servers when your rack servers aren’t giving you any hassles? I met up with Craig Newell at the Server Blade Summit this week, and he gave some answers to that question. I’ve put them in the list below.
Newell has more field experience working with blades than anyone else I met at the Summit. As U.S. Client Services Manager for Halian, Inc., a U.K.-based global IT services organization, he has worked on blade implementations in banking, pharmaceutical, government and other types of businesses.
The top 5 reasons to use blade servers:
1. They’re tiny. Blades conserve data center floor space better than any other server option. If your floor space is at a premium, then check out blades.
2. They’re dense. Combined with virtualization, blades give you the most compute power per square inch of any server.
3. They're easy to deploy. Today's blade server toolsets make server deployments simple. The cabling, power and much more are built into the chassis, so there's less to do when you slip the box into its slot. Add virtualization, and deployment speeds up even more.
4. They’re a good fit for lab environments. “Blades and virtual servers provide great architectures for lab, testing, and development environments,” Newell said.
5. There will be no more snakes on your plane! Those cables roping around your data center will disappear, as blades have far fewer power and network cables.
Put all these uses and benefits together. Mix well. Then, watch TCO get TKOed. Corporate processes typically drag out server deployments; blades cut that time significantly, leaving you with lower overall total cost of ownership, even though upfront costs may be higher.
Here's the big if, and, and but:
"Power and cooling concerns are real! The power consumption per square foot in a blade-based data center is significant…like 25,000 watts per chassis."
So do your homework, and evaluate cooling requirements and power consumption as part of your overall hardware deployment cost.
“Returns take numerous years due to the significant capital required within a data center environment,” Newell said. “Smaller environments may see faster returns.”
In other words, good things come to those who plan, deploy and wait.
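If you want to start that homework now, here's a back-of-the-envelope sketch in Python. Every number in it is a placeholder assumption (the utility rate, the cooling overhead, even Newell's worst-case 25,000-watt chassis figure), so swap in your own before trusting the output.

```python
# Back-of-the-envelope annual power-plus-cooling cost for one blade chassis.
# Every constant here is an assumption -- replace with your own numbers.

CHASSIS_WATTS = 25_000      # Newell's worst-case per-chassis draw
COST_PER_KWH = 0.10         # assumed utility rate, in dollars
COOLING_OVERHEAD = 0.8      # assumed: 0.8 W of cooling per 1 W of IT load
HOURS_PER_YEAR = 24 * 365

def annual_power_cost(it_watts: float) -> float:
    """Yearly electricity cost for the IT load plus its cooling overhead."""
    total_watts = it_watts * (1 + COOLING_OVERHEAD)
    kwh = total_watts / 1000 * HOURS_PER_YEAR
    return kwh * COST_PER_KWH

print(f"Est. annual power + cooling per chassis: ${annual_power_cost(CHASSIS_WATTS):,.0f}")
```

With those made-up inputs the answer lands near $39,000 a year per chassis, which is exactly why Newell says returns can take years.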
Want more info on reasons to or not to use blades? Check out these links:
Why wed blade servers to virtualization?; Barb Goldworm’s guide to blades and virtualization; Former Morgan Stanley exec praises blades; and Blade servers dominate market by 2009.
VMware kindly invited me for the tail-end of its annual Analyst Day held at the Charles Hotel in Cambridge, Mass. yesterday, where I got to see some product demos and drink a Mojito. And while I would much rather have sat in on the NDA briefings with which they regaled the analyst community, it was nevertheless an interesting visit.
The biggest thing I learned at Analyst Day was that VDI is, in fact, not doomed. A VDI demo by Karthik Balachandran, a VMware senior consultant, put all my fears of a terrible VDI end-user experience to rest. Karthik showed me that working on a Wyse thin client was really no different from typing away at my traditional "fat client" desktop. Granted, I didn't try to download any video, as the thin client didn't have an Internet connection. Plus, he confirmed that video and VDI, for now, wouldn't have worked so well, since we'd have to transmit all that video over Ethernet. But for simple office applications, I can attest that the VDI user experience is more than adequate.
Karthik assured me that VDI will work well with network latency of up to 150 milliseconds, and told me that there are several VDI proofs of concept running coast to coast, and even a couple of companies doing intercontinental VDI!
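That 150-millisecond figure is easy enough to sanity-check yourself before piloting VDI over a WAN. Here's a rough Python sketch that times TCP handshakes to your broker or desktop host; the hostname is hypothetical, and a handshake only approximates round-trip latency, but it will tell you whether you're in the right ballpark.

```python
# Rough check: is round-trip latency to the VDI host under ~150 ms?
# The hostname below is hypothetical -- point it at your own broker.
import socket
import time

def tcp_rtt_ms(host: str, port: int = 443, samples: int = 5) -> float:
    """Average time to complete a TCP handshake, in milliseconds."""
    total = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2.0):
            pass  # connection established; that's all we need to time
        total += (time.perf_counter() - start) * 1000
    return total / samples

rtt = tcp_rtt_ms("vdi-broker.example.com")  # hypothetical host
print(f"avg RTT ~{rtt:.0f} ms: {'should be fine' if rtt <= 150 else 'too slow for VDI'}")
```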
Another cool thing Karthik showed me: by suspending my desktop at the end of the day rather than logging off, I could go home, log in again and find my desktop in the exact same state, with all my applications running exactly as when I left them. Intellectually, I knew this was possible, but it had never really dawned on me how cool it was until I saw it. Kind of like Firefox's session manager, only better.
Then there's the issue of mobility. Today, VDI is a non-starter for anyone who travels and wants to take their desktop with them, like, oh, 99% of "knowledge workers." Rest assured, VMware is on the case. Karthik and I talked, hypothetically of course, about the eventual integration of VDI with VMware ACE, whose Pocket ACE feature lets you save your VM to an external USB drive. Mobility problem solved.
Last but not least, Karthik showed me some features of VMware's own connection broker, VDM. He showed me how you could set up "sticky" or dynamic desktops, and do things like assign desktop leases or dynamically grow your desktop pool. Cool stuff. VDM is currently in its 1.0 incarnation, and 2.0 will ship by the end of the year, complete with functionality the company picked up in the Propero acquisition. Which features specifically will come from Propero, he couldn't tell me. Oh well, something to look forward to.
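To make the "sticky" versus dynamic distinction concrete, here's a toy Python sketch of what a connection broker does conceptually. Everything in it is made up for illustration; it is emphatically not VDM's actual API or architecture.

```python
# Toy connection broker: "sticky" desktops remember their user;
# dynamic desktops are handed out from a shared pool.
# All names and behavior here are illustrative, not VDM's real design.
from typing import Optional

class VM:
    def __init__(self, name: str):
        self.name = name
        self.user: Optional[str] = None

class Broker:
    def __init__(self, pool_size: int = 2):
        self.pool = [VM(f"vm-{i:02d}") for i in range(pool_size)]
        self.sticky: dict[str, VM] = {}  # user -> dedicated desktop

    def connect(self, user: str, sticky: bool = False) -> VM:
        if sticky and user in self.sticky:     # returning user gets the same VM
            return self.sticky[user]
        vm = next((v for v in self.pool if v.user is None), None)
        if vm is None:                         # pool exhausted: grow it dynamically
            vm = VM(f"vm-{len(self.pool):02d}")
            self.pool.append(vm)
        vm.user = user
        if sticky:
            self.sticky[user] = vm
        return vm

broker = Broker()
print(broker.connect("alice", sticky=True).name)  # alice keeps this desktop
print(broker.connect("bob").name)                 # bob gets whatever's free
```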
TransUnion Interactive, a direct-to-consumer credit reporting bureau, is running a proof of concept (POC) project to determine if virtualization could deliver server and space reductions and better TCO than its current infrastructure.
I met Daniel Hahn, TransUnion Interactive (TI) associate director of technical services, today at the Server Blade Summit in Anaheim, Calif. He's heading the POC project and described it during a user panel I moderated.
TI has too many servers that are costing too much money, Hahn said: over 200 servers and operating system licenses. Of the latter, 80% are Red Hat Enterprise Linux and 20% Windows Server, with about 15% of each OS running in production.
“We’re not replacing any servers at this point. We’re taking two repurposed servers and putting virtualization software on them and chopping them up into virtual servers.”
The POC architecture was built on VMware Server 2.0 and IBM System x3455 servers, each with two dual-core 1.8 GHz Opteron CPUs and 4 GB of memory. Red Hat Enterprise Linux ES 3 is the host OS.
The POC team isn’t working with production servers yet, just Web and application servers. Databases, for instance, are not involved.
The results, so far, are dramatic.
"From a corporate standpoint, we're saving 65% just on Red Hat licensing," Hahn said. There are also substantial savings on licensing for TI's two main application platforms, BEA and Resin.
TCO is yet to be determined, but Hahn has already seen systems management cost reductions of 30% in the POC project.
“We have a staff of people that has to physically reinstall servers constantly. We’ve reduced that to a couple of mouse clicks.”
Once this project is completed, Hahn is sure that a virtualization rollout will occur, probably using VMware ESX as the platform.
(Ironically, the POC project wasn’t done on blade servers. So, tell me your blades and virtualization stories, please, at email@example.com)
Centralizing business desktops using virtualization technologies is a good idea, but its time hasn’t come, according to Mike Neil, Microsoft general manager for virtualization strategy.
I had a one-on-one interview with Mike following his keynote today at the Server Blade Summit in Anaheim, Calif. We covered a lot of topics, but this summation of the current state of virtual desktop technologies stood out for me for this reason: I’ve heard a slew of vendors touting their virtual desktop technologies here, and I see too many choices and too few that are proven to work and save money for IT organizations.
Here’s what Mike Neil had to say on this subject:
“We see a couple of different use scenarios emerging right now, based on Terminal Server and virtual machines (VMs).
"Obviously, today Terminal Server is widely used for centralizing applications in the data center and remotely presenting and accessing them, and we've done a lot of work in Longhorn Server to enhance capabilities like access from outside the firewall. Terminal Server, via our partnership with Citrix, is behind most of the centralized desktop deployments today.
“The emerging technology that’s interesting is using virtual machines with centralized enterprise desktop licensing to enable that. We see three scenarios in the virtual machine side emerging.
“One is that you, the user, get a virtual machine on a specific server. You connect to it from a thin or rich client environment. A lot of people are doing that today.
"The other is using a connection broker. Your desktop connects to the connection broker, which spins up a virtual machine within the pool of physical servers and lets you connect to applications. That environment can be more dynamic.
“A third approach, which is more on the edge, is a model where you connect to the connection broker, and it creates from scratch a VM for you. Then, you use technology like SoftGrid to stream applications down into that environment.
"All of those right now I would characterize as cutting-edge scenarios. Any company doing this has the primary goals of cutting the operational costs of its desktop systems and dealing with compliance issues: putting data into the data center, where it can be controlled, backed up and put into disaster recovery scenarios.
"There isn't anything that has come out that can claim to be the ultimate architecture, with a spreadsheet that says, 'Here are the multi-year cost savings associated with this scenario.' I would advise companies to use great caution, or wait and see which technologies are proven."
Virtualization is the primary reason IT shops are buying and using blade servers today, and that's a big impetus, but there's got to be more, said Andrew Kutz, Burton Group analyst and SearchServerVirtualization.com site expert, during the TechTarget Ask the Expert session at the Server Blade Summit last night.
That’s the gist of his response to an audience question about what’s needed to push blade server adoption, but Andrew’s actual words say it better:
"Right now, virtualization is the overshadowing element in the blade market. And I think — just like in "Battlestar Galactica" where the son has to step outside of the shadow of his father — I think that blades have to step outside the shadow of virtualization. Blades have to define themselves outside of the realm of virtualization, because virtualization is the rock star of the moment. Blades are Keith Richards to virtualization's Mick Jagger, and Keith Richards, man, is awesome."
I’m with Andrew all the way on this one.
I know they're here, and today I'm going to meet some in person. For sure. I'm talking about blade server users. I'm at the Server Blade Summit in Anaheim, Calif. On opening day, yesterday, I scanned many a badge and identified and talked to 13 IT managers. Not one of them has a single blade server in their data centers. They're interested, and they're learning lots from the sessions here, but their hands-on experience is nil. Their big challenge, they say, is convincing the budgeters that blades are worth the extra upfront dollars, and soothing worries about blades' reputations as power-sucking hot boxes.
Power and cooling issues are the number-one barrier to blade adoption, according to a new survey we've conducted. In a TechTarget Ask the Expert session here, I presented the results of that survey (conducted by TechTarget's Data Center Media Group), and my panelists agreed that P&C was a big issue in the past but that new blades are much, much better. My panelists were Focus Consulting analysts Barb Goldworm and Ann Skamarock and Burton Group analyst Andrew Kutz. Barb and Andrew are resident experts on SearchServerVirtualization.com. Barb has tackled the P&C issue in columns for our site.
You’ll be hearing more about the Ask the Expert session, which was rowdy and informative and well-attended, but to the point of this post: I couldn’t find any (non-vendor) users in the audience who were using blades.
Today, I’m moderating a panel discussion called “User Experiences with Blades and Virtualization”, so I know I’m going to talk to ACTUAL BLADE USERS. They’re on the panel.
Meanwhile, if you’re using blades now, let me know about it (firstname.lastname@example.org). I’m tired of scanning badges.
If it's not one thing, it's another. Today, it's data centers' power and cooling hassles. Tomorrow, according to researcher Jerry Murphy, "your next big problem will be managing your service-oriented architectures (SOAs)." Murphy, senior vice president and service director for Robert Frances Group, thinks out-of-control SOAs could be a bigger problem than power and cooling.
Murphy advises IT managers to get control of their SOAs today, or else "SOAs will have gaping huge security holes through which you could drive a truck."
Here at the Server Blade Summit in Anaheim, Calif., the main talking points are blades and virtualization; but in a session on proactive data center management, Murphy looked beyond the immediate problems of power and cooling on blades to a future when, he predicts, a service glut will cause problems similar to those caused by server proliferation.
“Services are even harder to track than servers. They’re dynamic and flexible, but also complex. It’s harder to find out why a service fails.”
SOAs are a blessing and a curse, according to Murphy. They offer reusable code and better integration, in addition to agility in moving services as needed. But that agility makes it hard to predict where services will be moved to.
“SOA design is focused from the top-down, and infrastructure is built from the bottom up,” Murphy said. To gain better control, IT managers need to take SOAs’ top-down design and integrate it with bottom-up modeling tools. Also, he said to be conservative on capacity requirements and proactive with ongoing performance monitoring at different tiers in the infrastructure.
As seen by most IT managers, blade servers are hot little power-suckers. Cooling hassles and power costs are the main reasons why IT managers don't buy blade servers, according to a new TechTarget Data Center Media Group survey of over 250 IT professionals. In fact, most of the respondents' companies aren't using blades today. I'll be presenting that bad news and some good news from our 2007 Server Decisions Survey next week at the Server Blade Summit in Anaheim, Calif., which focuses on blades and virtualization.
While there I’ll be asking blades proponents, users and prospective users about power and cooling issues with blades. Just to give you a preview, here’s a quick look at both sides of the story.
In a recent Q&A with Focus Consulting president Barb Goldworm, I recalled that an IT manager told me he ran only half the number of blades a chassis could hold because the servers would overheat otherwise. Goldworm, author of a new book on blades and virtualization, responded, saying:
“In the earlier days of blades, cooling was a big issue, and many users ran half loaded. The past year has seen significant improvement in power and cooling efficiencies and management. In some data centers, cooling may be an issue; but, in many datacenters, there are lots of things that can be done to improve cooling and allow blades to be easily incorporated in the datacenter. In addition, chip, blade and power/cooling vendors are still working on this issue, with improvements continuing to come.”
Vendors agree with Goldworm, says TechTarget news writer Bridget Botelho, and say that blades throw off less heat than traditional rack servers.
IT managers tell us a different story. After visiting a number of data centers, SearchDataCenter.com site editor Matt Stansberry sums up users' experiences with blade servers:
“Per unit, one blade may technically throw off less heat than one rack server, but you clump them all together in a chassis, and they mess up your entire cooling strategy. It’s the same with power. Per blade unit, they demand less power, but the problem data center managers run into is that they can’t deliver all of that energy into 19 square inches.”
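Stansberry's point is easy to see with some quick arithmetic. The figures in this little sketch are all made-up round numbers, not measurements, but they show how "less power per server" and "more power per rack" can both be true at once.

```python
# Hypothetical round numbers: per-server draw vs. per-rack density.
RACK_1U_WATTS = 350        # assumed draw of one 1U rack server
BLADE_WATTS = 300          # assumed draw of one blade (less per server)
BLADES_PER_CHASSIS = 14    # assumed chassis capacity
CHASSIS_PER_RACK = 4       # assumed: four chassis fill a 42U rack
SERVERS_1U_PER_RACK = 42

rack_of_1u = SERVERS_1U_PER_RACK * RACK_1U_WATTS
rack_of_blades = CHASSIS_PER_RACK * BLADES_PER_CHASSIS * BLADE_WATTS

print(f"42 x 1U servers: {rack_of_1u:,} W")                       # 14,700 W
print(f"56 x blades:     {rack_of_blades:,} W in the same rack")  # 16,800 W
```

Each blade draws less than each 1U server, yet the fully loaded rack pulls more total power into the same footprint, which is exactly the delivery problem data center managers describe.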
Plenty of vendors, consultants, users and prospective users of blades will be sounding off on this subject next week. I’ll let you know what they say. In the meantime, what do you have to say about these hot little power suckers? Tell me so I can tell the bladesters in Anaheim the real story. My email is email@example.com.