Early adopters of blade servers, in general, didn’t give blades rave reviews. As happens with many new technologies, stories about users getting burned — in this case almost literally, as early blades ran hot, hot, hot — have circulated and made some IT directors shy away from buying them.
Well, Burton Group analyst Chris Wolf tells me that times have changed, and buyers of 2007 blade server models should have a completely different experience. I talked to Chris at this week’s TechTarget Server Virtualization Seminar in San Francisco. He says that new products have dealt with the early overheating, power, management and configuration problems.
SearchServerVirtualization.com’s resident blades expert, Barb Goldworm, has been spreading the word that old blade power, cooling and management issues are being addressed by vendors. Says Goldworm:
“Blade vendors have made great progress in the current generations of blade systems, improving power and cooling efficiencies significantly. IBM says that they have increased the efficiency of their power supplies from 65% efficiency in their 1U servers to 91% efficiency in BladeCenter H. (65% efficiency means that they convert 65% of the power at the wall and the rest goes into the room in the form of heat. In other words, for each 1 kW from the power provider, 650 W goes to the server, and 350 W goes into the room.) Blade vendors are continuing to work on heat issues, with a variety of options for cooling at the blade, rack and room level.”
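To put those numbers in perspective, here’s a quick back-of-the-envelope sketch (mine, not from Goldworm’s quote) of how the efficiency figures translate into watts delivered versus watts wasted as heat:

```python
def power_split(wall_watts, efficiency):
    """Split power drawn at the wall into watts delivered to the
    server and watts dissipated into the room as heat."""
    to_server = wall_watts * efficiency
    to_heat = wall_watts - to_server
    return to_server, to_heat

# 1 kW at the wall: a 65%-efficient 1U power supply vs. the
# 91%-efficient BladeCenter H supplies IBM cites
print(power_split(1000, 0.65))  # 650 W to the server, 350 W of heat
print(power_split(1000, 0.91))  # 910 W to the server, only 90 W of heat
```

At those figures, moving the same 1 kW load to the newer supplies cuts waste heat from 350 W to 90 W, roughly a three-quarters reduction, which is where the cooling savings come from.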
IT managers need some convincing, however. Rack servers do a good job, they said in our recent 2007 Server Decisions Survey, and most of the 250+ respondents plan to buy more rack servers this year. Less than one-fourth will buy blades this year. Watch for more news about blades and that survey, as Matt Stansberry (editor of SearchDataCenter.com) and I will be presenting the results at the Server Blade Summit on May 1.
Another survey, by Enterprise Associates, looked beyond 2007 and indicated that blades will be widely adopted by 2009.
Let’s not rely solely on surveys, though. How about sharing some first-hand experience? Are you one of those waiting to buy blades until the technology matures, or did you buy early? Tell me about it in a comment here or via email at firstname.lastname@example.org.
Thanks to Tim for suggesting these links for our readers:
- Microsoft builds on Windows Server Datacenter Edition’s reliability and scalability with unlimited virtualization rights
This link (from the Microsoft Website) explains the changes to DataCenter Edition that took place as of October 1st, 2006.
A collection of links to Microsoft resources on their licensing policies and changes.
After reading Alex Barrett’s post Microsoft being Microsoft, I was curious what our virtualization experts thought of the whole ordeal. Here’s what they said.
Microsoft has a long history of delaying, and then delaying again, its products. One could hold out hope that this is to keep the products in a controlled test environment long enough to eliminate all bugs and produce a better product. However, Microsoft also has a history of producing products that are bug-ridden and full of annoyances and incompatibilities with previous releases. That said, they also have a history of increasing the stability and usability of their products over time with patches and service packs. Windows XP SP2 is the most stable OS I have ever used. Microsoft is not alone in the software world when it comes to releasing buggy code; they just seem to get the most scrutiny. I think that Windows Server 2007, whenever it is released, will have its share of the expected issues that people have come to know and associate with Microsoft. However, I think that future patches and updates will turn it into a server OS that will be a proud successor to Windows 2003. Should Microsoft delay it? History says it won’t matter, but there is a first time for everything.
– VMware expert Andrew Kutz, via email
According to the article in question, it sounds like the delay is not that significant. My theory is that this was done to allow some synchronization of code between it and what’s going into the updated Server Longhorn beta. Another thing comes to mind when you said “controlled test environment”. There’s a lot of things in Virtual Server that are going to become key technologies in Longhorn Server, and the better they have those things working correctly ahead of time, the easier it’ll be to build the bigger and more important infrastructure of Longhorn around them. VS 2005 R2 SP1 has been hanging over people’s heads for a while, and I suspect they wouldn’t do that unless a) they wanted to make sure installing it didn’t sour people’s experiences with VS 2005 in general, and b) it contained some key things that were going to be rolled forward into Longhorn, and they wanted to make sure those pieces worked properly right now.
– Microsoft Virtual Server expert Serdar Yegulalp, via email
You can read virtualization management expert Anil Desai’s thoughts on his first blog post, Viridian – Better Late than… Early?
Unwilling to cede the market to VMware, Virtual Iron is getting into the VDI (virtual desktop infrastructure) game by partnering with Provision Networks, maker of the Virtual Access Suite (VAS) connection broker software.
The problems with traditional fat clients that Virtual Iron and Provision are trying to eliminate are numerous: they consume a lot of power, they’re a security and regulatory problem, and “they’re an absolute nightmare to administer,” said Mike Grandinetti, chief marketing officer at Virtual Iron. Server-based hosted desktops, however, suffer from none of those problems, and benefit from additional high availability features. “We’re believers that [VDI] could absolutely take a large and rapidly growing server virtualization market and take it to a different level,” Grandinetti said.
As is often the case with Virtual Iron announcements, the announcement centers heavily on price. Rather than price per node, Virtual Iron and Provision are opting to charge $120 per desktop. That compares quite favorably with VDI prices offered by VMware and Hewlett-Packard, which don’t include a connection broker. Compared with traditional desktops, meanwhile, a Virtual Iron/Provision VDI bundle has a total cost of ownership of less than half, according to Gartner data.
Grandinetti claims the two companies are working with several “proof of concepts” with companies whose total desktops number in the tens of thousands.
The big news today is that the Hypervisor for Windows Server Longhorn (codenamed Viridian) is being delayed. See the Windows Server Division Blog for the main details. On the surface it seems like this is bad news. Many of us would like to move to the new Hypervisor ASAP. From a strategic standpoint, I think it’s most important for Microsoft to ship a rock-solid first version of the Longhorn Server Hypervisor. So is Microsoft doing the right thing?
When I first heard about the goals for Viridian (which were then pretty closely guarded), I thought that this “feature” was enough to warrant a new release of the Windows Server platform (post-Longhorn Server). At the very least, it could have commanded a “Virtualization Edition” of the product. It’s no small architectural task to provide for the dynamic addition of hardware, support for large parallel processing, dozens of VMs, etc. Goals include the ability to easily transition from at least Microsoft Virtual Server (and, perhaps, competing solutions – the names of which may or may not begin with the letters “V” or “X”). And, it’s a new product – this isn’t a rebranding of another platform.
Microsoft states that scalability (and related testing) is a major reason for the delays. Given that quality (measured by reliability, stability, performance, etc.) is not up for compromise, I would have preferred to see Microsoft do a smaller initial release of the Hypervisor. Perhaps an initial “lite” version that focused on the architecture of the product would have been a better approach. While running on 64-CPU machines is definitely a plus, I’d rather have a more scaled-down version of the Hypervisor available earlier. That would allow for determining migration paths, better understanding capacity planning, and performing initial testing. With that available, Microsoft could then focus on scalability to very large environments.
Regardless of the release timing, I think we’ll all eventually look back and say that Viridian was worth the wait. Until then, we’ll have to rely on the many first- and third-party products that are available today.
A couple of months ago at IDC’s Virtualization Forum in New York City, I chatted over lunch with Al Gillen, who posited that not only would Microsoft probably not be late with Viridian, code name for Windows Server Virtualization that will ship with Longhorn, but that Microsoft might actually be early! A pessimist by nature, I was skeptical.
Today, we learn once again what we all know all too well: that the best predictor of future performance is past performance. True to form, Microsoft announced today that it is pushing back the betas for not one, but two, of its virtualization offerings: Windows Server Virtualization and Virtual Server 2005 R2 SP1.
Now, Mike Neil, Microsoft’s GM for virtualization strategy, gave some pretty good excuses for the delays: 64-processor systems, I/O intensive workloads, new operating system support, etc. But the delays bring up all sorts of other questions. Is this in fact the last delay Viridian is going to face? If it does ship on time, will it include all the nifty features Microsoft has been touting? Will customers holding out for Microsoft finally give up and try out VMware or Xen? Is this delay really as bad for Microsoft virtualization as it seems to me?
You can read details of the delays here.
I found another blog worth posting about. Aptly called “Documenting a virtualization project,” it’s pretty darn cool. Read about one company’s experience with virtualizing its servers from the start. Most recently, the author (Martin?) reported that the company (which remains nameless) has 75 servers virtualized at approximately a 20:1 ratio, with 25 servers to go. They seem to be doing a lot with VDI and VMware, so if that’s your forte I highly suggest becoming a frequent visitor to this blog (after ours, of course).
Throughout the blog, he talks about migrating Oracle servers, their VDI project, their first production HA failover:
“A quite unexpected event yesterday was the very first HA failover in production.”
The day the Oracle servers froze:
“I shouldn’t be writing that all is well on the Oracle front.
“Just now two of the Oracle servers froze with database problems. The DBA tells me that they have had block corrupts which he hasn’t seen in five years of running the things.”
Then he goes on to blog about what they learned from the Oracle freeze:
“…memory settings turned out to be highly critical in relation to the performance of the VM.”
…I guess he should have been paying more attention to his SearchServerVirtualization.com Virtualization Advisor e-newsletters. 😉
I was surfing the virtualization blogs recently and came across a gem of a Webpage. Virtualization Daily has put together a virtualization bookstore of sorts — the books link to Amazon, but it’s a great way to get a fast glance at the books and decide what you want from there. Check out the book store here.
Robin Harris of StorageMojo has procured a VMware price list, adding to his extensive collection of storage product price lists. You can find it over at StorageMojo.
According to the Price List introduction, “These prices are discounted “street prices”, roughly what a corporate customer would pay.” It’s unclear to me who submitted these price lists, and how current they are, but they might be useful nevertheless.
The VirtualCenter2 management service (vpxd) is not very robust. If it cannot connect to the back-end database, the service will halt. It *should* continue to run, periodically trying to connect to the database, but this is not the case. There are also problems with the service coming up before the network when the server boots, causing the service to halt upon start. IPSec SA token mismatch/renewal issues also cause the service to halt. The vpxd service is very important — it manages DRS, collects performance statistics, and allows users to manage their VMs. This is a service that should be a lot more capable of handling foreseeable circumstances. To that end I have written a script that can be used to restart the vpxd service in case it halts or fails to start. This script can be linked to an often underutilized feature of Windows — the ability for the Service Control Manager (SCM) to restart a service upon failure.
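As one example of that SCM feature, the failure actions can be set from the command line with `sc.exe`. The values below are illustrative, not from the post: reset the failure counter after a day, and restart the service 60 seconds after each of the first three failures (the service name `vpxd` is the one the post uses; adjust it to match your installation):

```shell
:: Tell the Service Control Manager how to recover vpxd when it fails.
:: reset=  seconds with no failure before the failure count resets
:: actions= what to do on the 1st/2nd/3rd failure (delay in milliseconds)
sc failure vpxd reset= 86400 actions= restart/60000/restart/60000/restart/60000
```

Note that SCM failure actions only fire when the process dies; they won’t help when the service starts but then hangs, which is where a script like the one below comes in.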
The script is fairly basic. I will not post it in its entirety in this blog because its formatting will get munged by WordPress’ draconian style settings. You can download the script from www.lostcreations.com.
A description of the script can be found in the script’s source: “This script will attempt to start the vpxd service if it is not started. If the service is already started or it starts successfully the first time, no further action is taken. This script can also be run upon a VCMS failure. It will notify a specified e-mail address of the failure. It will check the connection to the VCMS database. If the database connection is valid then the script will start the VCMS service. If the connection is not valid then the script will go into a loop, attempting to restart the VCMS service every specified number of minutes. This script assumes your VC database is on a SQL server. If it is not then please see www.carlprothman.net for a good reference on how to build an ADO connection string to fit your needs.”
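For readers who just want the shape of that logic, here’s a minimal sketch in Python (the real script lives at www.lostcreations.com; `service_running`, `database_reachable`, and `start_service` here are hypothetical stand-ins for the actual service and ADO database checks):

```python
import time

def ensure_vpxd_running(service_running, database_reachable, start_service,
                        retry_minutes=5, max_attempts=3, sleep=time.sleep):
    """Recovery loop: if the service is down, wait for the database to
    become reachable, then start the service; give up after max_attempts."""
    if service_running():
        return "already-running"              # nothing to do
    for _ in range(max_attempts):
        if database_reachable():
            start_service()                   # DB is up, safe to start vpxd
            return "started"
        sleep(retry_minutes * 60)             # back off before retrying
    return "gave-up"                          # time to notify an admin
```

The callbacks are injected so the loop itself stays testable; the real script would replace them with a `net start`/SCM query and an ADO connection attempt.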
In addition to correcting a vpxd service failure on demand, the script can also be run as a scheduled task, set to run five minutes after the server boots. This corrects the problem of the vpxd service failing to start on boot because it comes up before the network is available.
I hope this script helps to make your VMware VirtualCenter2 Management Server a little more robust, and a lot more script-diddly-licious!