Reuters is reporting that VMware has filed with U.S. securities regulators to raise up to $100 million in an initial public offering of Class A common stock. As expected, EMC will retain approximately 90% of the company via ownership of non-public Class B shares, which are entitled to 10 votes per share, compared with one vote per share for the Class A shares. The company has not decided whether to list on the New York Stock Exchange or on the NASDAQ. EMC CEO Joe Tucci has been quoted as saying the IPO could happen “as early as late June.”
I chatted yesterday with Vance Loiselle, the vice president of marketing at BladeLogic, who told me about BladeLogic’s new Virtualization Manager module for its Operations Manager suite, a configuration management tool. Talking with him gave me a little bit of insight into what management software vendors mean when they say that traditional management tools aren’t equipped to deal with virtual environments.
Without the Virtualization Manager module, Loiselle explained, Operations Manager can “see” virtual machines, but not their specific hardware configuration or their relationship with the underlying physical server. “We’ve always been able to see within the VM, but now we can see it on the physical server and drill down to see its CPU allocation, memory allocation, storage and network adapter configurations, and the virtualization software configuration,” he explained.
BladeLogic claims 220 customers, including large enterprises such as Merck and GlaxoSmithKline. Fortune 1000 companies like those have held off moving virtualization into production until they had the same management tools in place for virtual environments that they enjoy in the physical world, Loiselle said.
There’s word from SWsoft of a starter pack for its Virtuozzo operating system virtualization software. Designed for new users, the $1,198 bundle includes a license for a single- or dual-processor system, management tools, and a year of Silver-level support and maintenance. Also included is VZP2V, SWsoft’s physical-to-virtual tool that helps import a server from a dedicated physical box to a Virtuozzo virtual environment.
An SWsoft spokesperson claims that the starter pack represents a savings of between 33% and 50% off the full price.
Early adopters of blade servers, in general, didn’t give blades rave reviews. As happens with many new technologies, stories about users getting burned — in this case almost literally, as early blades ran hot, hot, hot — have circulated and made some IT directors shy away from buying them.
Well, Burton Group analyst Chris Wolf tells me that times have changed, and buyers of 2007 blade server models should have a completely different experience. I talked to Chris at this week’s TechTarget Server Virtualization Seminar in San Francisco. He says that new products have dealt with the early overheating, power, management and configuration problems.
SearchServerVirtualization.com’s resident blades expert, Barb Goldworm, has been spreading the word that old blade power, cooling and management issues are being addressed by vendors. Says Goldworm:
“Blade vendors have made great progress in the current generations of blade systems, improving power and cooling efficiencies significantly. IBM says that they have increased the efficiency of their power supplies from 65% efficiency in their 1U servers to 91% efficiency in BladeCenter H. (65% efficiency means that they convert 65% of the power at the wall and the rest goes into the room in the form of heat. In other words, for each 1 kW from the power provider, 650 W goes to the server, and 350 W goes into the room.) Blade vendors are continuing to work on heat issues, with a variety of options for cooling at the blade, rack and room level.”
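The efficiency arithmetic in that quote is easy to check yourself. Here’s a quick sketch (the function name and the 1 kW wall figure are mine; the 65% and 91% efficiency numbers come from the quote above):

```python
def power_split(wall_watts, efficiency_pct):
    """Split wall power into watts delivered to the server and watts lost to the room as heat."""
    to_server = wall_watts * efficiency_pct // 100
    to_heat = wall_watts - to_server
    return to_server, to_heat

# 1 kW at the wall through a 65%-efficient 1U supply
print(power_split(1000, 65))  # (650, 350) -- matches the quote

# the same 1 kW through BladeCenter H's claimed 91% efficiency
print(power_split(1000, 91))  # (910, 90)
```

Per kilowatt, that’s 260 W less heat dumped into the data center, which is where the cooling savings come from.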
IT managers need some convincing, however. Rack servers do a good job, they said in our recent 2007 Server Decisions Survey, and most of the 250+ respondents plan to buy more rack servers this year. Less than one-fourth will buy blades this year. Watch for more news about blades and that survey, as Matt Stansberry (editor of SearchDataCenter.com) and I will be presenting the results at the Server Blade Summit on May 1.
Another survey, by Enterprise Associates, looked beyond 2007 and indicated that blades will be widely adopted by 2009.
Let’s not rely solely on surveys, though. How about sharing some first-hand experience? Are you one of those who will buy blades only after the technology matures, or did you buy early? Tell me about it in a comment here or via email at firstname.lastname@example.org.
Thanks to Tim for suggesting these links for our readers:
- Microsoft builds on Windows Server Datacenter Edition’s reliability and scalability with unlimited virtualization rights
This link (from the Microsoft website) explains the changes to Datacenter Edition that took place as of October 1, 2006.
- A collection of links to Microsoft resources on their licensing policies and changes.
After reading Alex Barrett’s post Microsoft being Microsoft, I was curious what our virtualization experts thought of the whole ordeal. Here’s what they said.
Microsoft has a long history of delaying, and then delaying again, its products. One could hold out hope that this is to keep the products in a controlled test environment long enough to eliminate all bugs and produce a better product. However, Microsoft also has a history of producing products that are bug-ridden and full of annoyances and incompatibilities with previous releases. That said, they also have a history of increasing the stability and usability of their products over time with patches and service packs. Windows XP SP2 is the most stable OS I have ever used. Microsoft is not alone in the software world when it comes to releasing buggy code; they just seem to get the most scrutiny. I think that Windows Server 2007, whenever it is released, will have its share of the expected issues that people have come to know and associate with Microsoft. However, I think that future patches and updates will turn it into a server OS that will be a proud successor to Windows 2003. Should Microsoft delay it? History says it won’t matter, but there is a first time for everything.
-VMware expert Andrew Kutz via email
According to the article in question, it sounds like the delay is not that significant. My theory is that this was done to allow some synchronization of code between it and what’s going into the updated Server Longhorn beta. Another thing comes to mind when you said “controlled test environment.” There are a lot of things in Virtual Server that are going to become key technologies in Longhorn Server, and the better they have those things working correctly ahead of time, the easier it’ll be to build the bigger and more important infrastructure of Longhorn around them. VS 2005 R2 SP1 has been hanging over people’s heads for a while, and I suspect they wouldn’t do that unless a) they wanted to make sure installing it didn’t sour people’s experiences with VS 2005 in general, and b) it contained some key things that were going to be rolled forward into Longhorn, and they wanted to make sure those pieces worked properly right now.
-Microsoft Virtual Server expert Serdar Yegulalp via email
You can read virtualization management expert Anil Desai’s thoughts on his first blog post, Viridian – Better Late than… Early?
Unwilling to cede the market to VMware, Virtual Iron is getting into the VDI (virtual desktop infrastructure) game by partnering with Provision Networks, maker of Virtual Access Suite (VAS) connection broker software.
The problems with traditional fat clients that Virtual Iron and Provision are trying to eliminate are numerous: they consume a lot of power, they’re a security and regulatory problem, and “they’re an absolute nightmare to administer,” said Mike Grandinetti, chief marketing officer at Virtual Iron. Server-based hosted desktops, however, suffer from none of those problems, and benefit from additional high availability features. “We’re believers that [VDI] could absolutely take a large and rapidly growing server virtualization market and take it to a different level,” Grandinetti said.
As is often the case with Virtual Iron, the announcement centers heavily on price. Rather than pricing per node, Virtual Iron and Provision are opting to charge $120 per desktop. That compares quite favorably with VDI prices offered by VMware and Hewlett-Packard, which don’t include a connection broker. Compared with traditional desktops, meanwhile, a Virtual Iron/Provision VDI bundle has less than half the total cost of ownership, according to Gartner data.
Grandinetti claims the two companies are working on several proofs of concept with companies whose desktops number in the tens of thousands.
The big news today is that the Hypervisor for Windows Server Longhorn (codenamed Viridian) is being delayed. See the Windows Server Division Blog for the main details. On the surface it seems like this is bad news. Many of us would like to move to the new Hypervisor ASAP. From a strategic standpoint, I think it’s most important for Microsoft to ship a rock-solid first version of the Longhorn Server Hypervisor. So is Microsoft doing the right thing?
When I first heard about the goals for Viridian (which were then pretty closely guarded), I thought that this “feature” was enough to warrant a new release of the Windows Server platform (post-Longhorn Server). At the very least, it could have commanded a “Virtualization Edition” of the product. It’s no small architectural task to provide for the dynamic addition of hardware, support for large parallel processing, dozens of VMs, etc. Goals include the ability to easily transition from at least Microsoft Virtual Server (and, perhaps, competing solutions – the names of which may or may not begin with the letters “V” or “X”). And, it’s a new product – this isn’t a rebranding of another platform.
Microsoft states that scalability (and related testing) is a major reason for the delays. Given that quality (measured by reliability, stability, performance, etc.) is not up for compromise, I would have preferred to see Microsoft do a smaller initial release of the Hypervisor. Perhaps an initial “lite” version that focused on the architecture of the product would have been a better approach. While running on 64-CPU machines is definitely a plus, I’d rather have a more scaled-down version of the Hypervisor available earlier. That would allow for determining migration paths, better understanding capacity planning, and performing initial testing. With that available, Microsoft could then focus on scalability to very large environments.
Regardless of the release timing, I think we’ll all eventually look back and say that Viridian was worth the wait. Until then, we’ll have to rely on the many first- and third-party products that are available today.
A couple of months ago at IDC’s Virtualization Forum in New York City, I chatted over lunch with Al Gillen, who posited that not only would Microsoft probably not be late with Viridian, the code name for the Windows Server Virtualization that will ship with Longhorn, but that Microsoft might actually be early! A pessimist by nature, I was skeptical.
Today, we learn once again what we all know all too well: that the best predictor of future performance is past performance. As with so many products before them, Microsoft announced today that it is pushing back the betas for not one, but two, of its virtualization offerings: Windows Server Virtualization and Virtual Server 2005 R2 SP1.
Now, Mike Neil, Microsoft’s GM for virtualization strategy, gave some pretty good excuses for the delays: 64-processor systems, I/O intensive workloads, new operating system support, etc. But the delays bring up all sorts of other questions. Is this in fact the last delay Viridian is going to face? If it does ship on time, will it include all the nifty features Microsoft has been touting? Will customers holding out for Microsoft finally give up and try out VMware or Xen? Is this delay really as bad for Microsoft virtualization as it seems to me?
You can read details of the delays here.
I found another post-worthy blog. Aptly called “Documenting a virtualization project,” it’s pretty darn cool. Read about one company’s experience with virtualizing its servers from the start. Most recently, the author (Martin?) reported that the company (which remains nameless) has 75 servers virtualized at approximately a 20:1 ratio, and 25 servers to go. They seem to be doing a lot with VDI and VMware, so if that’s your forte I highly suggest becoming a frequent visitor to this blog (after ours, of course).
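For a back-of-the-envelope sense of what a 20:1 consolidation ratio implies, here’s a quick sketch (the host counts are my own arithmetic, not figures from the blog):

```python
import math

def hosts_needed(vm_count, consolidation_ratio):
    """Minimum number of physical hosts for vm_count VMs at a given VMs-per-host ratio."""
    return math.ceil(vm_count / consolidation_ratio)

# 75 servers virtualized so far at roughly 20:1
print(hosts_needed(75, 20))        # 4

# all 100 once the remaining 25 are migrated
print(hosts_needed(75 + 25, 20))   # 5
```

In other words, 100 physical boxes could end up on about five hosts, which is why the blog is worth following if consolidation is on your agenda.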
Throughout the blog, he talks about migrating Oracle servers, their VDI project, their first production HA failover:
“A quite unexpected event yesterday was the very first HA failover in production.”
The day the Oracle servers froze:
“I shouldn’t be writing that all is well on the Oracle front.
“Just now two of the Oracle servers froze with database problems. The DBA tells me that they have had block corrupts which he hasn’t seen in five years of running the things.”
Then he goes on to blog about what they learned from the Oracle freeze:
“…memory settings turned out to be highly critical in relation to the performance of the VM.”
…I guess he should have been paying more attention to his SearchServerVirtualization.com Virtualization Advisor e-newsletters. 😉