Last week, VMware released VirtualCenter 2.5 Update 3. The release fixes issues with Update 2 (build 104263), which was released in July, before the infamous product expiration issue. The patches issued immediately after that problem did not include a corrected version of VirtualCenter.
This update fixes 19 issues but does not provide any new functionality. Several of the resolved issues involve VMware High Availability and Distributed Resource Scheduler functionality — in the previous version, various situations prevented those features from working correctly.
VMware Converter Enterprise, now on build 62407, also has an update that corrects two minor issues with connecting to a VirtualCenter server for conversions. This is different from the VMware Converter Enterprise standalone edition, which installs on remote systems instead of the VirtualCenter server. The standalone version remains at version 3.0.3 (build 89816).
More information on VirtualCenter 2.5 Update 3 can be found in the release notes on VMware’s website.
OK, I had to laugh at this one. A Microsoft blog references an article in which a company that predominantly runs Microsoft applications said that it received a $50,000 quote from VMware to virtualize 16 physical servers to four virtual hosts. It claims that the cost comprised $25,000 in software costs and $25,000 in installation costs. The article also said the company chose Hyper-V instead because it cost only $49 per server. The article didn’t mention anything about hardware costs so presumably the company already had hardware or planned to purchase it separately.
The $50,000 price tag was obviously very high. Most likely the quote was for at least one Enterprise license per server as well as VirtualCenter, which may have come out to $25,000 or so. The company claimed to have only a 10% to 15% CPU utilization rate on its current servers, so it could easily have gone with only two ESX hosts. However, it is possible that it needed four hosts for Hyper-V.
I have to wonder if the company realized what it would get for $25,000. The VMware option provides very robust and feature-rich Enterprise licenses along with a VirtualCenter management server. Comparing this to Hyper-V is like comparing apples to carrots: They aren’t even in the same family. I also wonder if it thought that all it would need to fork out was $49 per server and thus the whole project was going to cost $196 compared with $50,000. Apparently nobody informed it of the underlying requirement of a Windows Server 2008 license for each Hyper-V server. And if the company was instead looking at the recently announced Hyper-V Server 2008, which is free, it missed the fact that ESXi is also free, so that option would not have included licensing costs either.
As for the $25,000 in installation costs, that seems extremely high for setting up four ESX hosts and performing physical-to-virtual conversions of the existing servers to virtual machines. Without seeing the details of the quote, it’s hard to say what the company would have paid for. It apparently had no virtualization experience whatsoever, because if it had, it wouldn’t have needed to pay someone to install and configure its servers. Presumably it would still have to pay someone to virtualize its environment onto Hyper-V servers. Unfortunately the article made no mention of those costs.
I have to give the company the benefit of the doubt. Was it merely a victim of someone trying to sell it way more than it needed, or did the person who provided the quote not understand the company’s needs? It could easily have gone with ESXi servers for free and paid a reasonable amount to have someone help with the installation. If it wanted more features, it could have gone with one of the ESX Foundation Acceleration kits bundled with VirtualCenter for only $3,600. It’s a shame that the company was quoted such a high price. I know if I saw a price tag like that to virtualize a small environment, I would balk at it too. But while looking at other alternatives, I would also ask why the quote was so high and try to understand exactly what the cost entailed. It sounds as if someone were trying to sell the company a bunch of Ferraris when all it really needed was a couple of minivans.
So without all the facts, all we can do is guess, but this seems to be just another case of comparing apples to carrots in an attempt to exploit a so-called price issue between ESX and Hyper-V — one that doesn’t exist if you do a fair comparison between the two.
It’s been covered to death, but something about Diane Greene’s ousting from VMware’s top spot still doesn’t sit right with me. Not the ousting itself but the chatter about why. There have been conversations about why she was let go, ranging from EMC’s CEO Joe Tucci wanting greater control of VMware to questions about whether she was more of a technology person and less of a business person. In the end, the appointment of Paul Maritz is the really big news, at least in my not-so-humble opinion.
It goes back to “it’s not what you know but who you know,” and Maritz knows Microsoft. He knows Ballmer, Gates and every other player there. He was one of the most influential and instrumental executives in Microsoft’s history. His reach is wide when it comes to pulling people into the fold — not necessarily by bringing ex-Microsoft folks in as employees, but rather by having high-level working relationships with all the partners that Microsoft has worked with and that EMC and VMware have worked with or would love to work with. He also knows the PC Revolution firsthand, having seen the rise and fall of Novell’s NetWare, Banyan’s VINES and the host of minis and mains that these replaced, only to be replaced by Windows a few years later.
Tucci also knows Microsoft — EMC’s storage products center around the Microsoft world as much as any other operating system. Exchange data stores, SQL databases, file shares — all of these are EMC’s bread and butter in selling storage to the modern data center. Its software, even though some products compete (like Documentum versus SharePoint), is built around a Windows-centric world.
Then there’s the history — Microsoft knows how to win. It buys what it can’t make on its own, then drowns the competition in price wars and advertising battles. Novell, once Microsoft’s bitter rival for network OS sales, now sells Linux licenses to Microsoft. Netscape is gone and the ghost of its second cousin twice removed, the Mozilla Foundation’s Firefox, lives on to take what is really an insignificant chunk of Internet Explorer’s market share. Corel/Novell WordPerfect? Only if you’re working in a huge law firm will you see WP on an enterprise level.
Put these together and the fabled VMware versus Microsoft hypervisor war starts to look less like an armed conflict between bitter rivals and more like a strategic partnership built through a demonstration of independence. Tucci’s no fool — Maritz is there for the day that the Redmond giant comes knocking. He’s there to build thin but sturdy roads between the two companies. He’s there to forge something like the Citrix/Microsoft alliance, where Citrix is an independent company but still acts in many ways like a subsidiary of Microsoft (or at least an extension). In Maritz’s VMworld keynote speech (setting aside the parts about having “sins to atone for” in his early days of programming during the PC Revolution), he barely mentioned Greene and hardly touched on competition with Microsoft. He’s looking forward to the day when he can do what only Citrix has managed to do so far — preserve independence while under Redmond’s all-seeing eye.
In the end, we’ll see VMware’s VDC-OS as the dominant force in the virtualization space with Hyper-V as an acceptable but lesser alternative, much like Citrix’s MetaFrame/XenApp and Microsoft’s Terminal Services. I think this leaves one question: In the long run, what happens to Citrix now that it’s betting so heavily on Xen and taking on Microsoft and VMware directly in the systems virtualization market?
As I sat in my cozy office, drinking from a VMware mug, wearing a SearchVMware.com t-shirt under my dress shirt, saving drafts of a SharePoint training presentation to a 1GB USB stick emblazoned with eG’s logo and watching Jan and Hannah go through their big bag-o-stuff from the conference, I mulled over something … what was the one thing, above all of the other schwag, that I wound up using most? The answer was the lowest-tech item there: Sun’s little black book.
Yup, just a small black notepad. I’ve already filled up ten pages of notes in just around two weeks, and I now carry it with me to all my meetings. I look less rude taking notes on paper than entering them into my Blackberry (the message most people get when they see that: “Is this person note-taking or is he texting?” You tell me!). It fits in whatever bag I carry, whether it’s a notebook case, organizer or nothing at all. It’s better than a USB drive due to the simplicity of “open and write” versus “boot and type.”
So, the Completely Unofficial Best of VMworld Schwag Award (TM, patent-pending, Copyright 2008, all rights reserved) goes to Sun Microsystems for providing such an elegant and simple tool.
While VMworld 2008 left some attendees and contributors to SearchServerVirtualization.com hungry for more from VMware and its partners, others may have a different take on the conference. In this video blog, Rick Vanover offers his opinion on what VMworld offers to attendees and whether it is worthwhile to attend.
Recently I had the opportunity to go through a series of tests revolving around presenting a raw disk or dedicated logical unit number (LUN) to a virtual machine. I think any virtualization administrator should go through the drill. The basic principle is to seamlessly move storage on a storage area network (SAN) from a physical system to a virtual machine. This can accommodate an emergency physical-to-virtual (P2V) conversion of a system with large storage, saving costly time on the conversion.
The experience I reference was a VMware ESX Server and VirtualCenter configuration with Windows Server 2003 systems. With a LUN made available to a physical system, moving that storage over to a virtual machine was actually quite easy and uneventful. There are a few pointers to remember with this configuration. Chief among them: because the storage does not reside on a virtual machine file system (VMFS) volume, it will not be eligible for much of VMware’s management and high-availability (HA) functionality, including Storage VMotion, host-to-host VMotion and automatic HA failover onto another host.
Taking the time to become familiar with the process and the expected functionality is a worthwhile investment. This may not be a configuration you foresee using, but it may come in handy. The figure below shows two LUNs made available to the host and assigned to a virtual machine:
One important step: do not add the storage as you normally would for a VMFS volume. Formatting it as VMFS will destroy the contents of the drive and make your preserved data unreadable. In the configuration shown below, the drive appears seamlessly in the guest operating system and is ready for use. There may be a drive letter change, but that is an easy correction.
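On ESX 3.x, this kind of raw LUN is typically presented to the virtual machine as a Raw Device Mapping (RDM). As a rough sketch from the service console (the vml device identifier and datastore path below are hypothetical examples), the mapping file can be created with vmkfstools; the VI Client’s Add Hardware wizard offers the same through its Raw Device Mappings disk type:

```shell
# List the raw SAN devices visible to the host
# (the vml identifier below is a made-up example)
ls /vmfs/devices/disks/

# Create a virtual-compatibility RDM pointer file on an existing
# VMFS datastore; -z instead of -r creates a physical-compatibility
# mapping, which passes SCSI commands through to the array.
vmkfstools -r /vmfs/devices/disks/vml.020000000060a98000486e2f34 \
    /vmfs/volumes/datastore1/winvm/winvm_rdm.vmdk
```

The resulting .vmdk pointer file is then attached to the virtual machine like any other existing disk, while reads and writes actually go to the raw LUN.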
Most storage systems and virtualization platforms should be able to support this functionality in some capacity. Taking the time to get the specifics down in your storage environment can avoid any surprises if this configuration is required due to an unplanned migration.
EG Innovations took a Best of VMworld award for the application and infrastructure management category, and as one of the judges, it’s my pleasure to tell you why … eG gets it, and it gets IT. The “it” the company gets is business. There were a lot of entries in the category, ranging from desktop virtualization management and cloud computing management to traditional system/network management. EG stood out because its product took a real user problem (in the demo, customers who had problems depositing money via a bank website) all the way through the final root cause analysis, and did so in a clear, consistent fashion that was very easy to trace back to the relative obscurity of a Samba process gone haywire on a file server. The company’s service-level agreement (SLA) awareness was elegant, particularly in that a failure to meet an SLA was a source of system alerts. Its mix of agentless and lightweight agents and its ability to manage system alerts in real time were great, akin to many of the others in the category. In the end, the business awareness put eG over the top.
The technical view was deep, allowing IT staff to react quickly with appropriate (and relevant) technical information at their disposal to solve problems or initiate handoffs between departments if needed. The business view allows IT to conceptualize the impact of a problem or SLA failure, and thus better align itself with the business. The wide array of hosts, services and vendors supported by the product is a big boon — having one tool to rule them all (LotR jokes are prohibited, thank you). It’s a tool that a seasoned sysadmin and an entrenched CIO can both love and, better yet, both use.
So … on my trademarked poker scale: EG gets a solid nine pokers. It’s hot, like a fireplace poker, and if you get jabbed by it, you will certainly know it!
I’ll keep it short: It was a great conference, but mostly for the networking and meetings. I’ll take the negative nelly role here and say outright that when it came to the products, I wasn’t too impressed, wasn’t too wowed and wasn’t too giddy. I’ve seen a lot of great announcements, heard a lot of great talk and definitely met a lot of great people, but I haven’t seen much else that I’m really going “Wow!” over. The Cisco announcement has been a year in the coming. We heard about it at last year’s conference, and it’s still not fully released. ESX 4 … wasn’t that demo-ed last year? VDC-OS? Show me a product. I’ve sat through press briefings, product announcements, labs and seminar after seminar, and I keep coming back to those Wendy’s commercials from the ’80s … Where’s the Beef?
NEC’s got me piqued. It seems ready to re-enter the American market in a big way, reversing its trend of avoiding the U.S. as a full-systems seller like we all had monkey pox. The company also seems to have the best end-to-end VDI solution out there, extending VMware’s product on its own hardware, with multimedia and USB capabilities.
Cisco’s got me curious. I’m hoping for a product launch soon so I can see the inside of this new plug-in networking module architecture. I’m not holding my breath, however, because this has been in the pipe for a long time without much substance. True, the VMware virtual switches we all know and love today were originally co-designed with Cisco, but that just makes me wonder why there hasn’t been a formal product on the market before now.
On to the cloud. As I told the incomparable John Troyer in the podcast Andrew Kutz and I did … if your product is vapor, don’t call it cloud. Show me a Web OS client that can run virtualized apps. Show me federation over the Web with cloud services that integrate with internal services. Show me something!
The glitz was top notch. The glam was top notch. The parties — I think you see where this is going. Long story short, this conference was more about maintenance mode than about unveiling anything major, but it sure was fun.
Fellow virtualization expert Andrew Kutz has argued that future virtual desktop infrastructure (VDI) technologies need to lose the desktop to truly advance, and I agree. But until that time, we have to deal with VDI as it exists today. And that means accepting certain hurdles, which means accepting the additional support requirements that today’s VDI poses. Let’s consider devices and their support requirements. The key to determining how VDI devices interact with their connection broker is the networking configuration: VDI devices use Dynamic Host Configuration Protocol (DHCP) scope options to receive the configuration that tells them where to connect. Let’s dive into why DHCP options are important to a VDI solution.
For starters, a DHCP scope option is a configuration that is defined on a networking server such as Windows Server’s DHCP server role. Traditional configurations for PCs and servers would have DHCP options such as subnet mask, default gateway and domain name server. VDI, however, allows the full range of DHCP scope options to be used. There are numerous scope options available for DHCP that are delivered to the requesting device in the acknowledgment message (DHCPACK), which is sent after the DHCP request message.
DHCP scope options vary by VDI device. Take, for example, the SunRay series of VDI devices. For VDI solutions in VMware implementations, the technology requires that at least DHCP options 49 and 66 be configured for connection to the Virtual Desktop Connector agent. Option 49 points the device to an X Window System display manager, and option 66 to a trivial file transfer protocol (TFTP) server for the VDI device’s configuration files.
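To make the mechanics concrete, here is a minimal Python sketch of how a DHCP server packs such options into the DHCPACK: per RFC 2132, each option travels as a code byte, a length byte and a payload. The IP address and hostname below are hypothetical examples, not real servers.

```python
import ipaddress

def encode_dhcp_option(code: int, value: bytes) -> bytes:
    """Encode one DHCP option as code byte + length byte + payload (RFC 2132)."""
    if not 0 < len(value) <= 255:
        raise ValueError("DHCP option payload must be 1-255 bytes")
    return bytes([code, len(value)]) + value

# Option 49 (X Window System Display Manager): a list of IPv4 addresses.
opt49 = encode_dhcp_option(49, ipaddress.IPv4Address("192.0.2.10").packed)

# Option 66 (TFTP server name): an ASCII hostname.
opt66 = encode_dhcp_option(66, b"tftp.example.com")

print(opt49.hex())  # 3104c000020a
print(opt66.hex())
```

The scope on the DHCP server simply stores these code/value pairs; the VDI device reads them out of the acknowledgment and knows where to fetch its display manager and configuration files.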
Beyond basic configuration, it may be worth tweaking some other network options based on the architecture of the VDI implementation. What has particularly caught my attention is a blog post by Sun’s Thin Client and Server Based Computing group, which points out that some environments may need to configure the maximum transmission unit (MTU) of network packets. This can also be assigned by DHCP and is of particular importance if the VDI implementation is at a remote site with limited bandwidth. The default MTU of most configurations is around 1,500 bytes, yet performance may be better with a smaller maximum packet size from the endpoint VDI device. This and other factors make a fully representative pilot sound like a really good idea!
However, other platforms may use a new set of options to interact differently with the VDI device firmware. One example is the Pano Logic desktop device, which only requires the creation and configuration of option 001 as a vendor class. This is different from the example above in that there is no X11 window manager resident on the device.
While these DHCP configuration options are not overwhelming when viewed individually, it is worth considering the larger picture in case some of these options are already in use. The most common example is an IP telephone at a remote site. In central offices, IP telephony is usually split onto a separate network, but that may not be the case for remote sites that have two or three VDI stations and the same number of phones. There, it may make sense to have only one IP network.
DHCP is critical to effective network management, and that includes a VDI solution. Some planning on scope and configuration can go a long way toward ensuring that the technology will function as expected.
In the New Innovators area at VMworld 2008 was an interesting small booth from ThinLaunch, manned by three of the four people in the company. I had a short pow-wow with two of the folks there and came away with mixed feelings. The product, which shares the company’s name, appears to fulfill a couple of interesting needs: first, IT shops that want to pilot virtual desktop infrastructure (VDI) but don’t want to invest beyond the server room, and second, smaller businesses that have server virtualization capacity to devote to hosting clients but have been loath to rip and replace their thick clients with new thin hardware. I’m not too wowed by the product, but I can see where it may be useful. That said, I was royally unimpressed with the technology.
ThinLaunch’s functionality can be cobbled together with a few Group Policy object edits in Active Directory without buying the product. Simply replace the shell with whatever VDI launcher (or other application) you want; Microsoft tells you how to do it here. True, ThinLaunch also monitors this process and can automatically restart it if it crashes, but that too can be managed with an application or by copying the code from this site.
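That restart-on-crash behavior is not hard to replicate. Here is a minimal watchdog sketch in Python, assuming nothing beyond the standard library; the VDI client path in the final comment is a made-up example, not a real install location.

```python
import subprocess
import sys
import time

def watch(cmd, delay=1.0, max_restarts=None):
    """Launch cmd and relaunch it every time it exits, like a shell watchdog.

    Returns the number of launches performed (finite only when max_restarts
    is set; by default it loops forever, which is what a watchdog wants).
    """
    launches = 0
    while max_restarts is None or launches < max_restarts:
        proc = subprocess.Popen(cmd)
        proc.wait()          # block until the launched program exits or crashes
        launches += 1
        time.sleep(delay)    # brief pause so a crash loop doesn't spin the CPU
    return launches

# Hypothetical usage: keep a VDI client running as the "shell".
# watch([r"C:\Program Files\HypotheticalVDI\client.exe"])
```

A real deployment would launch this at logon in place of Explorer, which is exactly the shell-replacement trick the Group Policy approach relies on.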
ThinLaunch is available as an MSI package, meaning it’s very easy to deploy via Group Policy. Then again, Group Policies are even easier to deploy via Group Policy. Duh. ThinLaunch requires .NET 2.0; GPOs don’t. ThinLaunch supports Windows 2000 through Vista and 2K8; so do GPOs.
I can see the need for this package and I can even see some large enterprise customers who’d want a packaged application to handle the conversion of legacy desktops. I can even see using the product in small businesses with virtualization already in place but a lot of legacy desktops and a lack of cash. What I can’t see is how it’s innovative in its approach.
Sorry, ThinLaunch, but you get three out of ten pokers — there’s just nothing hot there.