The Virtualization Room


September 30, 2008  10:05 AM

VDI planning primer on DHCP scope options

Rick Vanover Profile: Rick Vanover

Fellow virtualization expert Andrew Kutz has argued that future virtual desktop infrastructure (VDI) technologies need to lose the desktop to truly advance, and I agree. But until that time, we have to deal with VDI as it exists today, which means accepting certain hurdles and the additional support requirements they bring. Let’s consider devices and their support requirements.

The key to determining how VDI devices interact with their connection broker is the networking configuration. VDI devices use Dynamic Host Configuration Protocol (DHCP) scope options to receive a configuration that tells them where to go for their connection. Let’s dive into why DHCP options are important to a VDI solution.

For starters, a DHCP scope option is a configuration value defined on a network service such as Windows Server’s DHCP server role. Traditional configurations for PCs and servers use DHCP options such as the subnet mask, default gateway and DNS servers. VDI, however, draws on a much wider range of DHCP scope options. There are numerous scope options available, and they are delivered to the requesting device in the acknowledgment message (DHCPACK), which is sent after the DHCP request message.

DHCP scope options vary by VDI device. Take, for example, the Sun Ray series of VDI devices. For VDI solutions in VMware implementations, the technology requires that at least DHCP options 49 and 66 be configured for connection to the Virtual Desktop Connector agent. Option 49 specifies the X Window System display manager, and option 66 specifies a Trivial File Transfer Protocol (TFTP) server for VDI device configuration files.
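
To make that concrete, here is a minimal sketch (in Python, purely for illustration) of the kind of option set such an endpoint would receive in its DHCPACK. The option codes are the standard RFC 2132 numbers mentioned above; every address and hostname is a made-up example, not a value from Sun’s or VMware’s documentation.

```python
# Minimal sketch of the DHCP scope options a VDI endpoint might receive.
# Option codes are the standard RFC 2132 numbers; all values are hypothetical.

VDI_SCOPE_OPTIONS = {
    1:  ("Subnet mask",                      "255.255.255.0"),
    3:  ("Default gateway",                  "10.20.30.1"),
    6:  ("DNS server",                       "10.20.30.10"),
    49: ("X Window System display manager",  "10.20.30.50"),      # where the session UI lives
    66: ("TFTP server name",                 "tftp.example.com"), # serves device config files
}

def describe_dhcpack(options):
    """Print each option code and value as the client would see them in the DHCPACK."""
    for code in sorted(options):
        name, value = options[code]
        print(f"option {code:>3}  {name:<34} {value}")

if __name__ == "__main__":
    describe_dhcpack(VDI_SCOPE_OPTIONS)
```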

Beyond basic configuration, it may be worth tweaking some other network options based on the architecture of the VDI implementation. What particularly caught my attention is a blog post by Sun’s Thin Client and Server Based Computing group, which points out that some environments may need to adjust the maximum transmission unit (MTU) of network packets. The MTU can also be assigned by DHCP, and it is of particular importance if the VDI implementation is at a remote site with limited bandwidth. The default MTU of most Ethernet configurations is 1,500 bytes, yet performance may be better with a smaller maximum packet size from the endpoint VDI device. This and other factors make a fully representative pilot sound like a really good idea!
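
To see why a smaller maximum packet size can matter at a bandwidth-limited site, consider the serialization delay of a single full-size packet. The link speed below is an assumed example, not a figure from the Sun post; the point is simply that a large packet monopolizes a slow link for a noticeable slice of the interactive response budget.

```python
# Back-of-the-envelope serialization delay for one packet on a slow WAN link.
# Link speed and MTU values are illustrative, not measurements.

def serialization_delay_ms(mtu_bytes, link_kbps):
    """Time (ms) to clock one packet of mtu_bytes onto a link of link_kbps."""
    return (mtu_bytes * 8) / (link_kbps * 1000) * 1000

for mtu in (1500, 576):
    print(f"MTU {mtu:>4} bytes on a 256 kbps link: "
          f"{serialization_delay_ms(mtu, 256):.1f} ms per packet")

# A 1,500-byte packet ties up a 256 kbps link for roughly 47 ms; a 576-byte packet
# for roughly 18 ms, which matters when interactive display traffic queues behind
# bulk transfers.
```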

Other platforms, however, may use a different set of options to interact with the VDI device firmware. One example is the Pano Logic desktop device, which only requires the creation and configuration of option 001 as a vendor class. This is different from the example above in that there is no X11 window manager resident on the device.

While these DHCP configuration options are not overwhelming when viewed individually, it is worth considering the larger picture when some of these options are already in use. The most common example is an IP telephone at a remote site. In central offices, IP telephony is usually split onto a separate network, but this may not be the case for remote sites that have two or three VDI stations and the same number of phones. There, it may make sense to have only one IP network.

DHCP is critical to effective network management, and that includes a VDI solution. Some planning on scope and configuration can go a long way toward ensuring that the technology will function as expected.

September 29, 2008  11:29 AM

ThinLaunch not all that impressive

Joseph Foran Profile: Joe Foran

In the New Innovators section at VMworld 2008 was an interesting small booth from ThinLaunch, manned by three of the four people in the company. I had a short pow-wow with two of the folks there and came away with mixed feelings. The product, for which the company is named, appears to fulfill a couple of interesting needs: first, IT shops that want to pilot virtual desktop infrastructure (VDI) but don’t want to invest beyond the server room, and second, smaller businesses that have server virtualization capacity to devote to hosting clients but have been loath to rip and replace their thick clients with new thin hardware. I can see where the product may be useful, but I’m not wowed by it; frankly, I was royally unimpressed with the technology.

What ThinLaunch does can be cobbled together with a few Group Policy Object edits in Active Directory without buying the product. Simply replace the shell with whatever VDI launcher (or other application) you want; Microsoft tells you how to do it here. True, ThinLaunch also monitors that process and can automatically restart it if it crashes, but this too can be handled with a small application or by copying the code from this site.
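
For the do-it-yourself route, the watchdog piece is not much code, either. Here is a minimal sketch of the general pattern: launch a designated client in place of the normal shell and restart it whenever it exits. This is not ThinLaunch’s implementation, and the launcher path is a hypothetical placeholder.

```python
# Minimal watchdog sketch: run a designated client application in place of the
# normal shell and restart it whenever it exits. The launcher path is a
# placeholder; substitute whatever VDI or remote-display client you deploy.

import subprocess
import time

LAUNCHER = r"C:\Program Files\ExampleVDIClient\client.exe"  # hypothetical path
RESTART_DELAY_SECONDS = 5  # brief pause so a crash loop doesn't spin the CPU

def run_forever(command):
    """Start the command, wait for it to exit, then start it again."""
    while True:
        process = subprocess.Popen([command])
        process.wait()                     # blocks until the client closes or crashes
        time.sleep(RESTART_DELAY_SECONDS)  # then bring it right back

if __name__ == "__main__":
    run_forever(LAUNCHER)
```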

ThinLaunch is available as an MSI package, meaning it’s very easy to deploy via Group Policy. Then again, Group Policies are even easier to deploy via Group Policy. Duh. ThinLaunch requires .NET 2.0, and GPOs don’t. ThinLaunch supports Windows 2000 through Vista and 2K8. GPOs do too.

I can see the need for this package, and I can even see some large enterprise customers who’d want a packaged application to handle the conversion of legacy desktops. I can also see using the product in small businesses with virtualization already in place but a lot of legacy desktops and a lack of cash. What I can’t see is how it’s innovative in its approach.

Sorry, ThinLaunch, but you get three out of ten pokers — there’s just nothing hot there.


September 29, 2008  11:00 AM

Embotics V-Scout VM population trending is a hidden jewel

Rick Vanover Profile: Rick Vanover

Virtualization administrators are quite aware of the risk of virtual machine sprawl. How your virtual environment grows or shrinks will vary based on many factors. For new implementations, there is generally a large front-loading of virtual machines from conversions or migrations from older environments. Ongoing operations, however, may be a little more predictable. V-Scout from Embotics has a great feature buried in its reporting engine that can give you a view into the VMware guest virtual machine footprint: the population trending report.

The base report is fair enough; it gets more accurate and more reasonable with a good time sample from which to make predictions. I have just about one month’s worth of data in my V-Scout database, so now some of the data makes sense. Here is the top of the base report:
[Screenshot: VM Trend Report]
There is a quick snapshot of the change in storage, number of virtual machines, operating system count, memory and CPU inventories. In this example, I added a host as well as virtual machines and storage, showing the comprehensive changes nicely for the timeframe.

The real treasure of this report, however, is the bottom option called “VM Details.” This is a nice log of the virtual machines that were added and removed in the environment. While there may be a net gain of two virtual machines in the timeframe example above, this detail shows how we arrived at that count. It can help explain virtual machine additions and removals that may be part of a special task or project, and they can all be viewed at the bottom of the VM population trending report, as shown below:
[Screenshot: VM Details]
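
As a rough illustration of the bookkeeping that the VM Details log makes possible, the short sketch below derives a net count change from an add/remove list. The events are invented for the example and are not V-Scout output.

```python
# Tiny sketch of deriving a net VM count change from an add/remove detail log.
# The events below are made-up examples, not V-Scout data.

events = [
    ("2008-09-02", "added",   "web-test-01"),
    ("2008-09-05", "added",   "sql-dev-02"),
    ("2008-09-11", "removed", "legacy-nt4-app"),
    ("2008-09-18", "added",   "file-pilot-01"),
]

added   = [name for _, action, name in events if action == "added"]
removed = [name for _, action, name in events if action == "removed"]

print(f"Added {len(added)}, removed {len(removed)}, net change {len(added) - len(removed)}")
print("Additions:", ", ".join(added))
print("Removals: ", ", ".join(removed))
```
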
V-Scout’s intuitive interface continues to bring good information closer to the virtualization administrator. V-Scout is a free management application that can plug into VMware Infrastructure 3 environments and can be downloaded from the Embotics website.


September 29, 2008  10:53 AM

Microsoft – Time to put up or shut up

Eric Siebert Profile: Eric Siebert

Having seen a lot of anti-VMware propaganda coming out of the Microsoft marketing machine lately, I get the sense that Microsoft is desperate to do anything to catch up with and compete with VMware. One example is the VMwareCostsWayTooMuch.com website, which it recently launched in conjunction with passing out $1 chips and flyers at VMworld. What’s next, Microsoft? Late-night TV infomercials on Hyper-V proclaiming its greatness? You might see if George Foreman is available — you could call it the lean, mean, cost-reducing virtualization machine.

Microsoft’s tactics strike me as childish. Instead of trying to mislead people, the company should spend its time and money making a product that can actually compete with VMware. Microsoft tries to push the cost issue without looking at the big picture numbers and the features you get with each product. VMware costs more because you get more with it; you get a proven, mature and feature-rich product with many integration, management and automation components.

Microsoft is way behind in the enterprise virtualization game and has a lot of catching up to do. VMware’s recent announcements at VMworld put Microsoft even farther back in VMware’s rear-view mirror. Microsoft should be doing everything it can to polish its 1.0 product and add some of the many features and functions that ESX already has. Good products tend to speak for themselves. Once Microsoft has a product that can stand up to ESX, it won’t be forced to sink to the guerrilla marketing level to sell it. I guess at this point Microsoft has to do everything it can to try to achieve global domination of the virtualization market. Maybe it’s time for VMware to start its own website, along the lines of HyperVLacksFeatures.com — but then again, why sink to Microsoft’s level?


September 29, 2008  10:44 AM

esXpress: A good idea come ’round again

Joseph Foran Profile: Joe Foran

I had the opportunity to spend a little time at the esXpress booth at VMworld 2008 this year, and I could kick myself. Hard.

To go back to the start of why: a long time ago, back when my office primarily used VMware GSX 3 for virtualization at the server level, I had a real need to do backups of the virtual machine disk files (VMDKs). My GSX hosts were Linux servers, and I used a simple cron job to launch scripts on a schedule, which triggered a suspension of the guests, tarring of the VMs and scp-ing of the tarballs to a network-attached storage (NAS) box before restarting the guests. It let me avoid buying backup licenses for my guests (which were mostly pre-production units, image builds, etc.) and gave me a complete point-in-time recovery solution better than anything I could buy off the shelf (at the time). It was so efficient that when my company joined the Core Customer Program, I was asked to give a webinar on the topic. Sadly, that webinar is now so out-of-date that it’s been pulled from VMware’s site, and I can’t find it on archive.org.
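
That original script is long gone, but its shape was roughly the following. This is a from-memory sketch, not the production code: the suspend and resume calls are placeholders for whatever VM control tooling you have, and every path and hostname is an invented example.

```python
# Rough reconstruction of the idea behind that old GSX backup job: suspend a guest,
# tar up its directory, copy the tarball to a NAS, then resume the guest.
# suspend_vm/resume_vm are placeholders; all paths and hosts are examples.

import os
import subprocess
import tarfile
import time

VM_DIRS   = ["/vmware/guests/build-image", "/vmware/guests/test-web"]
NAS_DEST  = "backupuser@nas01:/backups/vmware/"   # hypothetical NAS target
STAGE_DIR = "/var/tmp"

def suspend_vm(vm_dir):
    """Placeholder: suspend the guest that lives in vm_dir."""
    print(f"suspending guest in {vm_dir}")

def resume_vm(vm_dir):
    """Placeholder: resume the guest that lives in vm_dir."""
    print(f"resuming guest in {vm_dir}")

def backup_vm(vm_dir):
    suspend_vm(vm_dir)
    try:
        tarball = os.path.join(
            STAGE_DIR, os.path.basename(vm_dir) + time.strftime("-%Y%m%d") + ".tar.gz")
        with tarfile.open(tarball, "w:gz") as archive:          # tar + compress the VM directory
            archive.add(vm_dir, arcname=os.path.basename(vm_dir))
        subprocess.run(["scp", tarball, NAS_DEST], check=True)  # ship it to the NAS
        os.remove(tarball)
    finally:
        resume_vm(vm_dir)   # always bring the guest back, even if the copy fails

if __name__ == "__main__":
    for vm in VM_DIRS:
        backup_vm(vm)
```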

Now why would I kick myself? Because that simple idea is at the root of esXpress. It does the job a lot better than I did and focuses on ESX rather than GSX/Server, but at the core it’s very similar. It gets around the need for downtime and uses gzip under the hood rather than tar, and it uses a Linux guest that essentially copies, compresses and offloads the other guests. I was pretty impressed by how simply and efficiently the product works, though I must admit to being a bit jealous — if only I had realized there was a product in that idea.

So kudos to esXpress for taking a good idea and making a good product out of it!


September 24, 2008  2:56 PM

Microsoft’s Hyper-V bundling with Windows Server 2008: Deja vu?

Bridget Botelho Profile: Bridget Botelho

You may recall the 1998 anti-trust case in which Microsoft Corp. was sued by the U.S. for bundling Internet Explorer with its Windows operating system, because doing so gave Microsoft an advantage that led to the demise of the incumbent Web browser, Netscape Navigator.

Fast-forward 10 years to today. Microsoft bundles its Hyper-V hypervisor with Windows Server 2008, so anyone who uses Windows Server 2008 also gets Hyper-V.

Is this déjà vu?

Just for fun, let’s play a little game. In the Wikipedia definition of United States v. Microsoft, let’s replace “Web browser” with “hypervisor,” “Netscape” with “VMware,” and “Internet Explorer” with “Hyper-V.”

Here we go:

United States v. Microsoft 87 F. Supp. 2d 30 (D.D.C. 2000) was a set of consolidated civil actions filed against Microsoft Corp. on May 18, 1998 by the United States Department of Justice (DOJ) and twenty U.S. states. Joel I. Klein was the lead prosecutor. The plaintiffs alleged that Microsoft abused monopoly power in its handling of operating system sales and hypervisor sales. The issue central to the case was whether Microsoft was allowed to bundle its flagship Hyper-V hypervisor software with its Microsoft Windows operating system.

Bundling them together is alleged to have been responsible for Microsoft’s victory in the hypervisor wars as every Windows user had a copy of Hyper-V. It was further alleged that this unfairly restricted the market for competing hypervisors (such as VMware Inc.) that were slow to download over a modem or had to be purchased at a store.

Underlying these disputes were questions over whether Microsoft altered or manipulated its application programming interfaces (APIs) to favor Hyper-V over third-party hypervisors, Microsoft’s conduct in forming restrictive licensing agreements with OEM computer manufacturers, and Microsoft’s intent in its course of conduct.

Microsoft stated that the merging of Microsoft Windows and Hyper-V was the result of innovation and competition, that the two were now the same product and were inextricably linked together and that consumers were now getting all the benefits of IE for free. Those who opposed Microsoft’s position countered that the hypervisor was still a distinct and separate product which did not need to be tied to the operating system, since a separate version of Hyper-V was available for Mac OS. They also asserted that IE was not really free because its development and marketing costs may have kept the price of Windows higher than it might otherwise have been. The case was tried before U.S. District Court Judge Thomas Penfield Jackson. The DOJ was initially represented by David Boies.

That was fun. Just a game, of course.

But it appears to me that by bundling Hyper-V with Windows Server 2008, Microsoft has the same advantage over VMware, Citrix and any other hypervisor provider that it had over Netscape Navigator in the 1990s.

This time around, Microsoft is probably safe from a lawsuit, though.

According to Wikipedia, “On November 2, 2001, the DOJ reached an agreement with Microsoft to settle the case. The proposed settlement required Microsoft to share its application programming interfaces with third-party companies. … However, the DOJ did not require Microsoft to change any of its code nor prevent Microsoft from tying other software with Windows in the future. … Nine states and the District of Columbia did not agree with the settlement, arguing that it did not go far enough to curb Microsoft’s anti-competitive business practices. On June 30, 2004, the U.S. appeals court unanimously approved the settlement with the Justice Department, rejecting objections from Massachusetts that the sanctions were inadequate.”

I’m not saying that what happened to Netscape will happen to VMware. In the virtualization market, VMware is in a strong position with a huge lead over Microsoft, and VMware’s products are far more mature and feature-rich. Hell, Microsoft doesn’t even offer live migration yet.
[Image: VMware potential tombstone]

But some analysts predict that history will repeat itself.

I hope all the existing hypervisor vendors can co-exist. It gives users choice and keeps costs down, as we saw when both Citrix and VMware reduced their hypervisor prices to zero this year.

I’d like to hear your feedback on the virtualization industry and whether you think VMware can maintain its position as industry leader.


September 24, 2008  8:28 AM

VMware defends its upcoming fault-tolerance feature

Bridget Botelho Profile: Bridget Botelho

During VMworld 2008 in Las Vegas last week, VMware Inc. announced its upcoming fault tolerance feature and gave a demonstration of it during one of the keynote sessions. It looked pretty good and simple to use, but Littleton, Mass.-based Marathon Technologies Corp., a company that specializes in fault tolerance software, had plenty to say otherwise.

In response to Marathon’s blog dissin’ the upcoming feature, Palo Alto, Calif.-based VMware‘s Mike DePetrillo, a principal systems engineer, wrote a blog post defending VMware Fault Tolerance.

For starters, Marathon complained that VMware does not provide component-level fault tolerance. “The most common failures that result in unplanned downtime are component failures such as storage, NIC [network interface card] or controller failures. Yet VMware Fault Tolerance doesn’t do anything to protect against I/O, storage or network failures.”

DePetrillo noted that VMware already has features to protect against component failure. “If your NIC fails you’ve got NIC teaming built into the system. To set it up simply plug in both NICs to the server, go into the network panel and attach both of them to the same virtual switch. Done. Four clicks. Same thing for storage with the built-in SAN [storage area network] multipathing drivers,” DePetrillo wrote. “I absolutely agree with the author that component failures are the cause of most crashes and that’s why VMware added these features in 2002. VMware FT is not designed for component failure because there’s no sense in moving the VM to another host if you’ve simply lost a NIC uplink. NIC teaming will take care of that with ease and is a LOT cheaper than using CPU and memory resources on another host to overcome the failure.”

Marathon’s second beef: VMware’s fault tolerance is too complex. “In order to use VMware Fault Tolerance, you’ll first have to install both VMware HA [High Availability] and DRS [Distributed Resource Scheduler]. No small feat in and of themselves. Then, because VMware FT requires NIC teaming, you’ll also have to manually install paired NICs. Then you’ll need to manually set up dual storage controllers (with the software to manage them) because it requires multipathing. And to top it all off, you’re required to use an expensive, and often complicated, SAN.”

DePetrillo said the process requires checking off two boxes – HA and DRS. That’s it. “If that’s too hard then please comment and let me know how it could possibly be easier. Even my dog has figured out how to do this now. Granted, it’s a pretty smart dog.”

“As for setting up the dual NICs and dual HBAs [host bus adapters], well, yes, you have to actually plug the physical devices in. After you’ve done that the **built-in** NIC teaming and HBA drivers will take over and configure most everything for you. The NIC teaming does require four extra clicks. The HBA drivers actually figure out the failover paths, match them up, and set up the appropriate form of failover all auto-magically. They’ve been doing this since ESX 1.5 (6 years ago),” DePetrillo blogged.

“Lastly, yes, this requires shared storage. Pretty sure that most environments that want FT (no downtime what-so-ever because our business could lose millions) already have a SAN to take advantage of other things virtualization related such as DRS and VMotion,” he wrote.

Also, VMware FT does not require dual NICs or dual HBAs. As DePetrillo put it, “This is something you should have in every virtualization setup that’s running VMs you care anything about, but it’s not a requirement to get VMware FT [Fault Tolerance] running.”

The last point Marathon makes that’s worth spending any time on is that VMware offers only limited CPU fault tolerance. “With VMware FT, you’ll need to set up what VMware refers to as a “record/replay” capability on both a primary and secondary server. If something happens to the primary server, the record is stored on the SAN and then restarted on the secondary server. … The whole thing depends on the quality of the SAN. Second, in the words of the VMware engineer who presented at VMworld, “this can take a couple of seconds.” So what happens to your application state in those couple of seconds?”

DePetrillo’s defense is that “if you’re the type of company that requires absolutely no downtime for an app — if the app is just that critical — then I’m pretty sure you’re going to have a decent SAN. … If you’re having so many problems with your SAN that you don’t trust it for FT, then you have much bigger issues at hand that VMware or Marathon or any of the other virtualization related vendors aren’t going to help you with.”

You can read more of VMware’s comments on DePetrillo’s blog, which gets into some details on how VMware Fault Tolerance will work, and vice versa for Marathon.

But I think it is obvious that Marathon is making VMware’s fault tolerance feature seem worse than it is, and VMware is making its new feature seem simpler than it is.

For the most part, this is a pissing contest between the incumbent fault-tolerance vendor and the “new guy.” But the fact of the matter is, if you use VMware virtualization, you can’t use Marathon Technologies’ software because it doesn’t support VMware (obviously), and if you use Citrix Systems’ XenServer, you can’t use VMware Fault Tolerance, so these arguments are moot.


September 23, 2008  5:00 AM

Server virtualization in the age of mergers and acquisitions

Joseph Foran Profile: Joe Foran

Last week at VMworld ’08, while I was living in the glitz of Vegas for a week of product news, press releases, interviews and judging the Best of VMworld entries with my TechTarget colleagues, my constantly buzzing BlackBerry delivered the latest financial news — the collapse of Lehman, the fall of Morgan, the implosion of AIG — all saying the doom of the market is upon thee.

As an investor, this wasn’t my happiest week (I always felt it was odd to invest money in people who invest money), but for a lot of others, last week must have been miserable indeed. Among those feeling miserable right now are IT staffers at Bank of America, who must now absorb a global IT infrastructure as their company acquires Merrill Lynch. And of course, federal IT staff are now worrying about how to oversee the essentially nationalized AIG. That’s not to mention the IT teams at numerous other companies engaged in mergers and acquisitions.

This is the time for server virtualization to shine. Bank of America should lead the charge in making efficient use of virtualization in its acquisition of Merrill Lynch. BofA is going to inherit an immense quantity of hardware, not to mention enormous heating/cooling/electric bills, colossal real estate costs and a titanic regulatory compliance project, as it tries to integrate its own IT infrastructure with Merrill’s. If BofA (or any acquiring company, for that matter) is smart, it will use virtualization to physical-to-virtual (P2V) every possible asset, transport the virtual machines to its own data centers and import those virtual systems.

Bank of America shouldn’t just P2V the low-hanging fruit, either — it should reach for the stars. Then it should shut down that physical hardware, wipe it and sell it to help offset the project costs. There are obviously a lot of nuanced steps involved in making this happen, but the major pain points to which virtualization presents solutions are the same major pain points in integrating a new IT infrastructure:

1) Server move/change/add/remove
2) Power costs
3) Real estate costs
4) Heating and cooling
5) Configuration management
6) Asset management

The difference between the slow-rolling projects in most companies and the aggressive plan I recommend is night and day. The ROI of a progressive rollout can be achieved over time, integrated into the budget and spread across that period. The costs of an acquisition and the integration of that acquisition’s IT assets are immediate and immense.

Virtualization can provide those long-term benefits in the short term — the elimination of real estate, cooling and power costs alone will offset the cost of licensing and storage. The enhanced backup and retention possible with virtualized systems will go a long way toward easing regulatory concerns about data retention.


September 23, 2008  4:10 AM

Using blades as virtual hosts

Eric Siebert Profile: Eric Siebert

Blades have come a long way since the early days of very few options and limited expandability. Most early blade servers had only one or two NICs, limited storage, no Fibre Channel support, and limited CPU and memory, which made them poor choices for virtual hosts. That has all changed in recent years as blade technology has evolved past those limitations, making blades ideal virtual host servers. Modern blade servers can support up to 16 NICs, four quad-core processors and multiple Fibre Channel or iSCSI HBA adapters. When considering blade servers as an alternative to traditional rack-mount servers in your environment, you need to know the advantages and disadvantages of each and why you might choose one type over another.

Some reasons you might choose blade servers over traditional servers:

  • Rack density is better for data centers where space is a concern. Up to 50% more servers can be installed in a standard 42U rack compared with traditional servers (see the rough arithmetic after this list).
  • Blade servers provide easier cable management as they simply connect to a chassis and need no additional cable connections.
  • Blade servers have lower power consumption than traditional servers because the blades share the chassis’ power supplies and cooling.
  • Blade servers can be cheaper than traditional servers when comparing a fully populated chassis with the equivalent number of traditional servers.
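
To put a rough number behind the rack-density bullet above, here is the simple arithmetic. The chassis figures are an assumption (a 10U enclosure holding 16 half-height blades, which was typical at the time), not a quote from any particular vendor.

```python
# Back-of-the-envelope rack density comparison. Chassis size and blade count
# are assumptions for illustration, not vendor specifications.

RACK_UNITS         = 42
TRADITIONAL_PER_U  = 1    # 1U rack-mount servers
CHASSIS_HEIGHT_U   = 10
BLADES_PER_CHASSIS = 16

traditional = RACK_UNITS * TRADITIONAL_PER_U   # 42 servers in the rack
chassis     = RACK_UNITS // CHASSIS_HEIGHT_U   # 4 chassis fit in 40U
blades      = chassis * BLADES_PER_CHASSIS     # 64 blades

print(f"Traditional 1U servers per rack: {traditional}")
print(f"Blades per rack: {blades} ({(blades / traditional - 1):.0%} more)")
```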

Some reasons you might choose traditional servers over blade servers:

  • Traditional servers have more internal capacity for local disk storage. Blade servers typically have limited local disk storage capacity due to the limited drive bays. Some blade vendors now have separate storage blades to expand blade storage, but this takes up additional slots in the blade chassis.
  • Traditional servers have more expansion slots available for network and storage adapters. Blade servers typically have very few or no expansion slots. Virtual hosts are often configured with many NICs to support the console network, VMkernel network, network-attached storage and virtual machine networks. Additional network adapters are also needed to provide failover and load balancing.
  • Once a chassis is full, purchasing a new chassis to add a single additional server can be costly. Traditional servers can be installed without any additional infrastructure components.
  • Traditional servers are often less complicated to set up and manage than blade servers.
  • Traditional servers have multiple USB ports for connecting external devices and also an optical drive for loading software on the host. They also have serial and parallel ports, which are sometimes used for hardware dongles for licensing software. Additionally, tape backup devices can be installed in them. Blade servers make use of virtual devices that are managed through the embedded hardware management interfaces.

Many people who use blade servers as virtual hosts take advantage of the boot-from-SAN feature so they don’t need internal storage on their blade servers. The choice between blade and traditional servers often comes down to personal preference and what type of server is already in use in your data center. Some people like blades, others don’t. Regardless of which server type you choose, both work equally well as virtual hosts.


September 22, 2008  12:49 PM

Managing the virtual machine lifecycle with an expiration date

Rick Vanover Profile: Rick Vanover

Having a virtual machine expiration date is an important procedural step in successfully managing a virtual environment. Managing the expiration date in VMware environments is challenging without adding pricey tools such as Lifecycle Manager or other commercial management products. In this video blog, Rick Vanover discusses one way to get the expiration date into your virtual environment without adding direct costs:
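
(The walkthrough is in the embedded video.) As a rough illustration of the general idea, and not necessarily the exact method shown in the video, an expiration date can be recorded against each virtual machine (in its notes, a custom attribute or even a simple export) and checked with a short report. The inventory below is a made-up stand-in for whatever your environment exposes.

```python
# Minimal sketch of an expiration-date check. The inventory dict stands in for
# however you store the date (VM notes, a custom attribute, a spreadsheet export);
# names and dates are invented examples.

from datetime import date

vm_expirations = {
    "proj-alpha-web":  date(2008, 10, 15),
    "qa-sandbox-03":   date(2008, 9, 1),
    "finance-reports": date(2009, 1, 31),
}

def expired_vms(inventory, today=None):
    """Return the VM names whose expiration date has already passed."""
    today = today or date.today()
    return sorted(name for name, expires in inventory.items() if expires < today)

if __name__ == "__main__":
    for name in expired_vms(vm_expirations):
        print(f"{name} is past its expiration date -- review for decommissioning")
```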

