For starters, a DHCP scope option is a configuration that is defined on a networking server such as Windows Server’s DHCP server role. Traditional configurations for PCs and servers would have DHCP options such as subnet mask, default gateway and domain name server. VDI, however, allows the full range of DHCP scope options to be used. There are numerous scope options available for DHCP that are delivered to the requesting device in the acknowledgment message (DHCPACK), which is sent after the DHCP request message.
DHCP scope options vary by VDI device. Take, for example, the Sun Ray series of VDI devices. For VDI solutions in VMware implementations, the technology requires that at least DHCP options 49 and 66 be configured for connection to the Virtual Desktop Connector agent. Option 49 specifies an X Window System display manager, and option 66 a Trivial File Transfer Protocol (TFTP) server that serves the VDI device’s configuration files.
Beyond basic configuration, it may be worth tweaking some other network options based on the architecture of the VDI implementation. What particularly caught my attention is a blog post by Sun’s Thin Client and Server Based Computing group, which points out that some environments may need to configure the maximum transmission unit (MTU) of network packets. The MTU can also be assigned by DHCP, and it is of particular importance if the VDI implementation serves a remote site with limited bandwidth. The default MTU in most configurations is 1,500 bytes, yet performance may be better with a smaller maximum packet size from the endpoint VDI device. This and other factors make a fully representative pilot sound like a really good idea!
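For the curious, DHCP options travel inside the DHCPACK as simple type-length-value triples defined in RFC 2132. Here is a minimal Python sketch of how the options mentioned here (26 for interface MTU, 49 for the X Window System display manager, 66 for the TFTP server name) would be encoded on the wire; the address and hostname are made-up examples:

```python
import socket
import struct

def encode_option(code: int, payload: bytes) -> bytes:
    """Encode one DHCP option as a type-length-value triple (RFC 2132)."""
    if not 0 < len(payload) < 256:
        raise ValueError("DHCP option payload must be 1-255 bytes")
    return bytes([code, len(payload)]) + payload

# Option 26: interface MTU, a 2-byte unsigned integer.
mtu = encode_option(26, struct.pack("!H", 1400))

# Option 49: X Window System Display Manager, a list of IPv4 addresses.
xdm = encode_option(49, socket.inet_aton("192.0.2.10"))

# Option 66: TFTP server name, an ASCII string.
tftp = encode_option(66, b"tftp.example.com")

print(mtu.hex())  # -> "1a020578": code 26, length 2, value 1400
```

The same TLV framing carries every scope option, which is why a DHCP server can hand a thin client its display manager and TFTP server in the same acknowledgment that carries its gateway and DNS settings.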
However, other platforms may use a different set of options to interact with the VDI device firmware. One example is the Pano Logic desktop device, which requires only the creation and configuration of option 001 as a vendor class. This differs from the example above in that there is no X Window System display manager resident on the device.
While these DHCP configuration options are not overwhelming when viewed individually, it is worth considering the larger picture in cases where some of these options are already in use. The most common example is an IP telephone at a remote site. In central offices, IP telephony is usually split onto a separate network, but that may not be the case for remote sites with two or three VDI stations and the same number of phones; there, it may make sense to have only one IP network.
DHCP is critical to effective network management, and a VDI solution is no exception. Some planning on scope and configuration goes a long way toward ensuring the technology functions as expected.
ThinLaunch’s core functionality can be cobbled together with a few Group Policy object edits in Active Directory, without buying the product. Simply replace the Windows shell with whatever VDI launcher (or other application) you want; Microsoft tells you how to do it here. True, ThinLaunch also monitors this process and can automatically restart it if it crashes, but that, too, can be managed with an application or by copying the code from this site.
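Under the hood, the GPO trick amounts to pointing the Winlogon Shell value (or the per-user custom user interface policy) at your launcher instead of explorer.exe. The restart-on-crash piece is a few lines in any language; here is a minimal Python sketch, where the launcher path and restart limit are made-up placeholders, not anything ThinLaunch actually ships:

```python
import subprocess
import sys

def run_with_restart(cmd, max_restarts=3):
    """Launch cmd and relaunch it whenever it exits, up to max_restarts
    relaunches. Returns the total number of launches (1 + restarts)."""
    launches = 0
    while launches <= max_restarts:
        launches += 1
        subprocess.run(cmd)  # blocks until the process exits or crashes
    return launches

# Hypothetical usage -- the launcher path is a made-up example:
# run_with_restart(["C:\\Program Files\\VDILauncher\\launcher.exe"])
```

A real watchdog would check the exit code and relaunch only on failure, and add a back-off delay between restarts, but the basic loop is no more complicated than this.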
ThinLaunch is available as an MSI package, meaning it’s very easy to deploy via Group Policy. Then again, Group Policies are even easier to deploy via Group Policy. Duh. ThinLaunch requires .NET 2.0, and GPOs don’t. ThinLaunch supports Windows 2000 through Vista and 2K8. GPOs do too.
I can see the need for this package and I can even see some large enterprise customers who’d want a packaged application to handle the conversion of legacy desktops. I can even see using the product in small businesses with virtualization already in place but a lot of legacy desktops and a lack of cash. What I can’t see is how it’s innovative in its approach.
Sorry, ThinLaunch, but you get three out of ten pokers — there’s just nothing hot there.
The base report is fair enough; it gets more accurate and reasonable as a longer time sample accumulates for making predictions. I have just about one month’s worth of data in my V-Scout database, so now some of the data makes sense. Here is the top of the base report:
There is a quick snapshot of the change in storage, number of virtual machines, operating system count, memory and CPU inventories. In this example, I added a host as well as virtual machines and storage, showing the comprehensive changes nicely for the timeframe.
The real treasure of this report, however, is the bottom option called “VM Details.” This is a nice log of virtual machines that were added and removed in the environment. While there may be a net gain of two virtual machines in the timeframe example above, this detail will show how we arrived at that count. This can help explain virtual machine additions and removals that may be part of a special task or project, and they can all be viewed in the bottom of the VM population trending report as shown below:
V-Scout’s intuitive interface continues to bring good information closer to the virtualization administrator. V-Scout is a free management application that can plug into VMware Infrastructure 3 environments and can be downloaded from the Embotics website.
Having seen a lot of anti-VMware propaganda coming out of the Microsoft marketing machine lately, it strikes me that Microsoft is desperate to do anything to try to catch up and compete with VMware. One example is the VMwareCostsWayTooMuch.com website, which it recently launched in conjunction with passing out $1 chips and flyers at VMworld. What’s next, Microsoft? Late-night TV infomercials on Hyper-V proclaiming its greatness? You might see if George Foreman is available — you could call it the lean, mean, cost-reducing virtualization machine.
Microsoft’s tactics strike me as childish. Instead of trying to mislead people, the company should spend its time and money making a product that can actually compete with VMware. Microsoft tries to push the cost issue without looking at the big picture numbers and the features you get with each product. VMware costs more because you get more with it; you get a proven, mature and feature-rich product with many integration, management and automation components.
Microsoft is way behind in the enterprise virtualization game and has a lot of catching up to do. VMware’s recent announcements at VMworld put Microsoft even farther back in VMware’s rear-view mirror. Microsoft should be doing everything it can to polish its 1.0 product and add some of the many features and functions that ESX already has. Good products tend to speak for themselves. Once Microsoft has a product that can stand up to ESX, it won’t be forced to sink to the guerrilla marketing level to sell it. I guess at this point Microsoft has to do everything it can to try to achieve global domination of the virtualization market. Maybe it’s time for VMware to start its own website, along the lines of HyperVLacksFeatures.com; but then again, why sink to Microsoft’s level?
To go back to the start of why: a long time ago, when my office primarily used VMware GSX 3 for virtualization at the server level, I had a real need to back up the virtual machine disk files (VMDKs). My GSX hosts were Linux servers, and I used a simple cron job to launch scripts on a schedule, which triggered a suspension, tarring of the VMs and scp-ing of the tarballs to a network-attached storage (NAS) box before restarting the guests. It let me avoid buying backup licenses for my guests (which were mostly pre-production units, image builds, etc.) and gave me a complete point-in-time recovery solution better than anything I could buy off the shelf (at the time). It was so efficient that when my company joined the Core Customer Program, I was asked to give a webinar on the topic. Sadly, that webinar is now so out of date that it’s been pulled from VMware’s site, and I can’t find it on archive.org.
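For the curious, the whole cron-driven cycle boiled down to four external commands per guest. Here is a Python sketch that just builds that command sequence; the vmware-cmd invocations are from memory of the GSX-era tooling, and the paths are illustrative, not from the original scripts:

```python
import os

def backup_commands(vmx_path, nas_dest):
    """Build the command sequence for one suspend/tar/scp/restart backup
    cycle. Returns a list of argv lists; nothing is executed here."""
    vm_dir = os.path.dirname(vmx_path)
    tarball = f"/tmp/{os.path.basename(vm_dir)}.tar.gz"
    return [
        ["vmware-cmd", vmx_path, "suspend"],  # quiesce the guest
        ["tar", "-czf", tarball, vm_dir],     # archive the VMDKs and config
        ["scp", tarball, nas_dest],           # offload the tarball to the NAS
        ["vmware-cmd", vmx_path, "start"],    # resume the guest
    ]

cmds = backup_commands("/vms/build01/build01.vmx", "nas:/backups/")
```

A driver loop would hand each argv list to subprocess.run and bail out on a non-zero return code, so a failed suspend never leads to tarring a running guest.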
Now why would I kick myself? Because that simple idea is at the root of esXpress. It does it a lot better than I did and focuses on ESX rather than GSX/Server, but at the core it’s very similar. It gets around the need for downtime and uses gzip under the hood rather than tar, and it has a Linux OS guest that essentially copies, compresses and offloads the other guests. I was pretty impressed by how simply and efficiently the product works, though I must admit to being a bit jealous; if only I had realized there was a <i>product</i> in that idea.
So kudos to esXpress for taking a good idea and making a good product out of it!
Fast-forward 10 years to today. Microsoft bundles its Hyper-V hypervisor with Windows Server 2008, so anyone who uses Windows Server 2008 also gets Hyper-V.
Is this déjà vu?
Just for fun, let’s play a little game. In the Wikipedia definition of United States v. Microsoft, let’s replace “Web browser” with “hypervisor,” “Netscape” with “VMware,” and “Internet Explorer” with “Hyper-V.”
Here we go:
United States v. Microsoft 87 F. Supp. 2d 30 (D.D.C. 2000) was a set of consolidated civil actions filed against Microsoft Corp. on May 18, 1998 by the United States Department of Justice (DOJ) and twenty U.S. states. Joel I. Klein was the lead prosecutor. The plaintiffs alleged that Microsoft abused monopoly power in its handling of operating system sales and hypervisor sales. The issue central to the case was whether Microsoft was allowed to bundle its flagship Hyper-V hypervisor software with its Microsoft Windows operating system.
Bundling them together is alleged to have been responsible for Microsoft’s victory in the hypervisor wars as every Windows user had a copy of Hyper-V. It was further alleged that this unfairly restricted the market for competing hypervisors (such as VMware Inc.) that were slow to download over a modem or had to be purchased at a store.
Underlying these disputes were questions over whether Microsoft altered or manipulated its application programming interfaces (APIs) to favor Hyper-V over third-party hypervisors, Microsoft’s conduct in forming restrictive licensing agreements with OEM computer manufacturers, and Microsoft’s intent in its course of conduct.
Microsoft stated that the merging of Microsoft Windows and Hyper-V was the result of innovation and competition, that the two were now the same product and were inextricably linked together and that consumers were now getting all the benefits of Hyper-V for free. Those who opposed Microsoft’s position countered that the hypervisor was still a distinct and separate product which did not need to be tied to the operating system, since a separate version of Hyper-V was available for Mac OS. They also asserted that Hyper-V was not really free because its development and marketing costs may have kept the price of Windows higher than it might otherwise have been. The case was tried before U.S. District Court Judge Thomas Penfield Jackson. The DOJ was initially represented by David Boies.
That was fun. Just a game, of course.
But it appears to me that by bundling Hyper-V with Windows Server 2008, Microsoft has the same advantage over VMware, Citrix and any other hypervisor provider that it had over Netscape Navigator in the 1990s.
This time around, Microsoft is probably safe from a lawsuit, though.
According to Wikipedia, “On November 2, 2001, the DOJ reached an agreement with Microsoft to settle the case. The proposed settlement required Microsoft to share its application programming interfaces with third-party companies. … However, the DOJ did not require Microsoft to change any of its code nor prevent Microsoft from tying other software with Windows in the future. … Nine states and the District of Columbia did not agree with the settlement, arguing that it did not go far enough to curb Microsoft’s anti-competitive business practices. On June 30, 2004, the U.S. appeals court unanimously approved the settlement with the Justice Department, rejecting objections from Massachusetts that the sanctions were inadequate.”
I’m not saying that what happened to Netscape will happen to VMware. In the virtualization market, VMware is in a strong position with a huge lead over Microsoft, and VMware’s products are far more mature and feature-rich. Hell, Microsoft doesn’t even offer live migration yet.
But some analysts predict that history will repeat itself.
I hope all the existing hypervisor vendors can co-exist. It gives users choice and keeps costs down, as we saw when both Citrix and VMware reduced their hypervisor prices to zero this year.
I’d like to hear your feedback on the virtualization industry and whether you think VMware can maintain its position as industry leader.
In response to Marathon’s blog post dissin’ the upcoming feature, Mike DePetrillo, a principal systems engineer at Palo Alto, Calif.-based VMware, wrote a post defending VMware Fault Tolerance.
For starters, Marathon complained that VMware does not provide component-level fault tolerance. “The most common failures that result in unplanned downtime are component failures such as storage, NIC [network interface card] or controller failures. Yet VMware Fault Tolerance doesn’t do anything to protect against I/O, storage or network failures.”
DePetrillo noted that VMware already has features to protect against component failure. “If your NIC fails you’ve got NIC teaming built into the system. To set it up simply plug in both NICs to the server, go into the network panel and attach both of them to the same virtual switch. Done. Four clicks. Same thing for storage with the built-in SAN [storage area network] multipathing drivers,” DePetrillo wrote. “I absolutely agree with the author that component failures are the cause of most crashes and that’s why VMware added these features in 2002. VMware FT is not designed for component failure because there’s no sense in moving the VM to another host if you’ve simply lost a NIC uplink. NIC teaming will take care of that with ease and is a LOT cheaper than using CPU and memory resources on another host to overcome the failure.”
Marathon’s second beef: VMware’s fault tolerance is too complex. “In order to use VMware Fault Tolerance, you’ll first have to install both VMware HA [High Availability] and DRS [Distributed Resource Scheduler]. No small feat in and of themselves. Then, because VMware FT requires NIC teaming, you’ll also have to manually install paired NICs. Then you’ll need to manually set up dual storage controllers (with the software to manage them) because it requires multipathing. And to top it all off, you’re required to use an expensive, and often complicated, SAN.”
DePetrillo said the process requires checking off two boxes – HA and DRS. That’s it. “If that’s too hard then please comment and let me know how it could possibly be easier. Even my dog has figured out how to do this now. Granted, it’s a pretty smart dog.”
“As for setting up the dual NICs and dual HBAs [host bus adapters], well, yes, you have to actually plug the physical devices in. After you’ve done that the **built-in** NIC teaming and HBA drivers will take over and configure most everything for you. The NIC teaming does require four extra clicks. The HBA drivers actually figure out the failover paths, match them up, and set up the appropriate form of failover all auto-magically. They’ve been doing this since ESX 1.5 (6 years ago),” DePetrillo blogged.
“Lastly, yes, this requires shared storage. Pretty sure that most environments that want FT (no downtime what-so-ever because our business could lose millions) already have a SAN to take advantage of other things virtualization related such as DRS and VMotion,” he wrote.
Also, VMware FT does not require dual NICs or dual HBAs because, DePetrillo said, “This is something you should have in every virtualization setup that’s running VMs you care anything about, but it’s not a requirement to get VMware FT [Fault Tolerance] running.”
The last point Marathon makes that’s worth spending any time on is that VMware offers only limited CPU fault tolerance. “With VMware FT, you’ll need to set up what VMware refers to as a ‘record/replay’ capability on both a primary and secondary server. If something happens to the primary server, the record is stored on the SAN and then restarted on the secondary server. … The whole thing depends on the quality of the SAN. Second, in the words of the VMware engineer who presented at VMworld, ‘this can take a couple of seconds.’ So what happens to your application state in those couple of seconds?”
DePetrillo’s defense is that “if you’re the type of company that requires absolutely no downtime for an app — if the app is just that critical — then I’m pretty sure you’re going to have a decent SAN. … If you’re having so many problems with your SAN that you don’t trust it for FT, then you have much bigger issues at hand that VMware or Marathon or any of the other virtualization related vendors aren’t going to help you with.”
You can read more of VMware’s comments on DePetrillo’s blog, which gets into some detail on how VMware Fault Tolerance will work, and more of Marathon’s on its own blog.
But I think it is obvious that Marathon is making VMware’s fault tolerance feature seem worse than it is, and VMware is making its new feature seem simpler than it is.
For the most part, this is a pissing contest between the incumbent fault-tolerance vendor and the “new guy.” But the fact of the matter is, if you use VMware virtualization, you can’t use Marathon Technologies, because it doesn’t support VMware (obviously), and if you use Citrix Systems’ XenServer, you can’t use VMware Fault Tolerance. So these arguments are moot.
As an investor, this wasn’t my happiest week (I always felt it was odd to invest money in people who invest money), but for a lot of others, last week must have been miserable indeed. Among those feeling miserable right now are IT staffers at Bank of America, who must now absorb a global IT infrastructure as their company acquires Merrill Lynch. And of course, federal IT staff are now worrying about how to oversee the essentially nationalized AIG. That’s not to mention the IT teams at numerous other companies engaged in mergers and acquisitions.
This is the time for server virtualization to shine. Bank of America should lead the charge in making efficient use of virtualization in its acquisition of Merrill Lynch. BofA is going to inherit an immense quantity of hardware, not to mention enormous heating/cooling/electric bills, colossal real estate costs and a titanic regulatory compliance project as it tries to integrate its own IT infrastructure with Merrill’s. If BofA (or any acquiring company, for that matter) is smart, it will use virtualization to physical-to-virtual (P2V) every possible asset, transport the resulting virtual machines to its own data center and import them there.
Bank of America shouldn’t just P2V the low-hanging fruit, either; it should reach for the stars. Then it should shut down the physical hardware, wipe it and sell it to help offset the project costs. There are obviously a lot of nuanced steps involved in making this happen, but the major pain points of integrating a new IT infrastructure are exactly the ones virtualization addresses:
1) Server move/change/add/remove
2) Power costs
3) Real estate costs
4) Heating and cooling
5) Configuration management
6) Asset management
The difference between the slow-rolling projects in most companies and the aggressive plan I recommend is night and day. In a progressive rollout, the ROI accrues over time and can be integrated into the budget along the way. The costs of an acquisition, and of integrating that acquisition’s IT assets, are immediate and immense.
Virtualization can provide those long-term benefits in the short term; the elimination of real estate, cooling and power costs alone will offset the cost of licensing and storage. The enhanced backup and retention possible with virtualized systems will go a long way toward easing regulatory concerns about data retention.
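To make the point concrete, here is a back-of-envelope sketch of the consolidation math. Every figure below is a hypothetical placeholder, not a real BofA (or vendor) number:

```python
# Hypothetical consolidation math; every figure here is made up.
inherited_servers = 400
consolidation_ratio = 10       # guests per virtual host
cost_per_server_year = 1200.0  # power + cooling + floor space, per box

hosts_needed = -(-inherited_servers // consolidation_ratio)  # ceiling division
retired = inherited_servers - hosts_needed
annual_savings = retired * cost_per_server_year

resale_per_box = 500.0
one_time_offset = retired * resale_per_box  # wipe and sell the old hardware

print(hosts_needed, retired, annual_savings, one_time_offset)
# -> 40 360 432000.0 180000.0
```

Even with conservative placeholder numbers, retiring hundreds of boxes in one pass produces the kind of immediate savings that a slow-rolling project only sees years later.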
Blades have come a long way since the early days of very few options and limited expandability. Most early blade servers only had one or two NICs, limited storage, no Fibre Channel support, and limited CPU and memory, which made them poor choices for virtual hosts. That’s all changed in recent years as blade technology has evolved and no longer has the limitations of earlier blades, making them ideal for virtual host servers. Modern blade servers can support up to 16 NICs, four quad-core processors and multiple Fibre Channel or iSCSI HBA adapters. When considering blade servers in your environment as an alternative to traditional rack mount servers, you need to know the advantages and disadvantages of each and why you might choose one type over another.
Some reasons you might choose blade servers over traditional servers:
Some reasons you might choose traditional servers over blade servers:
Many people who use blade servers as virtual hosts take advantage of the boot-from-SAN feature so they don’t need internal storage on their blades. The choice between blade and traditional servers often comes down to personal preference and what type of server is already in use in your data center. Some people like blades; others don’t. Regardless of which server type you choose, both work equally well as virtual hosts.
Video: http://www.youtube.com/v/dNvb6JWgQcw