Virtualization administrators are quite aware of the risk of virtual machine sprawl. How your virtual environment grows or shrinks will vary based on many factors. For new implementations, there is generally a large front-loading of virtual machines from conversions or migrations from older environments. Ongoing growth, however, may be a little more predictable. V-Scout from Embotics has a great feature buried in the reporting engine that can give you a view into the VMware guest virtual machine footprint: the population trending report.
The base report is reasonable on its own, and it becomes more accurate as it accumulates a good time sample for making predictions. I have about one month’s worth of data in my V-Scout database, so the data is starting to make sense. Here is the top of the base report:
The report opens with a quick snapshot of the changes in storage, number of virtual machines, operating system count, and memory and CPU inventories. In this example, I added a host as well as virtual machines and storage, and the report shows the comprehensive changes nicely for the timeframe.
The real treasure of this report, however, is the bottom option called “VM Details.” This is a nice log of virtual machines that were added and removed in the environment. While there may be a net gain of two virtual machines in the timeframe example above, this detail shows how we arrived at that count. It can help explain virtual machine additions and removals that may be part of a special task or project, and it can all be viewed at the bottom of the VM population trending report, as shown below:
V-Scout’s intuitive interface continues to bring good information closer to the virtualization administrator. V-Scout is a free management application that can plug into VMware Infrastructure 3 environments and can be downloaded from the Embotics website.
Having seen a lot of anti-VMware propaganda coming out of the Microsoft marketing machine lately, I get the impression that Microsoft is desperate to do anything to catch up and compete with VMware. One example is the VMwareCostsWayTooMuch.com website, which it recently launched in conjunction with passing out $1 chips and flyers at VMworld. What’s next, Microsoft? Late-night TV infomercials proclaiming Hyper-V’s greatness? You might see if George Foreman is available — you could call it the lean, mean, cost-reducing virtualization machine.
Microsoft’s tactics strike me as childish. Instead of trying to mislead people, the company should spend its time and money making a product that can actually compete with VMware. Microsoft pushes the cost issue without looking at the big-picture numbers and the features you get with each product. VMware costs more because you get more: a proven, mature and feature-rich product with many integration, management and automation components.
Microsoft is way behind in the enterprise virtualization game and has a lot of catching up to do. VMware’s recent announcements at VMworld put Microsoft even farther back in VMware’s rear-view mirror. Microsoft should be doing everything it can to polish its 1.0 product and add some of the many features that ESX already has. Good products tend to speak for themselves. Once Microsoft has a product that can stand up to ESX, it won’t be forced to sink to the guerrilla marketing level to sell its product. I guess at this point Microsoft has to do everything it can to try to achieve global domination of the virtualization market. Maybe it’s time for VMware to start its own website, along the lines of HyperVLacksFeatures.com — but then again, why sink to Microsoft’s level?
I had the opportunity to spend a little time at the esXpress booth at VMworld 2008, and I could kick myself. Hard.
To explain why: a long time ago, back when my office primarily used VMware GSX 3 for virtualization at the server level, I had a real need to do backups of the virtual machine disk (VMDK) files. My GSX hosts were Linux servers, and I used a simple cron job to launch scripts on a schedule, which triggered a suspension of the guests, tarring of the VMs and scp-ing of the tarballs to a network-attached storage (NAS) box before restarting the guests. It let me avoid buying backup licenses for my guests (which were mostly pre-production units, image builds, etc.) and gave me a complete point-in-time recovery solution better than anything I could buy off the shelf at the time. It was so efficient that when my company joined the Core Customer Program, I was asked to give a webinar on the topic. Sadly, that webinar is now so out of date that it has been pulled from VMware’s site, and I can’t find it on archive.org.
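For the curious, the workflow went roughly like this. What follows is a from-memory sketch, not my original script: the vmware-cmd suspend/start syntax is the GSX-era command set as I recall it, and /vms and nas:/backups are placeholder paths.

```shell
#!/bin/bash
# Sketch of a cron-driven GSX guest backup: suspend each guest,
# tar its directory, copy the tarball to a NAS box, then restart it.
# The command names are overridable so the flow can be dry-run.
VMWARE_CMD=${VMWARE_CMD:-vmware-cmd}   # GSX-era VM control command (assumed syntax)
SCP=${SCP:-scp}
VM_DIR=${VM_DIR:-/vms}                 # placeholder: one subdirectory per guest
NAS_DEST=${NAS_DEST:-nas:/backups}     # placeholder NAS target

backup_vm() {
    local vmx=$1
    local dir name
    dir=$(dirname "$vmx")
    name=$(basename "$dir")
    "$VMWARE_CMD" "$vmx" suspend              # point-in-time: stop the guest first
    tar czf "/tmp/$name.tar.gz" -C "$dir" .   # tarball of the whole VM directory
    "$SCP" "/tmp/$name.tar.gz" "$NAS_DEST/"   # offload the tarball to the NAS
    rm -f "/tmp/$name.tar.gz"
    "$VMWARE_CMD" "$vmx" start                # restart the guest
}

for vmx in "$VM_DIR"/*/*.vmx; do
    [ -e "$vmx" ] || continue
    backup_vm "$vmx"
done
```

A production version would, of course, check that each step succeeded before deleting the local tarball or restarting the guest.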
Now why would I kick myself? Because that simple idea is at the root of esXpress. esXpress does it a lot better than I did and focuses on ESX rather than GSX/Server, but at the core it’s very similar. It gets around the need for downtime and uses gzip under the hood rather than tar, and it uses a Linux guest that essentially copies, compresses and offloads the other guests. I was pretty impressed by how simply and efficiently the product works, though I must admit to being a bit jealous — if only I had realized there was a product in that idea.
So kudos to esXpress for taking a good idea and making a good product out of it!
You may recall the 1998 antitrust case in which the U.S. government sued Microsoft Corp. for bundling Internet Explorer with its Windows operating system, because doing so gave Microsoft an advantage that led to the demise of the incumbent Web browser, Netscape Navigator.
Fast-forward 10 years to today. Microsoft bundles its Hyper-V hypervisor with Windows Server 2008, so anyone who uses Windows Server 2008 also gets Hyper-V.
Is this déjà vu?
Just for fun, let’s play a little game. In the Wikipedia definition of United States v. Microsoft, let’s replace “Web browser” with “hypervisor,” “Netscape” with “VMware,” and “Internet Explorer” with “Hyper-V.”
Here we go:
United States v. Microsoft 87 F. Supp. 2d 30 (D.D.C. 2000) was a set of consolidated civil actions filed against Microsoft Corp. on May 18, 1998 by the United States Department of Justice (DOJ) and twenty U.S. states. Joel I. Klein was the lead prosecutor. The plaintiffs alleged that Microsoft abused monopoly power in its handling of operating system sales and hypervisor sales. The issue central to the case was whether Microsoft was allowed to bundle its flagship Hyper-V hypervisor software with its Microsoft Windows operating system.
Bundling them together is alleged to have been responsible for Microsoft’s victory in the hypervisor wars as every Windows user had a copy of Hyper-V. It was further alleged that this unfairly restricted the market for competing hypervisors (such as VMware Inc.) that were slow to download over a modem or had to be purchased at a store.
Underlying these disputes were questions over whether Microsoft altered or manipulated its application programming interfaces (APIs) to favor Hyper-V over third-party hypervisors, Microsoft’s conduct in forming restrictive licensing agreements with OEM computer manufacturers, and Microsoft’s intent in its course of conduct.
Microsoft stated that the merging of Microsoft Windows and Hyper-V was the result of innovation and competition, that the two were now the same product and were inextricably linked together and that consumers were now getting all the benefits of Hyper-V for free. Those who opposed Microsoft’s position countered that the hypervisor was still a distinct and separate product which did not need to be tied to the operating system, since a separate version of Hyper-V was available for Mac OS. They also asserted that Hyper-V was not really free because its development and marketing costs may have kept the price of Windows higher than it might otherwise have been. The case was tried before U.S. District Court Judge Thomas Penfield Jackson. The DOJ was initially represented by David Boies.
That was fun. Just a game, of course.
But it appears to me that by bundling Hyper-V with Windows Server 2008, Microsoft has the same advantage over VMware, Citrix and any other hypervisor provider that it had over Netscape Navigator in the 1990s.
This time around, Microsoft is probably safe from a lawsuit, though.
According to Wikipedia, “On November 2, 2001, the DOJ reached an agreement with Microsoft to settle the case. The proposed settlement required Microsoft to share its application programming interfaces with third-party companies. … However, the DOJ did not require Microsoft to change any of its code nor prevent Microsoft from tying other software with Windows in the future. … Nine states and the District of Columbia did not agree with the settlement, arguing that it did not go far enough to curb Microsoft’s anti-competitive business practices. On June 30, 2004, the U.S. appeals court unanimously approved the settlement with the Justice Department, rejecting objections from Massachusetts that the sanctions were inadequate.”
I’m not saying that what happened to Netscape will happen to VMware. In the virtualization market, VMware is in a strong position with a huge lead over Microsoft, and VMware’s products are far more mature and feature-rich. Hell, Microsoft doesn’t even offer live migration yet.
I’d like to hear your feedback on the virtualization industry and whether you think VMware can maintain its position as industry leader.
During VMworld 2008 in Las Vegas last week, VMware Inc. announced its upcoming fault tolerance feature and gave a demonstration of it during one of the keynote sessions. It looked pretty good and simple to use, but Littleton, Mass.-based Marathon Technologies Corp., a company that specializes in fault-tolerance software, had plenty to say otherwise.
For starters, Marathon complained that VMware does not provide component-level fault tolerance. “The most common failures that result in unplanned downtime are component failures such as storage, NIC [network interface card] or controller failures. Yet VMware Fault Tolerance doesn’t do anything to protect against I/O, storage or network failures.”
Responding on his blog, DePetrillo noted that VMware already has features to protect against component failure. “If your NIC fails you’ve got NIC teaming built into the system. To set it up simply plug in both NICs to the server, go into the network panel and attach both of them to the same virtual switch. Done. Four clicks. Same thing for storage with the built-in SAN [storage area network] multipathing drivers,” DePetrillo wrote. “I absolutely agree with the author that component failures are the cause of most crashes and that’s why VMware added these features in 2002. VMware FT is not designed for component failure because there’s no sense in moving the VM to another host if you’ve simply lost a NIC uplink. NIC teaming will take care of that with ease and is a LOT cheaper than using CPU and memory resources on another host to overcome the failure.”
Marathon’s second beef: VMware’s fault tolerance is too complex. “In order to use VMware Fault Tolerance, you’ll first have to install both VMware HA [High Availability] and DRS [Distributed Resource Scheduler]. No small feat in and of themselves. Then, because VMware FT requires NIC teaming, you’ll also have to manually install paired NICs. Then you’ll need to manually set up dual storage controllers (with the software to manage them) because it requires multipathing. And to top it all off, you’re required to use an expensive, and often complicated, SAN.”
DePetrillo said the process requires checking off two boxes – HA and DRS. That’s it. “If that’s too hard then please comment and let me know how it could possibly be easier. Even my dog has figured out how to do this now. Granted, it’s a pretty smart dog.”
“As for setting up the dual NICs and dual HBAs [host bus adapters], well, yes, you have to actually plug the physical devices in. After you’ve done that the **built-in** NIC teaming and HBA drivers will take over and configure most everything for you. The NIC teaming does require four extra clicks. The HBA drivers actually figure out the failover paths, match them up, and set up the appropriate form of failover all auto-magically. They’ve been doing this since ESX 1.5 (6 years ago),” DePetrillo blogged.
“Lastly, yes, this requires shared storage. Pretty sure that most environments that want FT (no downtime what-so-ever because our business could lose millions) already have a SAN to take advantage of other things virtualization related such as DRS and VMotion,” he wrote.
Also, DePetrillo said, VMware FT does not require dual NICs or dual HBAs: “This is something you should have in every virtualization setup that’s running VMs you care anything about, but it’s not a requirement to get VMware FT [Fault Tolerance] running.”
The last point Marathon makes that’s worth spending any time on is that VMware offers only limited CPU fault tolerance. “With VMware FT, you’ll need to set up what VMware refers to as a ‘record/replay’ capability on both a primary and secondary server. If something happens to the primary server, the record is stored on the SAN and then restarted on the secondary server. … The whole thing depends on the quality of the SAN. Second, in the words of the VMware engineer who presented at VMworld, ‘this can take a couple of seconds.’ So what happens to your application state in those couple of seconds?”
DePetrillo’s defense is that “if you’re the type of company that requires absolutely no downtime for an app — if the app is just that critical — then I’m pretty sure you’re going to have a decent SAN. … If you’re having so many problems with your SAN that you don’t trust it for FT, then you have much bigger issues at hand that VMware or Marathon or any of the other virtualization related vendors aren’t going to help you with.”
You can read more of VMware’s comments on DePetrillo’s blog, which gets into some of the details of how VMware Fault Tolerance will work, and more of Marathon’s comments on its own blog.
But I think it is obvious that Marathon is making VMware’s fault tolerance feature seem worse than it is, and VMware is making its new feature seem simpler than it is.
For the most part, this is a pissing contest between the incumbent fault-tolerance vendor and the new guy. But the fact of the matter is, if you use VMware virtualization, you can’t use Marathon Technologies, because Marathon doesn’t support VMware (obviously). And if you use Citrix Systems’ XenServer, you can’t use VMware Fault Tolerance. So these arguments are moot.
Last week at VMworld ’08, while living in the glitz of Vegas for a week of product news, press releases, interviews and judging the Best of VMworld entries with my TechTarget colleagues, my constantly buzzing BlackBerry delivered the latest financial news — the collapse of Lehman, the fall of Merrill, the implosion of AIG — all saying the doom of the market was upon us.
As an investor, this wasn’t my happiest week (I always felt it was odd to invest money in people who invest money), but for a lot of others, last week must have been miserable indeed. Among those feeling miserable right now are IT staffers at Bank of America, who must now absorb a global IT infrastructure as their company acquires Merrill Lynch. And of course, federal IT staffers are now worrying about how to oversee the essentially nationalized AIG. That’s not to mention the IT teams at numerous other companies engaged in mergers and acquisitions.
This is the time for server virtualization to shine. Bank of America should lead the charge in making efficient use of virtualization in its acquisition of Merrill Lynch. BofA is going to inherit an immense quantity of hardware, not to mention enormous heating/cooling/electric bills, colossal real estate costs and a titanic regulatory compliance project as it tries to integrate its own IT infrastructure with Merrill’s. If BofA (or any acquiring company, for that matter) is smart, it will use virtualization to physical-to-virtual (P2V) every possible asset, transport the resulting virtual machines to its own data centers and import those virtual systems.
Bank of America shouldn’t just P2V low-hanging fruit, either — it should reach for the stars. Then it should shut down that physical hardware, wipe it and sell it to help offset the project costs. There are obviously a lot of nuanced steps involved in making this happen, but the major pain points to which virtualization presents solutions are precisely the major pain points of integrating a new IT infrastructure:
1) Server move/change/add/remove
2) Power costs
3) Real estate costs
4) Heating and cooling
5) Configuration management
6) Asset management
The difference between the slow-rolling projects in most companies and the aggressive plan I recommend is night and day. The ROI of a progressive rollout is achieved over time and integrated into the budget along the way. The costs of an acquisition and the integration of that acquisition’s IT assets are immediate and immense.
Virtualization can provide those long-term benefits in the short term — the elimination of real estate, cooling and power costs alone will offset the cost of licensing and storage. The enhanced backup and retention possible with virtualized systems will go a long way toward easing regulatory concerns about data retention.
Blades have come a long way since the early days of few options and limited expandability. Most early blade servers had only one or two NICs, limited storage, no Fibre Channel support, and limited CPU and memory, which made them poor choices for virtual hosts. That has all changed in recent years: blade technology has evolved past the limitations of earlier blades, making blades ideal virtual host servers. Modern blade servers can support up to 16 NICs, four quad-core processors and multiple Fibre Channel or iSCSI HBAs. When considering blade servers as an alternative to traditional rack-mount servers in your environment, you need to know the advantages and disadvantages of each and why you might choose one type over the other.
Some reasons you might choose blade servers over traditional servers:
- Rack density is better for data centers where space is a concern. Up to 50% more servers can be installed in a standard 42U rack compared with traditional servers.
- Blade servers provide easier cable management as they simply connect to a chassis and need no additional cable connections.
- Blade servers consume less power than traditional servers because the blades share the chassis’s power supplies and cooling.
- Blade servers can be cheaper than traditional servers when comparing a fully populated chassis with the equivalent number of traditional servers.
Some reasons you might choose traditional servers over blade servers:
- Traditional servers have more internal capacity for local disk storage. Blade servers typically have limited local disk storage capacity due to the limited drive bays. Some blade vendors now have separate storage blades to expand blade storage, but this takes up additional slots in the blade chassis.
- Traditional servers have more expansion slots available for network and storage adapters. Blade servers typically have very few or no expansion slots. Virtual hosts are often configured with many NICs to support the console network, VMkernel network, network-attached storage and virtual machine networks. Additional network adapters are also needed to provide failover and load balancing.
- Once a chassis is full, purchasing a new chassis just to add a single additional server can be costly. Traditional servers can be installed without any additional infrastructure components.
- Traditional servers are often less complicated to set up and manage than blade servers.
- Traditional servers have multiple USB ports for connecting external devices and also an optical drive for loading software on the host. They also have serial and parallel ports, which are sometimes used for hardware dongles for licensing software. Additionally, tape backup devices can be installed in them. Blade servers make use of virtual devices that are managed through the embedded hardware management interfaces.
Many people who use blade servers as virtual hosts take advantage of the boot-from-SAN feature so they don’t need internal storage on their blades. The choice between blade and traditional servers often comes down to personal preference and what type of server is already in use in your data center. Some people like blades; others don’t. Regardless of which server type you choose, both work equally well as virtual hosts.
Having a virtual machine expiration date is an important procedural step in successfully managing a virtual environment. Managing expiration dates in VMware environments is challenging without adding pricey tools such as Lifecycle Manager or other commercial management products. In this video blog, Rick Vanover discusses one way to get an expiration date into your virtual environment without adding direct costs:
Vizioncore, the company best known for its virtualization backup software vRanger Pro (formerly known as esxRanger), hoped to make a strong impression at VMworld by unveiling a raft of new products.
On display was the next vRanger release, version 4.0. Highlights of the new release include a faster engine, a redone GUI, more granular file-level recovery and the new vAPI 1.0, which allows third-party software providers to leverage the Ranger technology. Expect to see news about partners writing to the new application programming interface after VMworld, said Chris Akerberg, Vizioncore’s president and COO.
The company has also delved into the virtual sprawl reduction business with the introduction of vOptimizer, a new tool that enables administrators to easily shrink and expand VMware Virtual Machine Disks. The tool targets the masses of overallocated virtual machines and provides an easy alternative to what is otherwise a cumbersome, manual process, Akerberg said.
Last but not least, the company announced version 4.0 of vConverter, its physical-to-virtual migration tool. New features include synchronized cutover, continuous protection via incremental replication, automated remote cold migration, task profiles and the so-called Quick Convert feature.
One caveat: While Vizioncore showcased these products at VMworld, the offerings won’t be generally available until later this fall.
With all due respect to VMware’s new CEO Paul Maritz, the portion of yesterday’s keynote discussing VMware’s new vClient initiative didn’t seem to register much with VMworld attendees.
The address that followed from VMware CTO Steve Herrod, however, was a different story. Assisted by VMware’s Jerry Chen, Herrod finally got a rise out of the audience, which applauded loudly at a demonstration of 25 virtual machines being provisioned out to thin clients and laptops, and of the master VM image then being updated with Google Chrome using ThinApp.
“I need that right now,” said the attendee sitting behind me at the conclusion of Chen’s demonstration. “Heck, I needed that yesterday.”
I think part of the crowd’s enthusiasm simply had to do with finally “getting it.” Unlike Maritz, Chen used the word “hypervisor” to describe the “thin-client virtualization layer” that drives VMware’s vClient idea of managing disconnected laptops as well as connected VDI thin clients. When Chen said the H word, 14,000 VMworld attendees had a collective aha moment.
Whatever the case, with vClient, VMware has once again taken a top-down approach, tackling the enterprise’s “desktop dilemma” rather than that of the consumer or SMB. In a subsequent conversation, VMware senior director of product marketing Bogomil Balkansky said it’s not that those segments don’t have desktop dilemmas of their own; rather, “the problems of the enterprise are very well identified,” and thus, for VMware, the enterprise is “a much easier entry point.”
Looking out a few years, however, Balkansky described a distinctly consumer-focused scenario. Home users today run full-fledged PCs, complete with a host OS and all the attendant management issues. At the same time, home users engage largely in Web-focused activities. “Given that everything I do is Web-connected, why isn’t that part of my DSL service?” Balkansky asked rhetorically.
In other words, Balkansky is suggesting that someday users’ personal desktops will run as VDI images hosted by the Verizons and Comcasts of the world rather than locally on their home PCs. For a small monthly fee, users will enjoy the convenience of a centrally managed, backed-up desktop that they can access from anywhere and easily recover even if a disk drive fails or a laptop is stolen. That’s an idea that just about everyone can get their head around.