The Virtualization Room


November 19, 2007  2:54 PM

The Little Xengine That Could

Joseph Foran Profile: Joe Foran

As reported in a number of other places, Virtual Iron has been making some great deals lately. They’ve picked up a new CEO, received a large sum ($13 million) in their most recent round of financing, and have been releasing products fast enough to keep the buzz going, even though some (including me) have questioned their viability in light of the Xen/Citrix merger. While there’s no clear word on VI’s strategy for dealing with the merger’s consequences for the codebase, it’s clear that they’re doing the right thing – focusing less on the merger and more on continuing their campaign against VMware. Namely, they’ve been forging ahead with their partnership with PlateSpin. This partnership has interesting benefits – for those few customers who are happy with virtualization but not with VMware itself, it’s quite easy to make the change to Virtual Iron VMs using PlateSpin. It also lends VI enterprise credibility, because of PlateSpin’s pervasiveness in the enterprise P2V / V2P / P2P / V2V market.

Then there’s always the price war Virtual Iron started with VMware. Virtual Iron is not kidding when they say their prices are 20% of the cost of VMware’s VI3 Enterprise. Couple this with the fact that VMware still can’t manage to get the SKU out for their Mid-Sized Acceleration Kit, and Virtual Iron has a strong chance of remaining a serious (if small) competitor to VMware over the long term. In the end, this can only be good for consumers in the smaller enterprises that Virtual Iron targets. With the backing of Intel, AMD and PlateSpin, and the OEM alliances VI has made (HP and IBM offer Virtual Iron and VMware on their hardware), Virtual Iron is looking strong in the face of all comers – Citrix and VMware included.

What about Viridian? I’m waiting on that one… given what I think of Virtual Server (nice toy), Vista (insert expletives here) and Server 2k8 (hyper-hype), I’m nowhere near convinced that Microsoft will put out a real hypervisor to compete with VMware or Xen. Truthfully, I’m more interested in what Phoenix is doing… but that’s for another blog. Time will tell.

Is VMware a better product? Yes – it’s far more mature, it has a much broader support base, and it isn’t limited the way Virtual Iron is by Xen’s requirement for newer, virtualization-friendly AMD or Intel CPUs to run Windows natively. But I think the real question is this: Is VMware a superior product? On that, I’d have to say no – the Little Xengine That Could has caught up quickly, serves similar markets and beats VMware on price.

November 15, 2007  12:53 PM

VMware, Oracle redux: Virtualization Log

Alex Barrett Profile: Alex Barrett

How “three times less overhead” became “three times better performance” is beyond me, but whatever the case, the issue of database performance in a VM is hot again, with VMware bloggers firing back at Oracle’s superiority claims. Still, with Oracle’s clout in the enterprise, analysts seem to think that IT shops will take a good, hard look at the latest Xen variant.

If you’re testing Xen, we have a new tip for you on hardware drivers in a paravirtualized Xen environment, and the vagaries of dom0, domU, QEMU and the like. And the takeaway is this: Hardware-driver issues become quite complicated on a platform that supports both paravirtualized and fully virtualized drivers.
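To make that concrete, here is an abbreviated sketch of how the two driver models show up in Xen 3.x guest configuration files – device names and paths here are hypothetical:

    # Paravirtualized domU: the guest kernel is Xen-aware and uses
    # frontend drivers (xennet/xenblk) that talk to backends in dom0.
    kernel = "/boot/vmlinuz-2.6-xen"
    disk   = ['phy:/dev/vg0/pv-guest,xvda,w']
    vif    = ['bridge=xenbr0']

    # Fully virtualized (HVM) domU: an unmodified OS sees hardware
    # emulated by QEMU's device model instead.
    kernel       = '/usr/lib/xen/boot/hvmloader'
    builder      = 'hvm'
    device_model = '/usr/lib/xen/bin/qemu-dm'
    disk         = ['phy:/dev/vg0/hvm-guest,hda,w']
    vif          = ['type=ioemu, bridge=xenbr0']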

Meanwhile, over at SearchVMware.com, we learn that VMware’s brand of paravirtualization — paravirt-ops and the Virtual Machine Interface (VMI) — is wowing early adopters. By running paravirtualized Ubuntu on VMware Workstation, blogger Mark Mayo witnessed impressive performance gains compared with running it with VMI disabled.
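If you want to repeat the experiment, the VMI switch is exposed as a checkbox in Workstation 6’s settings; to the best of my knowledge (treat the exact flag name as an assumption on my part), it maps to a single line in the VM’s .vmx file:

    vmi.present = "TRUE"    # enable paravirtual (VMI) support for this guest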

Also, for those of you following the Microsoft Viridian — ahem, Hyper-V — developments, SearchWinIT news director Margie Semilof uncovers some inconsistencies in Microsoft’s claim that Hyper-V will be a “standalone” and “bare metal” hypervisor. “The reason for all the guessing,” she wrote, “is that Microsoft has offered an architectural picture of Hyper-V that runs on Windows certified hardware and drivers.” Since that’s the case, “something like Server Core or PE must be inside,” she quotes Nelson Ruest, a Microsoft MVP and principal at Resolutions Enterprise, a consulting firm in Victoria, B.C., as saying.


November 14, 2007  12:23 PM

Return of the MAC

Kutz Profile: Akutz

Chris Wolf and I were presenting Virtualization 101 in Seattle yesterday when something he said sparked an idea in my usually dormant brain. Okay, it’s not usually dormant, but Seattle is so cold I think half of my synapses aren’t firing! In the process of discussing virtual machines (VMs), Chris mentioned that each major virtualization solutions provider has registered itself with the Institute of Electrical and Electronics Engineers (IEEE) and received one or more Organizationally Unique Identifiers (OUIs). An OUI is a 24-bit number that makes up the first half of every Media Access Control (MAC) address an organization assigns to the devices it produces. MAC addresses are most frequently associated with Ethernet adapters, so why are virtualization vendors registering with the IEEE to obtain OUIs?

Virtualization vendors also produce Ethernet adapters — virtual network interface cards (NICs). Most VMs would be rather useless if they could not access some sort of network, so virtualization vendors must create virtual NICs for their VMs to get onto the big wide world of Webs. And since these virtual NICs have to participate on the network just as if they were physical, they must use MAC addresses. Because the first 24 bits of these MAC addresses – the OUI – are organization-specific, a network administrator can potentially tell from a MAC address not only that a machine on the network is virtual, but also what type of virtual machine it is (that is, which vendor’s software is hosting it). While best practices dictate that you do not change the MAC address of VMs, enterprise virtualization solutions do present this as an option.
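How easy is this kind of fingerprinting? Trivially easy. Here is a minimal sketch that greps the local ARP cache for a few commonly cited vendor OUI prefixes – the prefixes are examples on my part, so verify them against the IEEE’s public OUI registry before relying on them:

    # Scan the ARP cache for MAC prefixes (OUIs) associated with
    # virtualization vendors. Example prefixes only -- consult the
    # IEEE OUI registry for the authoritative assignments.
    arp -an | egrep -i '00:50:56|00:0c:29|00:05:69' && echo "VMware guest(s) on this segment"
    arp -an | egrep -i '00:03:ff' && echo "Microsoft VM(s) on this segment"
    arp -an | egrep -i '00:16:3e' && echo "Xen guest(s) on this segment"

Because of this, here is the scenario I see occurring.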

One way to harden the Apache Web server is to use mod_security to alter the Web server’s signature (the configuration is sketched after this paragraph). For example, you can fool clients into thinking that the Web server hosting their favorite videos is actually a Microsoft Internet Information Services (IIS) 5.0 server instead of Apache 2.2. Administrators do this in order to fool attackers into attempting the wrong types of attack vectors. Even though best management practices dictate that administrators NOT alter their VMs’ MAC addresses, I foresee them doing so anyway in order to fool would-be hackers into attempting the incorrect attack vectors on VMs. For example, if a VM is hosted on ESX but its MAC address has an OUI registered to Microsoft, then a would-be attacker may try known Microsoft Virtual Server or Hyper-V exploits on the VM instead of ESX exploits.
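For reference, the Apache half of that analogy takes only a couple of directives – a sketch that assumes mod_security is already compiled in or loaded as a module:

    # httpd.conf excerpt: masquerade as IIS 5.0. ServerTokens must be
    # Full so mod_security can rewrite the entire Server banner.
    ServerTokens Full
    SecServerSignature "Microsoft-IIS/5.0"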

Who knows? Twelve months from now, altering a VM’s MAC address to look like another vendor’s may be considered a best practice, but right now, with the already complex problem of managing virtual hardware, IT administrators are best served by leaving their VMs’ MAC addresses well enough alone.

Of course, that doesn’t stop the idea from being completely and utterly cool!

Hope this helps!


November 14, 2007  10:09 AM

VMware Server lives on: Virtualization Log

Alex Barrett Profile: Alex Barrett

It may be free, but it’s not forgotten — or is it? VMware Server 2 is officially in beta, and while Yankee Group’s Gary Chen took the release as a sign of continued development on the part of VMware, virtualization.info’s Alessandro Perilli isn’t so sure:

It’s evident the company is spending most of its R&D and marketing efforts around ESX Server and Workstation. In 11 months no company (in a big ecosystem of over 200 technology partners) developed products for VMware Server, despite its price and wide feature-set could push adoption like no others.

In other virtualization news, Sun Microsystems Inc. CEO Jonathan Schwartz will take the stage at Oracle’s OpenWorld conference today and is expected to officially announce the company’s xVM virtualization strategy: an x86 hypervisor based on Xen and Sun xVM Ops Center for unified management. Sun has also rounded up support for xVM from Advanced Micro Devices (AMD), Intel, Microsoft, MySQL, Quest Software and Symantec and is launching OpenxVM.org, which Sun describes as “an open source community for developers building next-generation datacenter virtualization and management technologies.”

Speaking of Oracle, VMware isn’t taking this whole Oracle VM thing lying down. Going live today on VMware’s Web site is a page devoted to running Oracle on VMware ESX, with links to white papers, case studies and other resources.

Meanwhile, on our newest site, SearchVMware.com, check out a tip by blogger extraordinaire Scott Lowe on Virtual Desktop Infrastructure and connection brokers, where he breaks down for us exactly what a connection broker does. Pool management, anyone?

Editors’ note: Virtualization Log is a daily roundup of virtualization news and tips published on SearchServerVirtualization, as well as on sister TechTarget publications.


November 13, 2007  6:27 PM

ESX 3i is rocking it on the skinny

Rick Vanover Profile: Rick Vanover

ESX 3i is a great opportunity not only to reduce the local storage footprint requirement, but also to free up additional RAM and CPU for virtual machines. ESX 3i is a small-footprint, hardware-integrated hypervisor that delivers the VMware ESX server in roughly 32 MB of local storage.

The small footprint is attractive in providing a quick install, minimal build time when adding hosts, and consistent configuration as the VMware Infrastructure grows. The other, possibly overlooked, attraction is that removing the customized Red Hat Enterprise Linux (RHEL) operating system that hosts the hypervisor in ESX 3.0.2 and earlier can free up roughly 2% to 3% of local CPU and RAM resources. Alone, that is not much, but consider a large VMware implementation: returning that much in local resources to the virtual machines can effectively reduce the number of ESX host systems you need. For example, if you have 100 VMware ESX hosts, removing the RHEL layer effectively adds back the CPU and RAM power of two or three hosts, with certain host configurations.

This is a positive direction for the ESX product.  For those of you who are historically Windows administrators, how frequently have you tried to do something in ESX’s RHEL-based service console that didn’t quite turn out as you expected?  My secondary virtualization mentor told me this when introducing me to ESX:

“If you don’t know anything about Linux or Unix – that will be great for an ESX administrator.  If you do have experience there, don’t make assumptions based on the standard product.”

In most situations where I have tried to do tasks outside of VirtualCenter or the ESX installation, some other issue has arisen.  The exception, of course, is when VMware’s documentation gives Linux commands to perform a task; David Davis’ recent blog post on enabling SSH and SFTP on ESX is a good example (the gist is sketched below).  By removing that layer, the ESX product is more closely aligned with what it needs to do — providing horsepower to guest operating systems with central management.
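For the curious, that particular task boils down to a two-step edit in the service console – a sketch of the usual ESX 3.x procedure; see David Davis’ post for the full walkthrough:

    # From the ESX 3.x service console, as root:
    vi /etc/ssh/sshd_config      # set "PermitRootLogin yes" (or, better,
                                 # create a non-root user and leave root disabled)
    service sshd restart         # restart sshd to apply the change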


November 13, 2007  12:01 PM

Oracle crashes VMware’s party: Virtualization Log

Alex Barrett Profile: Alex Barrett

Does the world really need another hypervisor? Oracle clearly thinks so, and it announced Oracle VM, its version of the open source Xen hypervisor. The cool thing is Oracle pledges to support a wide swath of its enterprise apps running in a VM. But for now, the company’s per-CPU pricing remains unchanged, negating much of the economic incentive to virtualize an application.

In the market for a new server? Dell’s new PowerEdge R900 is seemingly tailor-made to run as a virtualization host. Among other impressive specs, the 4U behemoth features 32 dual in-line memory module (DIMM) slots, which, by my feeble math, works out to 128 GB of total system memory (32 slots × 4 GB = 128 GB).

Or if it’s Hewlett-Packard hardware that your data center sports, know that there are some new virtualization features available in HP Insight Control, the company’s x86 server hardware management software. Among them are Virtual Machine Manager 3.0 and the HP Server Migration Pack, which will take you from physical to virtual and back again.

Editors’ note: Welcome to Virtualization Log, a new feature we’re trying out on the SearchServerVirtualization.com blog. Look here for a daily roundup of virtualization news and tips published on the main site and on sister TechTarget publications.


November 7, 2007  11:10 AM

Gartner offers virtualization tips, predictions

Bridget Botelho Profile: Bridget Botelho

At Gartner’s Data Center Summit 2007 in London yesterday, analysts said virtualization will be the most significant factor in adding agility to data centers through 2012.

I think we already figured that out, since virtualization can significantly cut back on the number of servers and on the space, power and cooling demands in data centers.

The takeaway from Gartner’s declaration: If you aren’t at least looking at virtualization for your data center, you are falling behind businesses that already are — and that isn’t a good place to be.

Gartner had some recommendations for organizations planning or implementing virtualization:

– When looking at IT projects, balance the virtualized and unvirtualized services. Also look at the investments and trade-offs;
– Reuse virtualized services across the portfolio. Every new project does not warrant a new virtualization technology or approach;
– Understand the impact of virtualization on the project’s life cycle. In particular, look for licensing, support and testing constraints;
– Focus not just on virtualization platforms, but also on the management tools and the impact on operations;
– Look for emerging standards for the management and virtualization space.


November 6, 2007  10:10 PM

Don’t omit the VMware Tools!

Rick Vanover Profile: Rick Vanover

So, you are very proud of yourself because you can roll out Windows virtual systems like popcorn, right? Well, don’t forget to ensure that you are using the correct version of VMware Tools. This is important because the tools provide optimized drivers for the inventory of hardware presented to the guest operating system. On a Windows guest operating system (OS), take a look at Device Manager and see how many devices list VMware, Inc. as their manufacturer. VMware Tools applies the correct drivers to the SCSI and RAID controllers, network interfaces, video display adapters and many more.
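A quick way to eyeball this from inside a Windows guest is a one-line WMI query – a sketch that simply lists Plug and Play devices whose reported manufacturer is VMware:

    wmic path Win32_PnPEntity where "Manufacturer like 'VMware%'" get Name,Manufacturer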

Why Does This Matter?

The presence of VMware Tools is good, but just as important is the version of VMware Tools. Each VMware product has its own version of the tools, and if you migrate a guest via VMotion or VMware Converter, your version of the tools may end up out of date. Some devices will be recognized natively even with obsolete versions of VMware Tools, while others may show up in Device Manager as undetermined. The best example here is the network interface. Say you have a virtual machine hosted on VMware ESX 2.5.4 and you wish to migrate this guest to a newer VMware ESX 3.0.2 system. The migration via your tool of choice will proceed correctly enough, but if you don’t update the tools afterward, you may soon discover an issue with the VM.

How do I Install VMware Tools?

Installing VMware Tools is quite easy, and VMware has provided a knowledge base article for each VMware product (ESX, Workstation, etc.). Click here to view the knowledge base article.
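For a Linux guest, the short version looks something like this – a sketch with distribution-dependent paths; choose Install VMware Tools in the VI Client first so the Tools ISO is connected to the guest’s CD-ROM:

    mount /dev/cdrom /mnt                         # mount the Tools ISO
    tar -xzf /mnt/VMwareTools-*.tar.gz -C /tmp    # unpack the installer
    umount /mnt
    /tmp/vmware-tools-distrib/vmware-install.pl   # accept the defaults when prompted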


October 31, 2007  11:39 AM

More fuel on the NFS-for-VMware fire

Alex Barrett Profile: Alex Barrett

Last week I wrote an article on IT environments that chose Network File System (NFS) for their shared VMware storage, and at least one large IT shop corroborates my story. An IT administrator at a well-known investment management firm writes that he has 45 VMware ESX 3.0.2 hosts running more than 1,000 VMs entirely on Network Appliance 3070 network-attached storage (NAS) boxes — and with great success.

“We haven’t seen any issue with speed, that is for sure,” he writes.
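For anyone tempted to kick the tires, pointing an ESX 3.x host at an NFS export is a short exercise. Here is a sketch with a hypothetical filer name and export path – the host needs a VMkernel port configured for NFS traffic first:

    esxcfg-nas -a -o filer01 -s /vol/vmware_nfs netapp_ds   # add the NFS export as a datastore
    esxcfg-nas -l                                           # list configured NAS datastores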

Before switching to NetApp, the firm ran its environment on EMC and Hitachi storage area networks (SANs). The admin described the latter as “a pain,” “expensive,” and suffering from SCSI lock, manageability and host bus adapter (HBA) issues.

By moving to NetApp NAS, the firm has also realized another benefit: improved data protection. “We also love the fact that we save a lot of money on the backup solution. We just use snaps to another NetApp — no agents, no tapes, no overpaid workers, no maintenance contracts on over 1,000 servers.”

Bless its heart, NetApp also chimed in on the article, taking umbrage at statements made by Fairway Consulting Group CEO James Price. About a year ago, NetApp began testing NFS for VMware at the behest of customers who were looking for more manageable storage but worried about the ability of NAS and NFS to scale. What they found is that “NFS is robust enough to run production environments,” said Vaughn Stewart, virtualization evangelist with the company. In the coming months, NetApp plans to publish results of tests performed in conjunction with VMware.

At the same time, NetApp is working with VMware to get the company on board with NetApp’s “NFS is good” message. As it stands, “VMware is inconsistent throughout its documentation about the role of NFS,” said Phil Brotherton, NetApp senior director of enterprise solutions. It may be a tall order, as the storage community has a strong bias in favor of Fibre Channel SANs.

But in Brotherton’s view, some of that preference is a bit self-serving. “A lot of people are trained on a technology, and that’s a good reason to be biased toward it,” he said, adding that many shops have sunk a lot of money into existing Fibre Channel infrastructure. “But I also see a lot of people try to spin technical arguments to justify what is really a sunken cost argument. . . . I would love to see the discussion move past performance to why people are really using NFS; performance is not the issue.”

When it comes to displacing SANs with NFS, our nameless IT administrator echoed Brotherton’s opinion. “I can tell you that there are some old-school SAN guys (myself included) that are scared that they might not be needed as much as they think. It is becoming easier and easier to use NFS for most everything. There are certain cases where a SAN is needed, but it is not necessary for every case.”


October 25, 2007  8:32 PM

Does virtualization subvert security?

Kutz Profile: Akutz

A recent discussion on the OpenBSD mailing list led to the assertion that virtualization decreases security. For those interested, a summary of the discussion is available on KernelTrap. But proponents on both sides of the argument have taken to throwing about emotionally driven comments rather than thinking objectively about the subject. Of course, given that the original comment labeled everyone who thinks virtualization somehow contributes to security weaknesses as “stupid” and “deluded,” who can really blame people for getting a bit emotional? All the flame-war commentary aside, the question remains: Does virtualization weaken security? The original argument that virtualization can diminish security was based on two points:

  1. If software engineers cannot create an OS or application without bugs, what hope does a virtualization solution have to be bug-free?
  2. x86 hardware is ill-suited for virtualization.

Bug-free software
The first point does two things: it lumps all software engineers, operating systems and applications into one pool, and it assumes that it is possible to produce bug-free code.

Addressing the sub-points in order: while it is true that software engineers are human (and we make mistakes) and that software in general has a track record of imperfections, it is also true that the world does not judge all software engineers or all software to be the same. In fact, I would guess that a lot of members of the OpenBSD mailing list prefer OpenBSD to, let’s say, Windows. However, there are many readers of this blog who may prefer Windows to OpenBSD, or Linux, or OS X, etc. The same preference could be applied to office suites (OpenOffice, StarOffice, MS Office, KOffice, etc.). The fact of the matter is that we all have our own preferences: we do not judge all software to be the same.

Secondly, the first point argues that the community should expect a bug-free hypervisor, and that anything less contributes to a decrease in the overall security of a server platform. This is a very lofty expectation indeed! A very long time ago – heck, almost seven years ago now – I wrote to Slashdot and asked why it was not possible for developers to spend more time on projects and produce bug-free software. Commander Taco (the ring-leader of Slashdot) himself replied and said that it was a foolish expectation: software is 1) written by humans and 2) far too complex today to be without errors. However, people still judge some operating systems to be more secure than others. The same goes for kernels. How can such a judgment possibly be made if all software has bugs? The answer is “easily.” We observe the rapidity with which bugs are discovered in software, the impact they have on IT infrastructure across the world, the speed at which operating system and independent software vendors (OSVs and ISVs) release patches, and how easily those patches are applied without affecting the rest of the server platform – and then we judge the security of a piece of software. Therefore we do not judge a piece of software to be secure because it contains no bugs, but rather by the history of its imperfections and how quickly blemishes are removed.

Notice that I did not say whether or not I agreed with Mr. Slashdot. I do not. I do believe that software designed to be general-purpose, such as today’s OSes, is doomed to be bug-laden, simply because it lacks a specific purpose and too many conditions have to be accounted for. However, imagine if the same leeway were given to the software that runs our air-traffic control systems. Or military installations. Such software is held to a higher standard, and it can be in part because it is designed with a specific purpose. The same is true of hypervisors: they are designed for one specific purpose. They do not yet have all the cruft and bloat sitting on top of them that today’s OSes do. Here’s hoping that the ISVs producing today’s leading virtualization solutions step up to the challenge.

In short, I believe that there is a reasonable expectation that hypervisors will be a lot more secure than general purpose applications written on top of general purpose OSes could ever hope to be.

x86 hardware
Yes, the x86 instruction set was never designed to be virtualized, but to say that the instruction set has not grown well above and beyond its original intentions is to do an injustice to the original minds at Intel who took part in creating one of the most persevering pieces of technology to date. With the first set of virtualization extensions, those created to solve the problems of ring compression and ring aliasing, the x86 instruction set was given a breath of new life. And with the latest extensions, enabling live migrations of virtual machines across multiple generations of processor versions, the staying power of the x86 instruction set in a world with virtualization has been increased even further.

My point is simply that just because the x86 instruction set was not designed with virtualization in mind does not mean that it cannot work, and work securely. That is the beauty of x86: it can be extended to do what we need it to do. History has spoken.

Conclusions
With both of the original arguments shown to be false, is the conclusion of the original argument then reversed? Not completely. While virtualization does not decrease security, the potential for it to do so is there. Hypervisors are software, and although they are a lot less likely to have bugs than a general-purpose piece of software, bugs can still occur. However, to state flatly that virtualization decreases security is far too general, as there are many different implementations of virtualization. For example, if VMware ESX were found to have a memory-sharing bug that allowed one virtual machine to read and write the memory of another, would this mean that XenSource is immediately compromised? Of course not. So even when a bug in a hypervisor is found, it does not immediately mean that all virtualization is suddenly subject to the same problem.

As I stated earlier, the security of any software package is judged by the number of bugs historically observed, their impact (potential or real), and how quickly the parties responsible for said software fix the bug. While the potential is there, it is far too soon to observe whether or not virtualization decreases security. Only time will tell.

