The Virtualization Room

June 18, 2007  2:34 PM

Linux users: Xen, VMware, or Virtual Server?

Ryan Shopp Profile: Ryan Shopp

In Monday’s newsletter column, I included a question to our Linux users: Do you prefer Xen, VMware or Virtual Server, and why?

It’s only Monday afternoon, but I’ve gotten some interesting responses. Chris, the CIO for Oxford Archaeology: Exploring the Human Journey, wrote:

In response to your question, we prefer VirtualBox, which offers a degree of flexibility that only VMware VI3 gets close to. Without the entry costs! We are currently working with VMware Server, and the lack of a Linux client for VMware VI3, along with its MS SQL dependency, prevented a planned migration to VI3. VirtualBox is the young upstart on the block; the list of features that it currently lacks in comparison with ESX grows shorter at an alarming rate, it is cross-platform, independent of hardware extensions (but can benefit from them), high performance, and remarkably quick to get to grips with.

David of Code No Evil, LLC wrote:

I prefer VMware because it’s a non-free commercial product with support. Microsoft, for example, doesn’t even list VPC 2K7 as a product on their support site. As for Xen, I’m rarely a proponent of the OSS community. As for VMware, my current support case just became known bug #154399. Nice to know that VMware was willing to admit a fault in their platform and intends to fix it.

I asked him for clarification on the bug. Here’s what he said:

I am running Vista x64 on a Mac Pro. My intent was to run XP off the hard drive from my old machine (a Dell Precision 340) in a USB enclosure using Virtual PC 2K7. VPC crashed every time I attempted to access the virtual drive (mapped to the physical drive). Support is non-existent for VPC 2K7 because Microsoft doesn’t even list it as a product at the support website. I even reached out to the “Virtual PC Guy,” but he was no help either. At this point, I figured that I should try VMware Workstation. At least if it didn’t work, I could open a support incident and I’d get some help. Well, long story short, there is a permissions issue that, despite going back and forth with VMware tech support (in India, nonetheless), was irresolvable even in VMware Workstation. The support overall was not bad. A few times I had to send an extra email to get them to wake up, but all in all it was satisfactory. The rep even called me because the issue became too difficult to work through over email. Now, the real test is to see how long it will be before a fix is released. I would gather that it will be soon, because this bug precludes anyone from using a VMware virtual drive instance mapped to a physical drive on Vista. I would, as a developer, classify this as a critical defect.

Richard of OnX Enterprise Solutions Inc. wrote in suggesting Virtuozzo.

Chris, a system architect, wrote in with his preference for Xen:

I prefer Xen as it’s free on Red Hat 5 or SuSE 10 for Linux environments. EMC ESX rocks, though, if customers can afford it, with its small Linux Red Hat kernel and the various tools for both Linux and Windows environments. MS Virtual Server is better for test labs and with Microsoft platforms. I had a very bad experience with MS Virtual Server and NetWare systems, although that’s another OS.

With improvements, MS Virtual Server might reach a point where, if it’s free with MS licenses and if MS really does support Red Hat and especially SuSE underneath it, it could move up within the server marketplace.

More to come. In the meantime, what are your thoughts, readers?

June 18, 2007  11:06 AM

Virtualization community responds to ESX Lite rumor

Alex Barrett Profile: Alex Barrett

In case you missed it, it was reported last week that VMware is developing an embedded “ESX Lite” hypervisor. And while VMware may have opted not to comment about it, the virtualization community has plenty to say on the topic. For one, Bob Plankers, over at The Lone Sysadmin, thinks that ESX Lite could save him money on server hardware:

So you have an ESX server that doesn’t need local disk. That saves you $300 for a RAID controller and about $300 per 15K RPM 146 GB disk. For my RAID 1 + hot spare configurations that’s $1200. No moving parts equals theoretically better reliability, though flash drives have a limit to the number of read/write operations they can do over their lifetime. Also very little power consumption, and very little heat. Without all the extra heat from the disks you can reduce the number of fans in the chassis, which further reduces the price and power draw.

I, for one, totally agree with this assessment. Spinning disk drives inside a server are a major bummer. Since the vast majority of ESX instances are already SAN-attached, why not go all the way and ditch the internal boot drives?
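Plankers’ arithmetic is easy to sanity-check. Here’s a quick sketch; the $300 figures are his back-of-the-envelope estimates from the quote above, not vendor list prices:

```python
# Rough per-server savings from dropping local disk, using Plankers' estimates.
RAID_CONTROLLER = 300   # his estimate for a RAID controller
DISK_15K_146GB = 300    # his estimate per 15K RPM 146 GB disk

def diskless_savings(num_disks: int) -> int:
    """Hardware cost avoided per ESX host that boots from embedded flash."""
    return RAID_CONTROLLER + num_disks * DISK_15K_146GB

# RAID 1 + hot spare means 3 disks, which matches his $1,200 figure.
print(diskless_savings(3))  # -> 1200
```

Multiply that by a rack's worth of ESX hosts and the embedded-hypervisor pitch starts to look like real money, before you even count fans and power.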

The flipside, said Fred Peterson, a system administrator writing on the VMTN message board, is that an ESX Lite appliance could not be reused like general purpose hardware:

Once it becomes “outdated” it has to be tossed; you wouldn’t be able to re-use it as a test Windows box or Linux box or something. While not a bad thing, its life span would have to be pretty good to justify the upfront cost.

Over at MindSecure, a blog about “information security, virtualization, application delivery and storage,” ESX Lite is paired with Citrix Ardence, an OS streaming application, to positive effect.

Embedding ESX Lite in the hardware and using Ardence to stream the operating system would allow for complete hardware abstraction at the server and desktop level as well as the ability to remove spinning disk from servers and desktops, use solid state storage strictly on these devices, reduce storage utilization by using Ardence shared images, reduce cooling costs in the data center by using less disk, and many other advantages which these two solutions provide when paired together.

Scott Lowe on his blog says that ESX Lite has interesting competitive implications:

It’s smart because it derails Microsoft’s attempts to marginalize the hypervisor by bundling it with the operating system (via Windows Server Virtualization, aka “Viridian”). It’s smart because it expands the hypervisor market in new directions that no one else has yet tapped, helping VMware retain mindshare about its technical leadership and innovation. It’s smart because it’s the hardware vendors that have the most to lose via virtualization, and by partnering with them you remove potential future opponents.

But it’s a strategy that has its risks, he points out: namely, if the embedded hypervisor doesn’t perform as well as regular ESX, or if VMware loses visibility by going too deep under the hood.

Meanwhile, rumor has it that the original story has some inaccuracies in it, but like the old advertising saying (“I know half my advertising dollars are wasted – I just don’t know which half!”), without official word, I can’t speculate as to what’s right and what isn’t. An obvious possibility is that Dell is not participating with ESX Lite, or that the effort is not limited to just Dell. My gut tells me the latter is closer to the truth. Any thoughts are appreciated.

June 14, 2007  4:26 PM

Nat Friedman: Swing toward desktop virtualization favors Linux

Jan Stafford Profile: Jan Stafford

In the past six months, every single IT exec or manager who discusses Linux desktops in a corporate setting with Nat Friedman asks about thin-client environments. That’s why Friedman — co-creator of the open source Ximian desktop and open source strategies CTO for Novell — predicts that desktop virtualization is going to take off faster than anyone has anticipated, and Linux desktop adoption is going to increase rapidly as a result.

“The pendulum is swinging back, and there’s an interest and need to centralize data for security reasons. IT managers and corporate execs don’t want people to walk out with laptops holding, say, millions of Social Security numbers.”

Centralizing desktop management via virtualization and thin clients holds the much-desired promise of easier management, Friedman told me in a recent conversation.

“There’s a desire to have lower-cost manageability by having all your applications running centrally and making thin clients into dumb terminals. Virtualization plays a role there, because on the server you could host thousands of desktops and virtualize those sessions so they’re all isolated from one another and run on an operating system that’s transparent to users. Or, you can use multiple desktop apps running on multiple operating systems. You can have computers running OpenOffice, Firefox, Microsoft apps and so on, all of this playing onto a single thin client. Virtualization makes it possible to dynamically allocate the resources for that. There’s also the desktop itself running virtualization locally; developers do that. If you run Linux primarily and you want to run Windows for one app, virtualization is one way to get at that.”

In a virtual desktop setting, Friedman concludes, IT managers will be able to choose best-of-breed, easiest-to-manage and lowest-cost applications and operating systems. He thinks Linux and the desktop applications that run on that platform will gain from this interoperability.

I agree with Friedman’s views on how quickly desktop virtualization will be adopted. My team has been surprised by the number of IT managers who’ve expressed keen interest in moving forward with projects. I do think Linux will gain some users from this trend, but I think the key stumbling block will be getting IT shops to evaluate Linux-based desktop apps in the first place. Historically, they’ve taken the easy route: Windows and Microsoft apps.

What do you think? Let me know via your comments or an email to me at

For more of Friedman’s views on the desktop marketplace, check out this post on

June 14, 2007  3:52 PM

Virtualization and next-generation grids: What’s really NG? What’s just a fad?

Jan Stafford Profile: Jan Stafford

Does a grid by any other name smell as sweet? In years of covering grid computing technologies, I’ve seen the definition of “grid” changed to fit vendors’ products or the computing flavor of the month.

In general, I see the most basic function of grids as creating virtual communities of servers, applications and users. (Let me know if you see it otherwise.)

So, when I heard about virtualized service grids, I wondered: did the “virtualized” moniker just get added because virtualization is hot right now? Or is this a real next-generation grid model? Well, there’s a lot of activity in this space, as I’ve seen when reading the Virtualization and Grid Computing blog, which has been a great resource for me. I see, too, that vendors seem to be hopping on board. For instance, on the Inside HPC blog, I read that grid vendor United Devices is pursuing creation of virtualization products.

Recently, I asked Ash Massoudi, CEO and co-founder of NextAxiom, a virtualized service grid technology firm, some basic questions about virtualized service grids. Here’s an excerpt from our exchange:

What’s the difference between traditional grids and virtualized service grids?

Massoudi: “The first difference is in the programming models used by each. In traditional grid computing, it becomes a programmer’s responsibility, through the use of a dedicated library, to build an application that is designed to run on the grid. So, the programming model requires programming to the grid. In a virtualized service grid, software business and integration components are assembled using a Service-Oriented Programming (SOP) technique that requires zero knowledge of the computer resources. The application developer doesn’t need to explicitly identify the load and how it is allocated or to create work units accordingly. Each business or integration component is a service (implicit work unit) that can be composed of other services. The same Service Virtual Machine (SVM) that runs the final application will transparently externalize and distribute the service load across all available computer resources.

“Another difference is that service grid virtualization has a built-in concept of application multi-tenancy and thus favors scaling-up, through multiple-cores, over scaling out as is common with traditional grid computing.”

Why should IT managers take a look at service grid virtualization? What benefits can it bring to their companies?

Massoudi: “IT managers should consider service grid virtualization since it reduces TCO across human capital as well as machine resources. Also, the business and integration services that are programmed and virtualized on the service grid provide a way to directly tie their efforts to the tremendous business value that they are creating.”

What type of company would use service grid virtualization?

Massoudi: “You need significant IT expertise to run and operate a Virtualized Service Grid (VSG). Large enterprises who already operate data centers and need composite and flexible applications across their existing legacy systems should think of owning and operating their own service grid.”

What type of IT infrastructure is a good fit for service grid virtualization, and for what apps is it appropriate?

Massoudi: “Multi-core processor architectures like the dual-core Intel Itanium 2 processor provide the most cost-effective and efficient foundation for Virtualized Service Grids. The more tenants you can run on a single machine, the higher the efficiency of the service grid. Service grids are most suited for creating any composite business application or business process that needs to integrate across departmental application silos or enterprises.”

My research continues, as does the job of separating the wheat (real technologies) from the chaff (vendor hype). If you’re involved with virtualized service grids — either as a user or developer — or other next-generation grid models, please comment here or write to me at

June 13, 2007  2:38 PM

Parallels Server

Joseph Foran Profile: Joe Foran

While browsing another blog, the famous, I came across a very interesting story of Parallels making an alpha code release of its new server-based product. As I mentioned in a slightly off-topic post, my ears are perked because of the interest Coherence has generated, with its seamless (almost Citrix-y) windows into the guest OS.

I’m really hoping to see what Parallels does with Coherence at the server level. While there are a plethora of ways to administer a heterogeneous server environment (ssh, rdp, vnc, mmc, e-i-e-i-o), Coherence in the mix of remote administration is an interesting proposition. How much further can it be taken – can it, instead of being host-based, become central-management-server based? Picture how VirtualCenter allows remote administration of VMware virtual guests, from the virtual machine settings to the guests’ interfaces, plus all of the other settings involved. Add in a ONE-SCREEN management interface, with everything packed off to a Coherence Manager, and imagine how much simpler things can become. Application management tools that don’t work well over remote sessions, direct access to ini/conf/whatever files on a server without extra steps to get there, an organized toolset for administration that makes the mmc look tired… very interesting stuff.

Taking it to the next point, virtual desktops… Parallels supports DirectX and OpenGL (so does VMware’s Fusion, but I liked the beta of that much less than Parallels Desktop after putting them both through the wringer). That support brings VDI a lot closer to getting over the hump of the multimedia issues that bar its large-scale adoption. Just as Citrix and other thin clients never reclaimed the desktop from PCs, I don’t doubt that virtual desktops will remain a niche market. I do think, however, that remote Coherence has the same opportunity as Citrix’s ICA (and competitors’ products as well) to be an excellent value-add for remote application deployment, right up to and including a full desktop. As it stands now, we have a number of users here who have very old applications that don’t work well under Windows XP, yet they just can’t go away (some are government-mandated apps), so we use VMware Player to dish them up in a virtual OS. I’d like to use a Coherence-like product instead, to eliminate a lot of the headache associated with end-user education and change management. To take it one logical step further, I’d like to use a Coherence-like server-based product, to keep those virtual machines off the local desktops and under my department’s management and deployment. If it means buying an Apple Xserve or two to support it, so be it. We’re a mixed Windows, Linux, and BSD shop as it is, so that wouldn’t be a big deal in overhead and support. I imagine that’s the case in many environments.

I’m hoping for a sneak peek, being the Parallels geek that I am.

June 4, 2007  4:41 PM

VMware ESX configuration, cost problems spur user’s switch to Microsoft Virtual Server

Jan Stafford Profile: Jan Stafford

Improved configuration and lower costs are on John E Quigley II’s wish list for VMware ESX. In the meantime, Quigley is moving ESX to the back burner.

Today, Quigley — a senior network engineer — told me that his firm, Total Quality Logistics, LLC, will be migrating to Microsoft Virtual Server 2005 R2 over the next 45 days. TQL — up to now a VMware shop — will only use ESX in the lab. ESX is too expensive to upgrade and requires more training and resources than TQL can deliver, Quigley said.

“ESX is a good product, but we are seeing issues with the configuration. And replication inside of VMware has always had issues. I have found problems with RPC communications, which affects Exchange and Active Directory. I found the issues during a prototyping exercise with Microsoft and some black belts a few years ago.”

At that time, Microsoft actually recommended using VMware to prototype the network configuration for an Exchange 2000/2003 rollout. Microsoft provided “a bunch of work-arounds for the known RPC issues.” He’s since replicated this environment with Virtual Server and “never saw any of the issues that were apparent with VMware.”

“We are running several SQL Server sessions on ESX, and performance is not what we expected. I have created a new Virtual Server session for a new SQL 2005 server requirement, and it is outperforming the ESX session hands down. With ESX, we can’t easily import or export sessions, and a key lib file has died, and we are getting errors (on) all Internet searches… The only fix is to re-format and install ESX from scratch. It is locking the sessions up, and currently we can only admin from the Web console, as the management application crashes the ESX server.”

Quigley finds using Virtual Server simple and straightforward and gets plenty of documentation on configuration and tuning from Microsoft.

“Microsoft is willing to support many of the services that we are running under VS, not VM. Microsoft is willing to assist with consulting and services to better implement the product. The ROI on Microsoft is much better.”

Quigley says he’s pleased with Microsoft products overall, particularly the performance and troubleshooting capabilities of Microsoft AD/Exchange/SQL.

“I have been running the product since the Alpha, and really like the results that I am seeing. With the beta of what they are offering on the Longhorn, I see a lot of promise.”

After discussing this subject with John, I checked out Scott Lowe’s How to Provision VMs Using NetApp FlexClones. Scott has some great info on working with ESX in Windows environments on his blog, but I didn’t find anything relating directly to John’s problems.

An article on, however, discusses the fact that “backing up VMware’s ESX Server is a fairly clunky process” and some of the third-party companies — like NSI Software Inc., Asigra Inc. and Vizioncore Inc. — that are releasing products that assist in replication on ESX.

Is anyone out there sharing John Quigley’s experiences with ESX configuration and cost problems or having no problems with those issues at all? Let me know by commenting here or writing to me at

May 29, 2007  6:02 AM

ClearCube is VMware’s latest OEM

Alex Barrett Profile: Alex Barrett

PC blade manufacturer ClearCube has become the first non-server vendor to OEM VMware’s ESX hypervisor, which it will sell to customers implementing virtual desktop infrastructure (VDI).

As indicated in an article this April, ClearCube will sell ESX on a per user basis, rather than per host. This makes it cost-effective to run fewer virtual desktop sessions per blade, explained Tom Josefy, ClearCube director of product management. With this arrangement, IT managers can “guarantee a great end user experience but don’t need to have 30+ users per server to amortize the cost of ESX,” he said.

Josefy said ClearCube expects customers to be able to run up to about 12 VDI sessions on one of its R2200 PC blades, for a list price of about $250 per seat, including support. “At higher numbers it’s at least a 50% reduction in cost per seat,” he said.
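The appeal of per-seat licensing is easiest to see with a toy comparison. Only the $250-per-seat figure comes from ClearCube; the per-host ESX price below is a hypothetical placeholder chosen to match Josefy’s “30+ users to amortize” remark, not a real quote:

```python
# Per-seat vs. per-host VDI licensing (toy model).
PER_SEAT = 250                # ClearCube's quoted list price per seat
HYPOTHETICAL_PER_HOST = 7500  # placeholder per-host license cost, NOT a real quote

def cost_per_seat(sessions_per_blade: int, per_host_license: float) -> float:
    """Per-host licensing amortized across however many sessions one blade runs."""
    return per_host_license / sessions_per_blade

# Per-seat pricing stays flat at $250 whether a blade hosts 5 sessions or 30.
# Per-host licensing only catches up once enough sessions share one blade:
print(cost_per_seat(12, HYPOTHETICAL_PER_HOST))   # -> 625.0 (worse than $250/seat)
print(HYPOTHETICAL_PER_HOST / PER_SEAT)           # -> 30.0 sessions to break even
```

At ClearCube’s 12-sessions-per-blade target, per-host licensing would more than double the per-seat cost under this assumption, which is exactly the amortization problem Josefy describes.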

Josefy also weighed in on Microsoft licensing for virtual desktops. Thus far, two licensing models have emerged. With Windows XP, the EULA requires each user to have a “unique set of bits” in the form of a full packaged product, he said. Microsoft’s Vista Enterprise Centralized Desktop (VECD) model, on the other hand, charges “per access device, per year” and is only available to Microsoft Software Assurance customers, typically large enterprises.

Small companies, Josefy said, “will like the Windows XP full packaged product,” even though “they do have to worry about when XP goes end of life.” Large companies, on the other hand, “are looking at the [VECD] program because it’s simple [for them,]” he said.

Of course, VECD requires that they move to Vista, which in ClearCube’s testing has been shown to consume more resources. How much more? Josefy couldn’t say yet. “I don’t know if it’s a 10% penalty, a 20% penalty, or what.”

May 27, 2007  9:43 PM

Who’s using Microsoft Virtual Server and not VMware?

Jan Stafford Profile: Jan Stafford

“Who’s using Microsoft Virtual Server or any Microsoft platform for virtualization?” That question has been asked in at least one session of the many conferences I’ve covered this year, and every time one lone user has raised his hand. I’ve talked to those users, and every one only uses Microsoft Virtual Server 2005 R2 for a few virtual machines.

By contrast, almost every hand goes up when asked, “Who’s using VMware?” The others usually say they’re trying out Xen variants.

Rather than making me play “find the user in the haystack,” I asked our readers — all IT professionals — to write to me about their Virtual Server experiences.

As Andrew Dugdell noted on his blog, I offered a $5 coffee card to respondents.

About a dozen IT managers responded. For the most part, they are running Microsoft Virtual Server in a limited way, with just a few VMs. Two consultants had clients using VS in production.

Most are using VS for testing and evaluation of products and not in production. Beyond that, some interesting uses for Virtual Server were cited, including using VS to run old 32-bit operating systems and applications on 64-bit hardware; providing an environment for quicker, less hardware-bound disaster recovery; and using VS to run Linux-based spam filters.

Some had tried VS and turned away from it. One said that “Microsoft is woefully out of touch in not providing USB support for their virtual technology. VMWare Workstation…provides all the connectors and hookups I need.” Another said that getting support from Microsoft is a “chore.” Also, he said, the Virtual Server Web interface didn’t work well, Microsoft didn’t make release 2 of VS a free update “as they had promised” and it ran slower than VMware GSX Server.

We’ll be posting more responses in the comments for this post. I’d also like to hear more Virtual Server stories, either in comments below or via email at Sorry, folks, the coffee cards have all been taken.

Looking for more about Virtual Server’s pros and cons? I ran into an interesting conversation on Andrew Connell’s blog, where readers responded to his plea for their experiences with VMware versus Microsoft virtualization products.

May 21, 2007  11:27 AM

I’m Your Turbo-Memory, Tell Me There’s No Other…

Joseph Foran Profile: Joe Foran

Ok, if you got that joke, you either are or were a long-haired hessian from the 80’s, just like I was. First off – I’m sorry I’ve been silent for so long. I’m buying a house, my wife and I are expecting again, and I’m hiring staff as well as kicking off LOTS of real projects at work. Anyway, with that, allow me to start the blogging again!

I just came back from the Intel Premier IT Professionals session in NYC, and while it was geared largely toward the desktop space (apparently the Fall event will focus more on servers), they spent some time covering virtualization and the new hardware coming out to support it. The agenda covered Intel’s VT extensions, which help improve system virtualization performance and are such a key component of Virtual Iron’s and other Xen-based products. Without this (or AMD’s equivalent), there would be no way of making Microsoft operating systems run on Xen hypervisors. Intel also covered my grrr-item of the year – Windows Vista’s virtualization-friendly license that is friendly only for the Enterprise Edition – but I’ll grrr on that elsewhere. My favorite item of this year’s new hardware – Turbo Memory.

For those unfamiliar with Turbo Memory, it’s best described like this: picture a flash (NAND) drive that sits between your regular hard drive(s) and your CPU/motherboard/RAM. There it acts as a cache for frequently used data (kind of like a CPU’s cache) and helps offload read/write traffic from your hard disks, thereby mitigating one of the last real bottlenecks in the architecture of modern PC-based systems. From my understanding, TM is tied to Vista’s ReadyDrive system for full functionality, but that will only last for so long before the concept moves into competitive production and other vendors figure out how to detach the TM concept from Vista and make it as invisible as normal hard disk cache. It hasn’t yet hit the server chips, but is expected to by this time next year. From a server virtualization point of view, this is important since disk I/O is one of the biggest problems with getting a large physical-to-virtual machine ratio. As each virtual machine is accessed, it calls for disk access to its virtual disks; as these requests go to the hardware via the hypervisor and/or host OS, they queue up, slowing down performance. The Turbo Memory concept is one that can be applied to help mitigate this problem. As it stands, I may just get a Vista desktop with dual disks and Turbo Memory, load it up with 4GB RAM, and throw in a huge number of VMs via VMware Server and then via Virtual PC. Throw in some load simulation scripts, perfmon logging and a little elbow grease, and I might have some interesting numbers to show against an entry-level server running W2K3 and similar hardware. The test won’t be worth much on the books, since Vista’s got client-related limitations that make it an inefficient platform (10 connections, anyone?) for hosting virtual machines, but for the fun of it and for the raw numbers that show what Turbo Memory can do, it’ll be worthwhile.
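The caching idea behind Turbo Memory can be sketched in a few lines. This is purely illustrative — a generic LRU read cache standing in for the flash layer — and says nothing about how ReadyDrive actually manages its NAND; every name here is made up for the example:

```python
# Minimal sketch of the Turbo Memory idea: a small flash cache that absorbs
# reads of frequently used blocks before they reach the spinning disk.
from collections import OrderedDict

class FlashCache:
    def __init__(self, capacity_blocks: int):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()          # block id -> data, in LRU order
        self.hits = self.disk_reads = 0

    def read(self, block: int) -> str:
        if block in self.blocks:
            self.blocks.move_to_end(block)   # refresh LRU position: a cache hit
            self.hits += 1
        else:
            self.disk_reads += 1             # this read would queue at the real disk
            self.blocks[block] = f"data-{block}"
            if len(self.blocks) > self.capacity:
                self.blocks.popitem(last=False)  # evict the least recently used block
        return self.blocks[block]

cache = FlashCache(capacity_blocks=2)
for block in [1, 2, 1, 1, 3, 1]:             # block 1 is the "hot" block
    cache.read(block)
print(cache.hits, cache.disk_reads)          # -> 3 3
```

Half the reads of the hot block never touch the disk queue; multiply that effect across a dozen VMs all hammering their virtual disks and you can see why a flash tier between the hypervisor and the platters is attractive.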

While there I also met the Regional Director for Virtual Iron, who I’m going to be following up with to see about getting my Virtual Iron demo rolling.

Going back to my “poker-based” review system, I give Turbo Memory’s concept a solid 9 pokers.

May 17, 2007  1:29 PM

Microsoft’s Neil talks virtualization futures at WinHEC

Alex Barrett Profile: Alex Barrett

I just returned from Microsoft’s WinHEC conference in Los Angeles, where I traveled in hopes of getting some juicy virtualization news. Alas, Microsoft dropped its virtualization bombshell last week — that it will eliminate key features from the Windows Server Virtualization beta that will ship with Longhorn — so no news was to be had, per se.

But even if Microsoft doesn’t have much to show in the way of a shipping, competitive hypervisor, one thing I came away with is that the company is clearly paying extremely close attention to virtualization and has its best minds on the job.

Speaking in a session called Virtualization Technology Directions, Mike Neil, Microsoft general manager in the Windows Server Group, said virtualization stands to be one of the key “scale-up” apps to run on a next generation of servers featuring multi-core processors and, thanks to 64-bit operating systems, terabytes of memory.

One positive aspect of being late to market with Viridian is that Microsoft can develop it to take advantage of hardware virtualization assist technologies going into chips — CPU virtualization in Intel VT and AMD-V, but also new features like MMU virtualization, DMA remapping, interrupt handling and I/O virtualization. “The software layer for virtualization doesn’t go away, but we do see the hardware taking on some capabilities that will make the virtualization layer thinner — and hopefully more robust as well,” Neil said.

Neil, a former employee of Connectix, the first virtualization company Microsoft acquired, recalled how “we didn’t have any hardware assist, and spent a lot of our time working around the limitations of x86 that made it very difficult to virtualize.” VMware, I’m sure, was in the same boat.

Taking advantage of hardware assist capabilities is what Microsoft calls “enlightenment,” which Neil further described as “an intimate arrangement between the kernel and the hypervisor.” But don’t think Microsoft wants to invite everyone in to this cozy party. “We’re not trying to drive this as a standard,” Neil said. Other companies, notably VMware, are espousing paravirt-ops, but not Microsoft.

That’s not to say that hardware assist solves everything — it doesn’t, not by a long shot. Some of the challenges Neil called out include the shift to network storage, increased data rates and ever-expanding storage requirements. When it comes to memory and I/O bandwidth, “there are changes that are going to need to occur,” Neil said. Today, “the straw that we’re sipping the data through is too thin.”

Another area of concern Neil cited is security. “It’s unfortunate that we’re in a situation where malicious software is a business. Hackers aren’t doing this because they are pranksters or it’s fun, but because they make money off it.” It stands to reason, therefore, that “the hypervisor, as the lowest level piece of software on a system,” is a place where people are going to look to compromise a system. “If I can get at the hypervisor, I can get at all the VMs,” Neil said. Furthermore, today, virtual machines do not know if they’ve been “hyperjacked,” he said; “understanding the layer of software beneath [the VM] will become more and more important.”
