The Virtualization Room


June 14, 2007  4:26 PM

Nat Friedman: Swing toward desktop virtualization favors Linux

Profile: Jan Stafford

In the past six months, every single IT exec or manager who has discussed Linux desktops in a corporate setting with Nat Friedman has asked about thin-client environments. That’s why Friedman — co-creator of the open source Ximian desktop and Novell’s CTO for open source strategies — predicts that desktop virtualization is going to take off faster than anyone has anticipated, and that Linux desktop adoption is going to increase rapidly as a result.

“The pendulum is swinging back, and there’s an interest and need to centralize data for security reasons. IT managers and corporate execs don’t want people to walk out with laptops holding, say, millions of Social Security numbers.”

Centralizing desktop management via virtualization and thin clients holds the much-desired promise of easier management, Friedman told me in a recent conversation.

“There’s a desire to have lower-cost manageability by having all your applications running centrally and making thin clients into dumb terminals. Virtualization plays a role there, because on the server you could host thousands of desktops and virtualize those sessions so they’re all isolated from one another and run on an operating system that’s transparent to users. Or, you can use multiple desktop apps running on multiple operating systems. You can have computers running OpenOffice, Firefox, Microsoft apps and so on, all of it playing onto a single thin client. Virtualization makes it possible to dynamically allocate the resources for that. The desktop itself can run virtualization locally, too; developers do that. If you run Linux primarily and you want to run Windows for one app, virtualization is one way to get at that.”

In a virtual desktop setting, Friedman concludes, IT managers will be able to choose best-of-breed, easiest-to-manage and lowest-cost applications and operating systems. He thinks Linux and the desktop applications that run on that platform will gain from this interoperability.

I agree with Friedman’s views on how quickly desktop virtualization will be adopted. My team has been surprised by the number of IT managers who’ve expressed keen interest in moving forward with projects. I do think Linux will gain some users from this trend, but I think the key stumbling block will be getting IT shops to evaluate Linux-based desktop apps in the first place. Historically, they’ve taken the easy route: Windows and Microsoft apps.

What do you think? Let me know via your comments or an email to me at editor@searchservervirtualization.com.

For more of Friedman’s views on the desktop marketplace, check out this post on SearchEnterpriseLinux.com.

June 14, 2007  3:52 PM

Virtualization and next-generation grids: What’s really NG? What’s just a fad?

Profile: Jan Stafford

Does a grid by any other name smell as sweet? In years of covering grid computing technologies, I’ve seen the definition of “grid” changed to fit vendors’ products or the computing flavor of the month.

In general, I see the most basic function of grids as creating virtual communities of servers, applications and users. (Let me know if you see it otherwise.)

So, when I heard about virtualized service grids, I wondered: Did the “virtualized” moniker just get added because virtualization is hot right now? Or is this a real next-generation grid model? Well, there’s a lot of activity in this space, as I’ve seen when reading the Virtualization and Grid Computing blog, which has been a great resource for me. I see, too, that vendors seem to be hopping on board. For instance, on the Inside HPC blog, I read that grid vendor United Devices is pursuing creation of virtualization products.

Recently, I asked Ash Massoudi, CEO and co-founder of NextAxiom, a virtualized service grid technology firm, some basic questions about virtualized service grids. Here’s an excerpt from our exchange:

What’s the difference between traditional grids and virtualized service grids?

Massoudi: “The first difference is in programming models used by each. In traditional grid computing, it becomes a programmer’s responsibility, through the use of a dedicated library, to build an application that is designed to run on the grid. So, the programming model requires programming to the grid. In a virtualized service grid, software business and integration components are assembled using a Service-Oriented-Programming (SOP) technique that requires zero knowledge of the computer resources. The application developer doesn’t need to explicitly identify the load and how it is allocated or to create work units accordingly. Each business or integration component is a service (implicit work unit) that can be composed of other services. The same Service Virtual Machine (SVM) that runs the final application will transparently externalize and distribute the service load across all available computer resources.

“Another difference is that service grid virtualization has a built-in concept of application multi-tenancy and thus favors scaling up, through multiple cores, over scaling out, as is common with traditional grid computing.”
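The contrast Massoudi draws between “programming to the grid” and composing services is easier to see in code. Below is a minimal, hypothetical Python sketch of the two models; the explicit chunking and the ProcessPoolExecutor stand in for a dedicated grid library, and none of the names correspond to NextAxiom’s actual SOP or SVM APIs.

from concurrent.futures import ProcessPoolExecutor

TAX = 1.08

def price_chunk(chunk):
    # Explicit work unit: price a whole batch of order amounts at once.
    return sum(amount * TAX for amount in chunk)

def grid_style_total(amounts, unit_size=100):
    # Traditional grid style: the developer carves the load into work units
    # and submits them to a scheduler (ProcessPoolExecutor is a stand-in).
    chunks = [amounts[i:i + unit_size] for i in range(0, len(amounts), unit_size)]
    with ProcessPoolExecutor() as pool:
        return sum(pool.map(price_chunk, chunks))

def price_order(amount):
    # A plain business service: no knowledge of work units or resources.
    return amount * TAX

def total_revenue(amounts):
    # A composite service built only from other services; a runtime (the
    # "Service Virtual Machine" in Massoudi's terms) would decide how to
    # distribute these calls across available machines.
    return sum(price_order(a) for a in amounts)

if __name__ == "__main__":
    orders = [float(i) for i in range(1, 1001)]
    print(grid_style_total(orders))   # explicit-work-unit version
    print(total_revenue(orders))      # service-composition version

Both functions return the same total; the difference is that only the first one forces the developer to think about how the work is divided and dispatched.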

Why should IT managers take a look at service grid virtualization? What benefits can it bring to their companies?

Massoudi: “IT managers should consider service grid virtualization since it reduces TCO across human capital as well as machine resources. Also, the business and integration services that are programmed and virtualized on the service grid provide a way to directly tie their efforts to the tremendous business value that they are creating.”

What type of company would use service grid virtualization?

Massoudi: “You need significant IT expertise to run and operate a Virtualized Service Grid (VSG). Large enterprises who already operate data centers and need composite and flexible applications across their existing legacy systems should think of owning and operating their own service grid.”

What type of IT infrastructure is a good fit for service grid virtualization, and for what apps is it appropriate?

Massoudi: “Multi-core processor architectures like the dual-core Intel Itanium 2 processor provide the most cost-effective and efficient foundation for Virtualized Service Grids. The more tenants you can run on a single machine, the higher the efficiency of the service grid. Service grids are most suited for creating any composite business application or business process that needs to integrate across departmental application silos or enterprises.”

My research continues, as does the job of separating the wheat (real technologies) from the chaff (vendor hype). If you’re involved with virtualized service grids — either as a user or developer — or other next-generation grid models, please comment here or write to me at editor@searchservervirtualization.com.


June 13, 2007  2:38 PM

Parallels Server

Profile: Joe Foran

While browsing another blog, the famous virtualization.info, I came across a very interesting story about Parallels making an alpha code release of its new server-based product. As I mentioned in a slightly off-topic post, my ears are perked because of the interest Coherence has generated, with its seamless (almost Citrix-y) windows into the guest OS.

I’m really hoping to see what Parallels does with Coherence at the server level. While there are a plethora of ways to administer a heterogeneous server environment (ssh, rdp, vnc, mmc, e-i-e-i-o), adding Coherence to the remote-administration mix is an interesting proposal. How much further can it be taken – can it, instead of being host-based, become central-management-server based? Picture how Virtual Center allows remote administration of VMware virtual guests, from the virtual machine settings to the guests’ interfaces, plus all of the other settings involved. Add in a ONE-SCREEN management interface, with everything packed off to a Coherence Manager, and imagine how much simpler things can become. Application management tools that don’t work well over remote sessions, direct access to ini/conf/whatever files on a server without extra steps to get there, an organized toolset for administration that makes the mmc look tired… very interesting stuff.

Taking it to the next point, virtual desktops… Parallels supports DirectX and OpenGL (so does VMware’s Fusion, but I liked the beta of that much less than Parallels Desktop after putting them both through the wringer). That support brings VDI a lot closer to getting over the hump of the multimedia issues that bar its large-scale adoption. Just as Citrix and other thin clients never reclaimed the desktop from PCs, I don’t doubt that virtual desktops will remain a niche market. I do think, however, that remote Coherence has the same opportunity as Citrix’s ICA (and competitors’ products as well) to be an excellent value-add for remote application deployment, right up to and including a full desktop. As it stands now, we have a number of users here who have very old applications that don’t work well under Windows XP, yet the apps just can’t go away (some are government-mandated), so we use VMware Player to dish them up in a virtual OS. I’d like to use a Coherence-like product instead, to eliminate a lot of the headache associated with end-user education and change management. To take it one logical step further, I’d like to use a Coherence-like server-based product, to keep those virtual machines off the local desktops and under my department’s management and deployment. If it means buying an Apple Xserve or two to support it, so be it. We’re a mixed Windows, Linux, and BSD shop as it is, so that wouldn’t be a big deal in overhead and support. I imagine that’s the case in many environments.

I’m hoping for a sneak peek, being the Parallels geek that I am.


June 4, 2007  4:41 PM

VMware ESX configuration, cost problems spur user’s switch to Microsoft Virtual Server

Profile: Jan Stafford

Improved configuration and lower costs are on John E Quigley II’s wish list for VMware ESX. In the meantime, Quigley is moving ESX to the back burner.

Today, Quigley — a senior network engineer — told me that his firm, Total Quality Logistics, LLC, will be migrating to Microsoft Virtual Server 2005 R2 over the next 45 days. TQL — up to now a VMware shop — will only use ESX in the lab. ESX is too expensive to upgrade and requires more training and resources than TQL can deliver, Quigley said.

“ESX is a good product, but we are seeing issues with the configuration. And replication inside of VMware has always had issues. I have found problems with RPC communications, which affects Exchange and Active Directory. I found the issues during a prototyping exercise with Microsoft and some black belts a few years ago.”

At that time, Microsoft actually recommended using VMware to prototype the network configuration for an Exchange 2000/2003 rollout. Microsoft provided “a bunch of work-arounds for the known issues with the RPC issues.” He’s since replicated this environment with Virtual Server and “never saw any of the issues that were apparent with VMware.”

“We are running several SQL Server sessions on ESX, and performance is not what we expected. I have created a new Virtual Server session for a new SQL 2005 server requirement, and it is outperforming the ESX session hands down. With ESX, we can’t easily import or export sessions, and a key lib file has died, and we are getting errors (on) all Internet searches…The only fix is to re-format and install ESX from scratch. It is locking the sessions up, and currently we can only admin from the Web console, as the management application crashes the ESX server.”

Quigley finds using Virtual Server simple and straightforward and gets plenty of documentation on configuration and tuning from Microsoft.

“Microsoft is willing to support many of the services that we are running under VS, not VM. Microsoft is willing to assist with consulting and services to better implement the product. The ROI on Microsoft is much better.”

Quigley says he’s pleased with Microsoft products overall, particularly the performance and troubleshooting capabilities of Microsoft AD/Exchange/SQL.

“I have been running the product since the Alpha, and really like the results that I am seeing. With the beta of what they are offering on the Longhorn, I see a lot of promise.”

After discussing this subject with John, I checked out Scott Lowe’s How to Provision VMs Using NetApp FlexClones. Scott has some great info on working with ESX in Windows environments on his blog, but I didn’t find anything relating directly to John’s problems.

An article on SearchDataCenter.com, however, discusses the fact that “backing up VMware’s ESX Server is a fairly clunky process” and some of the third-party companies — like NSI Software Inc., Asigra Inc. and Vizioncore Inc. — that are releasing products that assist in replication on ESX.

Is anyone out there sharing John Quigley’s experiences with ESX configuration and cost problems or having no problems with those issues at all? Let me know by commenting here or writing to me at editor@searchservervirtualization.com.


May 29, 2007  6:02 AM

ClearCube is VMware’s latest OEM

Profile: Alex Barrett

PC blade manufacturer ClearCube has become the first non-server vendor to OEM VMware’s ESX hypervisor, which it will sell to customers implementing virtual desktop infrastructure (VDI).

As indicated in an article this April, ClearCube will sell ESX on a per user basis, rather than per host. This makes it cost-effective to run fewer virtual desktop sessions per blade, explained Tom Josefy, ClearCube director of product management. With this arrangement, IT managers can “guarantee a great end user experience but don’t need to have 30+ users per server to amortize the cost of ESX,” he said.

Josefy said ClearCube expects customers to be able to run up to about 12 VDI sessions on one of its R2200 PC blades, for a list price of about $250 per seat, including support. “At higher numbers it’s at least a 50% reduction in cost per seat,” he said.
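To see why per-user licensing matters at low session density, here’s a back-of-the-envelope Python sketch. Only the $250-per-seat figure comes from ClearCube; the per-host ESX price below is a made-up assumption, chosen so the break-even point lands near the 30-user mark Josefy mentions.

PER_USER_LIST_PRICE = 250           # ClearCube's quoted per-seat price, with support
HYPOTHETICAL_PER_HOST_PRICE = 7500  # assumed per-host ESX cost, not a real quote

for sessions_per_blade in (6, 12, 30, 40):
    # Per-user pricing stays flat; per-host pricing has to be amortized
    # across however many sessions the blade actually runs.
    per_host_seat_cost = HYPOTHETICAL_PER_HOST_PRICE / sessions_per_blade
    print(f"{sessions_per_blade:>2} sessions: per-user ${PER_USER_LIST_PRICE}/seat, "
          f"per-host ~${per_host_seat_cost:,.0f}/seat")

With those assumed numbers, per-host pricing only pulls ahead above roughly 30 sessions per blade, a density a 12-session VDI blade never reaches.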

Josefy also weighed in on Microsoft licensing for virtual desktops. Thus far, two licensing models have emerged. With Windows XP, the EULA requires each user to have a “unique set of bits” in the form of a full packaged product, he said. Microsoft’s Vista Enterprise Centralized Desktop (VECD) model, on the other hand, charges “per access device, per year” and is only available to Microsoft Software Assurance customers, typically large enterprises.

Small companies, Josefy said, “will like the Windows XP full packaged product,” even though “they do have to worry about when XP goes end of life.” Large companies, on the other hand, “are looking at the [VECD] program because it’s simple [for them,]” he said.

Of course, VECD requires that they move to Vista, which, in ClearCube’s testing, has been shown to consume more resources. How much more? Josefy couldn’t say yet. “I don’t know if it’s a 10% penalty, a 20% penalty, or what.”


May 27, 2007  9:43 PM

Who’s using Microsoft Virtual Server and not VMware?

Profile: Jan Stafford

“Who’s using Microsoft Virtual Server or any Microsoft platform for virtualization?” That question has been asked in at least one session of the many conferences I’ve covered this year, and every time one lone user has raised his hand. I’ve talked to those users, and every one only uses Microsoft Virtual Server 2005 R2 for a few virtual machines.

By contrast, almost every hand goes up when asked, “Who’s using VMware?” The others usually say they’re trying out Xen variants.

Rather than keep playing “find the user in the haystack,” I asked SearchServerVirtualization.com’s readers — all IT professionals — to write to me about their Virtual Server experiences.

As Andrew Dugdell noted on his blog, I offered a $5 coffee card to respondents.

About a dozen IT managers responded. For the most part, they are running Microsoft Virtual Server in a limited way, just in a few VMs. Two consultants had clients using VS in production.

Most are using VS for testing and evaluation of products and not in production. Beyond that, some interesting uses for Virtual Server were cited, including running old 32-bit operating systems and applications on 64-bit hardware; providing an environment for quicker, less hardware-bound disaster recovery; and running Linux-based spam filters.

Some had tried VS and turned away from it. One said that “Microsoft is woefully out of touch in not providing USB support for their virtual technology. VMWare Workstation…provides all the connectors and hookups I need.” Another said that getting support from Microsoft is a “chore.” Also, he said, the Virtual Server Web interface didn’t work well, Microsoft didn’t make release 2 of VS a free update “as they had promised,” and it ran slower than VMware GSX Server.

We’ll be posting more responses in the comments for this post. I’d also like to hear more Virtual Server stories, either in comments below or via email at jstafford@techtarget.com. Sorry, folks, the coffee cards have all been taken.

Looking for more about Virtual Server’s pros and cons? I ran into an interesting conversation on Andrew Connell’s blog, where readers responded to his plea for their experiences with VMware versus Microsoft virtualization products.


May 21, 2007  11:27 AM

I’m Your Turbo-Memory, Tell Me There’s No Other…

Profile: Joe Foran

Ok, if you got that joke, you either are or were a long-haired hessian from the 80’s, just like I was. First off – I’m sorry I’ve been silent for so long. I’m buying a house, my wife and I are expecting again, and I’m hiring staff as well as kicking off LOTS of real projects at work. Anyway, with that, allow me to start the blogging again!

I just came back from the Intel Premier IT Professionals session in NYC, and while it was geared largely toward the desktop space (apparently the Fall event will focus more on servers), they spent some time covering virtualization and the new hardware coming out to support virtualization. The agenda covered Intel’s VT extensions, which help improve system virtualization performance and are such a key component of Virtual Iron’s and other Xen-based products. Without this (or AMD’s equivalent), there would be no way to run Microsoft operating systems on Xen hypervisors. Intel also covered my grrr-item of the year – Windows Vista’s virtualization-friendly license that is friendly only for the Enterprise Edition, but I’ll grrr on that elsewhere. My favorite item of this year’s new hardware – Turbo Memory.

For those unfamiliar with Turbo Memory, it’s best described as this – picture a flash (NAND) drive that sits between your regular hard drive(s) and your CPU/motherboard/RAM. There it acts as a cache for frequently used data (kind of like a CPU’s cache) and helps offload reads and writes to your hard disks, thereby mitigating one of the last real bottlenecks in the architecture of modern PC-based systems. From my understanding, TM is tied to Vista’s ReadyDrive system for full functionality, but that will only last for so long before the concept moves into competitive production and other vendors figure out how to detach the TM concept from Vista and make it as invisible as normal hard disk cache. It hasn’t yet hit the server chips, but is expected to by this time next year. From a server virtualization point of view, this is important, since disk I/O is one of the biggest problems with getting a large physical-to-virtual machine ratio. As each virtual machine is accessed, it calls for disk access to its virtual disks; as these requests go to the hardware via the hypervisor and/or host OS, they queue up, slowing down performance. The Turbo Memory concept is one that can be applied to help mitigate this problem. As it stands, I may just get a Vista desktop with dual disks and Turbo Memory, load it up w/ 4GB RAM, and throw in a huge number of VMs via VMware Server and then via Virtual PC. Throw in some load simulation scripts, perfmon logging and a little elbow grease, and I might have some interesting numbers to show against an entry-level server running W2K3 and similar hardware. The test won’t be worth much on the books, since Vista’s got client-related limitations that make it an inefficient platform (10 connections, anyone?) for hosting virtual machines, but for the fun of it and for the raw numbers that show what Turbo Memory can do, it’ll be worthwhile.
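To put rough numbers on why a flash cache in that position helps with queued VM disk requests, here’s a toy Python model. The latency and hit-rate figures are illustrative assumptions, not Intel Turbo Memory specifications.

DISK_MS = 10.0   # assumed average hard-disk access time, in milliseconds
FLASH_MS = 0.1   # assumed NAND flash access time, in milliseconds

def effective_latency(hit_rate):
    # Weighted average: hits are served from flash, misses fall through to disk.
    return hit_rate * FLASH_MS + (1.0 - hit_rate) * DISK_MS

for hit_rate in (0.0, 0.5, 0.8, 0.95):
    print(f"cache hit rate {hit_rate:.0%}: ~{effective_latency(hit_rate):.2f} ms per access")

Even a modest hit rate cuts the average access time sharply, which is exactly the kind of relief a stack of VMs queuing up virtual-disk requests would need.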

While there I also met the Regional Director for Virtual Iron, who I’m going to be following up with to see about getting my Virtual Iron demo rolling.

Going back to my “poker-based” review system, I give turbo memory’s concept a solid 9 pokers.


May 17, 2007  1:29 PM

Microsoft’s Neil talks virtualization futures at WinHEC

Profile: Alex Barrett

I just returned from Microsoft’s WinHEC conference in Los Angeles, where I traveled in hopes of getting some juicy virtualization news. Alas, Microsoft dropped its virtualization bombshell last week – that it will eliminate key features from the Windows Server Virtualization beta that will ship with Longhorn — so no news was to be had, per se.

But even if Microsoft doesn’t have much to show in the way of a shipping, competitive hypervisor, one thing I came away with is that the company is clearly paying extremely close attention to virtualization, and has its best minds on the job.

Speaking in a session called Virtualization Technology Directions, Mike Neil, Microsoft general manager in the Windows Server Group, said virtualization stands to be one of the key “scale-up” apps to run on a next generation of servers featuring multi-core processors and, thanks to 64-bit operating systems, terabytes of memory.

One positive aspect of being late to market with Viridian is that Microsoft can develop it to take advantage of hardware virtualization assist technologies going into chips — CPU virtualization in Intel VT and AMD-V, but also new features like MMU virtualization, DMA remapping, interrupt handling and I/O virtualization. “The software layer for virtualization doesn’t go away, but we do see the hardware taking on some capabilities that will make the virtualization layer thinner — and hopefully more robust as well,” Neil said.

Neil, as a former employee of Connectix, the first virtualization company Microsoft acquired, recalled how “we didn’t have any hardware assist, and spent a lot of our time working around the limitations of x86 that made it very difficult to virtualize.” VMware, I’m sure, was in the same boat.

Taking advantage of hardware assist capabilities is what Microsoft calls “enlightenment,” which Neil further described as “an intimate arrangement between the kernel and the hypervisor.” But don’t think Microsoft wants to invite everyone in to this cozy party. “We’re not trying to drive this as a standard,” Neil said. Other companies, notably VMware, are espousing paravirt-ops, but not Microsoft.

That’s not to say that hardware assist solves everything — it doesn’t, not by a long shot. Some of the challenges Neil called out include the shift to network storage, increased data rates and ever-expanding storage requirements. When it comes to memory and I/O bandwidth, “there are changes that are going to need to occur,” Neil said. Today, “the straw that we’re sipping the data through is too thin.”

Another area of concern Neil cited is security. “It’s unfortunate that we’re in a situation where malicious software is a business. Hackers aren’t doing this because they are pranksters or it’s fun, but because they make money off it.” It stands to reason, therefore, that “the hypervisor, as the lowest level piece of software on a system,” is a place where people are going to look to compromise a system. “If I can get at the hypervisor, I can get at all the VMs,” Neil said. Furthermore, today, virtual machines do not know if they’ve been “hyperjacked,” he said; “understanding the layer of software beneath [the VM] will become more and more important.”


May 17, 2007  12:16 PM

Big virtualization win for Mac OS X: Kutz’s prediction comes true

Profile: Jan Stafford

On May 31, 2006, University of Texas-Austin IT systems analyst Andrew Kutz made a prediction: People will soon be running Windows side-by-side with Mac OS X with no difference in the application space.

“It came true this year with Parallels,” Kutz, who just joined Burton Group as an analyst, told me recently.

Kutz’s prediction appeared in “Virtualization, like string theory, can be saved from its hype,” one of his first columns for SearchServerVirtualization.com.

Parallels isn’t the only supporter of Mac OS X virtualization in town. On his blog, Kimbro Staken — CTO of JumpBox Inc. — discussed the other players in this space, saying:

“VirtualBox is a new entry in the virtualization space and is particularly interesting because it has been Open Sourced under the GPL license. This makes the Mac OS X virtualization space a three way race with Parallels, VMWare Fusion and now VirtualBox all having offerings available. Parallels is still the clear leader thanks to its head start and solid Windows integration, but the competition is definitely heating up.”

Kutz scored on this prediction, but I think his call on the outcome of this development is off base. First, let’s look at what he wrote. The column starts out with a bang:

“This article will show that, just as Dr. Edward Witten saved string theory by condensing many efforts and ideas into one elegant theory, Mac OS X is poised to do the same for virtualization by fusing the many implementations of virtualization into one practical and marketable consumer product.”

He doesn’t finish with a whimper:

“Apple is in the best position to become the new leader in a world of consumer virtualization. And they will do so with style, simplicity and elegance.”

Like most Mac enthusiasts, Kutz is over-optimistic, I think. I don’t think virtualization, even via an open source product like VirtualBox, will push Apple beyond its niches in the consumer market. Some power users — particularly in the graphics, video and music fields — will take advantage of the opportunity to run Mac OS X on commodity hardware; but mainstream users aren’t going to bother.

On the business side, corporate graphics departments will like this development, and their IT managers will enjoy the cost savings of not having to buy separate boxes for those folks.


May 15, 2007  2:57 PM

A bunch of useful VMware how-to guides

Profile: Ryan Shopp

Hey all,

Since people are loving the useful links page Andrew Kutz pointed out in his last blog post, I thought I’d also toss in our own. This page, Fast guide: VMware how-tos, includes advice on how to install VMware on Linux, VMDK conversions, VMotion, how to install VMware on Windows, guest OS performance tips, VCB script additions, using .NET with the VI3 SDK, VMware Player, VMware ACE and more.

