The Virtualization Room

A SearchServerVirtualization.com and SearchVMware.com blog


June 26, 2007  3:24 PM

Virtualization Today and Tomorrow



Posted by: cwolf
Chris Wolf, Virtual machine, Virtualization, Virtualization management, Virtualization platforms, Virtualization security, VMware, Why choose server virtualization?, Xen

A couple of weeks ago I spoke with Alex Barrett for what I thought was a talk on the direction of the server virtualization landscape. Our conversation resulted in her article "Xen virtualization will catch up to VMware in 2008." After reading the article, I was a little surprised at how some of my words were quoted out of context, so I wanted to offer my own take on the virtualization market and its future direction.

VMware’s Role in Shaping the Future

Many of VMware's competitors have based their product development road map on VMware's VI3 feature set. When I state that Xen platforms can catch up to VMware's VI3 features by mid-2008, I mean just that. By this time next year, several Xen vendors will offer mature dynamic failover (comparable to VMware HA) and live migration (comparable to VMotion) solutions. In doing so, Xen platforms will offer the features that today's enterprise environments are demanding. Virtual Iron has been very aggressive with its development roadmap, and XenSource is working hard as well.

Still, in order to "catch up," one would have to assume that VMware is sitting on its hands, which of course is far from the case. So will the Xen vendors be caught up to VMware next year? I don't think so. Will they offer the features and maturity that allow them to be viewed as an alternative in the enterprise? Yes.

However, looking into my crystal ball, I see the next-generation VMware virtual infrastructure architecture once again raising the bar. VMware's ESX hypervisor will have a smaller footprint and improved security. Features that are important in the enterprise, including dynamic VM failover and backup, will see significant improvements. You should also expect to see the complexity of storage integration reduced. Technologies such as N_Port ID Virtualization (NPIV) and the proliferation of iSCSI will significantly ease VM storage integration and failover.

I also expect to see more leadership from VMware in the following areas:

  • Virtual network security, including monitoring and isolation
  • Storage virtualization – development of consistent standards and best practices for integration between server and storage virtualization platforms
  • Centralized account management and directory service integration (this is one of my VCB pet peeves)
  • Virtual desktop management

Keep in mind that many VMware Workstation features often find their way into ESX as well, so you should expect some of the new Workstation 6 features to play a part in the next ESX Server product release. Record/replay is one of my favorite new features and has numerous uses for testing, troubleshooting, and security auditing.

As the market leader, VMware can be expected to continue driving innovation in virtualization, and I don't expect that to subside.

Virtualization and Security

Security has been getting much more attention lately and will continue to do so in coming years. My recent article "Virtual Switch Security" outlined some of the current weaknesses regarding Layer 2 traffic isolation in some virtual switches. Virtual switches need to improve both their default isolation and their manageability. Port mirroring is an important feature in virtual switches and will be needed for integration with intrusion detection and prevention systems. However, administrators need to be able to control port mirroring within a virtual switch and, in turn, enable or disable port mirroring on specific ports as needed. VLAN integration is and will remain a concern for virtual switches, and vendors that do not offer 802.1Q VLAN support will remain at a disadvantage.
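
To illustrate the kind of per-port control I have in mind, here's a rough Python sketch of a virtual switch model where mirroring can be toggled on individual ports. This is purely my own illustration of the concept, not any vendor's actual API:

```python
# Illustrative model only -- not any vendor's actual virtual switch API.
class VirtualSwitchPort:
    def __init__(self, port_id, vlan_id=None):
        self.port_id = port_id
        self.vlan_id = vlan_id          # 802.1Q VLAN tag; None = untagged
        self.mirroring_enabled = False  # off by default, for isolation

class VirtualSwitch:
    def __init__(self, name):
        self.name = name
        self.ports = {}

    def add_port(self, port_id, vlan_id=None):
        self.ports[port_id] = VirtualSwitchPort(port_id, vlan_id)

    def set_mirroring(self, port_id, enabled):
        """Enable or disable mirroring on a single port (e.g., to feed an IDS sensor)."""
        self.ports[port_id].mirroring_enabled = enabled

    def mirrored_ports(self):
        return [p.port_id for p in self.ports.values() if p.mirroring_enabled]

# Example: mirror only the DMZ-facing port to an IDS and leave everything else isolated.
vswitch = VirtualSwitch("vSwitch0")
vswitch.add_port("vm-dmz-web", vlan_id=20)
vswitch.add_port("vm-internal-db", vlan_id=10)
vswitch.set_mirroring("vm-dmz-web", True)
print(vswitch.mirrored_ports())   # ['vm-dmz-web']
```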

Intrusion detection is becoming more of a concern for numerous organizations, and the uptake of virtualization support by many security ISVs is evidence of that. For example, Catbird’s V-Agent can be used to quickly add an IDS to existing virtual networks.

Hypervisor security is naturally important as well. If you would like to see some of the issues out there today, take a look at Harley Stagner’s excellent article on preventing and detecting rogue VMs. The blue pill attack has also received considerable interest. For more information on blue pill, take a look at Joanna Rutkowska’s presentation “Virtualization – the other side of the coin.”

The security concerns relating to virtualization are no scarier than what we already see with existing operating systems and applications. While security concerns should not prevent you from implementing virtualization, you cannot ignore security either. Hypervisors and management consoles (such as the ESX service console, which uses a Red Hat-based kernel) still must be managed and updated like all other server operating systems.

To validate the security of their architectures, you should expect virtualization vendors to obtain EAL certification for their respective platforms.

Standards

At the moment, standards are more of a wish-list item than an actual prediction. I'm hopeful that we will see a common virtual hard disk format within the next two to five years. Doing so could provide virtual machine portability among all server virtualization platforms and make it considerably easier for ISVs to package and deploy virtual appliances. Administrators would be free to choose their preferred virtualization platform and run virtual machines on it regardless of the virtualization engine that packaged a particular VM.
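
To make the portability argument concrete, here's a minimal Python sketch of what a vendor-neutral VM descriptor could look like. The field names are my own invention for illustration, not a proposed standard:

```python
# Hypothetical vendor-neutral VM descriptor -- field names invented for illustration.
import json

descriptor = {
    "name": "appliance-01",
    "cpus": 1,
    "memory_mb": 512,
    "disks": [{"path": "appliance-01.disk", "size_gb": 8, "format": "common-vhd"}],
    "nics": [{"network": "bridged"}],
}

def import_vm(descriptor, platform):
    """Any platform that understands the common format could import the same VM."""
    print(f"Importing '{descriptor['name']}' with {descriptor['cpus']} vCPU(s) "
          f"and {descriptor['memory_mb']} MB RAM into {platform}")

# The same descriptor works regardless of which engine packaged the VM.
with open("appliance-01.json", "w") as f:
    json.dump(descriptor, f)

for platform in ("ESX", "XenEnterprise", "Virtual Iron"):
    import_vm(descriptor, platform)
```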

Management standards would also go far in easing virtualization deployments and management. Common APIs for management and backup would allow any third-party management or backup tool vendor to support all major virtualization platforms. With industry support of the DMTF System Virtualization, Partitioning, and Clustering (SVPC) Working Group, standardized virtualization management can become a reality.
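
Here's a small Python sketch of the idea: a common management interface that any platform could implement and any third-party tool could program against. The class and method names are hypothetical, not the DMTF SVPC schema itself:

```python
# Sketch of a common management API -- names are hypothetical, not the DMTF SVPC schema.
from abc import ABC, abstractmethod

class VirtualizationHost(ABC):
    """A standard interface any platform (ESX, Xen, Virtual Iron) could expose."""

    @abstractmethod
    def list_vms(self): ...

    @abstractmethod
    def snapshot(self, vm_name): ...

class AnyVendorHost(VirtualizationHost):
    """Stand-in for one vendor's implementation of the standard interface."""
    def __init__(self, inventory):
        self.inventory = inventory

    def list_vms(self):
        return list(self.inventory)

    def snapshot(self, vm_name):
        return f"{vm_name}-snap-001"

# A third-party backup tool written against the interface works on every vendor.
def backup_all(host):
    for vm in host.list_vms():
        print("backing up", host.snapshot(vm))

backup_all(AnyVendorHost(["web01", "db01"]))
```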

Emerging Architectures

Application and OS virtualization, fueled by vendors such as SWsoft, Sun, DataSynapse, and Trigence, will continue to add to the virtualization mix in the enterprise. Down the road, application virtualization can significantly ease application deployment by allowing ISVs to package their applications in virtualized containers, greatly reducing deployment complexity. These technologies run alongside server virtualization deployments today, and it's very likely that they will be deployed within server virtualization frameworks in the future.

Much work still remains in aligning the non-virtualized industry with the virtualized world. Both application and OS vendors need to be clear on their virtualization licensing terms, with licensing models that support virtualization based on either physical or virtual resources. Hybrid licensing that includes terms for virtualization and restrictions on relocating VMs to other physical resources impedes virtualization adoption and adds unnecessary confusion. In 2005, Microsoft added a needed jolt to virtualization by being the first vendor to define product licensing in support of server virtualization. Today they need to go further and set the gold standard for licensing of operating systems and applications inside virtual environments. That model should be clear and concise, with simple terms for virtual machines and without limits on portability. "Buffet" style licensing that provides for unlimited VMs on a physical host is ideal as well. Choices and rules are good, but let's not get carried away. In terms of licensing, less is more. If Microsoft gives us a simple licensing model, many other industry vendors will follow.

Virtualization’s future holds plenty of promise, and we’ll all be the beneficiaries of that promise.

June 20, 2007  12:41 PM

Virtualization on your iPhone?



Posted by: Alex Barrett
Uncategorized, Virtualization

No, Apple hasn’t made any announcements about virtualization for the iPhone, but all this reporting about VMware ESX Lite jogged my memory of a conversation I had recently with XenSource CTO and founder Simon Crosby. While talking about Xen 3.1, Simon mentioned that since Xen is an open-source project, some developers in the consumer electronics space have adopted it as the basis of an embedded hypervisor for your cell phone or MP3 player.

Yes, even down on a mobile device, virtualization has a role to play. For security reasons, Crosby explained, electronic device manufacturers typically use multiple chips to perform different functions — one processor to run the real-time operating system, another for graphics, a third for sound, and so on. And just like the chips in contemporary servers, all those chips are wildly underutilized. Enter virtualization. By running an embedded hypervisor on a single CPU, those functions can run logically isolated from one another, while allowing the manufacturer to cut down on the number of chips in the device.

“I call it PDA consolidation,” Crosby joked.

The benefits of integrating virtualization into consumer electronics are similar to the benefits IT managers derive from server virtualization: better utilization of hardware equals less hardware. In consumer devices, that translates into smaller, lighter devices with better battery life that cost less to manufacture and, therefore, cost less for consumers to buy. Cool.

This got me curious about who is actually doing this. A simple Google search gave me a couple of leads. Last year, LinuxDevices.com reported on a company called Sombrio Systems developing a "Xen Loadable Module" (XLM), but the company appears to have fallen by the wayside. However, Trango Virtual Processors, based in Grenoble, France, seems to be actively involved in embedded virtualization. According to its web site, just this week the company announced a version of its TRANGO Hypervisor for the latest generation of ARM processors. With TRANGO, ARM processors gain the ability to run up to 256 virtual processes, executing a "rich operating system (such as Linux or Windows CE), a real-time operating system (RTOS), or a standalone driver or application." I have no idea how far along they are in this process, or when virtualization-enhanced mobile devices might hit the market, but it certainly sounds promising.


June 20, 2007  10:31 AM

Why the eGenera Patent is Dangerous



Posted by: Joe Foran
Links we like, Virtual Iron, Virtualization, Virtualization security, VMware

Virtualization.info, Gridtoday, the eGenera website, and a lot of other sources reported that eGenera has received a patent for an all-in-one N+1 tiered disaster recovery solution that combines grid technology and virtualization to provide a hardware-neutral disaster recovery product that takes your entire data center and encapsulates it. This is an awesome product because it can greatly improve DR and perhaps make DR more accessible to smaller businesses, but it's not patent-worthy. It smacks of a way to stifle competition and generate revenue via patent suits rather than product sales. Or it may just be that the patent itself is pointless. I'm not sure which case is more true, honestly. It all depends on whether it's challenged, and how.

DISCLAIMER: I'm not a lawyer. In fact, I don't ever even want to be a lawyer. I'm happy as an IT Director and SysAdmin, and don't want to ever be a source of legal advice. The below is informed opinion, not legal advice. Tell it to the Judge.

Under the recently relaxed "obviousness" rule that governs patents, a patent is useless if the idea behind it is only an obvious improvement over an existing idea. eGenera's patent seems to fall squarely under that rule, judging from the language in its patent application. Just for kicks, I read it, and will quote it. Under the quotes I'll comment on what this means, in my not-so-humble and not-so-attorney opinion.

It starts out with a SCREAMING cry of obviousness in section 1…

“A method of providing processing resources to respond to a fail-over condition in which a primary site includes a configuration of processing resources, comprising: generating a specification that describes a configuration of processing resources of the primary site; providing the specification to a fail-over site having a configurable processing platform capable of deploying processing area networks in response to software commands; using the specification to generate software commands to the configurable platform to deploy processing resources corresponding to the specification; wherein the processing resources at the primary site include a plurality of independent processing area networks and wherein the specification describes only a subset of the plurality of independent processing area networks. “

That, my friends, is commonly known in the IT field as a failover cluster. The link even defines the N+1 method that eGenera is using in its product. The short of it: you have multiple boxes on a network that are mirrors of one another. One fails, another takes over its role. There's usually hardware or software in between that keeps things synchronized and detects the failure. This part of the patent is worded to be host-, network- and processor-inclusive, which would be obvious because most clusters are situated on networks, don't necessarily need to run the same processors, and are hosts. The "big" improvement is in the use of the term "site" – the product is meant to restore an entire data center's configuration. In the press release, this means that if you have four data centers and one disaster site, and any one data center fails, the disaster site takes on the complete configuration of the failed site (i.e., all nodes, network configurations, etc.). This is a huge step forward in disaster recovery, but it's not patent-worthy because there are a zillion ways to do this.
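
If you want to see just how plain-vanilla the concept is, here's a bare-bones Python sketch of the heartbeat-and-takeover logic behind N+1 failover. It's my own illustration, certainly not eGenera's implementation:

```python
# Bare-bones illustration of N+1 failover logic -- not eGenera's implementation.
import time

class Node:
    def __init__(self, name):
        self.name = name
        self.last_heartbeat = time.time()

    def heartbeat(self):
        self.last_heartbeat = time.time()

def monitor(primaries, standby, timeout=5.0):
    """If any primary misses its heartbeat window, the standby takes over its role."""
    now = time.time()
    for node in primaries:
        if now - node.last_heartbeat > timeout:
            print(f"{node.name} failed; {standby.name} assuming its configuration")
            return node
    return None

primaries = [Node("dc-east"), Node("dc-west")]
standby = Node("dr-site")
primaries[0].last_heartbeat -= 10   # simulate a missed heartbeat
monitor(primaries, standby)
```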

Here's one – if you put 100% of your data center(s) onto VMware's VI3 with shared storage, and had a huge WAN pipe between each site to cover the overhead, you would have this "network in a box" N+1 approach because ESX provides virtual network switching and virtual guest machines, without having to worry about the value of N except in scalability terms. The same is true of most Xen-based products, like Virtual Iron. I've been doing this for years on a much smaller scale. If my main data center drops off the face of the earth, I've got all of my critical systems in VMware, with snapshots and backups of the machines and the storage they're on, as well as configuration of the virtual switches. If the worst happens and my data center goes down, my staff and I drive over to a remote office, restore, and have exactly what eGenera is talking about – a complete restoration of all configurations at a remote data center. The method – backing up virtualized systems. The process – recovery to virtualized systems. It's not as slick as an automated system, but we're getting to the point that eGenera talks about in its patent (thanks to an Open Source project called Openfiler and some extra bandwidth on our WAN to handle sending snapshots between sites rather than driving tapes around).
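
For the curious, here's a rough Python sketch of how that snapshot-and-restore DR process might be scripted. The paths, VM names, and commands are placeholders of my own, not a real product:

```python
# Rough sketch of scripting snapshot replication and site switchover -- the paths,
# VM names, and "power on" step are placeholders, not a real backup product.
from pathlib import Path

PRIMARY_STORE = Path("/vmfs/primary-site")   # hypothetical datastore paths
DR_STORE = Path("/vmfs/dr-site")             # reachable over the WAN in this illustration

CRITICAL_VMS = ["ad01", "dns01", "mail01"]   # bring up only what the business needs

def replicate_snapshots(vm_names):
    """Ship the latest snapshot of each critical VM to the DR site on a schedule."""
    for vm in vm_names:
        src = PRIMARY_STORE / vm / "latest-snapshot.vmdk"
        dst = DR_STORE / vm / "latest-snapshot.vmdk"
        print(f"replicating {src} -> {dst}")   # stand-in for the actual copy over the WAN

def failover(vm_names):
    """On a site failure, register and power on the replicated VMs at the DR site."""
    for vm in vm_names:
        print(f"registering and powering on {vm} from {DR_STORE / vm}")

replicate_snapshots(CRITICAL_VMS)   # scheduled while the primary site is healthy
failover(CRITICAL_VMS)              # invoked when the primary site goes dark
```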

Soon, the site backups will be done automatically over WAN links, meaning that when something fails, I switch over to the DR network and everything comes back online from the virtual machine snapshots and backups. It won't be long after that until we automate that switchover, and then we'll have exactly what eGenera is describing. It's been a long process because that deep a level of DR isn't a critical requirement for the business, but it was obvious where we wanted to go, and obvious how we needed to get there – through the use of virtualized host and network environments. This brings me to the next few sections:

“2. The method of claim 1, wherein the act of using the specification to generate commands to deploy processing resources is in response to the receipt of a fail-over condition.

3. The method of claim 1, wherein the act of generating includes specifying data specific to device configuration at the primary site.

4. The method of claim 1, wherein the act of generating includes specifying management information relevant to the primary site. “

Summary: we're patenting how we're going to do what we claim, how we're going to encode the configuration, and how we're going to send notifications when it kicks off. All irrelevant if the concept of the patent is obvious. Also, these are all in themselves obvious – any virtualized system has to have information on the configuration of the underlying virtualized hardware. Any cluster sends notification on the failure of a node. Any mirrored port has awareness of the cache in its partner port. Outside of the patent office, and in the IT office, this stuff goes without saying.

Next up, section 5… this is identical to section 1, except for the last few words: "wherein the specification describes all of the independent processing area networks." There's no significant difference as to obviousness here – it's just that the parts of the patent which differ from each other are referenced. This is a matter of scale – rather than a "subset of the plurality" (English: part of the whole), this is a "master file" of the entire environment being monitored. It adds grid technology into the mix, another obvious case for virtualization.

Section 6 changes the plurality part to “a minimum configuration of processing resources at the primary site”, which is just saying that the system will use the most efficient (i.e., minimal) configuration possible to get the job done. Duh. Do I have the same number of VMware hosts at my remote sites? No, I don’t. I don’t even always have the same builds or even the same versions! Do I have all of the same configurations? No. Can I really bring up 100% of my data center at a remote site? Sure. And eat a performance bullet on the servers.

So what would I do? I would bring up critical systems only – Active Directory, DNS, email, mission-critical apps. My Jabber server would stay down. My dev environment would stay down. I would run the minimal configuration I need to get the business working. Can it get any more obvious than "if I don't have all the resources I need, I'll get by with what I have, the best that I can"?

The seventh section tacks this part onto the end: "…wherein the primary site includes a configurable processing platform having a pool of processors and wherein the act of generating includes specifying information descriptive of how the processors are pooled at the primary site." A pool of processors. Also known as a computing grid. And it goes on to describe having a documented system for how that grid works, and tying that to the application. This is truly obvious. If you have a system, you document it. If you have an automation system, it's documented, and it uses documentation on how the system it automates functions. This sort of thing has been around forever.

On a non-grid level, Detroit has been outsourcing human labor to robots using this exact methodology for decades. On grids, this is how they work… regardless of distance. Each node is aware of every other node, and so the grid has internal documentation of the pool of resources that are available.

Section 8: "The method of claim 7 wherein the act of generating includes specifying data to describe logical partitioning of processors and interconnectivity into logical processing networks." This is the virtualization component. In virtualized systems and virtualized DR products like VMware's VMotion, this is a core component of how the systems work to provide fault tolerance. The service console knows what virtual machines are out there, and what host systems are out there. It has descriptive information about the resources, how they're pooled, and how they'll be moved in the event of failure. eGenera's idea is obviously a small improvement to the process, applying it toward a virtualized grid concept, but it's not a huge leap forward (again). Virtualized grids have already been in the works for some time. See here and here.

Section 9 states:

“A system of providing processing resources to respond to a fail-over condition in which a primary site includes a configuration of processing resources, comprising; a computer-readable specification that describes a configuration of processing resources of the primary site; a configurable processing platform capable of deploying processing area networks in response to software commands; logic to generate software commands to the configurable platform to deploy processing resources corresponding to the specification; wherein the processing resources at the primary site include a plurality of independent processing area networks and wherein the specification describes only a subset of the plurality of independent processing area networks. ”

In other words, the system corresponding to the methods described above.

Section 10 is similar, switching plurality for totality like sections 1 and 5 did. So they’re going to build a computer system to do what DR specialists and Virtualization specialists have been doing for some time now, only under a commercial brand. Seems obvious to me.

The next sections reference art that I won’t reprint or link to here, as they’re not very original. In fact, the art is as obvious as the concept for this patent. I won’t need the art to describe the obviousness of some of what is printed in the text. Here’s my favorite example so far:

“To date, a considerable body of expertise has been developed in addressing disaster recovery with specific emphasis on replicating the data. Processor-side issues have not received adequate attention. To date, processor-side aspects of disaster recovery have largely been handled by requiring processing resources on the secondary site to be identical to those of the first site and to wait in standby mode.” 

Processors have not received adequate attention because in virtualized environments, they are largely irrelevant as long as you’re not mixing widely different types (such as AMD and Intel). You do not need to maintain identical processors, or quantities of processors, or anything like this. I can restore the virtual machines running on my Intel dual core Xeon servers on my Intel single core Xeon machines with a great deal of flexibility amongst processor family types. Does it matter if one is 2.8GHz and another is 1.6GHz? Not really. The processors at my remote sites aren’t sitting in standby mode, either. They’re running apps on the local servers. They are live, running, and chugging along. They’re ready to load up more virtual machines and take over the load at any time.

So, considering the giant logical fallacy presented here, I’m left wondering if there’s even a need for this patent. I could get REALLY brave and open up a huge pipe to the remote sites and run shared storage and VI3 over my WAN… assuming I had unlimited funds for a 1+gig WAN pipe, and then I could get away with having no other process beyond VMware’s built-in recovery with VMotion, CB, and HA.

And yes, I recognize that not everything can be virtualized, but in all honesty, what eGenera proposes is no less disruptive and impactful to a data center than virtualization itself. The rest of the document gets into specifics and details that are very patent-sounding: detailed diagrams, how the parts of the product will work, a definition of PAN (processor area network, as opposed to personal area network), how control nodes manage the environment, etc.

Here's the summary: we're going to build a set of interconnected boxes that will virtualize your environment down to the tiniest level. Then, when something fails, we'll load up resources at a remote site and make it all come back online.

Can it get any more obvious than this? It seems like eGenera is using patents to block competition. It strikes me that the folks at eGenera collectively went, "Oh, I have an idea to improve this and this and this. It seems kind of logical, but we should patent it so nobody else with the same idea can compete with us without licensing from us." It's a great product, but it uses existing technology and existing ideas about how to use technology to provide a product that is already out there, just not in a commercial package. I personally don't think the patent will stand up to a challenge, given the recent changes to patent law.


June 20, 2007  10:29 AM

The End of the Appliance As We Know It, And I Feel Fine



Posted by: Joe Foran
Application virtualization, Joseph Foran, Virtual machine, Why choose server virtualization?

I am writing this little op-ed piece in lieu of a full-blown obituary. Why? The market speaketh, and it declared hardware-based appliances dead. Like dinosaur-dead, dead. Sure, some of the specialists may survive, just like the crocodile and the shark have managed to keep evolving and avoid the big-rock-hit-earth-make-dino-dead era.

From the Metaphoric Journal-Register:

Appliance, Mr. Hardware B., passed away on June 19th, 2007. Mr. Appliance was renowned for his uncanny abilities to both create controversy and save money. During his career he was the muscle behind most modern network equipment, many network security services, the complete setup of numerous small and home office businesses, and a host of other specialized IT functions. His ability to reduce cost and complexity is duly noted, and many have expressed great appreciation for his efforts. While many did not agree with his one-device-one-task approach, his fame and popularity continued to rise even in conflict. He is survived by one child, a Ms. Virtual Appliance. Said Ms. Appliance in her eulogy: "My father was of great service, and it is with great pride that I take up his mission. I promise to provide the public with the same services, the same muscle, and the same fiscal attention. Furthermore, I plan to take his vaunted career one step further and sever my ties to proprietary equipment. I know Dad would have been proud of this decision, which will give greater economic and administrative freedom to you, my beloved supporters."

Why am I, at the risk of sounding like the world's biggest (well, you can insert your own word here), being so haughty as to declare the hardware appliance dead? Because hardware is mattering less and less in the commodity server market, and it's bleeding over into the commodity appliance market. Hardware appliances were great – they did one task (or one category of tasks) very well, had minimal overhead, and were often cheaper than a full-blown server-and-software solution. Who uses their own server-based routers? Not many people. And yet Cisco has gone quite far in undocking much of IOS from the hardware, a move that (among other things) is good for virtualization. If you're looking for a security appliance, you could buy a Symantec hardware appliance, or you can download any number of similar appliances from VMTN. Inboxer makes an email archival hardware appliance. They also make a virtual appliance. Need a NAS or iSCSI SAN? These even come in virtual appliance flavors like the Openfiler appliance – and they're great for taking those old hanging-chad JBOD storage arrays off their legacy hosts, linking them up on a single host, and converting to centralized storage. Zeus' network traffic monitoring hardware appliances are now available as virtual appliances. This list of links goes on and on, and it shows an interesting trend: hardware appliances are giving way to virtual appliances across most of the market.

And like the big-rock-hits-earth scenario, it's happening fast. It's not flashy like a meteor strike, but it's just as quick – in a few short years, hardware appliances will take a backseat to the virtual appliance mammals. It's cheaper for vendors to work on the software and not have to integrate it onto hardware that can change from revision to revision, and it's as easy for customers to deploy and manage virtual appliances as it was to do the same with their dinosaur cousins. There's even an extra layer of manageability with virtual appliances, since you can manage the virtual hardware as well. A huge boon: business continuity. There's built-in DR/BC in virtual appliances that you just don't get in hardware. Wanna be ready for DR? OK, get all of those hardware appliances duplicated. Or take snapshots of your virtual appliances. Which is easier? Which is less expensive? Which is more rapidly verifiable?

What about the performance hit? In all but the most demanding cases, such as a core switch or the load balancer for the storage arrays of a Fortune 500's ERP systems, the roughly 10% performance degradation caused by virtualization is of minimal importance. Those demanding cases, then, are the crocs and sharks of the new era – highly specialized, long-term survivors that will continue to proliferate while the rest of the market enters the long sleep of the virtual asteroid impact.

Mitchell Ashley’s blog on the same subject takes a similar look here, and even uses some of the same analogies I use (I was quite grumpy when I found that mine wasn’t a very original thought, but such is what it is).



June 19, 2007  2:54 PM

Get a VMware job and increase your salary by 115%



Posted by: Ryan Shopp
Links we like, Virtualization, VMware

Alessandro Perilli, SearchServerVirtualization.com contributor and owner of virtualization.info, has a fantastic virtualization-related jobs section on his site. At the time of this post, positions are mostly for VMware gurus, but a few are for sales. Locations range from coast to coast in the US.

Check out the virtualization.info job board now — but not while your boss is watching… ;)

Another great site if you're looking for virtualization-related jobs is Indeed.

If you’re still not convinced you should switch jobs, vi411.com has a post from January 22, 2007 that says VMware salaries are 115% higher than average data center salaries.


June 18, 2007  2:34 PM

Linux users: Xen, VMware, or Virtual Server?



Posted by: Ryan Shopp
Microsoft Virtual Server, Red Hat, Servers, SUSE/Novell, Virtualization, Virtualization platforms, VMware, Xen

In Monday’s newsletter column, I included a question to our Linux users: Do you prefer Xen, VMware or Virtual Server, and why?

It's only Monday afternoon, but I've gotten some interesting responses. Chris, the CIO for Oxford Archaeology: Exploring the Human Journey, wrote:

In response to your question, we prefer VirtualBox, which offers a degree of flexibility that only VMware VI3 gets close to. Without the entry costs! We are currently working with VMware Server, and the lack of a Linux client for VMware VI3, along with its MS SQL dependency, prevented a planned migration to VI3. VirtualBox is the young upstart on the block: the list of features it currently lacks in comparison with ESX grows shorter at an alarming rate, and it is cross-platform, independent of hardware extensions (but can benefit from them), high-performance, and remarkably quick to get to grips with.

David of Code No Evil, LLC wrote:

I prefer VMware because it’s a non-free commercial product with support.  Microsoft, for example, doesn’t even list in their support site VPC 2K7 as a product.  As for Xen, I’m rarely a proponent of the OSS community.  As for VMware, my current support case just became a known bug # 154399.  Nice to know that VMware was willing to admit a fault in their platform and intends on fixing it. 

I asked him for clarification on the bug. Here’s what he said:

I am running Vista x64 on a Mac Pro. My intent was to run XP off the hard drive from my old machine (a Dell Precision 340) in a USB enclosure using VirtualPC 2K7. VPC crashed every time I attempted to access the virtual drive (mapped to the physical drive). Support is non-existent for VPC 2K7 because Microsoft doesn't even list it as a product at the support website. I even reached out to the "Virtual PC Guy", but he was no help either. At this point, I figured that I should try VMware Workstation. At least if it didn't work, I could open a support incident and I'd get some help. Well, long story short, there is a permissions issue that, despite going back and forth with VMware tech support (in India, nonetheless), was irresolvable even in VMware Workstation. The support overall was not bad. A few times I had to send an extra email to get them to wake up, but all-in-all it was satisfactory. The rep even called me because the issue became too difficult to talk about on the phone. Now, the real test is to see how long it will be before a fix is released. I would gather that it will be soon, because this bug precludes anyone from using a VMware virtual drive instance mapped to a physical drive on Vista. I would, as a developer, classify this as a critical defect.

Richard of OnX Enterprise Solutions Inc. wrote in suggesting Virtuozzo.

Chris, a system architect, wrote in with his preference for Xen:

I prefer Xen as it's free on Red Hat 5 or SuSE 10 for Linux environments. EMC ESX rocks, though, if customers can afford it, with its small Linux Red Hat kernel and the various tools for both Linux and Windows environments. MS Virtual Server is better for test labs and with Microsoft platforms. I had a very bad experience with MS Virtual Server and NetWare systems, although that's another OS.

Possibly, with improvements in MS Virtual Server, there might come a point where, if it's free with MS licenses and if MS really does support Red Hat and especially SuSE underneath it, it could move up within the server marketplace.

More to come. In the meantime, what are your thoughts, readers?


June 18, 2007  11:06 AM

Virtualization community responds to ESX Lite rumor



Posted by: Alex Barrett
Uncategorized, Virtualization, Virtualization platforms, VMware

In case you missed it, SearchServerVirtualization.com reported last week that VMware is developing an embedded "ESX Lite" hypervisor. And while VMware may have opted not to comment about it, the virtualization community has plenty to say on the topic. For one, Bob Plankers, over at The Lone Sysadmin, thinks that ESX Lite could save him money on server hardware:

So you have an ESX server that doesn’t need local disk. That saves you $300 for a RAID controller and about $300 per 15K RPM 146 GB disk. For my RAID 1 + hot spare configurations that’s $1200. No moving parts equals theoretical better reliability, though flash drives have a limit to the number of read/write operations they can do over their lifetime. Also very little power consumption, and very little heat. Without all the extra heat from the disks you can reduce the number of fans in the chassis, which further reduces the price and power draw.

I, for one, totally agree with this assessment. Spinning disk drives inside a server are a major bummer. Since the vast majority of ESX instances are already SAN-attached, why not go all the way and ditch the internal boot drives?
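
For what it's worth, the arithmetic in Plankers' estimate checks out. Here's the quick back-of-the-envelope version, using his figures:

```python
# Quick check of the hardware savings figures quoted above.
raid_controller = 300
disk_price = 300                  # per 15K RPM 146 GB disk
disks_in_raid1_plus_spare = 3     # RAID 1 pair plus a hot spare

savings_per_server = raid_controller + disk_price * disks_in_raid1_plus_spare
print(savings_per_server)         # 1200
```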

The flipside, said Fred Peterson, a system administrator writing on the VMTN message board, is that an ESX Lite appliance could not be reused like general purpose hardware:

Once it becomes “out dated” it has to be tossed, you wouldn’t be able to re-use as a test windows box or linux box or something. While not a bad thing, its life span to justify the upfront cost would have to be pretty good.

Over at MindSecure, a blog about “information security, virtualization, application delivery and storage,” ESX Lite is paired with Citrix Ardence, an OS streaming application, to positive effect.

Embedding ESX Lite in the hardware and using Ardence to stream the operating system would allow for complete hardware abstraction at the server and desktop level as well as the ability to remove spinning disk from servers and desktops, use solid state storage strictly on these devices, reduce storage utilization by using Ardence shared images, reduce cooling costs in the data center by using less disk, and many other advantages which these two solutions provide when paired together.

Scott Lowe on his blog says that ESX Lite has interesting competitive implications:

It’s smart because it derails Microsoft’s attempts to marginalize the hypervisor by bundling it with the operating system (via Windows Server Virtualization, aka “Viridian”). It’s smart because it expands the hypervisor market in new directions that no one else has yet tapped, helping VMware retain mindshare about its technical leadership and innovation. It’s smart because it’s the hardware vendors that have the most to lose via virtualization, and by partnering with them you remove potential future opponents.

But it's a strategy that has its risks, he points out: namely, if the embedded hypervisor doesn't perform as well as regular ESX, or if VMware loses visibility by going too deep under the hood.

Meanwhile, rumor has it that the original story has some inaccuracies in it, but like the old advertising saying ("I know half my advertising dollars are wasted – I just don't know which half!"), without official word, I can't speculate as to what's right and what isn't. An obvious possibility is that Dell is not participating with ESX Lite, or that the effort is not limited to just Dell. My gut tells me the latter is closer to the truth. Any thoughts are appreciated.


June 14, 2007  4:26 PM

Nat Friedman: Swing toward desktop virtualization favors Linux



Posted by: Jan Stafford
Desktop virtualization, Microsoft, Virtualization

In the past six months, every single IT exec or manager who discusses Linux desktops in a corporate setting with Nat Friedman asks about thin-client environments. That's why Friedman, co-creator of the open source Ximian desktop and CTO of open source strategies for Novell, predicts that desktop virtualization is going to take off faster than anyone has anticipated, and that Linux desktop adoption is going to increase rapidly as a result.

"The pendulum is swinging back, and there's an interest and need to centralize data for security reasons. IT managers and corporate execs don't want people to walk out with laptops holding, say, millions of Social Security numbers."

Centralizing desktop management via virtualization and thin clients holds the much-desired promise of easier management, Friedman told me in a recent conversation.

"There's a desire to have lower-cost manageability by having all your applications running centrally and making thin clients into dumb terminals. Virtualization plays a role there, because on the server you could host thousands of desktops and virtualize those sessions so they're all isolated from one another and run on an operating system that's transparent to users. Or, you can use multiple desktop apps running on multiple operating systems. You can have computers running OpenOffice, Firefox, Microsoft apps and so on, all displaying onto a single thin client. Virtualization makes it possible to dynamically allocate the resources for that. The desktop itself can run virtualization locally; developers do that. If you run Linux primarily and you want to run Windows for one app, virtualization is one way to get at that."

In a virtual desktop setting, Friedman concludes, IT managers will be able to choose best-of-breed, easiest-to-manage and lowest-cost applications and operating systems. He thinks Linux and the desktop applications that run on that platform will gain from this interoperability.

I agree with Friedman's views on how quickly desktop virtualization will be adopted. My team has been surprised by the number of IT managers who've expressed keen interest in moving forward with projects. I do think Linux will gain some users from this trend, but I think the key stumbling block will be getting IT shops to evaluate Linux-based desktop apps in the first place. Historically, they've taken the easy route: Windows and Microsoft apps.

What do you think? Let me know via your comments or an email to me at editor@searchservervirtualization.com.

For more of Friedman’s views on the desktop marketplace, check out this post on SearchEnterpriseLinux.com.


June 14, 2007  3:52 PM

Virtualization and next-generation grids: What’s really NG? What’s just a fad?



Posted by: Jan Stafford
Virtualization, Virtualization strategies

Does a grid by any other name smell as sweet? In years of covering grid computing technologies, I’ve seen the definition of “grid” changed to fit vendors’ products or the computing flavor of the month.

In general, I see the most basic function of grids as creating virtual communities of servers, applications and users. (Let me know if you see it otherwise.)

So, when I heard about virtualized service grids, I wondered: did the "virtualized" moniker just get added because virtualization is hot right now, or is this a real next-generation grid model? Well, there's a lot of activity in this space, as I've seen when reading the Virtualization and Grid Computing blog, which has been a great resource for me. I see, too, that vendors seem to be hopping on board. For instance, on the Inside HPC blog, I read that grid vendor United Devices is pursuing creation of virtualization products.

Recently, I asked Ash Massoudi, CEO and co-founder of NextAxiom, a virtualized service grid technology firm, some basic questions about virtualized service grids. Here’s an excerpt from our exchange:

What's the difference between traditional grids and virtualized service grids?

Massoudi: “The first difference is in programming models used by each. In traditional grid computing, it becomes a programmer’s responsibility, through the use of a dedicated library, to build an application that is designed to run on the grid. So, the programming model requires programming to the grid. In a virtualized service grid, software business and integration components are assembled using a Service-Oriented-Programming (SOP) technique that requires zero-knowledge of the computer resources. The application developer doesn’t need to explicitly identify the load and how it is allocated or to create work units accordingly. Each business or integration component is a service (implicit work unit) that can be composed of other services. The same Service Virtual Machine (SVM) that runs the final application will transparently externalize and distribute the service load across all available computer resources.

“Another difference is that service grid virtualization has a built-in concept of application multi-tenancy and thus favors scaling-up, through multiple-cores, over scaling out as is common with traditional grid computing.”
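
To make that contrast concrete for myself, here's a toy Python sketch of composing services without any grid-specific code; a local thread pool stands in for the grid. This is my own illustration, not NextAxiom's Service Virtual Machine:

```python
# Toy illustration of service composition with transparent distribution.
# This is my own sketch, not NextAxiom's SVM.
from concurrent.futures import ThreadPoolExecutor

def lookup_customer(order):
    return {"order": order, "customer": f"cust-{order}"}

def check_inventory(record):
    record["in_stock"] = True
    return record

def compose(*services):
    """A composite service is just services chained together; the runtime decides
    where each call actually executes (here, a local thread pool stands in for the grid)."""
    def composite(item):
        for service in services:
            item = service(item)
        return item
    return composite

process_order = compose(lookup_customer, check_inventory)

with ThreadPoolExecutor() as pool:      # the "grid" is transparent to the developer
    results = list(pool.map(process_order, [101, 102, 103]))
print(results)
```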

Why should IT managers take a look at service grid virtualization? What benefits can it bring to their companies?

Massoudi: "IT managers should consider service grid virtualization since it reduces TCO across human capital as well as machine resources. Also, the business and integration services that are programmed and virtualized on the service grid provide a way to directly tie their efforts to the tremendous business value that they are creating."

What type of company would use service grid virtualization?

Massoudi: “You need significant IT expertise to run and operate a Virtualized Service Grid (VSG). Large enterprises who already operate data centers and need composite and flexible applications across their existing legacy systems should think of owning and operating their own service grid.”

What type of IT infrastructure is a good fit for service grid virtualization, and for what apps is it appropriate?

Massoudi: "Multi-core processor architectures like the dual-core Intel Itanium 2 processor provide the most cost-effective and efficient foundation for Virtualized Service Grids. The more tenants you can run on a single machine, the higher the efficiency of the service grid. Service grids are most suited for creating any composite business application or business process that needs to integrate across departmental application silos or enterprises."

My research continues, as does the job of separating the wheat (real technologies) from the chaff (vendor hype). If you’re involved with virtualized service grids — either as a user or developer — or other next-generation grid models, please comment here or write to me at editor@searchservervirtualization.com.


June 13, 2007  2:38 PM

Parallels Server



Posted by: Joe Foran
Application virtualization, Joseph Foran, Uncategorized, VDI, Virtual machine, Virtualization, Virtualization management, Virtualization platforms

While browsing another blog, the famous virtualization.info, I came across a very interesting story about Parallels making an alpha code release of its new server-based product. As I mentioned in a slightly off-topic post, my ears are perked because of the interest Coherence has generated, with its seamless (almost Citrix-y) windows into the guest OS.

I'm really hoping to see what Parallels does with Coherence at the server level. While there are a plethora of ways to administer a heterogeneous server environment (ssh, rdp, vnc, mmc, e-i-e-i-o), Coherence in the mix of remote administration is an interesting proposal. How much further can it be taken – can it, instead of being host-based, become central-management-server based? Picture how Virtual Center allows remote administration of VMware virtual guests, from the virtual machine settings to the guests' interfaces, plus all of the other settings involved. Add in a ONE-SCREEN management interface, with everything packed off to a Coherence Manager, and imagine how much simpler things could become. Application management tools that don't work well over remote sessions, direct access to ini/conf/whatever files on a server without extra steps to get there, an organized toolset for administration that makes the mmc look tired… very interesting stuff.

Taking it to the next point, virtual desktops… Parallels supports DirectX and OpenGL (so does VMware's Fusion, but I liked the beta of that much less than Parallels Desktop after putting them both through the wringer). That support gets VDI a lot closer to getting over the hump of multimedia issues that bar its large-scale adoption. Just as Citrix and other thin clients never reclaimed the desktop from PCs, I don't doubt that virtual desktops will remain a niche market. I do think, however, that remote Coherence has the same opportunity as Citrix's ICA (and competitors' products as well) to be an excellent value-add for remote application deployment, right up to and including a full desktop. As it stands now, we have a number of users here with very old applications that don't work well under Windows XP, yet they just can't go away (some are government-mandated apps), so we use VMware Player to dish them up in a virtual OS. I'd like to use a Coherence-like product instead, to eliminate a lot of the headache associated with end-user education and change management. To take it one logical step further, I'd like to use a Coherence-like server-based product, to keep those virtual machines off the local desktops and under my department's management and deployment. If it means buying an Apple XServe or two to support it, so be it. We're a mixed Windows, Linux, and BSD shop as it is, so that wouldn't be a big deal in overhead and support. I imagine that's the case in many environments.

I'm hoping for a sneak peek, being the Parallels geek that I am.

