The Virtualization Room


August 7, 2007  8:45 PM

Novell announces new data center management solution: ZENworks Orchestrator 1.1

Profile: Ryan Shopp

LinuxWorld vendor news

The cry for better virtualization management tools has not gone unheard — at least not by Novell. Today at LinuxWorld, Novell announced a new release of its data center management solution: ZENworks Orchestrator 1.1. Not only does it improve management for a data center that incorporates virtual machines, it also manages (or “orchestrates,” if you will) both the physical and the virtual parts of the data center, doing so by overseeing a collection of management tools.

According to Novell’s press release, the 1.1 version should make implementation easier, and give users the ability to pick and choose which management tools are installed onto their systems. 

Orchestrator handles resource management, job management, dynamic provisioning, policy management, accounting and auditing, and real-time availability.

The 1.1 version features a new interface and full lifecycle management. The orchestration engine, which allocates overall data center resources, can be installed and run separately from specialized management components, such as virtual machine management.

Full management for SUSE Linux Enterprise from Novell running Xen virtualization is available with the new 1.1 version.

For more information, visit Novell’s ZENworks Orchestrator Web site.

August 7, 2007  8:15 PM

Some Common Sense Ideas When Designing a Virtualization Environment

Profile: Joe Foran

This is my attempt at putting together something that every sysadmin and supervisor should have when they look at whether to take the next step in virtualization – a Common Sense (tm, patent-pending, sm, r) guide to putting virtualization in place. Truth be told, it works with any server virtualization product, including VMware’s VI3, ESX 2.x, and Server, Virtual Iron 3.x, Xen 3.x, etc., but as I’m more familiar with VMware’s product line, that’s my default reference. I’m also taking some creative liberties and coining a new phrase – servirt, short for server virtualization. Let’s see if it catches on (no, I don’t really expect it to – it just doesn’t sound as cool as Bennifer or Brangelina).

Rule #1: Don’t put your firewall in your servirt environment.

This rule, along with any future “don’t put your X in your servirt environment” rules, is geared mostly towards security. If your host system is compromised, it’s not a far step before your guest systems are compromised. Given a proper topology, a compromised set of virtual machines is no more dangerous than a compromised set of standard machines, but throw in an outward-facing device and you have all the makings of the next NY Times poster child for data-protection reform. It is only a matter of time before serious servirt malware is designed to sneak into guests on an infected machine through the hypervisor (so-called guest-to-guest, G2G, or GtG attacks). Imagine this situation – a host running Exchange, Oracle, Siebel CRM, ISA, and Sharepoint as guests. The host is infected, finds the app servers, infects them with a data collector, then finds the ISA server, infects it, and uses it as a giant gateway to stream all of your customer records to www.iownzyercreditzfilezd00d.info over HTTP. All without hopping a physical box. To those who say I’m full of it, that such vulnerabilities are impossible or way off: all things are possible.

Rule #2 – Use your virtual switches wisely

It’s tempting to put everything on one virtual switch and then let the host handle the load. This doesn’t often get covered in a lot of howto documents, but it’s important. It’s particularly important in VMware VI3, because of the major improvements VI3 brings to virtual switching and the advantages present in those changes. Honestly, I think VMware should have made a bigger deal in marketing the switching improvements, but I’m not a marketing guru by ANY stretch of the imagination. With the recent Cisco/VMware rumors, the switching in VI3 may get the credit that it’s due. Anyway… Treat virtual switches as you would physical switches – create network segments with care and planning, VLAN when necessary (if your product supports it), and make sure that if you have a lot of host servers, your virtual networks align with the hosts. Packet storms should not take out your entire virtual environment!
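To make the alignment point concrete, here’s a minimal planning sketch (my own illustration, not a VMware tool or API): declare the port groups and VLAN IDs each host’s virtual switches are supposed to carry, then flag any host whose networks don’t line up with the rest. The host names, port groups, and VLAN IDs below are all hypothetical.

```
# Minimal layout-planning sketch: declare the intended port groups and VLANs
# per host, then check that every host presents the same networks, so HA/DRS
# or VMotion never lands a guest on a host that is missing its segment.

# Hypothetical layout -- names and VLAN IDs are examples, not your environment.
planned_layout = {
    "esx01": {"vSwitch0": {"Management": 10, "Production": 20},
              "vSwitch1": {"DMZ": 30}},
    "esx02": {"vSwitch0": {"Management": 10, "Production": 20},
              "vSwitch1": {"DMZ": 31}},   # mistyped VLAN -- should be caught
}

def flatten(host_cfg):
    """Collapse a host's switches into {port group: vlan} for comparison."""
    seen = {}
    for switch, port_groups in host_cfg.items():
        for pg, vlan in port_groups.items():
            if pg in seen:
                raise ValueError(f"port group {pg!r} defined on two switches")
            seen[pg] = vlan
    return seen

def check_alignment(layout):
    """Report port groups missing from a host or carrying a different VLAN ID."""
    baseline_host, baseline_cfg = next(iter(layout.items()))
    baseline = flatten(baseline_cfg)
    for host, cfg in layout.items():
        current = flatten(cfg)
        for pg, vlan in baseline.items():
            if pg not in current:
                print(f"{host}: missing port group {pg!r}")
            elif current[pg] != vlan:
                print(f"{host}: {pg} is VLAN {current[pg]}, "
                      f"but {baseline_host} has VLAN {vlan}")

check_alignment(planned_layout)   # prints the esx02 DMZ mismatch
```

However you track it, the point is the same: if a guest can land on a host that is missing its network segment, the layout wasn’t planned – it just happened.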

Rule #3 – Don’t mess with system requirements. Ever.

Not on the guests. Not on the hosts. Not on the management boxes. Not ever. Very often it can be tempting to put less memory into a guest machine than optimally necessary in order to conserve limited physical memory on the host. Sometimes it’s even tempting to save a thousand dollars on a server (particularly if the spec you optimally need is a thousand dollars per server over your allocated budget) by cutting out memory, dropping the CPU down to a slightly slower model, using lower-rpm hard drives, etc. DON’T DO THIS! It may be fine, but it can also come back to haunt you. In the guest scenario, it may seem easy to say “If I hit a problem, I’ll just up the guest’s RAM,” but it’s a lot tougher saying that when the physical machine is maxed out and full of other in-production guests using up all that RAM. I’ve done this. It sucked. I had to take down a host server, which impacted five departments, including finance, because I tried to squeak through without spending the dollars at a time when I was under a serious budget crunch. Oh, and the guest server in question that needed more RAM? A Jabber-based internal instant messaging server. Not exactly mission-critical, but it had a high profile because it was very visible to the entire company every time it mem-locked and dumped out. Lesson learned.
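If it helps to put the rule in numbers, here’s a back-of-the-envelope headroom check (my own sketch, not any vendor’s sizing tool): spec every guest at its recommended RAM, set aside an allowance for the hypervisor, and see whether the host you’re about to buy can actually carry the load. Every figure below is an assumption to replace with your own.

```
# Back-of-the-envelope headroom check (an illustration, not a vendor tool):
# spec every guest at its *recommended* RAM, reserve some for the hypervisor
# and service console, and refuse the plan if the host can't carry it.

HOST_RAM_GB = 16               # what the budget bought
HYPERVISOR_OVERHEAD_GB = 1.5   # rough allowance for the host itself -- an assumption

guests_gb = {                  # hypothetical guest list with recommended (not minimum) RAM
    "exchange": 4.0,
    "crm-db": 4.0,
    "sharepoint": 3.0,
    "jabber-im": 1.0,
}

needed = sum(guests_gb.values()) + HYPERVISOR_OVERHEAD_GB
if needed > HOST_RAM_GB:
    print(f"Short by {needed - HOST_RAM_GB:.1f} GB -- buy the RAM now, "
          "not after these guests are in production.")
else:
    print(f"{HOST_RAM_GB - needed:.1f} GB of headroom left for growth.")
```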

Rule #4 – There is NEVER a rule #4.

This is from an old USENET post, and I can’t find the reference to link it to. It was funny back in the day, and I’ve kept it up since.

Rule #5 – Use the freebies!

XenSource XenExpress, Xen itself, VMware Player, VMware Server, Virtual Iron Single Server Edition, and a host of other similar applications are free to try, and free to use. Some require host OS licenses that aren’t free (ahem, Microsoft, that’s you guys…) but most will run on free OSes like Linux and/or FreeBSD. I have a whole lab set up with VMware Server on CentOS Linux, and it works great. We use VMware Player to distribute some legacy applications that don’t play well on XP and/or Vista. Also, don’t forget about the free P2V tools out there. VMware’s free P2V converter is great – almost as powerful on the P2V side as enterprise products like Platespin’s PowerConvert. While we wait on new hardware to test Virtual Iron, we’re using a great freebie tool that we found here to get a jumpstart and convert some of our test-lab VMware machines to Microsoft’s VHD format, which we will then import into Virtual Iron. Before we even decided to do virtualization (OK, after we decided virtualization fit the business/financial/technical needs of the company, but before we committed to it) we used demo versions of VMware as a proof of concept. The point is, there’s not a stage in your servirt environment’s development that can’t benefit from the judicious application of a little frugality. Except when it comes to system specs (see rule #3).

Rule #6 – Read the Famous Manuals

And the white papers. And the Wikipedia entries. And the promotional marketing material (if you’re into that kind of pain). In the case of Virtual Iron, read the forums… you might just find the install and admin docs there (yes, that’s a criticism of your website, Virtual Iron). Read whatever you can on the subject of your servirt environment. For example, when the company I’m with went looking at VI3, I read through a ton of literature and came across an HP document that was immensely valuable. I also found this chart very useful, albeit one that’s becoming outdated. When we first embarked on our trip towards virtualization, there must have been a gigabyte of material on my hard drive about VMware’s product offerings (at the time, that consisted of ESX and GSX). As we’ve progressed, I’ve accumulated a small library of PDF files, demo software, and links. Small like the New York Public Library is small.

Rule #7 – Don’t put all of your Active Directory domain controllers on the same hosts.

If you do, you’re in for trouble when a host falls over and goes boom. And they do, once in a while, fall over and go boom. Or they may face a G2G attack, in which case your entire AD environment is hosed. If you’re a Novell shop, good for you, but don’t put all your eDirectory servers on one host either. Red Hat Directory Server shop? See above. If you’re using VMware’s VI3, make sure that HA/DRS is configured to prevent all your directory servers from being on the same host, because even if you design it so it won’t happen by laying out the AD controller guests on different hosts, you’re just a slice of probability and a few resource utilization spikes away from DRS putting them all on the same server for you. Me, I leave one AD controller out in non-virtual land just because I can (OK, because I have a spare old server that does nothing else).
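DRS anti-affinity rules are the right mechanism for this in VI3; as a belt-and-suspenders check, a quick audit like the sketch below catches drift after the fact. The placement data here is hypothetical – in practice you’d pull it from VirtualCenter or whatever inventory tool you trust.

```
# A quick audit sketch (placement data is hypothetical): flag any host that is
# carrying more than one directory server, which is exactly the situation a
# DRS anti-affinity rule is meant to prevent.
from collections import defaultdict

directory_vms = {"dc01", "dc02", "dc03"}         # your AD/eDirectory guests

current_placement = {                            # vm -> host, example data only
    "dc01": "esx01",
    "dc02": "esx02",
    "dc03": "esx02",   # drifted onto the same host after a DRS rebalance
    "jabber-im": "esx01",
}

hosts = defaultdict(set)
for vm, host in current_placement.items():
    if vm in directory_vms:
        hosts[host].add(vm)

for host, vms in hosts.items():
    if len(vms) > 1:
        print(f"{host} is carrying {sorted(vms)} -- one bad host and "
              "you lose more than one domain controller.")
```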

Rule #8 – Document Everything

The usual rule of document everything goes here, like it does everywhere. I won’t go into the obvious points, but there are a couple of not-so-obvious points that need to be mentioned. Naming conventions… there’s been some good talk about this, and I won’t repeat it, but remember to name your servers appropriately so when you do a quick network scan you can tell what’s what from the resolved names. Remember, not all management happens in a console, even if you are 100% virtual. What’s supposed to be where… this can change a lot in a well-designed servirt environment because of HA/DRS and similar tools, but document the starting points for all virtual servers and take regular performance metric updates to see what has moved where and why it’s moved.
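On the “quick network scan” point, even something as small as the sketch below tells you whether your naming convention is pulling its weight: reverse-resolve a subnet and read the names that come back. The subnet is an example, and anything without a PTR record is silently skipped.

```
# A minimal "quick network scan" sketch: reverse-resolve a /24 and print
# whatever names come back. If the naming convention is any good, this
# listing alone tells you what's what.
import socket

SUBNET = "192.168.1."     # example subnet -- substitute your own

for last_octet in range(1, 255):
    ip = SUBNET + str(last_octet)
    try:
        name, _, _ = socket.gethostbyaddr(ip)
        print(f"{ip:<15} {name}")
    except (socket.herror, socket.gaierror):
        pass              # no PTR record -- nothing answering, or undocumented
```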

Rule #9 – Switches and NICs, Switches and NICs, Gonna Get Me Some Switches and NICs.

This is about NICs, really. Lots of NICs. You can never have enough NICs. Fill every slot you can with NICs. Have the available external switch ports to support all those NICs. Why? Because some applications will eat your bandwidth like it was Kibble n’ Bits. That means some virtual machines will choke others, given the chance. To get off the ’80s dog-food-commercial metaphor and onto a gardening metaphor, bandwidth is like sunlight, and some apps are like pretty weeds. They soak up everything they can, leaving little for others. You can’t kill them, either. If you find you’re in a situation like this, having lots of NICs in your server can make all the difference, because now you can dedicate one to the virtual machine weed you’ve got and essentially transplant it away from the rest of the environment. Some care needs to be taken with HA/DRS, but in that case you need to look more at teaming and aggregating those many NICs and switch ports properly.
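Finding the weed is half the battle. Here’s a small sketch of the idea (the throughput numbers are made up – in practice you’d read them from esxtop or your monitoring tool): rank the guests by network usage and flag anything taking an outsized share of the pipe as a candidate for its own NIC and virtual switch.

```
# Spotting the "weed": given per-VM network throughput samples (hypothetical
# numbers here), rank the guests and flag anything taking an outsized share
# of the pipe as a candidate for its own NIC / virtual switch.

samples_mbps = {          # average throughput per guest -- example data only
    "backup-staging": 610.0,
    "crm-db": 85.0,
    "sharepoint": 40.0,
    "jabber-im": 2.0,
}

total = sum(samples_mbps.values())
for vm, mbps in sorted(samples_mbps.items(), key=lambda kv: kv[1], reverse=True):
    share = mbps / total
    flag = "  <-- transplant to a dedicated NIC/vSwitch" if share > 0.5 else ""
    print(f"{vm:<16} {mbps:7.1f} Mbps  ({share:5.1%}){flag}")
```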

Rule #10 – Storage

In some cases, violate rule #5. In our lab, we started with the freebie OpenFiler as an iSCSI solution, until it came time to test VMotion. Sometimes it went boom. Other times it was fine. We couldn’t figure out why until we followed rule #6 and found out it was a problem with IET’s (the iSCSI target under OpenFiler) use of SCSI commands vs. VMware’s interpretation. The point being, this is an extension of rule #3… only about storage. Having the right storage environment is crucial… you can have your servirt environment set up 100% perfectly, but if your storage isn’t 100% perfect you’re going to run into all kinds of problems with moving guests around. Since that’s the whole POINT of virtualization’s DR advantage, having a bad storage strategy is essentially having a bad DR strategy. That is to say, anything under 100% on both is an F… because in DR there are no Bs, Cs, or Ds… just an A+ and an F. For what it’s worth, the problems with OpenFiler and VMware seem to have been fixed at this point, and we’ve gone back to using it in test environments for possible production use once we have 100% confirmation.

Well, that’s it for now… another set of these common sense ideas will probably be forthcoming (maybe after I’ve finished playing with Virtual Iron in the lab and actually get around to posting that long-promised review).


August 7, 2007  9:59 AM

Confirmed: VMware and NetApp sitting in a tree

Profile: Alex Barrett

Not to brag, but my recent story, ‘VMware and NetApp in cahoots?‘ has been corroborated this morning by a Network Appliance press release, which states that the two companies are collaborating on joint engineering, marketing, service, and support.

Specifically, NetApp cites development work on advanced application mobility, backup and recovery, and disaster recovery.

The release goes on to say that the two companies “are also targeted at further simplifying disaster recovery, streamlining test and development environments, optimizing storage utilization of virtual desktop consolidations, and ultimately creating a tightly integrated server-to-storage virtualization solution…”

The two companies are also working out an enterprise support agreement for streamlining case management when issues arise in joint VMware/NetApp environments.

NetApp’s fortunes took a turn for the worse in the past week, as the company’s stock plummeted after announcing that it had missed its numbers for the quarter. Not to be too cynical, but could this announcement be an attempt by the reeling storage vendor to associate itself with red-hot, pre-IPO VMware? Not a bad move, if you ask me.


August 2, 2007  1:22 PM

The case for chargeback and virtual appliances

Profile: Alex Barrett

One of the nice things about covering a hot topic like virtualization is that you get to talk to people who are really enthusiastic. Yesterday, Alex Bakman, founder and CEO of a new company called V-Kernel came to my office, and he was as fired up about the virtualization market as any vendor I’ve met in a while.

What V-Kernel has done so far is relatively straightforward. They’ve developed capacity and chargeback software that monitors the CPU, memory, network and disk resources consumed by VMware virtual machines, and maps those resources to a so-called business service – a grouping of VMs that are performing a business function, like “email” or “CRM.” From there, V-Kernel can generate chargeback reports.
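To make that concrete, here is a rough sketch of what chargeback by “business service” boils down to (this is my own illustration, not V-Kernel’s code, and every rate, VM name, and number is made up): roll per-VM consumption up under a service, apply unit rates, and total the bill.

```
# Not V-Kernel's code -- just a sketch of chargeback by "business service":
# group per-VM resource consumption under a service, apply unit rates, and
# total it up. All rates, names, and figures are invented for illustration.

rates = {"cpu_ghz_hr": 0.05, "ram_gb_hr": 0.02, "disk_gb_mo": 0.10}

vm_usage = {   # vm -> (service, CPU GHz-hours, RAM GB-hours, disk GB-months)
    "exch01":  ("email", 900.0, 2200.0, 250.0),
    "exch02":  ("email", 850.0, 2100.0, 250.0),
    "crm-app": ("crm",   400.0, 1500.0,  80.0),
    "crm-db":  ("crm",   700.0, 3000.0, 300.0),
}

bills = {}
for vm, (service, cpu, ram, disk) in vm_usage.items():
    cost = (cpu * rates["cpu_ghz_hr"]
            + ram * rates["ram_gb_hr"]
            + disk * rates["disk_gb_mo"])
    bills[service] = bills.get(service, 0.0) + cost

for service, cost in sorted(bills.items()):
    print(f"{service:<8} ${cost:,.2f}")
```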

Now, the idea of chargeback has always been nice in theory, but in my limited experience, things never really materialized. Bakman has an explanation. In the world of one-server-to-one-app distributed computing that virtualization is supplanting, chargeback was kind of silly, since resources weren’t shared, and it was easy to tell who owned what application. However, if you look at the mainframe – a massively shared resource – chargeback is alive and well. And when you look at today’s high-end x86 platforms, “I don’t care what you call them, they’re mainframes,” he said. As expensive shared resources, Bakman believes IT will move quickly to push back the cost of virtual infrastructure to those departments that are using them.

Virtualization also has the developer and independent software vendor (ISV) in Bakman all excited. Unlike previous systems management software he’s developed, the big thing that differentiates V-Kernel is that it’s delivered as a virtual appliance.

In V-Kernel’s case, that appliance includes a (stripped) SUSE Linux operating system, Apache, Tomcat, MySQL, and a Java/Ajax application server, all in 420MB that can be “dropped” right on any VMware host. For updates, the appliance includes a one-button self-update feature that “calls home,” as it were, for any new security patches or code updates. And because it’s just a virtual machine, “it’s infinitely scalable,” Bakman said. To add more performance, just assign more memory or virtual CPUs to the VM.

The fact that V-Kernel runs Linux is important to data center managers because that’s the same reliable operating system that powers a lot of the other applications in the data center. Plus, writing the application to run on Linux instead of Microsoft sidesteps a lot of licensing fees. “If I were to ship it on a Microsoft OS, there would be licensing fees for every appliance I ship,” he said. Microsoft’s current licensing policies are already conspiring to turn smaller ISVs away from developing for the Windows platform, he said.

Furthermore, the advent of Java/Ajax means that developers don’t need to give up any functionality to run over the Web. “Ajax is an absolute Microsoft killer,” Bakman said.

For now, V-Kernel is available only as a VMware virtual appliance, and as Bakman sees the market, there’s no reason to change that – for now. Over time, Bakman’s vision is to add more systems management appliances to V-Kernel’s bag of tricks. In the meantime, you can download a beta at http://www.vkernel.com.


August 1, 2007  1:50 PM

Virtualization and the Singularity

Profile: Joe Foran

I’m waxing philosophical for a bit here… What is the future of virtualization? What role will it play in the unfolding technological evolution of our society? Is the Technological Singularity really coming, and if it does, how will virtualization be important?

For those eternally-optimistic about our species’ future, there’s a great book that incorporates incredible optimism and the predicted result of the merger between biology and technology called The Singularity is Near: When Humans Transcend Biology, by Ray Kurzweil (the same Ray Kurzweil who first made the electronic piano sound like, well, a piano). I’ve been reading it, and two technologies keep popping into my head to solve some of the problems that crop up when I try and wrap my mind around the many charts and predictions in the book – namely grid computing and virtualization. I highly recommend reading this little gem, although I also recommend taking a lot of his timelines with a grain of salt. More on that later…

It’s my opinion that out of all the many technologies that will push the man-machine merger that Kurzweil so eloquently professes is coming, systems-level virtualization will be the most important, followed closely by grid computing. Many of the calculations RK uses to describe the increasing power of computers are based on Moore’s Law, his own Law of Accelerating Returns, and other mathematical formulae. One thing Kurzweil doesn’t take into account in the book is the impact of utilization levels, which are far, far below the exponentially increasing numbers he presents in his prediction of future raw computing power. He may have considered this and come up with the same conclusion I have (or another, better one), but it’s not in the text, so I’ll go on with the assumption that it’s a blank slate. We’re going to have that raw computational power, sure, but we’re not going to make the best use of it (at least until our own intelligence is enhanced by non-biological intelligence, which he predicts in the 2030s time frame) without virtualization and grid technology. Virtualization will do what it was originally marketed for – take that pile of 5-20% utilized resources and merge them together to reach as close to 100% as possible. When it comes to controlling nanofactories (which take component elements and make goods from them – like literally making a car from atoms, or for that matter making a 100% real steak without a cow, or making replacement organs from within your own body), expanding virtual reality into realms as immersive as the real world (or more so), even expanding our own consciousness directly into nonbiological substrates, there are going to have to be computers, powerful computers, running the show. And not all of these computers will be 100% active all the time. Add in the upcoming pervasive grid computing that will eventually connect all spare computing resources, and you can see the power of virtualization. Virtualized systems on hardware – ensuring portability between platforms, maintaining the integrity of a system’s purpose, and housing what is needed where it’s needed – combined with raw computational resource sharing via a grid, will maximize the potential of our raw computing power.

The disaster-recovery-friendliness of virtualization is going to make a huge difference in bringing about the singularity as well. One of RK’s predictions – and he is clear that he believes this is our destiny – is that we will overcome our biology and become what others call posthuman (he does not use this term, seeing humanity as consciousness, whatever the underlying origin, and independent of hardware, software, or material). He foresees the gradual transformation of humanity through the increasing use of medical implants (such as are used now to replace limbs and hearts, stop seizures, restore sight, restore hearing, etc.), cosmetic implants (anyone ever seen the Tiger Man?), and eventually nanomedicine – nanoscale machines working in conjunction to replace entire biological systems (such as the circulatory, respiratory, digestive, and even nervous systems). What are we then if not the Ghost in the Shell? And what happens when a shell crashes, as it will? That’s what DR is for, and that’s where virtualization plays such a key part. Suppose you have a severe blood disease and in the future you are able to replace your blood with respirocytes (tiny machines that act like blood cells). What happens when there’s a massive systems failure of the OS controlling the little things? Probably not much, because that’s an application for a distributed OS on a grid system. What happens down the road when your consciousness has moved, via the natural process of extending your life by replacing failing organs, to a completely computerized substrate, and THEN the underlying hardware fails? That gets trickier. Essentially, you die. Unless of course you happen to have VMotion / LiveMigrate / FutureVirtualizationDRTool’sNameHere, at which point you smoothly slide over into the next bank of hardware and keep on living.

This all assumes you consider computerized consciousness living (I do).

Sound a little far-fetched? It does, doesn’t it. But then again, I’m reminded that if I took many mundane items back a few hundred years in time, I would be worshipped as a god for my ability to cure, kill, and perform feats that defy the “laws” of nature as understood at the time. As Arthur C. Clarke once said, “Any sufficiently advanced technology is indistinguishable from magic.” This isn’t to say I agree with Ray Kurzweil’s timeline – I don’t. He’s a self-professed optimist, and while technology will undoubtedly advance unabated by boom or bust just as it has always done, I think he forgets about the pessimistic side of human nature, particularly greed – those who develop the technology to power the singularity first will hoard it, patent it, and sell it for exorbitant sums of money for a very, very long time before competition is able to thrive and prices come down to the point where the panacea for life’s many ills is within everyone’s reach. The price of immortality is immeasurable – and the rich and powerful will pay dearly for it. Dearly enough that it won’t be economical to sell to the masses of ordinary folks for a very, very long time, and for a sadly long time people all over the world will have to endure disease, hunger, and death.

But when it does come about, rest assured that the Technological Singularity will be enabled by virtualization. It may not look like the virtualization we know today, much like Mac OS X looks nothing like PWB/UNIX, but it will still play the same underlying role it plays today. So that’s my two cents on virtualization in the future – not much different than it is now, but oh-so-important to what will be.


July 31, 2007  6:27 PM

Is virtualization where Linux will top Windows…perhaps stealthily?

Profile: Jan Stafford

Virtualization is the theme of the LinuxWorld 2007 Conference & Expo this year, and that’s as it should be, according to industry veterans I’ve interviewed recently. Virtualization is a big boon for Linux adoption and a way to steal some of Microsoft Windows’ thunder, they say. Overall, they agreed that succeeding in that space is a make-or-break proposition for Linux.

“It’s appropriate that LinuxWorld focuses on virtualization this year, because virtualization is a must-have for Linux,” said Jim Klein, Information Services and Technology Director for Saugus Union School District in Santa Clarita, Calif. “Without virtualization, Linux will fade away in the data center.”

Klein doesn’t think that doomsday scenario is going to happen, however.

“From my experience, Linux and open source virtualization technologies are top-notch, certainly superior to Microsoft’s and reaching parity with VMware’s. The openness of Linux virtualization technologies makes it easier to run multiple operating systems in one box.”

On the other hand, some industry vets think that Linux and Xen in its various forms have a lot of catching up to do, and they hope to see some significant announcements at LinuxWorld. Summing up this side of the equation, Alex Fletcher, principal analyst for Entiva Group Inc., said:

“Xen is definitely mature enough to warrant consideration by corporate accounts. Recent moves, such as Red Hat Enterprise Linux (RHEL) 5.0 adding Xen as its fully integrated server virtualization functionality, are intended to spur corporate adoption of Xen, but will need time to play out. Granted, RHEL is a fully robust operating system, but this is the first release that’s included Xen, giving risk-averse decision makers reason to hesitate. Efforts like libvirt, an attempt to serve as a stable C API for virtualization mechanisms, have potential but need to mature.”
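For readers who haven’t looked at libvirt yet, it already exposes language bindings on top of that C API. The sketch below uses the Python binding to list running guests on a local Xen host; it assumes the libvirt Python package is installed and a Xen hypervisor is actually running underneath.

```
# A minimal sketch of the libvirt Python binding, which sits on top of the
# C API Fletcher mentions. Assumes libvirt's Python bindings are installed
# and a local Xen hypervisor is running; 'xen:///' is the local Xen URI.
import libvirt

conn = libvirt.open("xen:///")
print("Hypervisor:", conn.getType(), "version", conn.getVersion())

for dom_id in conn.listDomainsID():          # IDs of running domains
    dom = conn.lookupByID(dom_id)
    state, max_mem_kb, mem_kb, vcpus, cpu_time_ns = dom.info()
    print(f"{dom.name():<20} vcpus={vcpus} mem={mem_kb // 1024} MB")

conn.close()
```

The appeal is that the same handful of calls is meant to work regardless of which hypervisor sits underneath, which is exactly the kind of stability Fletcher is describing.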

Then again, others said, many factors weigh in Linux’s favor in the virtualization arena. For one thing, RHEL and SUSE are very robust enterprise-level operating systems. For another, Linux is not fully dependent on Xen’s success, because VMware is optimized for Linux. The proven reliability of Linux in data center deployments is another plus. Indeed, consultant and author Bernard Golden believes that virtualization will pave the way for wider usage of Linux. Virtualization makes stability much more important, he said, because after virtualization more systems run on a single piece of hardware. In this situation, he thinks Linux is a better choice than Windows, as Linux has a better track record for both stability and uptime.

Virtualizing Windows-centric applications on top of Linux is a good path to follow, said Golden, author of the soon-to-be-released book, Virtualization for Dummies:

“For those companies that need to move aging Windows applications onto new hardware and want a more stable underlying OS, virtualizing Windows on top of Linux is a perfect solution. Also, Linux’s scalability marries well to two trends driving virtualization: the increasing power of hardware and Linux’s ability to scale across multi-processor machines.”

Microsoft-centric IT organizations probably won’t rush into virtualizing on Linux. In particular, said Golden, sticking with Windows could suit companies that are not ready to make a full commitment to building a virtualization-based infrastructure. He explained:

“The upcoming virtualization capability in Windows Server 2008 — and beyond, given that much of the previously-targeted functionality for Server 2008 has been dropped — will enable [those organizations] to extend the life of aging Windows-based apps. Of course, being able to extend the life of those apps will, to some extent, reduce pressure to migrate those apps to Linux or replace those apps with Linux-based apps.”

Such IT organizations usually move to virtualization using their existing hardware, rather than bringing in more modern, highly scalable hardware, said Golden. In these cases, there is less need to move to Linux. This strategy and the efficacy of using old hardware will be short-lived, in his opinion.

Microsoft-centric shops will also be encouraged to stay that way if Microsoft delivers the promised virtualization-friendly licensing terms for its upcoming Longhorn-plus-hypervisor release, said John Lair, business development manager for Prowess Consulting.

Linux may not gain even if Microsoft’s operating system and virtualization platform price tags are more than those of Linux and, say, Xen, according to Fletcher.

“There is a chance that the savings gained from consolidation will actually work to make Linux’s lower software acquisition costs less of a selling point,” Fletcher said. “Higher licensing costs for Windows aren’t as much an issue when fewer servers are running.”

Then again, Lair and others noted, virtualization will probably decrease the importance of operating system (OS) selection, shifting attention to application and virtualization platform choices. Kamini Rupani, product management director at Avocent, summed up this side of the equation, saying:

“Virtualization doesn’t help or hinder adoption of either Linux or Windows on the server side, because virtualization isn’t directly related to operating systems. Virtualization is about the hardware, about adding more virtual machines running on top of an existing hardware environment.”

In this point-counterpoint discussion, others said that Linux stands to gain even if virtualization devalues OS selection. These folks think that Linux will be the power, or platform, behind the scenes in virtualized environments.

“Linux is so easy to use and reliable that I think it will be used ubiquitously and not get much attention,” said Klein. “People won’t care that their VMs are running on Linux. Choosing Linux will stop being a big deal. Also, I believe that the majority of virtual appliances will be running on Linux, so that people will just drop them in without a thought about which operating system is inside.”

If this scenario plays out, Linux will return to its roots as a stealth OS. IT managers brought Linux into IT shops through the proverbial back door to use for applications that didn’t need top-level approvals. While it moved up to a more visible position in data centers, Linux also infiltrated cell phones and numerous other devices without fanfare. Today, Linux appears to be a front-runner as ISVs’ top OS choice for virtual appliances. Perhaps even Microsoft’s resistance is futile.


July 26, 2007  10:03 AM

Server virtualization pushes storage virtualization

Profile: Ryan Shopp

We recently published a story by Alex Barrett about NetApp and VMware working together so that clients can get a consistent copy of a VMware virtual machine using NetApp’s snapshots, which use the disk array.

Then, Byte and Switch published a story about HBA vendors such as Emulex and QLogic pouring their energy into improving storage virtualization with virtual HBAs (see Virtual HBAs Hitch Servers & Storage). From the story:

“Virtual HBAs are supposed to make it easier to manage VMs in SAN environments.”

Has the server virtualization inferno finally caused a storage virtualization spark in the virtualization industry? You be the judge.


July 24, 2007  3:24 PM

Virtualization Fashion Update: Thin is In!

Profile: Marcia Savage

IT professionals may wear many hats in their organizations, but we tend not to be known for our fashion sense.  To assist in that area, I’d like to cover one of the latest styles in virtualization: the return of the thin client.  Case in point: see Alex Barrett’s coverage of HP’s acquisition of thin-client vendor Neoware, Inc.: Virtualization informs HP’s Neoware Acquisition.  It’s time for traditional fat desktops to start becoming even more self-conscious.  Of course, thin clients never really went away – they’ve been around since the popularization of “network computing,” which started in the late ’90s.  Lest any of you commit a social faux pas while strutting down your data center’s loading ramps, I wanted to point out some of the issues that prevented the predicted takeover of thin clients:

  • Cheaper desktops:  Reducing hardware acquisition costs was a goal for thin client proponents.  As desktop computers hit the sub-$500 range, however, the cost advantages of using thin client computers became far harder to justify. 
  • Fatter apps and OS’s: A while ago, I heard someone ask the most pertinent question I’d heard in years: “Is hardware getting faster faster than software is getting slower?”  The answer, my friends, seems to be “no”.  As hardware gets more capacity, OS’s and applications tend to swallow it up like a supermodel at a salad bar.
  • Single points of failure:  Thin clients (and their users) rely on centralized servers and the network that allows access to them.  Failures in these areas mean major downtime for many users. 
  • The Application Experience:  Remote desktop protocols could provide a basic user experience for the types of people that use a mouse to click on their password fields when logging on to the computer.  Single-task users adapted well to this model.  But what about the rest of us?  I’d like the ability to run 3-D apps and use all of my keyboard shortcuts.  And, I’d like to be able to use USB devices such as scanners and music players.
  • Server-side issues:  Server-side platforms from Citrix, Microsoft, and other vendors had limitations on certain functionality (such as printing). 

So, is it possible for these super-skinny client computers to address these issues?  I certainly think it’s possible.  Server and network reliability have improved over the years, forming a good foundation.  Thin clients are inexpensive, and server-side hardware and software have improved in usability features.  For example, Windows Server 2008’s Terminal Services feature provides the ability to run specific applications (rather than the entire desktop) over a remote connection.  And multi-core processors that support large amounts of RAM help enable scalability.  Overall, thin clients are cheap dates, they’re more readily available, and they’re less needy than in the past.  What IT admin wouldn’t like that?  Only time will tell if this relationship will last.

Oh, and one last fashion tip: Don’t throw away your old fat clients just yet.  Like so many other fads, they may be back in style sooner than you think.  Order a slice of cheesecake and think about that!


July 23, 2007  7:19 PM

Server consolidation via virtualization: Advice on pitches, multi-purpose server conversion and P2V

Profile: Jan Stafford

Burton Group analyst Chris Wolf shared some good advice about consolidating servers with virtualization in our recent interview. Here are some quick tips gleaned from our conversation, along with some more-info links and questions for you about these topics.

Making a pitch

Make these key points when pitching server consolidation via virtualization to upper management:

  • Virtualization is a means to running fewer physical servers and, thus, consuming less power in the data center (see the rough numbers sketched after this list).
  • With fewer physical servers, hardware maintenance and upkeep costs go down.
  • Virtualization increases server availability via dynamic failover enacted at the virtual machine level. So, any application running in a VM can support high availability, and that is a big difference between virtualization and traditional clustering solutions.
  • (Have you made this pitch? What did you say? What were the results? Let me know in the comments below or by writing to jstafford@techtarget.com.)
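To put rough numbers behind the power point above, here is a back-of-the-envelope sketch. Every figure in it (server counts, wattage, cooling allowance, electricity rate) is an assumption to swap for your own, not data from Wolf or anyone else.

```
# Back-of-the-envelope numbers for the power argument. Every figure here is
# an assumption to plug your own values into, not a measurement.

servers_before = 20
servers_after = 4                 # consolidation target
watts_per_server = 400            # average draw including fans/PSU losses
cooling_multiplier = 1.5          # rough allowance for cooling on top of IT load
kwh_cost = 0.10                   # dollars per kWh
hours_per_year = 24 * 365

def annual_power_cost(servers):
    kwh = servers * watts_per_server * cooling_multiplier * hours_per_year / 1000
    return kwh * kwh_cost

savings = annual_power_cost(servers_before) - annual_power_cost(servers_after)
print(f"Roughly ${savings:,.0f} per year in power and cooling alone.")
```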

Converting multi-purpose servers to VMs

Watch out. This is tricky territory, says Wolf.

“When I have multi-purpose servers, I generally want to take each application or service on that server that I need and run it as its own VM instance. So, in those cases, you are better off manually reprovisioning those services as separate virtual machines again, because in a dynamic failover environment, the VM itself is the point of failover. So, if I have a multi-purpose server, if I am looking at failover, every application on that server is going to be off-line for the period of the failover. If I have a single application per virtual machine, if the VM fails over now, only a single application would be down.”

(Wolf talks more about this process in the interview. Has anyone out there tackled multi-purpose server-to-virtualization conversions? If so, please share your experiences with me at jstafford@techtarget.com.)

Physical-to-virtual (P2V) migration

There are several approaches, says Wolf. Some common practices that work in small environments — such as manually staging a VM and migrating the data, or relying on a backup product to help with the migration — are not a good fit for larger data center environments. When migrating many servers, use a product designed for that job to do a hot clone of each virtual machine.

“Not only does it let me move each VM in a live state, I can schedule when the VMs get converted so I can do a conversion during off-business hours.”

More P2V info can be found here:

SSV’s P2V news and expert advice;
Measuring the success of your server consolidation project.

Got other good P2V links or advice? Let me know: jstafford@techtarget.com.


July 23, 2007  3:16 PM

Is Xen ready for the data center? Is that the right question?

Profile: Barney Beal

Article after article and post after post have compared and contrasted Xen, VMware, Veridian, and a host of other virtualization technologies, with opinions on performance, management tools, implementations, etc., etc. in abundant supply. Inevitably, when it comes to Xen, the story comes full circle with some sort of declaration about “data center readiness.” The definition of “ready for the data center” is quite subjective, of course, based largely on the author’s personal experience, skills, and their opinion of the technical capabilities of those managing this vague “data center” to which they are referring.

Sadly, most seem to think that IT professionals managing the data center are buffoons who are somehow incapable of working with anything that doesn’t include a highly refined set of GUI tools and setup wizards. Personal experience shines through when an author balks at the notion of editing a text or XML configuration file – a common task for any system administrator. Consequently, a declaration of immaturity is often the result, without regard for the performance or functionality of the technology. In the case of Xen, this is particularly prevalent, as the Xen engine and management tools are distinctly separate. In fact, there are already several dozen management and provisioning tools available and/or in-development for the highly capable Xen engine, at varying degrees of maturity.

And yet, I can’t help but think that comparing features of management tools is completely missing the point. Why are we focusing on the tools, rather than the technology? Shouldn’t we be asking, “where is virtualization heading” and “which of these technologies has the most long term viability?”

Where is virtualization technology heading?

To even the most passive observers it has to be obvious that virtualization is here to stay. What may not be so obvious are the trends, the first being integrated virtualization. Within a year, every major server operating system will have virtualization technology integrated at its core. Within a few short years, virtualization functionality will simply be assumed – an expected capability of every server class operating system. As it is with RHEL now, administrators will simply click on a “virtualization” checkbox at install time.

The second trend is in the technology, and that is the “virtualization aware” operating system. In other words, the operating system will know that it is being virtualized, and will be optimized to perform as such. Every major, and even most minor operating systems either have or will soon have a virtualization aware core. Performance and scalability sapping binary translation layers and dynamic recompilers will be a thing of the past, replaced by thin hypervisors and paravirtualized guests. Just look at every major Linux distro, Solaris, BSD, and even Microsoft’s upcoming Veridian technology on Windows Server 2008, and you can’t help but recognize the trend.

Which of these technologies has the most long term viability?

Since we now know the trends, the next logical step is to determine which technology to bet on, long term. Obviously, the current crop of technologies based on full virtualization, like KVM and VMware (it’s not a hypervisor, no matter what they say), will be prosperous in the near term, capitalizing on the initial wave of interest and simplicity. But, considering the trends, the question should be, “will they be the best technology choice for the future?” The reality is that, in their current state and with their stated evolutionary goals, full virtualization solutions offer little long-term viability as integrated virtualization continues to evolve.

And which technology has everyone moved to? That’s simple – paravirtualization on the Xen hypervisor. Solaris, Linux, several Unix variants, and, as a result of their partnership with Novell, Microsoft will all either run Xen directly or will be Xen compatible in a very short time.

Of course, those with the most market share will continue to sell their solutions as “more mature” and/or “enterprise ready” while continuing to improve their tools. Unfortunately, they will continue to lean on an outdated, albeit refined technology core. The core may continue to evolve, but the approach is fundamentally less efficient, and will therefore never achieve the performance of the more logical solution. It reminds me of the ice farmers’ response to the refrigerator – rather than evolving their business, they tried to find better, more efficient ways to make ice, and ultimately went out of business because the technology simply wasn’t as good.

So then, is Xen ready for the “data center?”

The simple answer is – that depends. As a long time (as these things go, anyway) user of the Xen engine in production, I can say with confidence that the engine is more than ready. All of the functionality of competing systems, and arguably more, is working and rock solid. And because the system is open, the flexibility is simply unmatched. Choose your storage or clustering scheme, upgrade to a better one when it becomes available, use whatever configuration matches your needs – without restriction. For *nix virtualization, start today.
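For anyone who hasn’t seen what “open” means in practice here: a Xen 3.x guest is defined by a small plain-text file (Python syntax) under /etc/xen/. The sketch below is a minimal paravirtualized domU definition; every name, path, size, and MAC address in it is an example to adapt, not a recommendation.

```
# /etc/xen/vm01.cfg -- a minimal Xen 3.x paravirtualized guest definition.
# Xen domU config files use Python syntax; the values below are examples.
name       = "vm01"
memory     = 512                                  # MB
vcpus      = 1
bootloader = "/usr/bin/pygrub"                    # boot the guest's own kernel
disk       = ["phy:/dev/vg0/vm01-disk,xvda,w"]    # LVM volume as the guest disk
vif        = ["mac=00:16:3e:00:00:01,bridge=xenbr0"]
on_reboot  = "restart"
on_crash   = "restart"
```

Start it with xm create vm01.cfg and check on it with xm list, and you’ve seen most of what day-to-day management requires before the GUI tools even enter the picture.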

For Windows virtualization, the answer is a bit more complex. Pending Veridian, the stopgap is to install Windows on Xen with so-called “paravirtualized drivers” for I/O. Currently, these are only available using XenSource’s own XenServer line, but they will soon be available on both Novell and Red Hat platforms (according to Novell press releases and direct conversations with Red Hat engineers). While these drivers easily match the performance of fully virtualized competitors, they are not as fast as a paravirtualized guest.

Of course, you could simply choose to wait for Veridian, but I would assert that there are several advantages to going with Xen now. First, you’ll already be running on Xen, so you’ll be comfortable with the tools and will likely incur little, if any conversion cost when Veridian goes golden. And second, you get to take advantage of unmatched, multi-platform virtualization technology, such as native 64bit guests, and 32bit paravirtualized guests on 64bit hosts.

So what’s the weak spot? Complexity and management. While the engine is solid, the management tools are distinctly separate and still evolving. Do you go with XenSource’s excellent, yet more restrictive, tool set, a more open platform such as Red Hat or Novell, or even a free release such as Fedora 7? That depends on your skills and intestinal fortitude, I suppose. If you are lost without wizards and a mouse, I’d say XenSource is the way to go. For the rest of us, a good review of all the available options is in order.

What about that “long term?”

So we know that virtualization aware operating systems are the future, but how might they evolve? Well, since we know that one of the key benefits of virtualization is that it makes the guest operating system hardware agnostic, and we know that virtualization aware guests on hypervisors are the future, then it seems reasonable to conclude that most server operating systems will install as a paravirtualized guest by default, even if only one guest will be run on the hardware. This will, by its very nature, create more stable servers and applications, facilitate easy to implement scalability, and offer improved performance and manageability of platforms.

As for my data center, this is how we install all our new hardware, even single task equipment – Xen goes on first, followed by the OS of choice. We get great performance and stability, along with the comfort of knowing that if we need more performance or run into any problems, we can simply move the guest operating system to new hardware with almost no down time. It’s a truly liberating approach to data center management.

