The Virtualization Room

A SearchServerVirtualization.com and SearchVMware.com blog

Feb 8 2008   1:40PM GMT

Linus: wake up and smell the coffee



Posted by: Ryan Shopp
Tags:
grids and mainframes
Virtualization

This post is from Mark Schlack, VP/Editorial for TechTarget.

Linus Torvalds dismisses virtualization in a recent interview with the Linux Foundation:

“It’s been around for probably 50 years. I forget when IBM started offering virtualization on their big hardware. Maybe not 50 years, but it’s been all around for decades and it’s very interesting in niche markets – I think the people who expected to change things radically are just fooling themselves.”

For the record, virtualization is closer to 40 years old – you can read a fascinating article about the history of mainframe virtualization on Wikipedia. More to the point, the attitude that there’s nothing new under the sun (except the innovation being pumped by those saying that) has always puzzled me, and the notion that modern virtualization is just a replay of the mainframe has now started to bug me. It makes no more sense than saying that open source is nothing new because MIT and a fair number of people in the scientific community helped develop mainframe virtualization by sharing code with one another and with IBM (which gave them software to pilot).

Of course there’s a link between mainframe and x86 virtualization. Conceptually and practically they have a lot in common. But it’s the differences that are compelling, and it’s the differences that will lead to the radical changes Torvalds discounts.

I’m no expert on mainframe partitioning, but from what I’ve gathered over the years (please, correct me if I’m wrong), here’s what sticks out for me:

  • Mainframes, circa 1970, cost around $100,000 a month to lease. You actually couldn’t buy one if you wanted to. Today you can virtualize a heck of a lot on a $20,000 box – in fact, you can virtualize several systems on a $1,000 box. Back then, there wasn’t much you could buy for a mainframe that didn’t start with five figures.
  • As big an advance as the virtualization built into System/370 was, it only ever worked with IBM operating systems (and, I believe, for a time only with IBM apps, before the government forced IBM to open up to third-party software companies). All of today’s x86 contenders can host multiple Linux distributions, multiple versions of Windows, and Solaris, and some also handle Mac OS. Mainframes never even ran the operating systems of IBM’s other platforms.
  • The notion of an entire virtual machine contained in a file, portable from machine to machine regardless of hardware configuration, is new to the current wave. Also new: dynamically reassigning VM resources on the fly, moving VMs without restarting the hardware, and failover clustering of VMs (a rough sketch of what this looks like in practice follows this list).
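
To make the “VM in a file” and live-migration points concrete, here is a minimal sketch using libvirt’s Python bindings – just one of several toolstacks, assumed here purely for illustration. The host URIs, file name and domain name are hypothetical placeholders, not anything from the interview or this post.

    # Minimal sketch: a VM defined by a portable file, then moved between
    # hosts while it keeps running. Assumes libvirt's Python bindings;
    # URIs and names below are hypothetical.
    import libvirt

    # The whole machine definition lives in a portable XML file.
    with open("webserver.xml") as f:
        domain_xml = f.read()

    src = libvirt.open("qemu:///system")       # local hypervisor
    src.defineXML(domain_xml)                  # register the VM from its file
    dom = src.lookupByName("webserver")
    dom.create()                               # boot it

    # "Moving VMs without restarting the hardware": live-migrate the
    # running guest to another physical box.
    dst = libvirt.open("qemu+ssh://otherhost/system")
    dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

Nothing in the mainframe era looked like this: the guest is just a file plus a running process, and either can be picked up and dropped onto different commodity hardware.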

I doubt that’s a comprehensive comparison, but the point is clear. IBM’s mainframe virtualization was certainly a niche feature, used by timesharing providers (the 1970s version of hosting companies). It was also used for niche applications – according to Wikipedia, mainly by scientists who needed a more interactive environment than the batch-oriented, general-purpose mainframe operating systems of the time were geared for. But the current crop is exactly the opposite: a generally useful tool that will impact all but niche applications.

Current virtualization’s main link to the mainframe, IMO, is that it is enabling mainframe-style utilization, reliability and, ultimately, process-oriented management on the very commodity platform that tore that world asunder. If that sounds like back to the future, it isn’t. It may well represent the final triumph of general-purpose, commodity-based computing over the highly specialized, batch-oriented world of the 1970s mainframe. It’s actually kind of cool to realize that the first microprocessors were being developed right around the time mainframe virtualization made its appearance, and now the two technologies are converging.

In his interview, Torvalds goes on to say that the truly radical developments on the horizon are new form factors. I don’t see it that way, but I’d be very surprised if new form factors don’t ultimately wind up using virtualization as a base technology. Think of a cell phone that can be completely upgraded with new capabilities because its software is a virtual appliance you download wirelessly. Now that’s a radical idea.
