May 21, 2007 11:27 AM
Posted by: Joe Foran
, Joseph Foran
OK, if you got that joke, you either are or were a long-haired hessian from the ’80s, just like I was. First off – I’m sorry I’ve been silent for so long. I’m buying a house, my wife and I are expecting again, and I’m hiring staff as well as kicking off LOTS of real projects at work. Anyway, with that, allow me to start the blogging again!
I just came back from the Intel Premier IT Professionals session in NYC, and while it was geared largely toward the desktop space (apparently the Fall event will focus more on servers), they spent some time covering virtualization and the new hardware coming out to support it. The agenda covered Intel’s VT extensions, which improve system virtualization performance and are a key component of Virtual Iron’s and other Xen-based products. Without VT (or AMD’s equivalent), there would be no way to run Microsoft operating systems on Xen hypervisors. Intel also covered my grrr-item of the year – Windows Vista’s virtualization-friendly license that is friendly only for the Enterprise Edition – but I’ll grrr on that elsewhere. My favorite item among this year’s new hardware: Turbo Memory.
For those unfamiliar with Turbo Memory, it’s best described like this: picture a flash (NAND) drive that sits between your regular hard drive(s) and your CPU/motherboard/RAM. There it acts as a cache for frequently used data (much like a CPU’s cache) and helps offload reads and writes from your hard disks, thereby mitigating one of the last real bottlenecks in the architecture of modern PC-based systems. From my understanding, TM is tied to Vista’s ReadyDrive system for full functionality, but that will only last so long before the concept moves into competitive production and other vendors figure out how to detach it from Vista and make it as invisible as normal hard disk cache. It hasn’t yet hit the server chips, but is expected to by this time next year.

From a server virtualization point of view, this is important, since disk I/O is one of the biggest obstacles to getting a large physical-to-virtual machine ratio. As each virtual machine is accessed, it calls for disk access to its virtual disks; as these requests go to the hardware via the hypervisor and/or host OS, they queue up, slowing down performance. The Turbo Memory concept can be applied to help mitigate this problem. As it stands, I may just get a Vista desktop with dual disks and Turbo Memory, load it up with 4GB of RAM, and throw in a huge number of VMs, first via VMware Server and then via Virtual PC. Throw in some load simulation scripts, perfmon logging and a little elbow grease, and I might have some interesting numbers to show against an entry-level server running W2K3 on similar hardware. The test won’t be worth much on the books, since Vista has client-related limitations that make it an inefficient platform for hosting virtual machines (10 connections, anyone?), but for the fun of it, and for the raw numbers showing what Turbo Memory can do, it’ll be worthwhile.
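For the curious, the “load simulation scripts” part of that experiment could be sketched roughly like this. A minimal Python sketch, one copy running inside each VM while perfmon logs disk queue length on the host; the file name, sizes and operation counts are all illustrative assumptions, not anything from Intel or VMware:

```python
import os
import random
import time

def run_io_load(path, file_size=1 << 20, block=4096, ops=200, seed=42):
    """Create a scratch file, then issue a mix of random block reads and
    writes against it, returning elapsed seconds -- roughly the kind of
    traffic each VM's virtual disk generates."""
    rng = random.Random(seed)
    with open(path, "wb") as f:
        f.write(os.urandom(file_size))        # lay down the scratch file
    start = time.perf_counter()
    with open(path, "r+b") as f:
        for _ in range(ops):
            f.seek(rng.randrange(0, file_size - block))
            if rng.random() < 0.5:
                f.read(block)                 # random read
            else:
                f.write(os.urandom(block))    # random write
                f.flush()
                os.fsync(f.fileno())          # push the write down to the disk layer
    elapsed = time.perf_counter() - start
    os.remove(path)                           # clean up the scratch file
    return elapsed
```

Comparing the elapsed times (and the host’s perfmon disk counters) with and without a flash cache in the path is the sort of raw number the test above would produce.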
While there I also met the Regional Director for Virtual Iron, who I’m going to be following up with to see about getting my Virtual Iron demo rolling.
Going back to my “poker-based” review system, I give turbo memory’s concept a solid 9 pokers.
May 17, 2007 1:29 PM
Posted by: Alex Barrett
, Virtualization platforms
I just returned from Microsoft’s WinHEC conference in Los Angeles, where I traveled in hopes of getting some juicy virtualization news. Alas, Microsoft dropped its virtualization bombshell last week – that it will eliminate key features from the Windows Server Virtualization beta that will ship with Longhorn — so no news was to be had, per se.
But even if Microsoft doesn’t have much to show in the way of a shipping, competitive hypervisor, one thing I came away with is that the company is clearly paying extremely close attention to virtualization and has its best minds on the job.
Speaking in a session called Virtualization Technology Directions, Mike Neil, Microsoft general manager in the Windows Server Group, said virtualization stands to be one of the key “scale-up” apps to run on the next generation of servers featuring multi-core processors and, thanks to 64-bit operating systems, terabytes of memory.
One positive aspect of being late to market with Viridian is that Microsoft can develop it to take advantage of hardware virtualization assist technologies going into chips — not just the CPU virtualization in Intel VT and AMD-V, but also new features like MMU virtualization, DMA remapping, interrupt handling and I/O virtualization. “The software layer for virtualization doesn’t go away, but we do see the hardware taking on some capabilities that will make the virtualization layer thinner — and hopefully more robust as well,” Neil said.
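For readers wondering whether their own boxes have the CPU assist Neil is talking about, a quick, Linux-only check is to look for the relevant feature flags the kernel exposes: `vmx` for Intel VT, `svm` for AMD-V. A minimal sketch (the helper name is mine, not Microsoft’s or Intel’s):

```python
def hw_virt_support(cpuinfo_path="/proc/cpuinfo"):
    """Return 'Intel VT', 'AMD-V', or None, based on the CPU feature
    flags Linux publishes in /proc/cpuinfo."""
    try:
        with open(cpuinfo_path) as f:
            lines = f.read().splitlines()
    except OSError:
        return None                      # not Linux, or no such file
    for line in lines:
        if line.startswith("flags") and ":" in line:
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags:           # Intel VT extensions present
                return "Intel VT"
            if "svm" in flags:           # AMD-V extensions present
                return "AMD-V"
    return None
```

On Windows, vendor tools serve the same purpose; the point is simply that the hypervisor (Xen, Viridian or otherwise) looks for these bits before it will run an unmodified guest.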
Neil, a former employee of Connectix, the first virtualization company Microsoft acquired, recalled how “we didn’t have any hardware assist, and spent a lot of our time working around the limitations of x86 that made it very difficult to virtualize.” VMware, I’m sure, was in the same boat.
Taking advantage of hardware assist capabilities is what Microsoft calls “enlightenment,” which Neil further described as “an intimate arrangement between the kernel and the hypervisor.” But don’t think Microsoft wants to invite everyone into this cozy party. “We’re not trying to drive this as a standard,” Neil said. Other companies, notably VMware, are espousing paravirt-ops, but not Microsoft.
That’s not to say that hardware assist solves everything — it doesn’t, not by a long shot. Some of the challenges Neil called out include the shift to network storage, increased data rates and ever-expanding storage requirements. When it comes to memory and I/O bandwidth, “there are changes that are going to need to occur,” Neil said. Today, “the straw that we’re sipping the data through is too thin.”
Another area of concern Neil cited is security. “It’s unfortunate that we’re in a situation where malicious software is a business. Hackers aren’t doing this because they are pranksters or its fun, but because they make money off it.” It stands to reason, therefore, that “the hypervisor, as the lowest level piece of software on a system,” is a place where people are going to look to compromise a system. “If I can get at the hypervisor, I can get at all the VMs,” Neil said. Furthermore, today, virtual machines do not know if they’ve been “hyperjacked,” he said; “understanding the layer of software beneath [the VM] will become more and more important.”
May 17, 2007 12:16 PM
Posted by: Jan Stafford
On May 31, 2006, University of Texas-Austin IT systems analyst Andrew Kutz made a prediction: People will soon be running Windows side-by-side with Mac OS X with no difference in the application space.
“It came true this year with Parallels,” Kutz, who just joined Burton Group as an analyst, told me recently.
Kutz’s prediction appeared in “Virtualization, like string theory, can be saved from its hype,” one of his first columns for SearchServerVirtualization.com.
Parallels isn’t the only supporter of Mac OS X virtualization in town. On his blog, Kimbro Staken — CTO of JumpBox Inc. — discussed the other players in this space, saying:
“VirtualBox is a new entry in the virtualization space and is particularly interesting because it has been Open Sourced under the GPL license. This makes the Mac OS X virtualization space a three way race with Parallels, VMWare Fusion and now VirtualBox all having offerings available. Parallels is still the clear leader thanks to its head start and solid Windows integration, but the competition is definitely heating up.”
Kutz scored on this prediction, but I think his call on the outcome of this development is off base. First, let’s look at what he wrote. The column starts out with a bang:
“This article will show that, just as Dr. Edward Witten saved string theory by condensing many efforts and ideas into one elegant theory, Mac OS X is poised to do the same for virtualization by fusing the many implementations of virtualization into one practical and marketable consumer product.”
He doesn’t finish with a whimper:
“Apple is in the best position to become the new leader in a world of consumer virtualization. And they will do so with style, simplicity and elegance.”
Like most Mac enthusiasts, Kutz is over-optimistic, I think. I don’t believe virtualization, even via an open source product like VirtualBox, will push Apple out of its niches in the consumer market. Some power users — particularly in the graphics, video and music fields — will take advantage of the opportunity to run Mac OS X on commodity hardware, but mainstream users aren’t going to bother.
On the business side, corporate graphics departments will like this development, and their IT managers will enjoy the cost savings of not having to buy separate boxes for those folks.
May 15, 2007 2:57 PM
Posted by: Ryan Shopp
Since people are loving the useful links page Andrew Kutz pointed out in his last blog post, I thought I’d also toss in our own. This page, Fast guide: VMware how-tos, includes advice on how to install VMware on Linux and on Windows, VMDK conversions, VMotion, guest OS performance tips, VCB script additions, using .NET with the VI3 SDK, VMware Player, VMware ACE and more.
May 10, 2007 11:53 AM
Posted by: Jan Stafford
Most companies will run virtual machines on a mixture of server hardware types, but figuring out what app to run on each platform can be challenging, according to open source consultant and author Bernard Golden, a presenter at the Red Hat Summit, happening right now in San Diego.
After sharing his opinions on the pros and cons of the three main styles of server virtualization, Golden sounded off on the most-commonly-used hardware platforms for server virtualization. Here’s a summary of his analysis:
Server type: x86 32-bit
Example: Dell PowerEdge
Applications: Client virtualization; test and development environment
Pros: Widely available; inexpensive; IT skills widely available
Cons: Memory limitation; poor virtualization scalability
Golden says: “Repurposed machines save money in the short term, but they don’t scale very well. You need more robust memory, in particular.”
Server type: x86 64-bit
Example: HP BladeSystem
Applications: Client virtualization; midrange-to-large server virtualization
Pros: Powerful; similar skills to x86 32-bit; larger memory possible
Cons: May be limited in scalability depending upon machine design
Golden says: “64-bit blades are very powerful and offer high density, but they do pose power and cooling challenges.”
Server type: x86 64-bit specialized hardware
Examples: Sun Fire; IBM System x
Applications: Large server virtualization deployments
Pros: Designed for high-performance scalability; large memory support
Cons: New hardware type for operations personnel; can be costly
Golden says: “This class of server offers the optimal virtualization platform for large-scale virtualization deployments, but their prices may be prohibitive for most organizations. You also have to figure in the cost of training your IT staff into the equation.”
Got questions about servers for virtualization? Disagree with Bernard’s assessments or have something to add? Bernard is a resident expert on SearchServerVirtualization.com and is available to respond to you. Please comment below or write to me at email@example.com.
May 10, 2007 11:32 AM
Posted by: Jan Stafford
, Microsoft Virtual Server
, Red Hat
, Virtual machine
, Virtualization platforms
, Virtualization strategies
Currently, there are three main styles of server virtualization, and each has its benefits and drawbacks, according to open source consultant and author Bernard Golden, a presenter at the Red Hat Summit, happening right now in San Diego.
His lowdown on the three ways to virtualize provides a handy guide to the options today. Following his list, I offer some links to definitions, how-tos, tips and news about each method.
By the way, besides being a resident expert on SearchServerVirtualization.com and SearchEnterpriseLinux.com, Golden is president of Navica Inc., an open source consulting firm, and author of the new book, “Virtualization for Dummies”. Check out his views on server hardware for virtualization in this post.
Here are the top three ways to virtualize:
Virtualization style: Operating system (OS) “container” emulation
Examples: Solaris Containers; SWsoft
Pros: Efficient; does not require additional software
Cons: Weaker isolation between instances; dependent upon the host OS; limits version choice within guest OS types
Virtualization style: Hardware emulation
Examples: VMware Server; Microsoft Virtual Server
Pros: Relatively easy to install and use; true isolation of OS instances
Cons: Less efficient than paravirtualization
Virtualization style: Paravirtualization
Examples: Xen, VMware ESX, Microsoft Longhorn virtualization
Pros: High performance; true isolation of OS instances
Cons: Extra software layer; complex to install and administer
Don’t expect these ways and means to remain fixed in time. In five years, all operating systems will be virtualized, simplifying every aspect of server virtualization from planning to upgrades, Golden predicts. Even better, built-in operating system virtualization will make it very difficult for application software vendors to respond to every helpdesk call by blaming the VM.
For more information on the top three ways to deploy server virtualization, check out these resources:
For an overview, read Alessandro Perilli’s analysis of virtualization vendor strategies.
Here’s some info on OS container emulation:
IBM DB2 runs on SWsoft Virtuozzo virtualization; Virtuozzo sidesteps Windows Server costs; Sun boots Unix partitioning on Solaris; and Sun commits to Xen.
Get the scoop on hardware emulation: VMware Server on Linux: Installation through management; Optimizing Microsoft Virtual Server 2005; and emulation defined.
For more on paravirtualization, go to: Paravirtualization with Xen; Xen defined; How-to: VMware ESX, Linux virtual machines and read-only file systems; and Virtualization in Red Hat Enterprise Linux 5.
Which style of virtualization do you use? What questions would you like to ask our resident expert, Bernard Golden, about server virtualization strategies? Tell all by commenting on this post or writing to me at firstname.lastname@example.org.
May 8, 2007 11:41 PM
Posted by: Jan Stafford
, Blade servers
In this quick Q&A, analyst and SearchServerVirtualization.com blade server columnist Barb Goldworm offers her views on the news from vendors and users at last week’s Server Blade Summit, which she chaired.
SSV: How big a deterrent to buying blade servers is power and cooling, based on your observations at the Summit? What cool things are being done about it?
Goldworm: Power and cooling and space are issues for most users, even in trying to expand their rack-n-stacks. Many of them were there because they know they have to do SOMETHING, because they can’t go on like they are. Often there is a list of easy (and not expensive) steps which can be taken, before going to more drastic measures (like liquid cooling). Planning help is available from folks like Eaton and APC, as well as HP and IBM, and others. Advances in hardware and software are continuing to come, with smarter power management, shutting down unneeded processors based on utilization, etc. Processing power per watt is continuing to improve.
SSV: Were virtual desktops — via appliance virtualization, VDI (virtual desktop infrastructure) and other models — hotter than you thought, in terms of interest?
Goldworm: We expected virtual desktops to be a hot topic, and it was. As people get more comfortable with server virtualization, and start looking at Vista on desktops, virtualization for the desktop and applications is becoming a serious topic. I view this area as a continuum, with different approaches offering benefits for different use cases (from VDI to Citrix to the new IBM workstation blade). I think we’re hitting the tip of the iceberg here.
It’s hot and users are struggling to understand how it all fits together.
SSV: Looking back at the Summit, what are your overall impressions about the state of blades and virtualization after the Summit?
Goldworm: People have been hearing more about blades for the past year or two, often with a lot of warnings. Many came to the summit looking to get a better understanding of the benefits and the “gotcha’s” and were pleasantly surprised with the progress made in the past year, particularly relative to virtualization. Many of the customers we spoke with were very excited about the benefits that blades and virtualization could bring them, and many seemed to be hearing up-to-date information for the first time (including from their own vendors like IBM, HP and VMware).
As users and channel partners are getting more educated, we will see more and more of the marriage between blades and virtualization.
May 8, 2007 1:46 PM
Posted by: Akutz
A nice forums fellow felt like posting a whole bunch o’links to the VMware forums. Very nice. Go check it out and clicky. http://www.vmware.com/community/thread.jspa?threadID=81191
May 6, 2007 12:23 PM
Posted by: Jan Stafford
, Blade servers
Why use blade servers when your rack servers aren’t giving you any hassles? I met up with Craig Newell at the Server Blade Summit this week, and he gave some answers to that question. I’ve put them in the list below.
Newell has more field experience working with blades than anyone else I met at the summit. As U.S. Client Services Manager for Halian Inc., a U.K.-based global IT services organization, he has worked on blade implementations in banking, pharmaceutical, government and other types of businesses.
The top 5 reasons to use blade servers:
1. They’re tiny. Blades conserve data center floor space better than any other server option. If your floor space is at a premium, then check out blades.
2. They’re dense. Combined with virtualization, blades give you the most compute power per square inch of any server.
3. They’re easy to deploy. Today’s blade server toolsets allow for easy server deployments. The cabling, power and much more are built into the chassis, so there’s less to do when you slip the box into its slot. Virtualize, and deployment speeds up even more.
4. They’re a good fit for lab environments. “Blades and virtual servers provide great architectures for lab, testing, and development environments,” Newell said.
5. There will be no more snakes on your plane! Those cables roping around your data center will disappear, as blades have far fewer power and network cables.
Put all these uses and benefits together. Mix well. Then watch TCO get TKOed: blades typically cut overall server deployment time significantly, leaving you with a lower overall total cost of ownership, even though upfront costs may be higher.
Here’s the big if, and, and but:
“Power and cooling concerns are real! The power consumption/square foot in a blade-based data center is significant…like 25,000 watts per chassis.”
So, do your homework, and evaluate cooling requirements and power consumption as a part of your overall cost for hardware deployment.
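Part of that homework is simple arithmetic. Taking the 25,000-watts-per-chassis figure quoted above, a quick back-of-envelope sketch, assuming an illustrative $0.10/kWh electricity rate (the rate is my assumption, not Newell’s):

```python
def annual_power_cost(watts, rate_per_kwh=0.10, hours_per_year=24 * 365):
    """Annual electricity cost, in dollars, of running a constant load."""
    kwh = watts / 1000 * hours_per_year   # watts -> kilowatt-hours per year
    return kwh * rate_per_kwh

# One 25,000 W chassis running year-round at the assumed rate:
# 25 kW * 8,760 h = 219,000 kWh -> roughly $21,900 a year, before
# counting the cooling needed to pull that same heat back out.
```

Multiply by the number of chassis, add cooling overhead at your facility’s rate, and you have the power line item for the TCO comparison.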
“Returns take numerous years due to the significant capital required within a data center environment,” Newell said. “Smaller environments may see faster returns.”
In other words, good things come to those who plan, deploy and wait.
Want more info on reasons to or not to use blades? Check out these links:
Why wed blade servers to virtualization?; Barb Goldworm’s guide to blades and virtualization; Former Morgan Stanley exec praises blades; and Blade servers dominate market by 2009.