The Virtualization Room

Oct 8 2007   1:34PM GMT

Scaling Up or Scaling Out, Revisited

Joseph Foran

Some time back, before I was invited on as a blogger for SSV, I was interviewed by the always-fun-to-work-with Adam Trujillo about Virtualization in the Data Center, and, like all good writers, Adam left the best question for last:

“What about hardware decisions — should data center managers be considering scale-up instead of scale-out?”

My response was:

“I personally prefer a scaled-up approach because there is a reduction in ongoing costs, such as power, space, cooling, and physical maintenance. Also, the complexity factor is reduced when there is less hardware to manage. An exception to that would be data centers without existing centralized storage — the initial acquisition becomes more expensive in scale-up operations if a SAN infrastructure is not already in place.”

I’m guilty of being one of those people who says “Durnit, why didn’t I say this or that?” or “Dangit, why didn’t I quantify that a little more?” well after the fact, which makes me perhaps my own worst critic. In this case, I really felt I left some things unsaid. One item that irks me about that answer is that I should have made more mention of blades. I hate blades in their current incarnation. I think they’re the worst idea in IT – they’re hot, cramped, and delicate, with slower components and limited expansion ports – name anything about a blade and I can find a reason to hate it. That said, I shouldn’t have left them out of my line of thought – a good IT manager needs to consider uncomfortable things, difficult things, even distasteful things, when looking at something impactful. Or so says the wisdom of Frank Hayes, whose articles often have me nodding in agreement as I read. So, here goes.

Blades are hot – they have limited cooling options built in. Cooling is often a “value-add” (choke) of the specialized rack and chassis systems provided by third-party vendors.

A rack of big-honkin’ boxes will make you feel toasty on the parts next to their fans. A rack of blades will cook you medium-well given enough time. To prevent the data-center equivalent of multiple mini-supernovas, you need to install the correct cooling – the right tonnage of AC, hot and cold rack aisles, proper ventilation, air temperature monitors, system heat monitors, etc. In many data centers, the cost of new construction (or re-construction) may well exceed even the long-term cost savings from server consolidation, and even if you can afford the construction and still come out with a positive ROI, that cooling carries a monthly utility cost – you have to increase your power consumption to keep things cool.
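To put rough numbers on the cooling piece, here’s a quick back-of-the-envelope sketch in Python: every watt of IT load becomes roughly 3.412 BTU/hr of heat, and one “ton” of air conditioning removes 12,000 BTU/hr. The rack and chassis wattages below are placeholders I made up for illustration, not measurements from any real data center.

```python
# Back-of-the-envelope cooling estimate. The wattage figures are
# hypothetical placeholders; substitute your own measured draw.

WATTS_TO_BTU_PER_HR = 3.412   # 1 W of IT load ~= 3.412 BTU/hr of heat
BTU_PER_TON = 12_000          # 1 ton of cooling = 12,000 BTU/hr

def cooling_tons(total_watts: float) -> float:
    """Approximate tons of AC needed to remove a given IT heat load."""
    return total_watts * WATTS_TO_BTU_PER_HR / BTU_PER_TON

# Example: 40 traditional 2U servers at an assumed ~400 W each, versus a
# fully loaded blade chassis drawing an assumed ~5,500 W.
rack_of_2u = 40 * 400
blade_chassis = 5_500

print(f"2U rack:       {cooling_tons(rack_of_2u):.1f} tons of cooling")
print(f"Blade chassis: {cooling_tons(blade_chassis):.1f} tons of cooling")
```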

That said, this is where virtualization has proven itself over the last decade as a way to decrease the number of servers and offload them to blades. That may mean you can remove enough servers to use your existing heat-management systems in a more focused way and not have to break the bank. Even at a five-to-one ratio of servers removed to virtualization-equipped blades added, you’re coming out ahead. Add in centralized storage systems to connect to the blades and the scales may tip back in favor of Mr. Heat Miser again, but probably not. Get to a ten-to-one ratio and blades are a winner. This assumes a large server consolidation via virtualization; if it’s not a big percentage of your boxes being affected, you’ll be back in the hot seat, quite literally.
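Here’s that break-even arithmetic as a rough sketch. The per-box wattages are assumptions I picked purely for illustration; only the five-to-one and ten-to-one ratios come from the paragraph above.

```python
# Rough consolidation break-even sketch. Per-box wattages are assumptions,
# not vendor figures; only the 5:1 and 10:1 ratios come from the post.

SERVER_WATTS = 400   # assumed draw of a typical standalone server
BLADE_WATTS = 450    # assumed draw of one virtualization-equipped blade

def heat_delta(servers_removed: int, ratio: int) -> float:
    """Watts of heat removed, minus the watts added by the new blades."""
    blades_added = -(-servers_removed // ratio)   # ceiling division
    return servers_removed * SERVER_WATTS - blades_added * BLADE_WATTS

for ratio in (5, 10):
    print(f"{ratio}:1 consolidation of 50 servers -> "
          f"net {heat_delta(50, ratio):+,.0f} W of heat removed")
```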

Ever need five or more NICs for a virtualization host? I have. If I had blades, I’d be using three blades to get that done, assuming dual-NIC blades, and five or more with single-NIC blades. That means more blades, more virtualization software licenses I don’t need, more hardware to fail, and more physical boxes when what I want to do is REDUCE the number of physical boxes. Right now server blades are still too young – many vendors’ products have all the components included on the blade and aren’t modular enough. PC blade systems have it a little better – some manufacturers offer limited peripheral connectivity at the user site – but it’s still an entire box in a chassis, with all the difficulties of expansion that micro-sized PCs and laptops have.
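That blade-count math is just ceiling division; here’s a trivial sketch, using the same dual-NIC and single-NIC assumptions as above:

```python
import math

def blades_needed(nics_required: int, nics_per_blade: int) -> int:
    """How many blades it takes just to present the NICs you need."""
    return math.ceil(nics_required / nics_per_blade)

# Five NICs for one virtualization host:
print(blades_needed(5, 2))  # dual-NIC blades   -> 3 blades (and 3 licenses)
print(blades_needed(5, 1))  # single-NIC blades -> 5 blades (and 5 licenses)
```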

So, I think it’s safe to say that I still hate traditional blades. But I think they’ll be the savior of the data center soon, and then I will love them. Why? Because my ideal blade system – a truly modular system that changes everything about blades – is, best of all, available now from several of the larger vendors. The changes are part of a new design “paradigm” (please note my bias against that word): the end result is a blade system where the blades can be NICs or other devices, added as needed and plugged into the chassis, connected at either the physical layer with ye olde jumper or a software layer (in the chassis management software, perhaps). Let’s say I get a blade and need to put ESX on it, but I need six NICs because of guest-system network I/O requirements. OK, I get another blade with a quad-NIC on it, plug it into the chassis, and configure it – voila, a single computer with five or six NICs in two blade slots, using one license. Or perhaps I need ten USB connectors for some virtualized CAD desktops that require USB key fobs in order to use the CAD software – I plug in a server blade and a USB blade, configure them, and voila, one server, ten USB ports, one license. Expand that out far enough and you can have whatever you need in terms of peripherals in a blade chassis.

If you go to IBM’s website, you get a whole panoply of choices – switchblades (that one always gives me a chuckle) and NIC blades are readily available for expanding your blade chassis to do more than just host some servers. HP upstages them a bit with a great product out now that provides PCI-X and PCI-e expansion ports. This is from their website:

“Provides PCI-X or PCI-e expansion slots for c-Class blade server in an adjacent enclosure bay.

  • Each PCI Expansion Blade can hold one or two PCI-X cards (3.3V or universal), or one or two PCI-e cards (x1, x4, or x8)
  • Installed PCI-X cards must use less than 25 watts per card. Installed PCIe cards must use less than 75 watts per PCIe slot, or a single PCIe card can use up to 150 watts, with a special power connector enabled on the PCI Expansion blade.
  • Supports typical third-party (non-HP) PCI cards, such as SSL or XML accelerator cards, VOIP cards, special purpose telecommunications cards, and some graphic acceleration cards.”

This is interesting – a couple of PCI-e quad-NICs in an expansion unit and my NIC requirements are set. Or perhaps a couple of PCI-e USB add-in cards. Or a high-end PCI-X or PCI-e video card. OK, that gets troublesome when you need a lot of them – you can wind up with one blade and a chassis full of expansion slots containing video cards – and the cost might not be worth it.
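To see how those limits play out, here’s a simplified sketch of the expansion blade’s power rules as I read them from the quote. The function, the example wattages, and the handling of the special-connector case are my own interpretation for illustration, not anything out of HP’s documentation.

```python
# Simplified power check based only on the limits quoted above: one or two
# cards per blade; PCI-X cards under 25 W each; PCI-e cards under 75 W per
# slot, or a single PCI-e card up to 150 W if the special power connector
# is used. This is my reading of the quote, not an HP-provided rule set.

def fits_power_budget(cards, special_connector=False):
    """cards: list of (bus, watts) tuples, e.g. [("pcie", 40), ("pcie", 40)]."""
    if len(cards) > 2:                 # the blade holds one or two cards
        return False
    pcie = [w for bus, w in cards if bus == "pcie"]
    pcix = [w for bus, w in cards if bus == "pcix"]

    if any(w >= 25 for w in pcix):     # PCI-X: under 25 W per card
        return False
    if special_connector and len(pcie) == 1:
        return pcie[0] <= 150          # single high-draw PCI-e card
    return all(w < 75 for w in pcie)   # PCI-e: under 75 W per slot

# Two assumed ~20 W quad-port PCI-e NICs fit comfortably:
print(fits_power_budget([("pcie", 20), ("pcie", 20)]))              # True
# A hypothetical 140 W video card needs the special power connector:
print(fits_power_budget([("pcie", 140)]))                           # False
print(fits_power_budget([("pcie", 140)], special_connector=True))   # True
```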

In any case, this dramatically changes my view on scaling up or out. Right now, I still stand for scaling up, because blades don’t work in my environment – I have heat problems. I have space problems too, which blades could solve, but not given my heat problems. I prefer to buy larger servers with lots of expandability (DL300 and 500 series, PowerEdge 2000 and 6000 series, etc.) and add in NICs as needed, rather than buy blades or 1U boxes, because I can do more with these larger machines even though they take up more room. I fully expect that to change in the future – at some point I see myself stopping with the scaling up and starting with the scaling out – only I expect the “out” part of that will involve a lot less real estate and more options than are currently available.
