If it’s not one thing, it’s another. Today, it’s data centers’ power and cooling hassles. Tomorrow, according to researcher Jerry Murphy, “your next big problem will be managing your service-oriented architectures (SOAs).” Murphy, senior vice president and service director at Robert Frances Group, thinks out-of-control SOAs could be a bigger problem than power and cooling.
Murphy advises IT managers to get control of their SOAs today; otherwise, he warns, “SOAs will have gaping huge security holes through which you could drive a truck.”
Here at the Server Blade Summit in Anaheim, Calif., the main talking points are blades and virtualization, but in a session on proactive data center management, Murphy looked beyond the immediate problems of power and cooling on blades to a future when, he predicts, a service glut will cause problems similar to those caused by server proliferation.
“Services are even harder to track than servers,” he said. “They’re dynamic and flexible, but also complex. It’s harder to find out why a service fails.”
SOAs are a blessing and a curse, according to Murphy. They offer reusable code and better integration, in addition to agility in moving services as needed. But that agility makes it hard to predict where services will be moved to.
“SOA design is focused from the top down, and infrastructure is built from the bottom up,” Murphy said. To gain better control, IT managers need to take SOAs’ top-down design and integrate it with bottom-up modeling tools. He also advised being conservative about capacity requirements and proactive with ongoing performance monitoring at the different tiers of the infrastructure.
As seen by most IT managers, blade servers are hot little power-suckers. Cooling hassles and power costs are the main reasons why IT managers don’t buy blade servers, according to the new TechTarget Data Center Group survey of over 250 IT professionals. In fact, most of the respondents’ companies aren’t using blades today. I’ll be presenting that bad news and some good news from our 2007 Server Decisions Survey next week at the Server Blade Summit in Anaheim, Calif., which focuses on blades and virtualization.
While there I’ll be asking blades proponents, users and prospective users about power and cooling issues with blades. Just to give you a preview, here’s a quick look at both sides of the story.
In a recent Q&A with Focus Consulting president Barb Goldworm, I recalled that an IT manager told me he ran only half the number of blades a chassis could hold because the servers would overheat otherwise. Goldworm, author of a new book on blades and virtualization, responded, saying:
“In the earlier days of blades, cooling was a big issue, and many users ran half loaded. The past year has seen significant improvement in power and cooling efficiencies and management. In some data centers, cooling may be an issue; but, in many datacenters, there are lots of things that can be done to improve cooling and allow blades to be easily incorporated in the datacenter. In addition, chip, blade and power/cooling vendors are still working on this issue, with improvements continuing to come.”
Vendors agree with Goldworm, says TechTarget news writer Bridget Botelho, and say that blades throw off less heat than traditional rack servers.
IT managers tell us a different story. After visiting a number of data centers, SearchDataCenter.com site editor Matt Stansberry sums up users’ experiences with blade servers:
“Per unit, one blade may technically throw off less heat than one rack server, but you clump them all together in a chassis, and they mess up your entire cooling strategy. It’s the same with power. Per blade unit, they demand less power, but the problem data center managers run into is that they can’t deliver all of that energy into 19 square inches.”
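Stansberry’s density point can be made concrete with some back-of-the-envelope arithmetic. The wattages and chassis dimensions below are illustrative assumptions for the sketch, not figures from the article or the survey:

```python
# Back-of-the-envelope density comparison. All figures here are assumed,
# illustrative values -- not numbers from the survey or the article.
RACK_SERVER_W = 450       # assumed draw of one 1U rack server
BLADE_W = 350             # assumed draw of one blade (less per unit)
BLADES_PER_CHASSIS = 14   # assumed blades in one full chassis
CHASSIS_RU = 7            # rack units (1U slots) the chassis occupies

# The same 7U of rack space, filled two ways:
rack_power = CHASSIS_RU * RACK_SERVER_W         # seven 1U rack servers
blade_power = BLADES_PER_CHASSIS * BLADE_W      # one fully loaded chassis

print(f"7U of 1U servers: {rack_power} W")      # 3150 W
print(f"7U blade chassis: {blade_power} W")     # 4900 W
print(f"density ratio:    {blade_power / rack_power:.2f}x")  # 1.56x
```

Each blade draws less than each rack server, yet the loaded chassis concentrates roughly half again as much power (and therefore heat) into the same slice of the rack — which is exactly the delivery and cooling problem data center managers describe.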
Plenty of vendors, consultants, users and prospective users of blades will be sounding off on this subject next week. I’ll let you know what they say. In the meantime, what do you have to say about these hot little power suckers? Tell me so I can tell the bladesters in Anaheim the real story. My email is email@example.com.
Reuters is reporting that VMware has filed with U.S. securities regulators to raise up to $100 million in an initial public offering of Class A common stock. As expected, EMC will retain approximately 90% of the company via ownership of non-public Class B shares, which are entitled to 10 votes per share, compared with one vote per share for the Class A shares. The company has not decided whether to list on the New York Stock Exchange or on the NASDAQ. EMC CEO Joe Tucci has been quoted as saying the IPO could happen “as early as late June.”
I chatted yesterday with Vance Loiselle, vice president of marketing at BladeLogic, who told me about BladeLogic’s new Virtualization Manager module for its Operations Manager suite, a configuration management tool. Talking with him gave me some insight into what management software vendors mean when they say that traditional management tools aren’t equipped to deal with virtual environments.
Without the Virtualization Manager module, Loiselle explained, Operations Manager can “see” virtual machines, but not their specific hardware configurations or their relationships with the underlying physical server. “We’ve always been able to see within the VM, but now we can see it on the physical server and drill down to see its CPU allocation, memory allocation, storage and network adapter configurations, and the virtualization software configuration,” he explained.
BladeLogic claims 220 customers, including large enterprises such as Merck and GlaxoSmithKline. Fortune 1000 companies like those have held off moving virtualization into production until they had the same management tools in place for virtual environments that they enjoy in the physical world, Loiselle said.
There’s word from SWsoft of a starter pack for its Virtuozzo operating system virtualization software. Designed for new users, the $1,198 bundle includes a license for a single- or dual-processor system, management tools, and a year of Silver-level support and maintenance. Also included is VZP2V, SWsoft’s physical-to-virtual tool that helps import a server from a dedicated physical box into a Virtuozzo virtual environment.
An SWsoft spokesperson claims the starter pack represents a savings of 33% to 50% off the full price.
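For a rough sense of what that discount claim implies, you can back the full price out of the bundle price. This is just arithmetic on the figures quoted above; the full-price numbers are inferred, not stated by SWsoft:

```python
BUNDLE_PRICE = 1198.0    # starter-pack price from the announcement

# Implied full price at each end of the claimed 33%-50% discount range.
# These are back-calculated estimates, not SWsoft list prices.
full_at_33_off = BUNDLE_PRICE / (1 - 0.33)   # about $1,788
full_at_50_off = BUNDLE_PRICE / (1 - 0.50)   # exactly $2,396
```

In other words, buying the pieces separately would run somewhere in the neighborhood of $1,800 to $2,400, if the spokesperson’s discount range holds.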
Early adopters of blade servers, in general, didn’t give blades rave reviews. As happens with many new technologies, stories about users getting burned — in this case almost literally, as early blades ran hot, hot, hot — have circulated and made some IT directors shy away from buying them.
Well, Burton Group analyst Chris Wolf tells me that times have changed, and buyers of 2007 blade server models should have a completely different experience. I talked to Chris at this week’s TechTarget Server Virtualization Seminar in San Francisco. He says that new products have dealt with the early overheating, power, management and configuration problems.
SearchServerVirtualization.com’s resident blades expert, Barb Goldworm, has been spreading the word that old blade power, cooling and management issues are being addressed by vendors. Says Goldworm:
“Blade vendors have made great progress in the current generations of blade systems, improving power and cooling efficiencies significantly. IBM says that they have increased the efficiency of their power supplies from 65% efficiency in their 1U servers to 91% efficiency in BladeCenter H. (65% efficiency means that they convert 65% of the power at the wall and the rest goes into the room in the form of heat. In other words, for each 1 kW from the power provider, 650 W goes to the server, and 350 W goes into the room.) Blade vendors are continuing to work on heat issues, with a variety of options for cooling at the blade, rack and room level.”
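Goldworm’s arithmetic is easy to check. A quick sketch of the efficiency split she describes (the 65% and 91% figures are IBM’s, taken from the quote above):

```python
def psu_split(wall_watts, efficiency):
    """Split power drawn at the wall into useful server power and waste heat."""
    to_server = wall_watts * efficiency
    to_heat = wall_watts - to_server
    return to_server, to_heat

# IBM's figures from the quote: 65% (older 1U supplies) vs. 91% (BladeCenter H),
# applied to 1 kW drawn from the power provider.
old_server, old_heat = psu_split(1000, 0.65)   # ~650 W to the server, ~350 W of heat
new_server, new_heat = psu_split(1000, 0.91)   # ~910 W to the server, ~90 W of heat

# Waste heat per kW drops from ~350 W to ~90 W -- roughly a 74% reduction.
reduction = (old_heat - new_heat) / old_heat
```

That last figure is the part the quote doesn’t spell out: the jump from 65% to 91% efficiency cuts the heat dumped into the room per kilowatt by about three-quarters, which is why vendors lead with power-supply efficiency when defending blades.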
IT managers need some convincing, however. Rack servers do a good job, they said in our recent 2007 Server Decisions Survey, and most of the 250+ respondents plan to buy more rack servers this year. Less than one-fourth will buy blades this year. Watch for more news about blades and that survey; Matt Stansberry (editor of SearchDataCenter.com) and I will be presenting the results at the Server Blade Summit on May 1.
Another survey, by Enterprise Management Associates, looked beyond 2007 and indicated that blades will be widely adopted by 2009.
Let’s not rely on surveys alone, though. How about sharing some first-hand experience? Are you one of those who will buy blades once the technology matures, or did you buy early? Tell me about it in a comment here or via email at firstname.lastname@example.org.
Thanks to Tim for suggesting these links for our readers:
- Microsoft builds on Windows Server Datacenter Edition’s reliability and scalability with unlimited virtualization rights
This link (from the Microsoft Website) explains the changes to DataCenter Edition that took place as of October 1st, 2006.
- A collection of links to Microsoft resources on their licensing policies and changes.
After reading Alex Barrett’s post Microsoft being Microsoft, I was curious what our virtualization experts thought of the whole ordeal. Here’s what they said.
Microsoft has a long history of delaying, and then delaying again, its products. One could hold out hope that this is to keep the products in a controlled test environment long enough to eliminate all bugs and produce a better product. However, Microsoft also has a history of producing products that are bug-ridden and full of annoyances and incompatibilities with previous releases. That said, it also has a history of increasing the stability and usability of its products over time with patches and service packs. Windows XP SP2 is the most stable OS I have ever used. Microsoft is not alone in the software world when it comes to releasing buggy code; it just seems to get the most scrutiny. I think that Windows Server 2007, whenever it is released, will have its share of the expected issues that people have come to know and associate with Microsoft. However, I think that future patches and updates will turn it into a server OS that will be a proud successor to Windows 2003. Should Microsoft delay it? History says it won’t matter, but there is a first time for everything.
-VMware expert Andrew Kutz via email
According to the article in question, it sounds like the delay is not that significant. My theory is that this was done to allow some synchronization of code between it and what’s going into the updated Server Longhorn beta. Another thing comes to mind when you said “controlled test environment.” There are a lot of things in Virtual Server that are going to become key technologies in Longhorn Server, and the better they have those things working correctly ahead of time, the easier it’ll be to build the bigger and more important infrastructure of Longhorn around them. VS 2005 R2 SP1 has been hanging over people’s heads for a while, and I suspect they wouldn’t do that unless a) they wanted to make sure installing it didn’t sour people’s experiences with VS 2005 in general, and b) it contained some key things that were going to be rolled forward into Longhorn, and they wanted to make sure those pieces worked properly right now.
-Microsoft Virtual Server expert Serdar Yegulalp via email
You can read virtualization management expert Anil Desai’s thoughts on his first blog post, Viridian – Better Late than… Early?
Unwilling to cede the market to VMware, Virtual Iron is getting into the VDI (virtual desktop infrastructure) game by partnering with Provision Networks, maker of the Virtual Access Suite (VAS) connection broker software.
The problems with traditional fat clients that Virtual Iron and Provision are trying to eliminate are numerous: they consume a lot of power, they’re a security and regulatory problem, and “they’re an absolute nightmare to administer,” said Mike Grandinetti, chief marketing officer at Virtual Iron. Server-based hosted desktops, however, suffer from none of those problems, and benefit from additional high availability features. “We’re believers that [VDI] could absolutely take a large and rapidly growing server virtualization market and take it to a different level,” Grandinetti said.
As is often the case with Virtual Iron announcements, this one centers heavily on price. Rather than pricing per node, Virtual Iron and Provision are opting to charge $120 per desktop. That compares quite favorably with the VDI prices offered by VMware and Hewlett-Packard, which don’t include a connection broker. Compared with traditional desktops, meanwhile, a Virtual Iron/Provision VDI bundle has less than half the total cost of ownership, according to Gartner data.
Grandinetti claims the two companies are working on several proofs of concept with companies whose total desktops number in the tens of thousands.
The big news today is that the Hypervisor for Windows Server Longhorn (codenamed Viridian) is being delayed. See the Windows Server Division Blog for the main details. On the surface it seems like this is bad news. Many of us would like to move to the new Hypervisor ASAP. From a strategic standpoint, I think it’s most important for Microsoft to ship a rock-solid first version of the Longhorn Server Hypervisor. So is Microsoft doing the right thing?
When I first heard about the goals for Viridian (which were then pretty closely guarded), I thought that this “feature” was enough to warrant a new release of the Windows Server platform (post-Longhorn Server). At the very least, it could have commanded a “Virtualization Edition” of the product. It’s no small architectural task to provide for the dynamic addition of hardware, support for large parallel processing, dozens of VMs, etc. Goals include the ability to easily transition from at least Microsoft Virtual Server (and, perhaps, competing solutions – the names of which may or may not begin with the letters “V” or “X”). And, it’s a new product – this isn’t a rebranding of another platform.
Microsoft states that scalability (and related testing) is a major reason for the delays. Given that quality (measured by reliability, stability, performance, etc.) is not up for compromise, I would have preferred to see Microsoft do a smaller initial release of the Hypervisor. Perhaps an initial “lite” version that focused on the architecture of the product would have been a better approach. While running on 64-CPU machines is definitely a plus, I’d rather have a more scaled-down version of the Hypervisor available earlier. That would allow for determining migration paths, understanding capacity planning, and performing initial testing. With that available, Microsoft could then focus on scalability to very large environments.
Regardless of the release timing, I think we’ll all eventually look back and say that Viridian was worth the wait. Until then, we’ll have to rely on the many first- and third-party products that are available today.