Server Farming

July 14, 2009  3:21 PM

IBM has most energy-efficient supercomputers

Mark Fontecchio

The fifth edition of the Green500, the twice-yearly ranking of the most energy-efficient supercomputers, was published recently. IBM took the top four spots and 18 of the top 20. The list is dominated by BladeCenter clusters and the Power-based Blue Gene supercomputer.

The list is compiled and sorted by megaflops per watt. The winner was a BladeCenter QS22 cluster at the University of Warsaw in Poland, running at about 536 megaflops per watt. Incidentally, the same cluster is ranked 422nd on the TOP500 supercomputer list.
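The ranking metric itself is simple arithmetic: sustained Linpack performance (Rmax) divided by total power draw. A minimal sketch, using illustrative numbers only (the Warsaw cluster's actual Rmax and wattage aren't given above):

```python
def green500_efficiency(rmax_gflops, power_kw):
    """Megaflops per watt, the metric the Green500 list is sorted by."""
    mflops = rmax_gflops * 1000.0
    watts = power_kw * 1000.0
    return mflops / watts

# Hypothetical machine: ~18.3 teraflops sustained on ~34.1 kW
# lands near the winning cluster's ~536 MFLOPS/W.
print(round(green500_efficiency(18_300, 34.1), 1))
```

Note the metric rewards efficiency, not raw speed, which is why the list's winner can sit way down at 422nd on the performance-sorted TOP500.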

Perhaps most impressive of all is a BladeCenter QS22/LS21 cluster run by the U.S. Department of Energy: it is #1 on the TOP500 list and #4 on the Green500 list.

July 14, 2009  9:17 AM

Report: Intel to release new server chips in early August

Mark Fontecchio

According to a report in Digitimes, which cites anonymous industry sources, Intel will launch four more Xeon server chips early next month.

The quad-core chips include processors in the 5500 (i.e., “Nehalem”) and 3000 Xeon lines. Let’s take a look:

  • W5590: You can see here that the quad-core W5580 runs at 3.2GHz with 8MB of cache and consumes 130 watts. Expect the W5590 to have a slightly faster clock speed, as that seems to be the pattern.
  • L5530: The L5520 is a lower-power quad-core chip (60W) running at 2.26GHz with 8MB of cache. Again, expect a slightly higher clock speed.
  • W3580: From the specification page for the 3000 series, you can see that the W3570 is a quad-core chip running at 3.2GHz with 8MB of cache and consuming 130 watts.
  • W3550: The W3540 is a quad-core chip running at 2.93GHz, 8MB cache, 130W.

A couple of months ago, Intel showed off the so-called Nehalem-EX chip, an eight-core follow-on to the Xeon 7400 series that is expected out later this year.

June 29, 2009  6:59 PM

Antitrust regulators not giving Oracle-Sun early clearance

Matt Stansberry

According to Reuters, U.S. antitrust regulators aren’t giving early clearance on Oracle’s plan to buy Sun Microsystems, due to what Oracle lawyers are calling a narrow, technical matter.

The article hinted that the problem may center on Oracle controlling Sun’s Java technology, which Oracle competitors such as IBM rely on. But Richard Jones, vice president of data center services at The Burton Group, said that the way the Java licensing is written, IBM won’t have much trouble.

“And if Oracle starts changing the Java license, they shoot themselves in the foot,” Jones said. “It’s an open field day for Microsoft’s .Net if Oracle screws around with Java licensing.”

According to Jones, antitrust regulators are taking a harder look at the acquisition because Sun employees rallied some Sun customers to file anti-competitive business practice complaints with the regulators.

“A lot of this has to do with the rumors coming out of Sun,” Jones said. “We’ve heard the advanced research team in StorageTek has been put on hold, and the SPARC design team has been completely canceled. You can imagine that people looking at their livelihood disappearing will take action.

“When the Department of Justice sees something like this, they take a little extra time investigating. But it’s individuals worried about their jobs, not a truly anti-competitive situation,” Jones said. “Until Hewlett-Packard bought the consulting firm EDS, IBM was the only soup-to-nuts IT provider in town. I see this acquisition as an opportunity for more competition.”

For more info, check out Jones’ predictions for Sun’s assets and customers.

June 22, 2009  3:33 PM

HP users mostly apathetic to Oracle-Sun merger

Mark Fontecchio

LAS VEGAS – Most of the Hewlett-Packard users I spoke to at the HP Technology Forum last week were apathetic about Oracle acquiring Sun, and didn’t foresee any competitive advantage coming out of it.

There were exceptions. Ernest Cody, a senior systems architect at Raytheon, said his company uses a lot of both HP and Sun gear. “We know that if we have a problem with one, we can move from one to another.” Cody doesn’t foresee having that same level of flexibility once Sun becomes part of Oracle.

On the plus side, Cody said Raytheon runs “pretty much everything Oracle sells,” and so he sees some potential benefits if Oracle decides to tune Solaris so that it works better with its database products.

On the reseller side, the sentiments were similar. John Maus, a high availability and storage architect for HP reseller Systems Technology Associates, Inc., said he doesn’t deal with his customers’ Sun installations, if they have them. Still, he said, his customers do work a lot in Java, and “we don’t want that restricted by Oracle at all.”

But the merger is affecting decisions, even of those users not currently running Sun hardware or software. Tony Bergen, the director of technology solutions for The North West Company, a food retailer based in Winnipeg, is looking to replace 25 PA-RISC servers. Normally he might consider Sun and Solaris as a third option behind HP and IBM. But not now.

“We don’t have any internal experience with Solaris, and there’s that added uncertainty in the Oracle acquisition on what the future is,” he said.

June 18, 2009  12:01 AM

Powering up your blade servers: Some interesting numbers

Mark Fontecchio

LAS VEGAS — Today I attended a session at the HP Technology Forum on powering blade servers, and heard some interesting data points on how much system configuration matters to power consumption.

Tony Harvey, HP product manager for blade server power and cooling, gave one particularly interesting example. In it, he compared two configurations:

  • Configuration 1: 16 blade servers using Intel quad-core 2GHz E5335 processors, 8 GB RAM (8 x 1GB), an additional dual-port Ethernet, a Cisco Ethernet switch and a Brocade FibreChannel switch.
  • Configuration 2: 16 blade servers using Intel quad-core 2GHz L5335 processors, 8 GB RAM (2 x 4GB), an additional dual-port Ethernet, an HP Ethernet switch and an HP Fibre switch.

On the surface, these look like practically identical configurations. And yet the first configuration uses 30% more power at 100% utilization, and about 15% more power when idle. Why? Because of the subtle differences:

  • Both use a quad-core Intel processor running at 2GHz, but the L5335 uses less power than the E5335.
  • Using two 4GB DIMMs instead of eight 1GB DIMMs is more efficient.
  • Harvey said using the HP switches is more efficient than the Cisco and Brocade versions.

It doesn’t end there. Harvey continued on, saying the application you run also affects your blade servers’ power consumption. HP tested a configuration of 16 blades running various applications, and found that power consumption varied from about 4 kilowatts for a rendering application to more than 7 kilowatts for Linpack, one of the most power-hungry applications there is.

Earlier in the presentation, Harvey mentioned that a full rack of blade servers — that’s four chassis — could consume as much as 35 kilowatts. I asked him afterward what that configuration would include:

  • 32 BL2x220c server nodes per chassis/enclosure
  • 32 GB RAM per server
  • Infiniband in every blade
  • 2 Infiniband switches
  • Run Linpack and smoke that baby! (Note: Harvey never said “smoke that baby!” That was me.)
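Back of the envelope, Harvey's worst case works out to roughly 273 watts per server node. A quick sketch (the per-node figure is my arithmetic, not Harvey's):

```python
RACK_KW = 35                 # Harvey's worst-case full rack
ENCLOSURES_PER_RACK = 4      # four chassis per rack
NODES_PER_ENCLOSURE = 32     # double-density blade nodes

nodes = ENCLOSURES_PER_RACK * NODES_PER_ENCLOSURE
watts_per_node = RACK_KW * 1000 / nodes
print(nodes)                  # 128 server nodes in the rack
print(round(watts_per_node))  # ~273 W per node under Linpack
```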

Is that a realistic production configuration? Harvey said no. The only time he’s ever seen that kind of configuration is when a company like HP is trying to score well on the Linpack benchmark, or when they’re doing a burn-in to stress-test the systems.

June 16, 2009  5:56 PM

Sun Microsystems scraps Rock project as Oracle ownership nears

Bridget Botelho

Sun Microsystems scrapped development of the 16-core “Rock” UltraSPARC processor, which the company once hoped would help it compete against chips from IBM and Intel, according to the New York Times.

This news surfaces as the Oracle Corp. takeover of Sun nears. Many IT pros have expressed concerns about the future of Sun technologies under Oracle, and one could assume this will be the first of many Sun projects and products to hit the chopping block once Oracle gets hold of Sun.

Sun invested more than five years and billions of dollars in the project and had hoped to use the chips in its high-end systems. Rock was supposed to start shipping in 2008 but was delayed several times due to glitches, according to the Times article.

Sun has not confirmed that the Rock project has been cancelled, but this wouldn’t be the first time Sun aborted a chip after years of work and billions of dollars in research and development. Back in 2004, Sun ditched the UltraSPARC V and Gemini chips, and in 2006 it killed the “Serrano” chip to focus its efforts on the Niagara and Rock chips.

Sun will come under Oracle leadership once the acquisition deal closes, sometime this summer.

June 4, 2009  4:15 PM

Cisco’s timing for Unified Computing System – a tad off?

Bridget Botelho

Does anyone else think it is a bad idea for Cisco Systems to enter the server market when the industry is experiencing the most significant year-over-year sales decline in history?

Worldwide server revenue declined 24 percent in the first quarter of 2009, and shipments dropped 24.2 percent compared with the first quarter of 2008. No one went unscathed; all of the top five global server vendors – IBM, HP, Dell, Sun and Fujitsu/Fujitsu Siemens – saw double-digit revenue declines for the quarter, according to Gartner, Inc.

Worldwide: Server Vendor Shipment Estimates, 1Q09 (Units)

[Table: 1Q09 market share (%), 1Q08 market share (%) and 1Q08-1Q09 growth (%) by vendor, including Dell Inc., Sun Microsystems, Fujitsu/Fujitsu Siemens and other vendors; the figures were not preserved.]

Source: Gartner (June 2009)

Meanwhile, Cisco is marketing the hell out of its upcoming Unified Computing System (UCS), which is rumored to start shipping in a couple of weeks. The company has been offering tidbits of information about UCS through webcasts for months to build anticipation for the system. For instance, yesterday Cisco announced it would offer rackmount servers in addition to blades.

But once the drumroll for UCS dies and the system actually ships, who’s buying?

I would love to be a fly on the wall in a Cisco executive meeting to hear their strategy with UCS. Do Cisco executives really think this is a good time to introduce an entirely new server system? And are they arrogant enough to think they can beat IBM, Dell and HP at their own game?

June 3, 2009  10:22 PM

New open source IT management tool: Lighter-weight than Nagios, more granular than Cacti

Matt Stansberry

Theo Schlossnagle, CEO and founder of managed services and hosting provider OmniTI, hopes to solve some of the common complaints with open source systems management tools with his company’s new tool Reconnoiter.

OmniTI manages 15 data centers with heterogeneous architectures for multiple clients, and Schlossnagle said he’s used every tool under the sun: Zenoss, Tivoli, OpenView, Nagios, Cacti and more.

The recurring problems Schlossnagle found with open source management tools — scaling issues, repeated effort for configuration management, and requirements for powerful server infrastructure — frustrated his team to the point where OmniTI built its own toolset for monitoring metrics, graphing data for capacity planning, and post-mortem analysis of problems.

The tool uses an agent-based system: users install a noit daemon in each important portion of the infrastructure and configure it to monitor different services. The software is written in C; plug-ins can be written in C or Lua. Reconnoiter uses SNMP, ICMP and HTTP, among other protocols.

The company offers it for free under a BSD license on its website.

According to Schlossnagle, challenges using the open source management software Nagios were a major driver for developing the Reconnoiter tool.

“Nagios is quite inefficient in the way it collects data,” Schlossnagle said. “It follows the age-old Unix philosophy that you use the right tool for each job. This means that Nagios ends up launching thousands of small applications to test things. While the lots of little tools philosophy is often convenient, it heavily conflicts with high performance, low latency requirements. Often purpose built tools need to take over in that role — that is what Reconnoiter is.

“I have to buy a big, expensive box to run Nagios — I don’t with the Reconnoiter agents,” Schlossnagle continued. “Nagios does fault detection, but not trending — which means I have to double my efforts by configuring both Nagios and another tool.”
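Schlossnagle's efficiency complaint is about process creation: Nagios forks an external plugin for every check, while a long-lived agent runs checks in-process. A toy Python sketch of the difference (the "check" here is a trivial stand-in, not a real Nagios plugin):

```python
import subprocess
import sys
import time

def check_via_plugin():
    """Nagios-style: fork/exec an external program for each check."""
    result = subprocess.run(
        [sys.executable, "-c", "print('OK')"],
        capture_output=True, text=True,
    )
    return result.stdout.strip()

def check_in_process():
    """Daemon-style, as a resident agent would do: no new process."""
    return "OK"

N = 20
start = time.perf_counter()
for _ in range(N):
    check_via_plugin()
forked = time.perf_counter() - start

start = time.perf_counter()
for _ in range(N):
    check_in_process()
in_proc = time.perf_counter() - start

# Process creation dominates: the forked version is far slower
# even though both "checks" produce the same result.
print(forked > in_proc)
```

Multiply that per-check overhead by thousands of checks per polling cycle and the "big, expensive box" requirement follows.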

Schlossnagle also said Nagios’ monitoring was centralized, so it was difficult to add checks in the field, and configurations were hard to track as new services and machines were deployed.

The Reconnoiter tool polls systems to see if they’re healthy in a similar way that Nagios does, but of the open source-commercial hybrid products that are out there, Schlossnagle said the product is most similar to Hyperic.

“Hyperic takes a more holistic view of monitoring in that it includes both trending and fault detection. Reconnoiter takes this approach as well.”

The Reconnoiter tool is also designed to help IT managers analyze Web traffic events in a very granular way, even ones that happened in the distant past. “RRDtool is specifically designed to retain data within size constraints. You define how long you wish to retain data at various granularities,” Schlossnagle said. “In most systems that use RRDtool (like Cacti), recent data (like one week) is retained at five-minute granularity, while data older than a week is reduced to a granularity of six hours. So, if you want to compare a spike today to one from six months ago, it is very likely that you have a defeating skew: 288 five-minute intervals for ‘today’ and four six-hour intervals for the day in question six months back.”
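The skew Schlossnagle describes is easy to check: one day retained at five-minute granularity versus the same day kept at the coarser, aged-out resolution:

```python
DAY = 24 * 3600          # seconds in the day being compared
FIVE_MINUTES = 5 * 60    # recent-data granularity
SIX_HOURS = 6 * 3600     # aged-data granularity

print(DAY // FIVE_MINUTES)  # 288 intervals for "today"
print(DAY // SIX_HOURS)     # 4 intervals for the day six months back
```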

Reconnoiter approaches this by taking the stance that storage is cheap. “There is no excuse for throwing any of that data away. I’ll go buy a terabyte of disk. I’m not going to search back 12 months very often, so it doesn’t need to be fast, but I need to be able to do it.”
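The storage-is-cheap stance holds up to rough arithmetic. Assuming 30-second samples, 16 bytes per sample, and 50,000 metrics (all three figures are my assumptions for illustration, not OmniTI's numbers):

```python
SECONDS_PER_YEAR = 365 * 24 * 3600
SAMPLE_INTERVAL = 30      # seconds between samples (assumed)
BYTES_PER_SAMPLE = 16     # timestamp + value (assumed)
METRICS = 50_000          # "tens of thousands" of metrics (assumed)

samples_per_metric = SECONDS_PER_YEAR // SAMPLE_INTERVAL
total_bytes = samples_per_metric * BYTES_PER_SAMPLE * METRICS
print(samples_per_metric)            # 1,051,200 samples per metric
print(round(total_bytes / 1e12, 2))  # ~0.84 TB for a full year
```

Under those assumptions, a full year of raw data fits in under a terabyte of disk.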

According to Schlossnagle, watching a spike as it happens gives you a better understanding of how traffic patterns shift during a major event, for example, a website being picked up by a large social media site like Digg.

“If I’m looking at that spike on my systems at thirty second granularity, I can tell you how fast that spike happened. If I use the RRD tool with Nagios and Cacti, I can only see that day at that level of granularity for about six hours.”

This tool can help IT managers plan for capacity during spike scenarios and compare to events in the past.

“Our primary goal was to make our lives easier. This tool replaces an enormous amount of headache at OmniTI,” Schlossnagle said. “Making it a successful open source tool makes it even easier. One of the short-term goals is to have it adopted in other places and get the tool deployed in large environments.”

Today, OmniTI is slowly introducing Reconnoiter to its managed services clients. The company is currently monitoring tens of thousands of metrics across five data centers, approaching a terabyte of metric data.

Unlike Hyperic or Zenoss, OmniTI does not plan to develop a commercial version at this time. “An open source approach with a strong community is better,” Schlossnagle said. “I don’t want to be in the tools business. If a company wants to give us money for support and indemnify them with IP rights, we won’t turn away that money.

“The key difference being the product we deliver, support and indemnify, would be the same product, not the one that has special neat features that paying customers get.”

You can give Reconnoiter a test run via OmniTI’s website.

May 29, 2009  4:47 PM

Server sales tank as folks stretch hardware life cycles

Matt Stansberry

According to IDC’s recent server report, server sales are plummeting amid the slumping global economy. This is the lowest quarterly server revenue since the analyst firm began tracking the server market on a quarterly basis 12 years ago.

The commodity server market took a big hit in general, and Dell specifically had a bad start to 2009: its server revenue declined 31.2% year over year. Even blade servers slumped, for the first time since IDC began tracking blades.

According to our 2009 data center economy survey, IT pros are stretching server life cycles and putting off buying new hardware.

Servers are typically replaced every three years, but two-thirds of IT shops have extended the production life of server deployments in 2009. More than 35% say they’ll keep servers in production for six months to a year longer, and 34% say they’ll extend server life cycles by two years.

May 26, 2009  3:49 PM

Quest for power efficient servers leads vendors to PC chips

Bridget Botelho

What was once a battle over who could offer the fastest, most powerful server has turned into a competition over which servers can operate on the fewest watts, leading vendors to put PC chips in x86 systems.

"I just can't do it captain, I don't have the power!"

This trend is the result of data center power constraints; IT folks simply don’t have abundant power to supply their servers anymore, but they still need to add more compute capacity somehow.

So server vendors are on the hook to offer systems that operate on very little power, and every vendor wants to claim “the world’s most efficient” server. But it appears they’ve hit the limit of what today’s x86 server chips can offer, so some vendors have moved on to PC chips.

For example, last week Dell launched new servers with Via Nano processors from Taiwan-based Via Technologies, which the companies say offer the best power efficiency of any processor on the market today. Before that, Supermicro launched servers using Intel’s Atom notebook processors.

My response was, really? PC chips in servers? Sounds great if you only plan to run Tetris on your servers, because when you choose lower watt chips, you trade off performance.

But it appears certain markets are willing to make that compromise. According to Dell, their new Via Nano-based systems are designed for “hyper-scale customers in the search engine and Web hosting businesses…who typically choose general purpose 1U servers or low-end tower servers, and make compromises around the density, power, and/or manageability aspects associated with these alternatives.”

It will be interesting to see how far these PC-chip servers go, who adopts them and how Intel and AMD respond to the Via Nano, product-wise. Will Intel and AMD try to leapfrog Via’s Nano chip with something that consumes even less power? I’m going to take a guess and say, hell yeah. But really, how low can they go?
