Server Farming


May 26, 2009  2:28 PM

Forty-foot Unix history poster will dominate your office

Matt Stansberry

For a mere $340, you can send a major message to your office-mates about how much you love Unix. A new poster from Leighton Jones at Calgary-based Floating Point Digital Images depicts Eric Levenez’s diagram of the Unix operating system with fractal art by Alan Tenant. You can download the Unix history chart for free here.

Photo of the forty-foot Unix banner by Floating Point Digital Images.

Why buy this tear- and weather-resistant 10 lb. poster? According to the purveyors, “it could be as simple as the desire to wrap yourself several times in its informational goodness that documents the history of 1000+ versions of more than 150 different Unixes.”

Found this link at The Register.

May 21, 2009  6:21 PM

Shocking news of the day: Intel delays Itanium again

Mark Fontecchio

Here’s a release I got over email:

Here is an update to the Tukwila Itanium® schedule.  As you know, end users choose Itanium-based servers for their most mission-critical environments, where application scalability is paramount.  During final system-level testing, we identified an opportunity to further enhance application scalability.  As a result, the Tukwila processor will now ship to OEMs in Q1 2010.

In addition to better meeting the needs of our current Itanium customers, this change will allow Tukwila systems a greater opportunity to gain share versus proprietary RISC solutions including SPARC and IBM POWER.  Tukwila is tracking to 2X performance versus its predecessor chip. This change is about delivering even further application scalability for mission-critical workloads.  IDC recently reported that Itanium continues to be the fastest-growing processor in the RISC/Mainframe market segment.

Illuminata analyst Gordon Haff has some details on the history of Itanium delays (yes, just Itanium delays have a history all their own) and some analysis.


May 13, 2009  7:40 PM

Intel breaks another record; biggest anti-trust fine ever

Bridget Botelho

Intel just broke another record – this time for getting the largest anti-trust fine ever.

The European Commission (EC) slapped Intel with a $1.45 billion fine for violating EC Treaty antitrust rules by engaging in anti-competitive practices that exclude competitors from the market.

The European Commissioner for Competition Policy stated that, throughout the period covered by the decision, Intel held at least 70% of the worldwide market in x86 server CPUs and used anti-competitive practices to hold that position.

“The fact that Intel had such a large market share is not a problem in itself. What is a problem is that Intel abused its dominant position. Specifically, Intel used illegal anti-competitive practices to exclude essentially its only competitor, and thus reduce consumer choice, in the worldwide market for x86 chips,” the commissioner, Neelie Kroes, told press. “Given that Intel has harmed millions of European consumers by deliberately acting to keep competitors out of the market for over five years, the size of the fine should come as no surprise.”

The EC found that Intel gave rebates to computer manufacturers that bought all, or almost all, of their x86 CPUs from Intel. Intel also made direct payments to a major retailer on the condition that it only sell computers with Intel x86 CPUs.

Intel also paid computer manufacturers to halt or delay the launch of specific products containing competitors’ x86 CPUs and to limit the sales channels available to these products, according to the EC.

Intel also faces another anti-trust lawsuit, filed by AMD for similar anti-competitive practices in the U.S. The court date for that trial is in February 2010.

As expected, Intel CEO Paul Otellini denied any wrongdoing and is appealing the decision. In a statement, Otellini said, “As we go through the appeals process we plan to work with the Commission to ensure we’re in compliance with their decision… there should be no doubt whatsoever that Intel will continue to invest in the products and technologies that provide Europe and the rest of the world the industry’s best performing processors at lower prices.”

And some U.S.-based legal pros issued statements today saying the EU’s fine was far too harsh.

“The EC’s use of huge fines against market-leading firms – fines calculated from a firm’s world-wide sales, not from harm to European consumers – discourages aggressive competition that benefits consumers,”  Ronald A. Cass, Chairman, Center for the Rule of Law, said in a statement. “Consumer harm should be the concern for competition law, and here instead consumers saw sharp declines in cost and increases in product quality – even Intel’s complaining rival, AMD, enjoyed historic success during the period it claims Intel’s actions foreclosed competition.”

But the manufacturers concerned by Intel’s conduct in the EC case – Acer, Dell, HP, Lenovo and NEC – aren’t playing the violin for Intel right now, and reports say Intel’s closest competitor, AMD, is celebrating the EC’s decision.

And the EC’s commissioner doesn’t seem to feel bad about the massive fine either. In her closing statement to the press, she drew attention to Intel’s latest global advertising campaign, “Sponsors of Tomorrow,” in which Intel invites visitors to add their ‘vision of tomorrow’ to its website.

“Well, I can give my vision of tomorrow for Intel here and now: ‘obey the law,’” Kroes said.

As large a fine as $1.45 billion is, it’s really a drop in the bucket for Intel; the company reported $7.1 billion in revenue for the first quarter of 2009 alone, so I doubt this will have any effect on its ability to churn out CPUs on the tick-tock cycle. The real issue for Intel is the tarnish the EC’s decision puts on its brand, and the way the case pulls focus away from its technology – two side effects that are sure to help the competition gain some ground in the CPU market.
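To put the “drop in the bucket” claim in perspective, here’s a quick back-of-the-envelope comparison of the fine against Intel’s quarterly revenue, using only the two figures cited above:

```python
fine = 1.45e9       # EC antitrust fine, in dollars
q1_revenue = 7.1e9  # Intel's reported Q1 2009 revenue, in dollars

# The fine amounts to roughly a fifth of a single quarter's revenue.
share = fine / q1_revenue
print(f"fine is {share:.0%} of one quarter's revenue")
```

So the fine is a large number in absolute terms, but only about 20% of what Intel books in a single quarter.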


May 12, 2009  6:48 PM

Rackable acquires Silicon Graphics, takes SGI name

Bridget Botelho

Rackable Systems completed its acquisition of Silicon Graphics, Inc. on Monday and oddly enough, Rackable will adopt SGI as its global name and brand, instead of the other way around.
Rackable closed the transaction to acquire the debt-ridden Silicon Graphics (SGI) for $42.5 million in cash on May 8 and the company said it will change its name to Silicon Graphics International – or SGI – but will keep the Rackable Systems product line and ticker symbol (RACK) the same.

I wonder if abandoning the Rackable brand in favor of SGI is a good idea. Sure, SGI was hugely successful in the 1980s and is still a more recognizable brand than Rackable because of its legacy, but SGI is also a failing brand that filed for Chapter 11 bankruptcy in 2006 and again in April 2009 due to unmanageable amounts of debt.

In fact, I liken SGI to the captain of the high school football team; you know the type – a leader in its time, popular, and admired by all, but 20-some years later? Balding, broke, and holding desperately to what used to be.

Maybe I’m being too critical of SGI’s brand, but others question Rackable’s decision to take the SGI name as well. SGI is “well known, sure. But more than a bit tarnished and not descriptive of Rackable’s business,” said Illuminata analyst Gordon Haff.

Coincidentally, Sun Microsystems Inc. was founded the same year as SGI – 1982 – and it, too, is being acquired this year, by Oracle Corp. (Oracle won’t be dropping its name for Sun Microsystems, though.)

Either way, Rackable now has a much larger portfolio of high performance computing products, with SGI’s x86 cluster offerings, shared memory clustered compute products, scalable data center and storage technologies, modular data centers, data management software, HPC tools and visualization technologies.

SGI will maintain its corporate headquarters in its current Fremont, California facility, with offices around the world, and the new management team will have senior executives from Rackable Systems and the former SGI.


April 30, 2009  1:10 PM

IBM paying Sun users to abandon SPARC following rejection

Bridget Botelho

Maybe Sun Microsystems shouldn’t have said no to IBM’s acquisition offer, because it appears Big Blue doesn’t like rejection.

IBM doubled the dollar amount for its Power Rewards migration services specifically for Sun customers that switch from Sun SPARC, UltraSPARC or SPARC64 processor-based servers to IBM Power Systems.

Perhaps IBM is just taking the opportunity to catch Sun defectors from the upcoming Oracle takeover, but when IBM talked about acquiring Sun, there was speculation that the company would kill SPARC systems, which compete with IBM’s Power Systems.

So now that IBM can’t outright squash Sun’s SPARC systems, the company hopes to get rid of SPARC another way: by giving Sun users an offer they can’t refuse.

IBM increased the money it offers to switch from competitive systems to IBM through its Power Rewards Program from $4,000 to $8,000, based on the number of SPARC, UltraSPARC or SPARC64 microprocessors used in each Sun or Fujitsu server.

Customers can use the migration services to move workloads running on Sun SPARC-based servers to IBM’s AIX, Linux or i operating environments.

For example, a Sun customer using a Sun Fire V890 system with eight microprocessors (or “cores”) would now receive $64,000 in migration services, up from $32,000, to move the workloads to an IBM Power system.
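The rebate scales linearly with processor count, so the V890 example above is just the per-core rate times eight. A minimal sketch of that math (the per-core dollar figures come from IBM’s announcement; the function name is my own):

```python
def power_rewards_credit(cores: int, per_core_credit: int = 8000) -> int:
    """Migration-services credit under the doubled Power Rewards rate.

    The credit is granted per SPARC, UltraSPARC or SPARC64 processor
    in the Sun or Fujitsu server being replaced.
    """
    return cores * per_core_credit

# An eight-core Sun Fire V890, as in the example above:
print(power_rewards_credit(8))         # 64000 at the new $8,000/core rate
print(power_rewards_credit(8, 4000))   # 32000 at the old $4,000/core rate
```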

Since Oracle has not addressed its plans (or returned calls and emails to answer questions) about the fate of Sun’s hardware and SPARC systems, this incentive will probably work quite well.


April 17, 2009  2:24 PM

Boston Marathon website prepares for traffic surge

Bridget Botelho

About a million people will visit the Boston Marathon website on Monday to check out the 113th annual race, so the IT pros supporting the website have been hard at work these past few weeks making sure the site doesn’t crash that day.

The Boston Athletic Association (BAA)’s technical director, John Burgholzer, spends the three weeks prior to the race building the infrastructure at a colocation facility in Massachusetts to support the BAA’s website and other technologies surrounding the marathon, like the new AT&T Athlete Alert system, which sends text messages to people tracking runners whenever one of their runners hits a checkpoint.

Burgholzer, who owns a technology consultancy in North Reading, MA called Information Overload, uses Hewlett-Packard (HP) blade servers to run everything. Interestingly enough, he doesn’t go the virtualization route, which would probably be quicker and easier, because he doesn’t know enough about the technology or trust it to handle the surge of users during race time.

“We haven’t tried out virtualization at all and I’m not sure we would. We get about 50,000 to 60,000 concurrent connections at peak time during the race, and I’m not sure virtualization would work for us performance wise,” Burgholzer said.

I’ve heard this apprehension about virtualization before, so it appears the technology is not as pervasive as companies like VMware would have us believe – mainly because guys like Burgholzer are far too busy to learn an entirely new technology, especially when their traditional approach works just fine for them.

So, Burgholzer adds seven HP ProLiant blade servers to the two that typically run the website, for a total of nine blades running Windows 2003. HP blade servers were the right choice for the BAA because they require little space and are easier to manage than rack-mount systems. Plus, the organization had been using HP gear even before Burgholzer came on board nine years ago, and HP has always been “extremely helpful” at race time, he said.

Before the BAA moved from “a bunch of pizza boxes” to HP c-class blade servers in 2007, cabling and management “was a nightmare,” Burgholzer said. “We would build the data center up before the race using rented systems and people didn’t really care how it was set up, so we had a rats’ nest of cables in the back of the rack,” he said.

By switching to blade servers, cabling is no longer an issue; he just slides new blades into the chassis as needed, and the management software makes configuration easy, he said. The chassis has Gigabit Ethernet connections on both the front and back ends, which he says are plenty, and he uses F5 Networks technology for load balancing.

With all of that, he’s confident there won’t be any issues with the website on Monday – knock on wood. “We have a pretty well-tuned website now; there was a bit of a bandwidth problem in 2007 and at the peak of the race we have seen the website running slower, but we have gotten it down now,” he said.

The day after the race, Burgholzer will start looking for ways to improve the website and new features to add for next year.


April 2, 2009  2:40 PM

Google’s server recipe no longer secret

Bridget Botelho

Looks like Google has finally spilled the beans on its server design, which the company has kept secret for years.

According to a report, Google let everyone see its design at a conference this week. The system was a 3.5-inch-thick 2U box with two processors, two hard drives, and eight memory slots mounted on a motherboard built by Gigabyte. Google uses x86 processors from AMD and Intel, and each server carries a 12-volt battery as backup in case there’s a problem with the main source of electricity, according to the report.

Video: http://www.youtube.com/v/xgRWURIxgbU

We already knew that Google uses 12-volt power supplies for its servers, which the company says is more than 90% efficient compared with typical server efficiencies at or below 70%.
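The efficiency gap matters because wall-power draw is the load divided by the power supply’s efficiency. A rough comparison using the two efficiency figures above (the 200 W per-server load is a hypothetical number for illustration):

```python
def wall_power(load_watts: float, efficiency: float) -> float:
    """Input (wall) power needed to deliver load_watts through a PSU."""
    return load_watts / efficiency

load = 200.0  # hypothetical DC load per server, in watts

google = wall_power(load, 0.90)   # >90% efficient supply: ~222 W from the wall
typical = wall_power(load, 0.70)  # ~70% efficient supply: ~286 W from the wall
print(f"saved per server: {typical - google:.0f} W")  # roughly 63 W per server
```

Multiply that difference across hundreds of thousands of servers and the savings in power and cooling get very large very fast.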

For the most part, Google runs these servers not out of brick-and-mortar data centers, but from the shipping containers that have become all the rage with vendors like Sun Microsystems, IBM, HP and Rackable Systems in recent years. Google was putting data centers in mobile containers long before those guys, and typically uses standard 1AAA shipping containers to house around 1,160 servers each, according to the report.


March 24, 2009  8:08 PM

Intel set to release “Nehalem” Xeon processors

Bridget Botelho

Intel may launch its next generation multi-core Xeon processors, code-named Nehalem, on Monday.

The company sent out invitations to a live webcast on March 30 “for the launch of a groundbreaking new server architecture.” If that doesn’t give it away, some server vendors have already announced products based on the Nehalem processors, including Cisco, which will use the Intel Xeon CPUs in the blade servers of its upcoming Unified Computing System. Rackable Systems already introduced CloudRack systems based on Nehalem, and Dell is expected to introduce Nehalem-based systems this week.

In earlier disclosures about Nehalem chips for x86 servers, Intel said the processor will have two, four or eight processing cores and provide better scalability than previous generations. It will also have scalable cache sizes and simultaneous multithreading, or Hyper-threading, which is already available on other Xeon processors.

While Intel prides itself on introducing multi-core processors at a faster pace than competitor AMD, some of the most significant enhancements to the new Xeon processor have existed in AMD chips for years.

For example, one of the major changes with Nehalem is the integration of the memory controller into the CPU. This replaces the legacy front-side bus, a known culprit in traffic bottlenecks. AMD has been offering an integrated memory controller – called Direct Connect Architecture – in its Opteron CPUs for years now.

Another feature in Nehalem is the QuickPath Interconnect (QPI), which will give the chip faster access to a lot more bandwidth. This feature is similar to AMD’s HyperTransport technology, which has been around for a number of years as well.

That said, by adding QPI and an integrated memory controller, Nehalem will have access to a lot more bandwidth than its predecessors without relying on tons of cache memory, according to an Ars Technica report on Nehalem.

More importantly, what all of this means for end users is significantly better performance for applications that can take advantage of multithreading and multiple processing cores.


March 20, 2009  3:27 PM

Is open source affiliation keeping upstart systems management tools out of the enterprise?

Matt Stansberry

According to IT management blogger John M. Willis, upstart systems management vendors like Zenoss and Hyperic need to tone down their open source rhetoric and take a page from competitors like SolarWinds and Nimsoft.

“Zenoss and Hyperic beachfront with open source too much and it keeps them out of the enterprise,” Willis said. “Stop it with the open source stuff. Stop even mentioning it. SolarWinds is kicking your butt all over the place and all they’re talking about is price and performance.”

Willis isn’t advocating that Zenoss and Hyperic drop open source altogether, but rather that they make it a line item instead of a headline.

“Almost everybody I talk to in enterprise IT management isn’t keen on open source,” Willis said. “If there’s a team that wants to run Nagios, then they usually can get a checkmark on it if they don’t have anything already in place. But if you want to rip and replace Tivoli with Zenoss, management will say ‘Eh…. I’m not sure about that.’”

If a company is going the Nagios, Zenoss or Hyperic route, it’s going hook, line and sinker, according to SolarWinds senior VP Kenny Van Zant. “They’ll suffer the manual cost of configuration and maintenance that open source brings, because they don’t even have $5,000 to spend,” Van Zant said. “Or they’re open source fans who want to use open source wherever they can. When there is a gap in Zenoss, they fill it with nTop, Cacti, or some other open source product of the month and integrate them all together.

“We bump up against those free tools when the open source person leaves the company,” Van Zant said. “We replace Nagios deployments. It just takes too much to keep it up and running.”

Are you willing to bring open source systems management tools into your shop? Did your management team object? Email me or leave feedback in the comments.


March 17, 2009  5:36 PM

Cisco’s Unified Computing System strategy; smart move?

Bridget Botelho

I tuned in to Cisco’s web-based news conference yesterday to hear about their first server platform within the Unified Computing System, and my eyes are still rolling today.

Instead of showing off the new system – which they refer to as “the new movement” – with some demonstrations, we watched 90 minutes of Cisco’s CEO John Chambers and partners Intel, BMC, Microsoft, EMC and VMware congratulating each other on being masters of the universe. Good thing I had that barf bag nearby.

After Cisco and its partners were done talking about how revolutionary this new system is and how much they love each other, one reporter basically asked, where’s the beef? “We have been hearing about the California server for weeks now, but you haven’t mentioned anything about a server. Is this announcement related to that?” he asked.

Before Chambers let his trusty engineer answer the question, he thanked all of his partners again. The Cisco engineer then reiterated the company’s strategy with this system while carefully avoiding the term “blade server,” because the system is more than just that. And round and round we went.

Bottom line, the system is a chassis full of Cisco UCS B-Series blades bundled with networking, storage and virtualization features. Take the pieces apart, and you have Cisco’s first blade servers. Some people may also have found it interesting that the Intel Nehalem-based servers come in both full and half depth options, so you can pack a ton of the half-depth boxes into a chassis (assuming they don’t throw off crazy amounts of heat).

So the fact that Cisco’s talking-head-style news conference was absolute torture doesn’t make the system itself any less interesting from a server market perspective. We already know their networking stuff works, so they really just have to prove themselves with some solid server engineering to compete with the existing x86 providers. (Cisco, I know you say you aren’t competing with those guys, but you are).

And in many ways, Cisco has come full-circle by introducing a server, said Anne Skamarock, a research director with the analyst / consulting firm Focus.

“When I worked at Sun Microsystems back in the mid-1980s, they debated becoming a Cisco, putting intelligent switches (read: specialized servers) in the network. So in a very real sense, Cisco has been building servers for years – servers designed specifically for the work of switching,” Skamarock said. “If you think about it, the first ‘blade servers’ were produced in the networking space years ago, taking the form factor for multiple switches from the horizontal to the vertical.”

Cisco also talked about how much this system will save companies because it “radically reduces the number of devices and the required setup, management, power/cooling, and cabling,” but they didn’t talk about acquisition cost. A Cisco spokesperson said the company cannot release any pricing details until April, but I am betting it is not a small number.

Even so, if Cisco has engineered a solid server and the system as a whole proves to be of good value, the Unified Computing System concept will catch on, but we aren’t sure when these systems will actually hit the commercial market.

And I’m sure server vendors like HP, Dell and IBM will follow suit with their own “me-too” unified systems similar to Cisco’s. Actually, those companies may even end up using their top networking partner for the plumbing. After all, in terms of virtualization, Cisco has come up with important technologies like VLANs and VSANs, which are now industry standards.

The way I see it, by creating this “new” market of Unified Computing Systems, Cisco is setting itself up for success in both the networking market and the server market.

