Server Farming


January 7, 2009  7:30 PM

Rackable Systems ships 12 V server motherboards; no 12 V enclosure plans yet

Bridget Botelho

Google claims to have the most efficient data centers in the world, but it guards its secret sauce closely, revealing only tidbits of information about its servers, like how standardizing on 12 V power supplies instead of multiple voltages adds efficiency.

So last month, when I read an EE Times article called “Server Makers Get Googled” reporting that Rackable Systems Inc. would start shipping 12 V-only motherboards (two servers per board) for its CloudRack enclosures in early 2009, I was quite interested. Could it be that server vendors have decrypted Google’s recipe for efficiency and will now offer it up for sale to the lilliputians of the IT world?

As a journalist that Rackable typically keeps in the loop, I felt a tad overlooked after reading all this buzz, so I emailed my Rackable contact to ask about this exciting new Google-esque product.

The folks at Fremont, Calif.-based Rackable said the reporter incorrectly inferred that Rackable was going to deliver 12 V enclosures. “That information was taken out of context from a panel discussion, which included the mention of Google’s 12 V servers and Rackable’s CloudRack,” a Rackable spokesperson said.

Geoff Noer, Rackable’s Vice President of Product Management, called today to explain that while the company isn’t offering a 12 V rack, it did start shipping 12 V motherboard options for its servers in Q4 2008.

And I suppose if any vendor were to give 12 V server enclosures a shot in the commercial market, it would be Rackable Systems; the company has a history of going off the beaten path when it comes to servers and server power, having introduced the first large-scale x86 DC-powered servers and storage in 2003, and it sells half-depth 1U, 2U and 3U servers for added density.

The benefit of using a single-voltage (12 V) power supply is added efficiency. Today’s server power supplies convert 110 V AC power to DC power at 3 V, 5 V and 12 V, a process that, according to Google’s website, wastes up to a third of the total energy a server consumes before it ever reaches the computing components.

Google designs its servers to perform a single conversion to 12 V, which the motherboard then steps down to the other voltages it needs, reducing power loss during voltage conversions as well as heat output, according to this article about Google’s 12 V model from Nemertes Research.

Noer said that when comparing Rackable’s 12 V motherboards with those using multiple voltages, the single-output supply is 91% efficient, 3% more efficient than the multi-output option. “It doesn’t sound like a lot, but that 3% adds up to a lot of watts saved,” Noer said.
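
To put that 3% in perspective, here is a quick back-of-the-envelope calculation. The 88% and 91% efficiency figures are Noer’s; the 400 W load and 1,000-server fleet are hypothetical numbers of my own.

```python
# Back-of-the-envelope comparison of power supply efficiencies.
# The 400 W DC load and the 1,000-server fleet are assumptions;
# the 88% and 91% efficiency figures come from Noer's comparison.

dc_load_watts = 400        # power actually delivered to the components
multi_eff = 0.88           # multi-output supply efficiency
single_eff = 0.91          # single-output (12 V) supply efficiency

wall_multi = dc_load_watts / multi_eff     # ~455 W drawn from the wall
wall_single = dc_load_watts / single_eff   # ~440 W drawn from the wall

saved_per_server = wall_multi - wall_single   # ~15 W
fleet = 1000

print(f"Saved per server: {saved_per_server:.1f} W")
print(f"Saved across {fleet} servers: {saved_per_server * fleet / 1000:.1f} kW")
```

Roughly 15 W per box doesn’t sound like much either, but across a thousand servers it is on the order of 15 kW of continuous draw, before any cooling overhead is counted.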

Apparently, whether a vendor uses a single-output or a multi-output supply makes no difference to the end user, because the power distribution happening within a server is not something they ever see, Noer said.

Though Rackable does not have any plans to offer 12 V power at the rack enclosure level at this time, there is interest in it because it would add more efficiency by eliminating individual server power supplies, Noer said.

“As part of a rack-level or blade enclosure solution, 12 V-DC power distribution has appeal from the standpoint of eliminating individual power supplies, simplifying power delivery and improving power efficiency,” Noer said. “Such systems would still appear to be AC solutions from the users’ perspective, since they would be providing AC power to the rack or to the blade enclosure even though the internal DC distribution would be 12 V-DC-based.”

Noer said he’ll let us know “if/when” Rackable offers 12 V-based racks. Stay tuned.

December 17, 2008  8:33 PM

Dell shipping Egenera PAN Manager on PowerEdge servers – finally

Bridget Botelho

Dell announced today it has officially begun shipping Egenera’s Processor Area Network (PAN) Manager software on its PowerEdge servers sold in North America, nine months after originally announcing it would do so.

When Dell first reported that it planned to ship Marlboro, Mass.-based Egenera’s PAN Manager software, it was slated for availability by June. Dell did not respond to questions about the delay as of this posting.

But now that it is shipping, PAN Manager extends beyond the hypervisor and virtualizes the I/O infrastructure on Dell PowerEdge servers, including Ethernet network interface cards, Fibre Channel host bus adapters (HBAs), and Ethernet and Fibre Channel switches.

By virtualizing I/O, PAN Manager essentially creates a virtual datacenter in which nothing is tied to physical hardware, applications or operating systems, allowing IT to allocate compute, storage and network resources wherever and whenever necessary. The software also manages both physical and virtual resources from a single pane of glass. A demo of how it works is available here, on Egenera’s website.
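
I find the “nothing is tied to physical hardware” idea easier to picture as data structures, so here is a purely conceptual sketch of the approach in Python. It is my own simplification of I/O virtualization in general, not Egenera’s actual software or API: a logical server keeps its network and storage identity, and can be bound to whichever physical node happens to be free.

```python
# Conceptual illustration only; not Egenera's PAN Manager API.
# A "logical server" owns its identity (MACs, WWNs, boot LUN);
# physical nodes are interchangeable slots it can be assigned to.
from dataclasses import dataclass, field

@dataclass
class LogicalServer:
    name: str
    mac_addresses: list        # virtualized Ethernet identity
    wwns: list                 # virtualized Fibre Channel identity
    boot_lun: str              # where the OS image lives on the SAN
    assigned_node: str = None  # physical host, bound at run time

@dataclass
class Pool:
    free_nodes: list = field(default_factory=list)

    def assign(self, server: LogicalServer) -> str:
        # Bind the logical server to any free physical node; its I/O
        # identity travels with it, so the OS and applications do not
        # care which box they land on.
        server.assigned_node = self.free_nodes.pop()
        return server.assigned_node

pool = Pool(free_nodes=["blade-3", "blade-7", "blade-9"])
web = LogicalServer("web01", ["00:1a:00:00:00:01"],
                    ["50:06:0b:00:00:00:00:aa"], "lun-12")
print(pool.assign(web))  # e.g. "blade-9"; after a failure the same profile
                         # could simply be assigned to another node
```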

With PAN Manager software, Dell gains an edge against competitors with virtualization management tools, such as Hewlett-Packard Co., whose Virtual Connect software pools and abstracts the local area network and storage area network (SAN) connections to servers and virtual machines (VMs) in HP’s BladeSystem enclosures.

When Dell first announced its partnership with Egenera, Ideas International analyst Jim Burton wrote in a blog post: “In today’s market, Dell can compete very effectively with other vendors on simple server virtualization and SANs. But what it lacks is a management tool that can pull everything together into an entirely virtualized datacenter. That is where PAN Manager comes into play. With PAN Manager, Dell leaps over many of its competitors with the ability to create the virtualized datacenter of the future today using inexpensive industry-standard components…We at IDEAS feel the OEM relationship is a win-win for Dell and Egenera, as well as the customers of both companies.”

Unlike other software that Dell resells, PAN Manager is integrated with the Dell hardware in the factory. “That means the customer doesn’t have to worry about installing software, and all the pieces work together correctly with multiple vendors. Dell also provides any professional services and offers first line support on Dell / PAN Systems,” said Christine Crandell, Senior Vice President of Marketing for Egenera.

PAN Manager software existed only on Egenera’s BladeFrame products until November 2007, when the company opened it up to third-party hardware. Fujitsu Siemens Computers was the first official OEM, and Egenera’s PAN Manager is now integrated into its Primergy server line as well.


December 3, 2008  5:42 PM

Server sales suffer on economy, virtualization; vendors branching out

Bridget Botelho

With the U.S. economy in a recession, world economies suffering and virtualization adoption on the rise, it comes as no surprise that factory revenue in the worldwide server market declined 5.2% year over year to $12.6 billion in the third quarter of 2008 (3Q08), according to the IDC’s Worldwide Quarterly Server Tracker, released December 3.

In fact, this is the largest quarterly revenue decline for servers since the fourth quarter of 2002, and the sluggish server unit shipment growth of 2.8% year over year in 3Q08 represented the slowest increase in server shipments since 4Q06, the IDC reported.

“The x86 server segment was definitely impacted by the economic downturn; there was significant deceleration in the quarter with a particular weakness in September,” said Jed Scaramella, IDC Senior Research Analyst, Servers. “Due to the uncertainty in the market, customers cut back on all nonessential spending.”

Volume systems revenue declined 7.2% year over year in the third quarter, the first decline for that segment in more than 14 quarters, and revenue for midrange enterprise servers declined 9.5% year over year. Shipment growth for x86 servers also slowed significantly, to 4.0% (1.97 million units), because of low demand, and x86 revenue declined 6.6% year over year in 3Q08, the largest year-over-year decline for the segment in more than 24 quarters, IDC reported.

The IDC didn’t mention this in its release today, but it is obvious that virtualization is partly to blame for the slowing demand for commodity x86 servers, because it increases server utilization. According to Tom Bittman, VP and distinguished analyst with Gartner, virtualization has penetrated 12% of the market, and the number of VMs deployed doubles every year. “By 2012, we expect more than half x86 workloads will be run on VMs,” Bittman said in an interview about his presentation on the virtualization market at Gartner’s 27th annual Data Center Conference this week.

“Virtualization and cloud computing have screwed up the market; vendors used to compete in compute islands, they were all direct competitors, but now they fight for control of an entire virtual layer. IBM and HP are competing in broad server technologies, instead of HP and IBM competing only in the area of server hardware,” Bittman said. “All vendors worry about becoming commodities and they all want to be considered the brains of the industry.”

Perhaps that concern, along with slow server sales, is why vendors including HP, Dell and Sun have branched out this year into data center services that could add a revenue stream beyond hardware sales. HP acquired EYP Mission Critical Facilities about a year ago and began offering data center services in March. Just this week, Dell announced it would offer services to help people extend the life of their data centers. Before that, Sun announced data center services that include data wiping.

But there were exceptions to the grim server market numbers: revenue for high-end enterprise servers grew 4.0% year over year, the third consecutive quarter of growth for the segment. Other exceptions to the slowdown were blade servers (11% of the market) and IBM System z (9.4% of the market), both of which grew this quarter, IDC reported.

Scaramella said IBM System z sales didn’t suffer because they tend to be cyclical and are built into companies’ long-term budgets, which is not always the case for smaller x86 systems. “Customers are more likely to push out [x86] purchases and see what they can do without,” he said.

And blades were the only platform to experience positive growth in the quarter, with all major vendors exhibiting double-digit growth in blade volumes, IDC reported.

I’m no analyst, but I am guessing the demand for blades didn’t slow down along with other x86 servers because today’s blades are pitched as ideal virtualization platforms. The HP ProLiant BL495c virtualization blade, for instance, is one of many new blades designed with more memory, data storage and network connections to meet the needs of memory and I/O-hungry VMs.

In addition to their appeal as a virtualization platform, blade servers are desirable because they take up very little space in cramped data centers, and many blades surpass rack servers in performance and efficiency.

So, it will be interesting to see whether server sales recover when the world economies improve, or if they remain depressed due to the increasing use of virtualization.

The IDC predicts the slowdown will continue throughout most of 2009, but server sales will rebound with the economy, Scaramella said. “We are not anticipating a quick rebound [but] I do not think we will see the same extreme fall-off the market experienced after the dot-com bust,” he said. “At that time there was a tremendous amount of excess capacity built out in the infrastructure. Over the past few years, many companies have been in a consolidation mode – reducing the numbers of servers they have in operations as well as reducing the number of data centers they have in operations. Back in 2001-2002, companies were able to put off purchases due to the excess capacity; this is not the case today.”


November 25, 2008  1:50 PM

Sun Microsystems provides storage, hard drive wiping services

Bridget Botelho

Last week I chatted with Michelle Dennedy, Chief Privacy Officer for Sun Microsystems Inc., about a new data erasure service, offered as part of Sun’s recently announced Datacenter Services suite, that could help companies avoid serious data loss and data breaches.

“When people move their storage and server arrays from location A to B, these systems are loaded with sensitive customer data, and if one asset falls off the truck, they would be out millions of dollars in data and in a lot of trouble,” Dennedy said. “Also, when someone comes in to repair systems, they have access to all of the data on those systems,” so erasing the data is a smart move.

For example, I recently read this article about a private contractor who downloaded sensitive data from a U.K. government system onto a memory stick, and then lost it. Another story, also from the U.K., reported that a computer disc containing the medical records of more than 38,000 National Health Service patients went missing when it was sent to a software company to be backed up, ironically, in case the records got lost.

While Sun didn’t divulge the specific customers or incidents that inspired the new service, I imagine they were similar to those reported above. Dennedy said Sun’s data erasure service was created to prevent vulnerabilities when repairs are being done by a third party or when systems are being redeployed to a new site. “You should know that if you lose a piece of equipment, you are losing only that silicon and not the data that was on it,” Dennedy said.

Another time to erase data is when a system is decommissioned and disposed of; many companies don’t think about erasing the data before ditching the old hardware, and that data could end up in the wrong hands, according to Dennedy. “It isn’t that they don’t care, but there is some ignorance about the massive amounts of data contained by the people getting rid of the equipment,” she said.

“Our technicians will administer a software-based erasure service for storage and servers, and will hand over a certificate to say, this is no longer an information asset,” she said. Sun is offering the service for non-Sun servers and storage as well.

Of course, users can erase data in their own systems using hard drive erasing software (a simple Google search yields over 480,000 results) and there are hundreds, if not thousands, of other companies offering this service, along with a certification, as well.

It is unfortunate that we live in a time when so many criminals are waiting in the wings to steal data; the market for stolen data is worth about $276 million, according to Symantec Corp. Knowing this, taking every precaution to secure customer data, whether with services like Sun’s or with offerings from other companies, is a necessity today.


November 14, 2008  9:30 PM

HP’s energy efficiency crusade to control server, data center power

Bridget Botelho

A few Hewlett-Packard (HP) executives visited with me yesterday to discuss their green data center mission, and surprisingly, they admitted that it doesn’t always mean using HP hardware.

They started off our meeting with a discussion about new and existing server power control tools, which I’m not convinced many IT admins actually take advantage of.

HP’s new Dynamic Power Capping tool, part of Insight Power Manager, lets IT set power caps on HP servers based on peak load trends to prevent over-provisioning of power. The cap can be set on a single server or on an entire chassis of blade servers, and it can also be based on user-defined policies, according to Mark Linesch, HP’s VP of Enterprise Server and Storage Infrastructure Software.

HP’s ProLiant servers shipped within the past few years already have the hardware for this feature baked into them, so ProLiant users need only do a firmware upgrade to add the Dynamic Power Capping feature, Linesch said.
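
Setting a cap “based on peak load trends” is essentially a policy decision. I have not seen HP’s internal logic, but the general idea can be sketched in a few lines; the wattage readings and the 10% headroom figure below are hypothetical, and this is not Insight Power Manager code.

```python
# Hypothetical sketch of deriving a per-server power cap from measured
# peak-load history; not HP Insight Power Manager code.

# A week of hypothetical peak wattage samples for one server
peak_watts = [312, 298, 305, 340, 322, 315, 331]

headroom = 0.10  # keep 10% above the observed peak (an assumption)

cap = max(peak_watts) * (1 + headroom)
print(f"Observed peak: {max(peak_watts)} W")
print(f"Proposed cap:  {cap:.0f} W")  # ~374 W instead of a 500+ W nameplate budget

# Provisioning rack power to the cap rather than to the nameplate rating
# is what prevents over-provisioning the power budget.
```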

“HP has invested a huge amount of money in green technology not just for the sake of being green. It has very practical implications that save companies significant amounts of money,” Linesch said.

Other companies, including IBM, offer power capping features on their servers as well. IT can set a power cap for IBM servers and IBM BladeCenter systems via IBM’s Active Energy Manager tool, when the underlying firmware supports capping.

These tools sound great, but I question whether or not power capping features are actually being used in data centers. I’d like to hear from users about this; power control features have existed on servers for many years, but does anyone use them? Is a tool like HP’s Dynamic Power Capping a viable option for virtualized servers?

HP’s execs also told me about their vendor-neutral data center efficiency consultancy services. Since HP’s acquisition of EYP Mission Critical Facilities Inc. a year ago this month, HP has offered vendor-agnostic consulting services to help data centers (for a fee) measure energy efficiency, without any pressure to use HP equipment, said Bill Kosik, energy and sustainability director of HP’s EYP Mission Critical Facilities group.

“That was actually part of the deal when we were acquired; we wanted to stay vendor neutral and we have been able to do that,” Kosik said.

So far, HP’s EYP group has performed consultancy work (including thermal mapping and energy analysis) for about 30 data centers that are either on the brink of a major power consumption dilemma or in a transition phase and need help planning, designing and developing their data centers, Kosik said.

On November 3, HP announced new EYP services packaged in a tidy bundle, as vendors love to do, called HP Energy Efficiency Design and Analysis Services, which includes energy efficiency analysis and design services that help data centers meet standards such as Leadership in Energy and Environmental Design (LEED) certification.

And of course, HP isn’t the only company offering efficiency software and services to data centers these days. The list is long, which is great for consumers.




November 6, 2008  9:00 PM

Eight reasons data center managers should thank Wall Street

Bridget Botelho

Earlier this week I spoke with Rob Gardos, the CEO of the New York-based IT automation company GridApp Systems, about his paper “Eight Reasons Data Center Managers should thank Wall Street for the Financial Meltdown.”

As a reporter, I am keenly aware that when it comes to spinning crap into silk, product vendors are pros. So I was pretty skeptical when I saw the title of his paper.

So I asked Gardos to explain why in the world data center managers should thank anyone for the economic cesspool in which they now exist.

For starters, companies have had to reduce their head count because of the economy, so they have half the IT staff to do the same amount of work, he said.

And this is good?

Well, no, but data centers can’t have servers failing left and right and unorganized systems when there are fewer people to manage the issues. “This meltdown has accelerated the path to something dramatically more efficient,” Gardos said. “People are coming up with a new paradigm and are finding ways to improve their systems, because they have to. People are looking at how to minimize costs and how to cut down on tasks that don’t add value to the organization.”

Now that it’s time to run a tighter ship, Gardos said, data centers are doing things that will result in long-term benefits:

1.  Reducing costs, energy consumption and waste. Businesses have to find ways to minimize energy costs in the data center, reduce overspending on compliance efforts and automate time-consuming tasks.

2. Core data center priorities. IT professionals have seen the true centrality of product and project performance to company competitiveness. The downturn is to thank for the newfound clarity and redefined priorities.

3. Frugality. Businesses are forced to check line items and cut frivolous spending. This nuisance is a blessing in disguise and will improve spending habits for years to come.

4. Innovation. IT decision makers and managers have put their heads together to improve efficiency, productivity and competitiveness. This trial-by-fire brainstorming can breathe new life into companies.

5. Cultivating talent. There is a surplus of once-untouchable, highly qualified IT professionals swimming around, and IT managers can beef up their staff for less.

6. Green IT. Ideas for operational savings have actually provoked businesses to engage in greening techniques. Many companies will emerge with lowered costs and a greener data center.

7. Competitiveness. Businesses are learning to do more with less, and those habits will continue after the crisis and improve competitiveness in times of prosperity.

8. Long-term benefits. Things are tight now, but will the downturn actually spur budget increases down the road for projects that have been placed on the back burner? Lessons learned may actually induce additional spending on virtualization, automation and other cost-saving initiatives.

Of course, it should be noted here that GridApp sells data center automation software and would probably love to see data centers using its tools, but Gardos made an effort to remain vendor neutral during our discussion.

“It is clear that infrastructure management and automation will drive efficiency forward – things like [IT automation software company] BladeLogic make a lot of sense when there are fewer employees to do the work,” Gardos said. “Companies have to change their processes to do more with fewer people, and get more value out of the people the company has.”

He made some good points, and I wonder how many companies will lay off employees only to find themselves buying expensive software to automate the tasks their staff once performed. It seems likely that the data center automation market could ultimately benefit from these hard economic times.


November 5, 2008  5:32 PM

Third draft of Energy Star server spec is out

Mark Fontecchio

The federal Environmental Protection Agency yesterday announced that it has completed the third draft of the Energy Star specification for servers. Here are the highlights:

The Tier 1 server spec, which covers one- to four-socket servers, is scheduled to take effect on Feb. 1. For servers with more than four processor sockets, called Tier 2, the spec won’t take effect until October 2010. Blade servers aren’t included, and neither are DC-powered servers that don’t have a built-in DC-to-DC power supply. The EPA hopes to address those server form factors in the future with an add-on to the specification. Network and storage equipment, as well as server “appliances,” are also not included.

(As an aside, the EPA’s draft mentions that the Standard Performance Evaluation Corp. (SPEC) is developing a SPECpower benchmark for blade servers, similar to the one SPEC released last year for volume x86 rack servers. Once SPEC does that, the EPA will get to work on its Energy Star spec for blades.)

The spec includes a matrix for power supply efficiency requirements. For example, if the server has a multi-output power supply, the supply should be at 82% efficiency when the server is at full load.

The spec also sets power consumption limits for when the server is idle. For a single-socket server, the limit is 60 watts; for 2-3 socket servers, the limit is 151-221 watts depending on how much memory is installed; and for four-socket servers, the limit is 271 watts. There are allowances made for additional installed components (such as 15 watts for another hard drive).
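
For readers who want to sanity-check a configuration against those numbers, here is a small calculator based only on the figures quoted above. The per-drive allowance uses the 15-watt example from the draft, and for two- and three-socket boxes it returns the 151-221 watt range as-is, since the memory-based scaling isn’t reproduced here.

```python
# Rough idle-power-limit lookup based on the figures quoted above from
# draft 3 of the Energy Star server spec. The memory-dependent scaling
# for 2-3 socket servers is not reproduced, so that case returns the
# low and high ends of the range.

def idle_limit(sockets, extra_drives=0):
    drive_allowance = 15 * extra_drives  # 15 W per additional hard drive
    if sockets == 1:
        low = high = 60
    elif sockets in (2, 3):
        low, high = 151, 221             # depends on installed memory
    elif sockets == 4:
        low = high = 271
    else:
        raise ValueError("Tier 1 covers one- to four-socket servers")
    return low + drive_allowance, high + drive_allowance

print(idle_limit(2, extra_drives=2))  # (181, 251) watts
```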

To gauge server performance relative to power, the Energy Star spec uses the SPECpower benchmark, which measures server-side Java performance.


October 29, 2008  9:02 PM

How Energy Star could slow down Moore’s Law

Mark Fontecchio

The EPA’s plan to slap Energy Star stickers on servers could slow down Moore’s Law, at least temporarily.

I’m talking here about the economic portion of Moore’s Law rather than the technological portion. Chips will likely continue to shrink, and more and more processing power will be put onto smaller and smaller spaces. At the system level, you will continue to see the amount of server power (and by power I mean processing capability) per 1U of rack space increase.

But the economic effect of Moore’s Law — that the processing power gets cheaper and cheaper because of the natural competition in the marketplace — could see a hit due to Energy Star. Processing will continue to get cheaper, but perhaps not at the same rate as before. Or, at least, there will be a blip following the Energy Star implementation when the economic effect of Moore’s Law slows down.

Let’s take a look. The federal Environmental Protection Agency is working on implementing an Energy Star specification on servers in the same way it has them for washing machines, ceiling fans, and personal computers. Currently the EPA Energy Star program is digesting comments from a second draft for servers that came out this summer. The Energy Star label would immediately tell users how much energy each server uses, with the hope that vendors would start competing on energy efficiency.

If you go into your local Lowe’s or Sears, you’ll see how it works. Each appliance has a yellow sticker attached to it that tells you approximately how many kilowatt-hours (kWh) the appliance consumes, and approximately how much that translates into an annual cost of operation. If you want to check out qualified refrigerators and freezers (meaning 20% more efficient than federal standards), you can just go to the Energy Star website on refrigerators and freezers, or look for a special Energy Star approved sticker on the appliance itself.

So the idea for servers is the same — check out the Energy Star website and buy accordingly. The idea is to get server vendors competing with one another on energy efficiency. Which will probably happen.

But here’s the kicker: When the Energy Star program is implemented, won’t those vendors charge a premium for the servers that qualify under Energy Star? I think so. And that’s where Moore’s Law takes a hit. The cost of server processing capability per 1U of rack space won’t fall as quickly as it did before the Energy Star program, because of that premium.
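
A toy example, with every number invented, shows what I mean: suppose a new server generation delivers 40% more processing per 1U at the same price, but Energy Star qualification adds a 5% premium. The cost per unit of processing still falls, just not as far.

```python
# Toy numbers to illustrate the "Energy Star premium" argument;
# none of these figures come from any vendor or from the EPA.

old_price, old_perf = 5000, 100   # dollars, arbitrary performance units per 1U
new_perf = old_perf * 1.4         # next generation: 40% more performance per 1U
premium = 0.05                    # hypothetical Energy Star price premium

cost_per_perf_old = old_price / old_perf                     # $50.00 per unit
cost_per_perf_new = old_price / new_perf                     # ~$35.71 per unit
cost_per_perf_new_es = old_price * (1 + premium) / new_perf  # ~$37.50 per unit

print(round(cost_per_perf_old, 2),
      round(cost_per_perf_new, 2),
      round(cost_per_perf_new_es, 2))
# Processing still gets cheaper either way; the premium just slows the decline.
```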

Is this all ado about nothing? Well, maybe a little. After the hit — however long it takes — the economic effect of Moore’s Law will kick back in. But what about before the hit? Well, before the hit, users out there can do the energy consumption measurements and comparisons on their own without worrying about any sort of Energy Star premium.

I thought of this while sitting in the last session of Data Center Decisions in Chicago, with Ken Brill of The Uptime Institute talking about the Energy Star specification with about a dozen users. He, along with many others, has also stressed the need for data centers to start measuring their energy consumption. If they haven’t already, users should also start looking at the energy consumption of the servers they’re buying, if that matters to them at all. The information is available. If it’s not listed in the server spec sheet online, the vendor or VAR should be able to provide it.

In other words, if you’re eagerly awaiting an Energy Star specification so you can start buying servers based on energy efficiency, just do it now on your own. It will probably cost you less.


October 28, 2008  3:56 PM

Data Center efficiency tips and tricks from Data Center Decisions

Bridget Botelho

The overarching theme of the Data Center Decisions conference in Chicago last week was energy: how much data centers use, how much they pay for it, and how much they could be saving.

The keynote addresses on both days of the conference covered data center efficiency at length, with plenty of tips and resources to help data centers cut back on power consumption, though it appears that not many people are taking the necessary measures to reduce consumption. Because of this, the government plans to step in and mandate power-saving measures to prevent future climate change.

As awful as this sounds, government intervention is a necessary measure at this point, because facility spending has increased tremendously over the past two years with no end in sight, and with all of this additional compute capacity, the outlook for the environment is grim.

The energy required to power and cool a single server results in the emission of four tons of greenhouse gases, and by 2012 data centers worldwide will exceed the greenhouse gas emissions of the airline industry, according to Ken Brill, President and Executive Director of the Uptime Institute, who gave a keynote address called “Revolutionizing Data Center Efficiency” on October 24 based on the McKinsey/Uptime Institute report.

[Video: http://www.youtube.com/v/75JJ43q2RUE]

So, why has data center power consumption spun out of control? In addition to the increasing demands of Web 2.0, 80% of today’s compute demand runs on distributed systems with utilization rates of only 5% to 20%, whereas before 1980 the work ran on mainframes at much higher utilization rates, Brill said.

The way to reverse the trend sounds easy enough: use virtualization to consolidate systems and increase server utilization rates, and kill comatose servers.
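
A rough back-of-the-envelope example, using my numbers rather than Brill’s, shows why consolidation is the obvious first lever: pushing average utilization from 10% to 50% lets the same workload run on a fraction of the boxes.

```python
# Back-of-the-envelope consolidation math; hypothetical figures,
# not from the Uptime Institute presentation.
import math

servers = 100
old_util = 0.10          # typical utilization on distributed systems
target_util = 0.50       # post-consolidation target with virtualization
watts_per_server = 350   # assumed average draw, cooling not included

work = servers * old_util                       # 10 "server-equivalents" of work
servers_needed = math.ceil(work / target_util)  # 20 hosts

saved_kw = (servers - servers_needed) * watts_per_server / 1000
print(f"{servers_needed} hosts instead of {servers}; roughly {saved_kw:.0f} kW saved")
```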

Simple as these steps sound, they can be difficult to carry out when you don’t keep track of servers well enough to know their utilization rates, Brill said. In that case, implementing a formal decommissioning program that uses ITIL to document, bill back and audit systems is a good first step.

[Video: http://www.youtube.com/v/RGs-2uzKi7U]

“If we want to become energy efficient we have to become better engineers,” Brill said.

Other measures that can make a major impact are correctly setting cooling unit set points, shutting off humidification and dehumidification functions, implementing hot-aisle/cold-aisle containment, turning off unneeded cooling units and, if possible, increasing the use of eco-friendly water-side cooling, Brill said.

Data centers that are adding hardware should make an effort to buy efficient power supplies and hardware, which all the major vendors offer, and to rightsize memory to avoid drawing excess power, Brill said.

If you’re asking yourself who in IT has enough extra time to do all of these things, Brill had a suggestion for that, too: appoint an “Energy Czar,” someone who cares about the environment and about wasted power, to make sure the data center’s facilities and operations are as efficient as possible.

Of course, the Energy Czar could also get a bonus here and there for lowering the company power bills, which most certainly will happen when even some of the above measures are implemented.

Companies can also use efficiency software tools or hire outside consultants to help increase energy efficiency, and there are plenty of choices today.

 

 



October 28, 2008  2:49 PM

Sun, Fujitsu push out smaller Sparc64 server

Mark Fontecchio

Sun Microsystems and Fujitsu today announced the M3000, a 2U, single-chip server based on the quad-core Sparc64 VII processor.

It might seem odd to have a single-processor server with no room to expand to more processors. Most likely this machine will cater to existing Sun users running old Solaris applications on old Sparc64 infrastructure, particularly homegrown applications that are geared to run on single threads and just need to go, go, go. For that purpose, the M3000 is probably a good bet: it has a smaller footprint, doesn’t run as hot as older Sparc64 servers and has a lot more power.

The Sparc64 processor, unlike the multithreaded UltraSparc chips, is more focused on single-thread applications such as databases and batch transactions. UltraSparc-based chips are better suited to Web-serving applications that serve a lot of users at the same time, but at slower per-thread speeds than a Sparc64.

Earlier this year, Sun and Fujitsu came out with their M-series line of servers based on the Sparc64 VII. Those included boxes as big as the M9000, which can have as many as 64 processors.

The M3000, meanwhile, tops out at one chip. That, along with 4 GB of memory, will cost you about $15,000. Memory is expandable to 32 GB, and the server has four I/O slots.

Tom Donnelly, product manager of enterprise systems at Fujitsu, and Tom Atwood, Sparc systems manager at Sun, both said the M3000 is targeted toward existing Sparc customers looking to upgrade their Solaris infrastructure, as well as toward existing HP-UX and AIX users.

“We are focused on the RISC market,” Donnelly said, “displacing IBM and HP systems whenever we can.”

Atwood added that the M3000 will take up half the space and half the power while delivering twice the performance of two equivalent UltraSparc IIIi-based servers from five years ago.

