Server Farming


March 2, 2009  4:09 PM

Tideway’s latest mapping software offers data center clarity

Bridget Botelho

How do you negotiate the best licensing deals with companies like Oracle? How do you provision servers and storage efficiently and make changes to software without screwing up the other software configurations in the data center?

The answer to all of these questions is “information,” said Richard Muirhead, CEO of Tideway, a New York-based software company with products that identify all the software – physical and virtual – within the data center.

“You have to know what you have and what you use to negotiate well, so understanding your environment pre- and post- virtualization is necessary to negotiate licenses,” Muirhead said. “We help people cut millions of dollars in costs in their data center by helping them find out what is already out there.”

The newest release, Tideway Foundation 7.2, is automated discovery and application dependency mapping software that scans the data center continuously and tells the end user what is going on under the hood. It also lets IT analyze power consumption statistics for business applications, view their carbon footprint, and keep end-of-life, unsupported software out of production.

“People usually don’t realize there is a product that gives this type of insight, so they muddle on as they always have…we have seen millions of dollars wasted on software licenses. One customer even had a million-dollar server that they weren’t aware of and that wasn’t being used,” Muirhead said.

According to Muirhead, users typically take about half a day to become proficient with the software, which is deployed as a virtual appliance in the standard OVF format. The software is currently “optimized” for VMware, with support for Hyper-V and XenServer coming later this year, Muirhead said.

Tideway’s software costs around $8-9 per server, per month, and a free community version that covers up to 30 servers is available for download on the company’s website. As for ROI, customers typically see a 5X return within 90 days, Muirhead said. “It is very quick to deploy and cost effective.”
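
As a rough back-of-the-envelope check on those numbers, here is a sketch using only the figures quoted above; the per-server price and the 5X-in-90-days claim are Muirhead's, while the 500-server estate is a hypothetical example:

```python
# Back-of-the-envelope cost/ROI math using the figures quoted above.
# The $8-$9 per server, per month price and the 5X return claim come from
# the article; the 500-server count is a hypothetical example.

servers = 500                     # hypothetical mid-size estate
price_low, price_high = 8, 9      # dollars per server, per month (quoted)

quarterly_cost_low = servers * price_low * 3
quarterly_cost_high = servers * price_high * 3
print(f"90-day spend: ${quarterly_cost_low:,} - ${quarterly_cost_high:,}")

# A "5X return within 90 days" would imply savings in this range:
print(f"Implied 90-day savings at 5X: "
      f"${quarterly_cost_low * 5:,} - ${quarterly_cost_high * 5:,}")
```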

If you want to do some comparison shopping, other companies that offer application interdependency mapping software include IBM, BMC, CA, and Integrien.

February 19, 2009  5:15 PM

Sun Microsystems SPARC Enterprise T5440 server rocks, user says

Bridget Botelho

Sun Microsystems, Inc.’s SPARC Enterprise T5440 server won the Bronze award in the server category in SearchDataCenter.com’s Product of the Year 2008 awards, but the system is performing at the platinum level for René Wienholtz, the CTO/CIO of STRATO, the second largest web hosting provider in Europe.

The T5440 was released in September 2008 and is the result of the joint development efforts of Sun and Fujitsu Computer Systems. It is an upgraded version of the T5220 system.

Wienholtz recently emailed me about STRATO’s experience using T-series servers for the company’s internet service farms. STRATO’s data center is full of T5220 and T2000 servers, plus some of the latest T5440 systems, which the company is quickly adding more of because they support its Web 2.0 applications so well, Wienholtz said.

The T-series systems support STRATO’s Web services (HTTPS), mail services (SMTP in/out, IMAP, POP3, antispam/antivirus filters), and shop and database farms, and make up about 80% of all the company’s infrastructure today, Wienholtz said.

The T5440 is used mainly in STRATO’s Web and mail farms, as that is where the highest load profile and the most parallel requests per second are, Wienholtz said.

“What we like most about the T-series, and the T5440 especially, is their energy efficiency. The CoolThreads architecture is literally designed especially to our needs. We run multi-threaded internet applications that don’t need much calculating power, but the amount of parallel requests per second is absolutely massive – billions of hits per hour or a billion mails per day have to be handled,” Wienholtz said. Because of this, “the CPU speed itself is not king – it’s the high amount of thread units that helps these applications to perform.”

Another great feature of this architecture is that increasing the number of “T” CPUs in a system yields a nearly linear increase in performance, so two T5220s are roughly equivalent to one T5440, Wienholtz said.

While one T5440 performs about as well as two T5220s, it uses less power than its predecessors; “it has more efficient power supplies and [other power efficiency technologies] built in. This helps us save money in OPEX (operating expenses) and CAPEX (capital expenditures), as a single T5440 is a little cheaper than two T5220s,” Wienholtz said.
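
To see why thread count rather than clock speed dominates for this kind of workload, here is a minimal sketch of the scaling argument Wienholtz is making. The thread counts and per-thread request rate below are hypothetical placeholders, not Sun specifications; the only point is that throughput for massively parallel request streams grows roughly linearly with the number of hardware threads.

```python
# Minimal sketch of near-linear scaling with hardware thread count for a
# requests-per-second-bound workload. All numbers here are hypothetical
# placeholders, not Sun specifications.

def throughput(hw_threads, reqs_per_thread_per_sec=100):
    """Idealized model: each hardware thread serves a fixed number of
    lightweight requests per second, so total throughput scales
    linearly with thread count."""
    return hw_threads * reqs_per_thread_per_sec

one_t5220 = throughput(64)         # hypothetical thread count
one_t5440 = throughput(128)        # hypothetical: twice the threads

# In this idealized model, two T5220s match one T5440.
print(one_t5440 == 2 * one_t5220)  # True
```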

Pricing for the T5440 starts at about $45,000 and maxes out at around $200,000, according to Sun.

Pretty hot review of the T5440, to put it mildly. If anyone has a love story about a server you can’t live without, please share. By the same token, if you are dealing with a system you can’t stand, I’d love to hear your horror stories.


February 19, 2009  3:50 PM

TPC describes upcoming server power efficiency benchmark

Bridget Botelho

The Transaction Processing Performance Council (TPC) is preparing to release a new benchmark to measure server energy consumption, and detailed the benchmark in a video this week.

The TPC’s four active benchmarks are TPC-C and TPC-E for online transaction processing, TPC-H for decision support for ad hoc queries and TPC-App for business-to-business transactional Web services.

In November, the TPC announced plans to add an energy efficiency spec to its repertoire because power consumption is one of the top concerns of data center managers today, whereas performance and price were of utmost importance in prior years, according to Mike Nikolaiev, chair of TPC-Energy, who explains the new spec in a YouTube video.

This new metric will give IT a way to measure price, performance and energy efficiency of a system. “It is very easy to have high performance and high energy, it is very difficult to sustain high performance and reduce energy. That is the call of the IT community today,” said Nikolaiev.

The new energy measurement is taken while running the other standard TPC benchmarks. It captures the system’s power consumption at full load and also while the system is idle, Nikolaiev said in the video.

“With those parameters, IT managers will be able to make more intelligent decisions and better utilize the energy in their data centers,” Nikolaiev said.
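
The TPC had not published the final formula at the time of this writing, but the general shape of such a metric is straightforward: relate measured watts to measured throughput. Here is a minimal sketch assuming a simple watts-per-unit-of-performance formulation; the function names and sample numbers are illustrative, not part of any TPC specification.

```python
# Illustrative sketch of a power-efficiency metric built from the two
# measurements described above: power at full load and power at idle.
# The formulation and all numbers are assumptions, not the TPC's spec.

def watts_per_performance(full_load_watts, throughput):
    """Lower is better: energy cost per unit of benchmark throughput."""
    return full_load_watts / throughput

def idle_overhead(idle_watts, full_load_watts):
    """Fraction of peak power the system burns while doing no work."""
    return idle_watts / full_load_watts

# Hypothetical example: a server drawing 600 W at full load and 350 W at
# idle while sustaining 50,000 transactions per minute on a TPC workload.
print(watts_per_performance(600, 50_000))   # 0.012 W per transaction/min
print(idle_overhead(350, 600))              # ~0.58 (58% of peak when idle)
```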

Similarly, the Warrenton, Va.-based Standard Performance Evaluation Corp. published the SPECpower_ssj2008 benchmark in December 2007 to compare server power consumption with performance.


February 18, 2009  3:52 PM

Intel: Mega data centers sucking up chips

Mark Fontecchio

Jason Waxman, Intel’s general manager of high-density servers, told The Register that in the next few years, about one-quarter of all chip sales will go to so-called “mega data centers.”

Currently, that number is at about 10%, but Waxman predicts it growing as “the world continues to embrace distributed grid cloud architectures from the net’s biggest names,” according to the story.

It’s important to note what Waxman considers a “mega data center”: Google, Amazon, Microsoft, “but also telcos doing hosting like AT&T and Verizon.” He further defined it as companies purchasing thousands of machines a month and putting them into megawatt data centers.

As the story points out, this growth only occurs if cloud computing takes hold and grows. Google and Microsoft have recently put the brakes on construction of data centers largely thought to be intended for cloud computing.


February 9, 2009  7:59 PM

HP to add SSD memory option to servers

Bridget Botelho

Frank Baetke of Hewlett-Packard’s Scalable Computing & Infrastructure (SCI) organization just gave me an update on what HP is doing to add power efficiencies to its highest-performing servers, and one answer is the addition of solid state disks (SSDs).

Though HP has not made any official announcements about adding SSDs, and Baetke could not give any details about the release date or which servers will have them, he said SSD is a greener alternative to spinning hard disks because SSDs have no moving parts that consume power.

Instead of relying on spinning platters like hard disks, SSDs are based on flash memory; they can be up to hundreds of times faster than hard drives and use less power.

Intel Corp., Samsung, sTec Inc., Violin Memory and Texas Memory Systems all offer flash SSD products today. Around October of last year, Texas Memory Systems introduced the RamSan-5000, a 20 TB flash SSD module that delivers one million inputs/outputs per second (IOPS). It is essentially an array of flash solid state disks designed for memory-intensive workloads and is “designed from the chip level up for better reliability and performance than the types of flash used in low end markets,” according to Woody Hutsell, executive vice president of Texas Memory Systems.

Until last year, Texas Memory Systems produced only RAM-based SSDs because flash-based SSD was too expensive to be viable on the market, Hutsell said. “But cost of the media has gone down, and the density has gone up, driven by the consumer electronics industry, so flash has become more competitive with SSD storage arrays.”

Some companies have already begun replacing their hard disks with SSDs to improve the speed of their servers, according to Jim Handy of Objective Analysis, but this is a pretty narrow slice of the market. Uptake is expected to grow: IDC predicts SSD adoption in enterprise computing will pick up by 2010 and that enterprise computing applications will grow from 12% of SSD revenue in 2007 to more than 50% by 2011. Others, including storage administrators, think mainstream adoption of SSDs in enterprise data centers will take much longer.

In general though, flash SSDs are a good alternative to hard disk arrays in data centers that use 10,000-100,000 hard disks today, Handy said. “In such a system you might find 1-2% of the hard disks being replaced by SSDs in a ratio of one SSD for every ten hard disks or so.”
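
Working through Handy's numbers makes the scale clearer. In the sketch below, only the 10,000-100,000 disk range, the 1-2% replacement figure and the one-SSD-per-ten-hard-disks ratio come from him; the loop is just the arithmetic.

```python
# Worked example using the figures Handy cites above: a data center with
# 10,000-100,000 hard disks, 1-2% of them displaced, at roughly one SSD
# for every ten hard disks replaced.

for hard_disks in (10_000, 100_000):
    for replaced_fraction in (0.01, 0.02):
        hdds_replaced = int(hard_disks * replaced_fraction)
        ssds_needed = hdds_replaced // 10
        print(f"{hard_disks:>7,} HDDs, {replaced_fraction:.0%} replaced "
              f"-> {hdds_replaced:,} HDDs out, ~{ssds_needed} SSDs in")
```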

Still, if SSDs can bring better performance, lower power consumption, and a smaller footprint for a competitive price today, we will no doubt see more and more server vendors adding SSD options to their x86 boxes.


February 2, 2009  5:52 PM

NFL’s biggest game supported by IBM’s smallest system

Bridget Botelho

If you watched the Steelers win their sixth Super Bowl last night, everything you heard from the press and all the stats on NFL.com came from IBM’s BladeCenter S systems.

Yup, all the operations for the largest sporting event in America ran on IBM’s smallest systems, the BladeCenter S, which is similar in size to a briefcase.

In addition to supporting the big game, IBM will officially announce next month that every one of the NFL’s 32 teams is standardizing on the BladeCenter S, according to Alex Yost, VP of IBM BladeCenters.

The BladeCenter S is actually designed for small to medium-sized businesses that don’t have their own data center and need a compact, all-in-one piece of equipment, according to IBM. It is essentially a data center in a box that contains up to six IBM BladeCenter servers, 9 terabytes of local shared storage and networking components. Everything in the BladeCenter chassis is redundant – power, switching, cooling and storage – so there is no worry about failures, either, Yost said.

As it turns out, this little data center box on wheels has made life a lot easier for the NFL’s IT team; in the past, for every Super Bowl, the NFL’s IT staff had to lug all the necessary servers, storage and networking to the event site and set up an entire data center within just a couple of weeks, said Jonathan Kelley, director of infrastructure computing for the NFL.

As a longtime IBM BladeCenter H customer that trusts IBM equipment, the NFL contacted IBM last year for help setting up for the 2008 Super Bowl in Arizona – which this New England fan dares not discuss – and heard about the BladeCenter S.

“When the NFL came to IBM to help them set up multiple data centers for last year’s Super Bowl, our IBM BladeCenter S was still about three weeks away from deployment, but the NFL was confident enough in IBM to use a brand new type of server, and it went off without a hitch,” Yost said.

To support the operations of Super Bowl XLIII last night, the NFL used four BladeCenter S chassis with eight quad-core blade servers in them and about eight virtual machines running on each server to support security and credentialing for 60,000 temporary employees and around 11,000 media personnel.

They also deployed about 300 PCs, wireless networks and other necessary computing functions using IBM’s BladeCenter blades, said Joe Manto, VP of information technology for the NFL.

“The operations at the NFL are all supported [by] IBM blades. We chose them because their technology has proven itself,” Manto said. “These servers are almost over-engineered for what we do with them, and they are reliable.”

(The NFL wouldn’t disclose which CPU vendor they use, or name a specific virtualization vendor; they said they use a mix of virtualization vendors, but IBM reported the NFL uses VMware Inc.)

The BladeCenter S enclosure also has extra space for UPSs or other components that might need to be added, and plugs into a regular wall outlet and Ethernet connection.

“It is great for events because it is portable and can be configured at a partner site then shipped to the right location,” Yost said. “Also, their own storage can be connected to the Bladecenter S. It is super quiet and could be placed in office environments without worrying about the noise, and both the front and the back doors of the chassis lock.”

A typical deployment is in the $15,000 range; the chassis itself costs a few thousand dollars. Cost depends on the number of servers in it and other configuration choices, Yost said.


January 29, 2009  4:31 PM

Gartner warns users of multi-core processing hazards

Bridget Botelho

Gartner, Inc. reported this week that data centers are being attacked by processing cores at a rate their software, operating systems and applications can’t handle.

“The relentless doubling of processors per microprocessor chip will drive the total processor counts of upcoming server generations to peaks well above the levels for which key software have been engineered,” Gartner reported. “Operating systems, middleware, virtualization tools and applications will all be affected, leaving organizations facing difficult decisions, hurried migrations to new versions and performance challenges as a consequence of this evolution.”

Wow. Sounds serious, huh? Maybe I am simplifying things a bit here, but doesn’t it make sense to upgrade to quad-core chips only if you have applications that can benefit from those chips? Otherwise, why spend the money?

I suppose I am being naive. Perhaps CPU cores are like crack, and once you get a taste of the power in a dual core chip, you want four cores, and then six, and will keep adding more and more cores until your systems are balls to the wall and your software implodes. It’s a vicious cycle, man.


In all seriousness though, people should be aware that throwing cores at applications does not automatically equal better performance; it’s been reported time and time again on SearchDataCenter.com since 2007 that not all your apps can use multiple cores, because they aren’t written for parallelism.

According to Gartner, “the impact [of putting apps that aren’t written for parallelism on multi-core chips] is akin to putting a Ferrari engine in a go-cart; the power may be there, but design mismatches severely limit the ability to exploit it.”

In fact, software developers are doing their best to design products that can take advantage of multiple cores, but they find it hard to keep up with the rapid advancement cadence of Intel Corp. and AMD.

Many apps are designed to run on just one core, and work just fine on that one core. In that case, the software doesn’t know what to do with more than one core, and may actually run slower on a multi-core chip. Of course, the processor makers don’t advertise this point.

“It’s important to understand that if the software developer doesn’t do something, the majority of software applications will run on a single core. The application will not leverage the multiple cores available and, in fact, the application may even get slower,” said Ray DePaul, president and CEO of RapidMind Inc., in Waterloo, Ont. “There is talk about 80-core processors (from Intel) now and this is scary to software developers. They can’t wrap their head around how that is going to work.”
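
To make the single-core point concrete, here is a minimal Python sketch (a generic illustration, not drawn from RapidMind's tools or any vendor's code): the same CPU-bound work run serially uses one core no matter how many the machine has, while an explicitly parallelized version can spread it across all of them.

```python
# Generic illustration of the point above: code written serially runs on
# one core; only explicitly parallelized code exploits a multi-core chip.
import time
from multiprocessing import Pool, cpu_count

def busy_work(n):
    """A CPU-bound task: sum of squares up to n."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    chunks = [2_000_000] * 8          # eight independent pieces of work

    start = time.time()
    serial = [busy_work(n) for n in chunks]       # runs on a single core
    print(f"serial:   {time.time() - start:.2f}s on 1 of {cpu_count()} cores")

    start = time.time()
    with Pool() as pool:                          # one worker per core
        parallel = pool.map(busy_work, chunks)    # spreads across cores
    print(f"parallel: {time.time() - start:.2f}s using up to {cpu_count()} cores")

    assert serial == parallel
```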

Meanwhile, organizations get double the number of processors in each chip generation, approximately every two years, according to Gartner. Each generation of microprocessor, with its doubling of processor counts through some combination of more cores and more threads per core, turns the same number of sockets into twice as many processors. “In this way a 32-socket, high-end server with eight core chips in the sockets would deliver 256 processors in 2009. In two years, with 16 processors per socket appearing on the market, the machine swells to 512 processors in total. Four years from now, with 32 processors per socket shipping, that machine would host 1,024 processors,” Gartner reported.

There are apps inherently designed to use multiple cores, like heavy workloads used in virtualization, Java, expansive databases and complex enterprise resource planning (ERP) applications. Apps like these use more than one core and perform up to 50% better on multi-core chips, according to analysts.

So, heed Gartner’s warning and don’t go core-crazy; do your research and make sure the apps you run can actually take advantage of multi-core chips before you take money from your tight IT budget to buy them.



January 27, 2009  4:48 PM

Intel lowers Xeon prices on declining CPU sales

Bridget Botelho

Intel Corp. dropped prices by up to 40% on some of its Xeon processors this week following the release of some ugly financials for the fourth quarter of 2008 – so if you are in the market for an upgrade (and have any money left in your IT budget), now’s the time.

The new prices on server CPUs include a drop on Intel’s quad-core Xeon X3370 from $530 to $316 (40% drop) and the 45nm quad-core X3360 from $316 to $266 (16% price cut).
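
For the record, those percentages check out; the quick calculation below uses only the list prices quoted above.

```python
# Sanity-checking the quoted price cuts.
def pct_drop(old, new):
    return (old - new) / old * 100

print(f"X3370: $530 -> $316, {pct_drop(530, 316):.0f}% drop")   # ~40%
print(f"X3360: $316 -> $266, {pct_drop(316, 266):.0f}% drop")   # ~16%
```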

According to Intel spokesperson Nick Knupffer, the company regularly makes price cuts throughout the year and that is what the recent price change reflects.

But one would presume that these cuts aren’t just a routine act; Intel’s Q4 2008 profit plummeted 90%, and the company predicts even weaker conditions ahead. Intel, which twice lowered its fourth-quarter sales forecast, saw quarterly sales fall 23% from a year earlier to $8.23 billion, the Wall Street Journal reported.

Of course, Intel isn’t alone in the gutter; AMD’s CPU sales also slumped in 2008, and it expects sales to decline further through the first quarter of this year due to poor economic conditions.

Despite AMD’s reported $1.42 billion fourth-quarter loss, a spokesperson claims there aren’t any price cuts on the docket for its server CPUs. “AMD moves prices on their products, and in this case speaking for Opteron, as required by the market. There is no regular price move schedule in place.”



January 26, 2009  6:31 PM

AMD ships more efficient, faster versions of 45nm Opteron CPU

Bridget Botelho

Advanced Micro Devices (AMD) is now shipping seven new versions of its latest Opteron processor, the 45 nanometer quad-core chip, code-named Shanghai; five are high efficiency (HE) and two are designed for higher performance than the standard version of Shanghai.

The new versions of Shanghai are essentially identical to the original flavor, except that the HE versions are more power efficient and the SE versions offer better performance than the standard parts.

The 45nm quad-core AMD Opteron HE processor is a 55-watt part, compared with the standard 75 watts, and speeds range from 2.1 GHz to 2.3 GHz. A server with an HE version can save 20% more power during idle periods than similarly configured systems, AMD reported.

[Embedded video: http://www.youtube.com/v/lemZfw0Dl78]

The new HE processors are available now in eight server systems from HP, Rackable Systems, Dell and Sun Microsystems; other vendors are expected to start shipping systems with the CPUs by mid-year.

Additionally, two new 45nm quad-core AMD Opteron SE processors (2.8 GHz) are designed for performance-intensive workloads; this compares with the standard Shanghai chip speed of 2.7 GHz. The SE chips aren’t conservative on power, though; they come in a 105-watt ACP thermal envelope and are aimed at data centers where performance trumps power efficiency, said John Fruehe, AMD’s director of business development for servers and workstations.

“Depending on the application, the SE version offers up to 5% better performance [than the standard version], but it also uses more power,” Fruehe said. “The customers that use these chips are less interested in the power efficiency and more interested in the performance, so we don’t do power testing on these.”

The new SE processors are also immediately available in three new systems from HP and other AMD technology partners.

Pricing for the AMD Opteron HE versions ranges from $316 to $1,514, and the two SE models cost $1,165 and $2,649.



January 20, 2009  6:24 AM

Will Facebook-style features increase value and accuracy in CMDBs?

Matt Stansberry

The value of a configuration management database (CMDB) is directly proportional to the level of involvement from the IT staff using the tool. Two of the biggest challenges of a successful CMDB implementation are propagating the configuration items and keeping the thing up to date.

If data center managers could get employees to spend as much time updating server configurations as they do updating their Facebook status, the accuracy and immediacy of the tool would be a huge boon.

This is the concept behind Novell’s myCMDB, a software layer that sits between the CMDB and its users and purports to bring social networking features to CMDB data. Today Novell announced the rebranded myCMDB, a product it picked up through its acquisition of Managed Objects in October 2008.

The tool is designed for companies that have homegrown CMDBs built by users on MySQL or Sybase, but it will also work with CMDB offerings from HP, BMC and IBM. According to Peter O’Neill, research vice president at Forrester, around half of the existing CMDBs in production are homegrown, and “limited in their reporting and visibility outside of the team that created it.”

Web 2.0, you know: Wikipedia, Facebook, del.icio.us and us!
During my conversation with Siki Giunta, former CEO of Managed Objects, and Richard Whitehead, director of marketing for data center solutions at Novell, they spent a lot of our briefing comparing the product to Facebook and Wikipedia.

“Incorporating Web 2.0 in the myCMDB design allows a CMDB to propagate faster, drives more adoption and improves the quality of the data,” Giunta said. “Wikipedia is a huge database, contributed by the end user, federated by news sources. A CMDB is created by people, federated by HelpDesk. Why do people go to Wikipedia? They feel that they can contribute.”

Giunta said myCMDB uses inboxes, RSS feeds, and the atmosphere and look and feel of Facebook. It also features “Google-like” search, and for social bookmarking, myCMDB took a page from del.icio.us: “When you’re navigating this data, it’s easy to lose your place.”

Are these just marketing buzzwords, or are there real “Web 2.0” attributes to this product?

“Sure, there are functional comparisons to be made,” said Michael Coté, an analyst with Redmonk. “The emphasis on including people’s profiles and activity streams is the most relevant. They’re also trying to pull the community and sharing aspects you’d expect to see in consumer apps. These collaborative IT management features, like being able to share different reports or views in myCMDB, are pulled from the Web 2.0 world.”

O’Neill agreed. “The release does provide CMDB insight and reporting in a very modern mode (“Web 2.0” being the metaphor for that) – much more than any other provider,” he said. “This new style is being increasingly adopted, and preferred in businesses. One of the reasons for the adoption of software-as-a-service solutions is their modern user interface.”

But the question remains, is an updated Facebook-like user interface (UI) enough to encourage employees to spend more time on the CMDB, thereby utilizing it more and also keeping it up to date and useful?

“Definitely. The IT management space has a chronic case of terrible UI syndrome. I often consider a fresh, well done UI that matches current trends in UI and usability a self standing feature on its own,” Coté said. “While myCMDB has a nice looking UI, the thing that will make the difference with it is getting users to interact with the system and build up the ‘content’ in it.”

Giunta said this may be the only way to get the next generation of IT administrators to interact with systems management tools in the future. “In the IT operations side, if you keep maintaining old consoles, all the kids will go to work on the application side of the house and we’ll end up with only old people in the data centers,” she said.

