Server Farming

ARCHIVED. Please visit our new blog at: http://itknowledgeexchange.techtarget.com/data-center/


August 7, 2008  2:53 AM

Data center efficiency advice dispensed at LinuxWorld/Next Generation Data Center



Posted by: Bridget Botelho
cloud computing, data center efficiency, DataCenter, Green computing, IBM, Intel, LinuxWorld

The fun at LinuxWorld/Next Generation Data Center in San Francisco just does not stop. [Photo: computer part art]

Today I attended the keynote address by Oracle Corp. CIO Mark Sunday and heard some pretty cool details about Oracle’s new mega data center, which the company is breaking ground on this month.

I also attended packed sessions on virtualization and cloud computing and ended the day at a panel discussion about creating a more efficient (i.e., green) data center.

The panel featured major industry players: Jack Pouchet, director of energy initiatives at Emerson Network Power; Michael Patterson, the senior thermal architect at Intel; John Pflueger, a technology strategist at Dell Inc.; Christian Belady, PE, the principal power and cooling architect at Microsoft; and Joe Prisco, a senior Systems and Technology Group engineer at IBM Corp.

The panelists spent much of the hour-and-15-minute discussion arguing about the best technologies and methods for greening a data center, and I’m sure the 50 or so attendees extracted some useful information from their back-and-forth.

Each panelist also offered a tip on how to easily increase data center efficiency, including the following:

  • Belady suggested IT managers incentivize employees to measure server power efficiency and reward those who come up with ways to improve efficiency in the data center; without incentives, greening data centers probably won’t get done.
    As a side note, Belady made an interesting comment (or threat) about the heat threshold of hardware; he said Microsoft has pushed hardware vendors to create equipment that can withstand temperatures of up to 95 degrees. “If they don’t, Microsoft won’t buy that vendor’s equipment at scale,” he said.
  • Patterson suggested IT administrators raise their hot aisle temperatures to about 80 degrees Fahrenheit to reduce cooling costs, so long as the equipment can take that temperature.
  • Prisco suggested that IT administrators check hot aisle temperatures using nothing more than their good old central nervous system. “Your hot aisle is supposed to be hot, and you can tell when heat is escaping into the cold aisle without instruments. Just feel it with your hands,” Prisco said.

    Then, of course, do something to contain the heat better.

  • Pouchet suggested measuring data center efficiency with a tool such as the U.S. Department of Energy’s DC Pro, or by hiring a company to perform a power efficiency assessment; the basic arithmetic behind those measurements is sketched below this list.
There is more to be said about creating a power-efficient data center, which will be continued on SearchDataCenter.com.
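
Efficiency tools of this sort typically report ratios such as power usage effectiveness (PUE), which is total facility power divided by IT equipment power, and its inverse, DCiE. A minimal sketch of that arithmetic, using made-up power readings purely for illustration:

```python
# Minimal sketch of the PUE / DCiE arithmetic that data center efficiency
# tools report. The power readings below are made-up illustrative numbers.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

def dcie(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Data center infrastructure efficiency: the inverse of PUE, as a percentage."""
    return 100.0 * it_equipment_kw / total_facility_kw

if __name__ == "__main__":
    total_kw = 1500.0  # hypothetical utility feed for the whole facility
    it_kw = 900.0      # hypothetical draw measured at the IT equipment
    print(f"PUE  = {pue(total_kw, it_kw):.2f}")    # 1.67
    print(f"DCiE = {dcie(total_kw, it_kw):.1f}%")  # 60.0%
```

The closer PUE gets to 1.0, the less of the facility’s power is going to cooling and distribution overhead rather than to computing.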

    August 6, 2008  12:21 AM

    Next Generation Data Center/LinuxWorld 2008: Reporter’s Notebook



    Posted by: Bridget Botelho
    Blade servers, cloud computing, Container Data Center, data center consolidation, DataCenter, I/O virtualization, LinuxWorld, network virtualization, server consolidation, server virtualization, virtual machines, Virtualization, Xeon processor

    I expected this year’s joint LinuxWorld/Next Generation Data Center conference at the Moscone Center in San Francisco Aug. 4-7 to be full of technology vendors, high-level technical sessions, product news and interesting characters. [Photo: Dice]

    As you can see (at right), my expectations were exceeded.

    This year’s conference is packed, with three to four keynotes each day, a large array of tech vendors and numerous technical sessions, covering storage, security, networking, applications, facility infrastructure, and virtualization.

    Virtualization was a predominant topic of conversation in each of the five sessions I attended today, which touched on all of the above.

    For instance, Rajiv Ramaswami, the vice president and general manager of Cisco Systems Inc. (at left), gave a keynote this afternoon, “Data Center 3.0: How the Network Is Transforming the Data Center,” and explained that, eventually, everything in the data center will be virtualized, including networks.

    In another session I attended on creating an efficient, profitable data center, hosted by the Rocky Mountain Institute, virtualization was listed again and again as a key way to reduce data center power consumption.

    Cloud computing (aka distributed computing), which goes hand in hand with virtualization, was also a popular topic in the sessions I attended, including the kickoff keynote, “Stateless Computing: Scaling at Zero Marginal Cost above Capex,” by Jeffrey Birnbaum, the managing director and chief technology architect for Merrill Lynch. [Photo: Rackable ICE Cube]

    In between sessions, I took a tour of Rackable Systems’ 40-foot containerized data center (at right), ICE Cube, which was one of the most popular attractions on the large show floor.

    ICE Cube is packed with up to 22,400 Intel Xeon processing cores in Rackable’s own half-depth servers, has a 36-inch central aisle for server access and uses direct current (DC) power and self-contained uninterruptible power supply (UPS) technology.

    ICE Cube can be configured with IBM BladeCenter servers as well.

    Tomorrow I’ll check out a keynote session by Oracle CIO and Senior Vice President Mark Sunday on delivering business value with next-generation data centers, plus more sessions on green strategies for data centers, cloud computing and virtualization.


    August 5, 2008  3:35 PM

    Facebook relying on Intel Xeon processors in data center build-out



    Posted by: Bridget Botelho
    AMD, DataCenter, Facebook, Intel, open source, Xeon processor

    The social networking website Facebook is building out its data center infrastructure using Intel Corp. processor-based systems and plans to deploy thousands of Intel Xeon processor-based servers over the next year to help accommodate its rapid growth, the two companies announced last week.

    Intel will also collaborate with Facebook to determine the best configurations for its servers and software using Intel processors, taking into account energy efficiency and performance.

    Over the past several months, Facebook tested and benchmarked a number of server platforms and scenarios, and ultimately selected the Intel Xeon 5400 series quad-core processors for its round of new deployments, which began in July.

    When Facebook was contacted for more information on the systems and processors it tested, why it chose Intel over AMD, and other questions about its data center infrastructure plans, the company declined to comment.

    That said, Intel’s press statement included the following quote from Jonathan Heiliger, vice president of technical operations at Facebook: “Intel has demonstrated that the performance of their systems can help Facebook scale our infrastructure and continue to deliver the best experience to users around the world.”

    “When you are responsible for providing a fast, high-quality experience to more than 90 million people worldwide, every ounce of efficiency matters,” Heiliger said in the statement.

    Also, since Facebook’s applications are mostly built on open source technologies, the companies stated that some of the insights from this collaboration may be contributed back to the open source community, benefiting other companies that rely on open source.


    August 4, 2008  1:14 PM

    Next Generation Data Center and LinuxWorld conferences



    Posted by: Bridget Botelho
    cloud computing, DataCenter, LinuxWorld, Networking, Virtualization

    The second annual Next Generation Data Center (NGDC)/LinuxWorld Conference & Expo takes place Aug. 4-7 at Moscone Center North in San Francisco. SearchDataCenter.com and SearchEnterpriseLinux.com will be there to bring you news and information from some of the many technical sessions, keynotes and the show floor.

    A few of the data center technical sessions we plan to cover include “Systems Thinking for a Radically Efficient and Profitable Data Center” by the Rocky Mountain Institute; “Cloud Computing and the Data Center of the Future” by Sam Charrington, the VP of product management and marketing at Appistry Inc.; and “Containers, Virtualization and Live Migration” by Kir Kolyshkin, the project manager of the OpenVZ Project.

    Some keynote addresses we plan to cover include “Stateless Computing – Scaling at Zero Marginal Cost above Capex” by Jeffrey Birnbaum, the managing director and chief technology architect at Merrill Lynch; “Data Center 3.0: How the Network is Transforming the Data Center” by Cisco Systems Inc. vice president and general manager Rajiv Ramaswami; and “Data Center of the Future: How the Delivery of Technology Will Change” by Citrix Systems Inc. CTO Simon Crosby.

    Be sure to check in with us this week for these items and more.


    July 28, 2008  5:27 PM

    IBM BladeCenter servers now shipping in ICE Cube



    Posted by: Bridget Botelho
    Blade servers, Capacity Planning, Container Data Center, IBM, IBM BladeCenter, x86 server

    Rackable Systems, Inc. entered into an agreement with IBM to offer IBM’s BladeCenter servers inside its ICE Cube modular data centers.

    As part of this agreement, IBM BladeCenter will be the only blade server platform available for custom ICE Cubes. Prior to this agreement, Rackable’s containerized data centers only supported Rackable’s own server hardware.

    IBM also started offering its own containerized data centers recently, as did Hewlett-Packard with its version, called POD. Unlike most containerized data center offerings, HP is letting customers fill the POD with servers from any vendor: IBM, Dell, Sun Microsystems, or otherwise.

    There are some other vendor-neutral containerized data centers, like American Power Conversion (APC)’s InfraStruXure Express and Verari Systems Inc.’s Forest, though Verari does push customers to use its proprietary blade servers, a spokesperson said.

    Effective today, Rackable’s ICE Cube modular data center will be outfitted with IBM BladeCenter T or HT systems, which are NEBS-3/ETSI-compliant, meaning they’re certified for use in telecommunications environments and carrier facilities.

    The ICE Cube is available in 20- or 40-foot container sizes. BladeCenter-specific configurations of ICE Cube can reach densities of up to 1,344 dual-socket, quad-core Intel Xeon blades, or 672 quad-socket, dual-core AMD Opteron blades.
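
    Put in terms of processor cores, those density figures work out as follows; a quick arithmetic sketch:

    ```python
    # Quick arithmetic behind the ICE Cube blade-density figures quoted above.

    def total_cores(blades: int, sockets_per_blade: int, cores_per_socket: int) -> int:
        return blades * sockets_per_blade * cores_per_socket

    # 1,344 dual-socket blades with quad-core Intel Xeon processors
    print(total_cores(1344, 2, 4))  # 10752 cores per container

    # 672 quad-socket blades with dual-core AMD Opteron processors
    print(total_cores(672, 4, 2))   # 5376 cores per container
    ```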


    July 25, 2008  10:41 PM

    Open source gaining ground in the enterprise



    Posted by: Leah Rosin
    Operating systems

    The open source community gathered this week in Portland, Ore., at the 10th annual O’Reilly OSCON. The conference was host to a variety of scintillating speakers who were perfect for the admittedly “geeky” audience. While I was there, everyone was buzzing about Damian Conway’s keynote, which was aptly titled “Temporally Quaquaversal Virtual Nanomachine Programming In Multiple Topologically Connected Quantum-Relativistic Parallel Timespaces…Made Easy!” (I cut and pasted that to maintain accuracy.) Wednesday morning’s keynotes included a live Tim O’Reilly interview with MySQL developers Michael Widenius and Brian Aker, and a provocative discussion of physical security and what the open source community can do to help. If anyone could be inspired to think about the big problems, the environment at this conference was about as conducive as you can get.

    The conference was celebrating its 10th year, and it seemed appropriate to take a look at how well open source is doing in the larger world, especially in enterprise environments. So, on Wednesday, O’Reilly Radar released a new report, Open Source in the Enterprise. The report shows that open source adoption is growing and identifies six key drivers. Watch the video below to hear the report’s author, Bernard Golden, CEO of Navica, explain these drivers. Golden also describes an uptick in open source recruiting at non-IT companies, a promising trend for open source programmers seeking remuneration for their work.

    Video: http://www.youtube.com/v/eUZT-vLBCWs

    In a time of decreasing budgets, it is no surprise that companies are looking to open source solutions instead of costly licensing fees for a variety of their computing needs. SourceForge announced its 2008 open source award winners on Thursday at OSCON, and OpenOffice.org won best project, best project for the enterprise, and best project for education. But when will the enterprise fully embrace it? Or, more accurately, when will cost savings outweigh perceived risk? We know what the drivers are, but what do you think is stopping the open source march into the enterprise?


    July 25, 2008  2:20 PM

    Zimory testing data center resource trading marketplace



    Posted by: Bridget Botelho
    Capacity Planning, cloud computing, DataCenter, IT Asset management, Networking, virtual machines, x86 server

    Zimory, a spin-off of Deutsche Telekom Laboratories, the research and development unit of Deutsche Telekom AG in Berlin, Germany, is testing a global trading platform to exchange data center resources on-demand via the Internet.

    The Zimory Marketplace is basically a data center resource trading platform where users can buy and sell server resources and virtual machines (VMs). The company claims to be the first to introduce and operate an international trading platform for exchanging data center resources.

    The marketplace sounds like a great idea for data centers that experience workload surges and need extra capacity on demand, while data centers with underutilized servers can sell or rent their extra capacity to recoup some power costs.

    Zimory software

    The Zimory software stack has three levels of operation:

    • Zimory Host is the basic entity of a Zimory infrastructure. It is installed on each server, which then becomes part of a Zimory network of computing resources.
    • Zimory Manager allows the user to oversee and manage an unlimited number of physical and virtual servers that have Zimory Host installed and are available in a Zimory network. Zimory Manager ships with a web-based graphical user interface (GUI).
    • Zimory Marketplace is the hub of the Zimory network and collects information about all available server resources and their status.

    Servers with Zimory Host or Zimory Manager installed reside behind the firewall, within the demilitarized zone (DMZ) of a data center, while Zimory Marketplace sits outside of the DMZ. All three components interact with one another via standard HTTP.
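
    Zimory has not published its wire protocol, but the architecture described above suggests a simple pattern: each Zimory-enabled host periodically reports its spare capacity to the Manager over plain HTTP, so the Manager and Marketplace know what is available to trade. The sketch below is purely hypothetical; the endpoint URL and JSON fields are invented for illustration and are not Zimory’s actual API.

    ```python
    # Hypothetical sketch of how a Zimory-enabled host might advertise spare
    # capacity to its Manager over standard HTTP, per the architecture described
    # above. The endpoint URL and JSON fields are invented; this is not Zimory's API.
    import json
    import time
    import urllib.request

    MANAGER_URL = "http://zimory-manager.example.local/api/host-report"  # hypothetical

    def report_capacity(host_id: str, free_cpu_cores: int, free_ram_mb: int) -> None:
        payload = json.dumps({
            "host": host_id,
            "free_cpu_cores": free_cpu_cores,
            "free_ram_mb": free_ram_mb,
            "timestamp": int(time.time()),
        }).encode("utf-8")
        req = urllib.request.Request(
            MANAGER_URL, data=payload,
            headers={"Content-Type": "application/json"},
        )
        try:
            urllib.request.urlopen(req, timeout=5)  # Manager aggregates and relays offers
        except OSError as exc:  # the hypothetical Manager is unreachable here
            print(f"report failed: {exc}")

    if __name__ == "__main__":
        report_capacity("host-42", free_cpu_cores=6, free_ram_mb=24576)
    ```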

    Installing and using Zimory

    To create a network of Zimory-enabled servers, the data center administrator downloads and burns the freely available Zimory Live CD image and uses it to boot each server that is supposed to advertise its resources within the Zimory infrastructure. The administrator could also use a network-bootable live image (also freely available) from Zimory. Both approaches automatically turn the machine into a Zimory-enabled server. The installation process is almost identical for Zimory Manager.

    Inside Zimory Manager, the administrator can configure the available data center resources for direct online outsourcing and trading on Zimory Marketplace, define limits on the available resources, and set a pricing scheme (flat fee or pay-per-use).

    For instance, the administrator can offer a particular group of Zimory-enabled servers, or just parts of such a server, for sale to third parties on Zimory Marketplace. Another option would be to rent out the remaining resources of a server with, say, less than 10% utilization.

    Zimory in action

    Before a workload peak occurs, systems in Zimory are running fine and the additional systems for load balancing are stored as VMs in Zimory Manager.

    When an expected or unexpected load peak occurs, the IT administrator clicks through to Zimory Marketplace from Zimory Manager and searches for appropriate server resources. After finding those resources, she starts the load-balancing VMs from within Zimory Manager.

    The software applications contained in the newly deployed VMs connect to the load balancer of the core systems and start to take over parts of the workload. After the peak has passed, the system shuts down the VMs automatically.

    Of course, this can also be automated. An administrator can pre-define the thresholds for when the load is to be taken over by servers on the Zimory Marketplace.
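
    As a rough illustration of that threshold idea, here is a hypothetical control loop that rents marketplace capacity when local utilization crosses an upper threshold and releases it once the peak subsides. The helper functions are stubs standing in for whatever a real Zimory Manager would expose; none of this is Zimory’s actual API.

    ```python
    # Hypothetical sketch of threshold-driven load takeover, as described above.
    # The helpers are stubs; they do not represent an actual Zimory API.

    SCALE_OUT_UTILIZATION = 0.80  # rent marketplace capacity above 80% local load
    SCALE_IN_UTILIZATION = 0.40   # release rented VMs once load falls below 40%

    def rent_marketplace_vm() -> str:
        print("renting a VM on the marketplace")  # stub
        return "marketplace-vm"

    def attach_to_load_balancer(vm: str) -> None:
        print(f"{vm} joins the core systems' load balancer")  # stub

    def shutdown_vm(vm: str) -> None:
        print(f"shutting down {vm}; the peak is over")  # stub

    def autoscale(local_utilization: float, rented_vms: list) -> list:
        """One control cycle: rent or release marketplace capacity based on thresholds."""
        if local_utilization > SCALE_OUT_UTILIZATION:
            vm = rent_marketplace_vm()
            attach_to_load_balancer(vm)
            rented_vms.append(vm)
        elif local_utilization < SCALE_IN_UTILIZATION and rented_vms:
            shutdown_vm(rented_vms.pop())
        return rented_vms

    if __name__ == "__main__":
        vms: list = []
        for load in (0.55, 0.85, 0.90, 0.35):  # simulated utilization samples
            vms = autoscale(load, vms)
    ```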

    The company plans to start beta testing soon and invites interested sellers and purchasers of virtualized capacity to register their interest on the Zimory website.


    July 24, 2008  9:09 PM

    Symantec data center guru talks about riding herd on IT assets and the challenge of chargeback



    Posted by: Matt Stansberry
    Capacity Planning, DataCenter, IT Asset management

    I recently spoke with Kenneth Gonzalez, leader of Symantec’s Data Center Transformation Services team, about how data center managers can get rogue business units to give up their crappy old servers and how to make data center costs explicit to internal end users. This is an excerpt of that conversation.

    In a recent data center paper you produced, you talk about decommissioning legacy servers to optimize data centers. So how do you get rid of them? Experts estimate that up to 30% of the servers in a given data center aren’t doing any work. It seems like business units like to hang onto these things, putting them under desks, in the test-dev lab or in closets.

    Ken Gonzalez: Asset management is a huge challenge for IT. It ends up being a real manual exercise because organizations grow by accretion. Going back to find what you have is so overwhelming, very few organizations ever start. The scope is too huge. It requires rigor and operating practices a lot of organizations aren’t willing to take on with a vengeance.

    This is an asset management problem. IT needs to know where the assets are going when they unplug them. Are they being sent to an organization to responsibly dispose of the equipment? IT managers that don’t track this could be cutting their own throats. You should be able to bring in a more space-saving, energy-efficient asset in its place. The IT team needs to be responsible for having positive control over the assets under its charge.

    In order to get people to change their behavior, it often boils down to money: showing users how much it costs to deliver an IT service. Some folks, like The Uptime Institute and Vernon Turner at IDC, have recommended chargeback. Does that work in these situations?

    Gonzalez: Chargeback is one model, but a lot of organizations are against trying it. Many organizations don’t know how to price it. Applications aren’t one size fits all. Some applications don’t do a whole lot, but use a lot of resources. Organizations are reticent about coming up with a cost model.

    The notion of a service catalogue is pretty popular — to be able to charge what it costs to deliver a service. The intent of the service catalogue would be to clearly communicate to your customer, what services you can provide and the most effective way for you to deliver them. You produce a standard profile of the services you offer. If there is something a customer needs that doesn’t fit the standard catalogue, you have to go through someone to see what resources the project will take. You expose the detail to the customer and there is a forecasting and capacity planning benefit that comes with that approach.
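
    As a toy illustration of the service catalogue idea, the sketch below prices a business unit’s monthly usage against a standard catalogue. The service names and rates are invented for illustration; they are not from Symantec or any real catalogue.

    ```python
    # Toy sketch of catalogue-based chargeback, as discussed above.
    # The service names and monthly rates are invented for illustration only.

    CATALOGUE = {
        "standard-vm": 150.00,        # hypothetical $/month for a standard virtual server
        "database-instance": 600.00,  # hypothetical $/month for a managed database
        "terabyte-storage": 90.00,    # hypothetical $/month per TB of SAN storage
    }

    def monthly_chargeback(usage: dict) -> float:
        """Sum the catalogue rate for each service a business unit consumed."""
        return sum(CATALOGUE[service] * quantity for service, quantity in usage.items())

    if __name__ == "__main__":
        marketing_usage = {"standard-vm": 12, "terabyte-storage": 4}
        # 12 * 150 + 4 * 90 = 2,160
        print(f"Marketing owes ${monthly_chargeback(marketing_usage):,.2f} this month")
    ```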

    If the demand for power and computing resources continues to outstrip IT’s ability to provide capacity in a cost-effective way, are companies going to turn to cloud computing and other outsourced options?

    Gonzalez: I think that is an important component that private organizations are going to have to confront at some point. Some services are going to have to move into the cloud, either software as a service (SaaS) or infrastructure as a service. Whether or not a company moves an application into the cloud will primarily revolve around the criticality of the services, the security of the data, and how long it would take to recover it if something happened. The issues that need to be worked through now are business issues. Dealing with the technical issues is putting the cart before the horse.

    Right now we’re just getting an initial level of awareness. You could call it “Utility Computing Part 2.”

    Do you have a data center question or comment for Ken? Leave a comment.


    July 24, 2008  7:18 PM

    Add PCI Express I/O connectivity without adding PCI Express



    Posted by: Bridget Botelho
    DataCenter, Dell, ExpressConnect, FlexAddress, HP, HP Virtual Connect, I/O virtualization, network virtualization, PCI Express, server virtualization

    Recently, Tucson, Ariz.-based NextIO announced its ExpressConnect I/O virtualization product, which adds PCI Express (PCIe) I/O connectivity to any server in a data center.

    “PCI Express is cost-effective, has a lot of bandwidth and a wide range of standard based I/O devices are available on PCIe, but usually there is only one [PCIe] device per server,” said Chris Pettey, the CTO and co-founder of NextIO. “With [ExpressConnect], you can have many PCIe devices for many servers.”

    ExpressConnect works by virtualizing PCIe. It’s a 3U-high box with slots into which you can plug I/O devices, and it is coupled with the N1400-PCM High-Speed Switch Module, which enables blade servers to extend their PCIe signals outside the chassis. Doing so creates a pool of I/O resources that is separate from the server itself and can be accessed by any server.

    Pettey compared ExpressConnect to Hewlett-Packard’s Virtual Connect, which virtualizes the connection between HP BladeSystem servers and a network but is proprietary to HP BladeSystem. “[ExpressConnect] can do everything HP Virtual Connect can do, only across many platforms and blades and racks. You can run any virtualization platform, any OS, and mix and match servers.”

    Egenera’s Processing Area Network (PAN) Manager software also virtualizes I/O resources and is available on Egenera’s servers and Dell PowerEdge servers. Dell recently released its own version, called FlexAddress, for its PowerEdge M-series servers.

    David G. Hill, a principal analyst at the Mesabi Group in Westwood, Mass., ranks NextIO’s product highly for data centers with high I/O throughput demands. “NextIO has the greatest impact in processing environments where the bottleneck is I/O performance, at a reasonable price,” Hill said. “The initial benefits are in I/O performance-demanding environments, such as high bandwidth, high-definition video processing, financial modeling and Web 2.0 data center virtualization.”

    A few months ago, I spoke with NextIO, then waited weeks and weeks for the company to come up with a user reference and some product pricing, to no avail. In a case like this, I generally move product information from my My Documents folder to the Recycle Bin, but at face value it appears to be a pretty good technology, and Hill gave it high marks, so I (begrudgingly) decided to post this in case anyone is looking for such a technology.

    Just don’t ask me what users think about ExpressConnect, because I don’t know that there are any. As for pricing, the company suggests contacting marketing@NextIO.com.


    July 22, 2008  8:17 PM

    Supercomputers rank high on energy efficiency list



    Posted by: Bridget Botelho
    Green computing, Green500, IBM Roadrunner, Supercomputing, TOP500

    The third edition of the Green500 List includes more supercomputers than ever, showing that high performance can be achieved with power efficiency in mind. The first sustained petaflop supercomputer, IBM Corp.’s Roadrunner, the top-ranked supercomputer in the TOP500 list of supercomputers, also ranked third on the Green500 for its efficiency.

    This shows that performance doesn’t have to come at a high energy price. Wu Feng, a member of both the computer science and the electrical and computer engineering departments in Virginia Tech’s College of Engineering and founder of the Green500, said in a statement that “energy efficiency and performance can co-exist…the last two supercomputers to top the TOP500 are now No. 43 and No. 499 on the Green500.”

    “The Roadrunner supercomputer is akin to having the fastest Formula One race car in the world but with the fuel efficiency of a Toyota Prius,” Feng said in the statement.

    Nearly one in every three supercomputers on the latest Green500 List now achieves more than 100 megaflops/watt, whereas in the previous edition of the Green500, from February, only one in every seven supercomputers did.

    Also, three supercomputers surpassed the 400 megaflops/watt milestone for the first time. All three machines are based on IBM’s BladeCenter QS22 chassis with the Cell processor.
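
    For context, the Green500 ranks systems by sustained Linpack performance divided by power draw. A quick sketch of that arithmetic, using illustrative numbers rather than any machine’s published figures:

    ```python
    # Quick sketch of the Green500 efficiency metric: sustained performance
    # divided by power draw. The numbers below are illustrative only.

    def megaflops_per_watt(linpack_gflops: float, power_kw: float) -> float:
        """Convert gigaflops and kilowatts to the Green500's megaflops-per-watt figure."""
        megaflops = linpack_gflops * 1000.0
        watts = power_kw * 1000.0
        return megaflops / watts

    if __name__ == "__main__":
        # A hypothetical system sustaining 120,000 GFLOPS on a 1,000 kW draw:
        print(f"{megaflops_per_watt(120_000, 1_000):.1f} megaflops/watt")  # 120.0
    ```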

