Posted by: Bridget Botelho
Intel may launch its next-generation multi-core Xeon processors, code-named Nehalem, on Monday.
The company sent out invitations to a live webcast on March 30 “for the launch of a groundbreaking new server architecture.” If that doesn’t give it away, several server vendors have already announced Nehalem-based products: Cisco will use the Intel Xeon CPUs in the blade servers of its upcoming Unified Computing System, Rackable Systems has already introduced Nehalem-based CloudRack systems, and Dell is expected to introduce Nehalem-based systems this week.
In earlier disclosures about Nehalem chips for x86 servers, Intel said the processor will have two, four or eight processing cores and scale better than previous generations. It will also offer scalable cache sizes and simultaneous multithreading, or Hyper-Threading, which is already available on other Xeon processors.
While Intel prides itself on introducing multi-core processors at a faster pace than competitor AMD, some of the most significant enhancements to the new Xeon processor have existed in AMD chips for years.
For example, one of the major changes in Nehalem is the integration of the memory controller into the CPU. This replaces the legacy Front Side Bus, a known culprit in traffic bottlenecks. AMD has offered an integrated memory controller, called Direct Connect Architecture, in its Opteron CPUs for years now.
Another feature in Nehalem is the QuickPath Interconnect (QPI), a point-to-point link that gives the chip access to much more bandwidth. This feature is similar to AMD’s HyperTransport technology, which has likewise been around for a number of years.
That said, with QPI and an integrated memory controller, Nehalem will have access to far more bandwidth than its predecessors without relying on huge amounts of cache memory, according to an Ars Technica report on Nehalem.
More importantly, what all of this means for end users is significantly better performance for applications that can take advantage of multithreading and multiple processing cores.
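To make that last point concrete, here is a minimal sketch of the kind of workload that benefits from extra cores: a CPU-bound task (the prime-counting job and all names here are illustrative, not from the article) split into chunks and fanned out across processes with Python's standard library, so each chunk can run on its own core.

```python
import os
from concurrent.futures import ProcessPoolExecutor

def count_primes(bounds):
    """Count primes in [lo, hi) by trial division -- a CPU-bound toy task."""
    lo, hi = bounds
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

def parallel_prime_count(limit, workers=None):
    """Split [0, limit) into one chunk per worker and count primes in parallel."""
    workers = workers or os.cpu_count()
    step = limit // workers
    chunks = [(i * step, (i + 1) * step if i < workers - 1 else limit)
              for i in range(workers)]
    # Separate processes sidestep Python's GIL, so chunks run on separate cores.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(count_primes, chunks))

if __name__ == "__main__":
    print(parallel_prime_count(100_000))  # prints 9592, the number of primes below 100,000
```

A single-threaded version of the same loop leaves all but one core idle; this style of decomposition is what lets software actually cash in on a two-, four- or eight-core chip.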