In an article on SearchEnterpriseLinux.com, Cray Canada CTO Paul Terry said: "Cluster vendors would have you believe that their performance is the linear sum of each of their respective GFLOPS [Giga Floating Point Operations Per Second]. Most cluster [experts] know now that users are fortunate to get more than 8% of the peak performance in sustained performance."
Responding to that article, Linux cluster vendors said that GFLOPS don't matter. One said: "If your supercomputer isn't gigaflopping fast enough, add some Linux servers to it. The efficiency of clusters versus traditional supercomputers can be referenced using publicly available data such as the Top500.org list."
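The Top500 data the vendor refers to makes this comparison concrete: each entry reports Rmax (sustained LINPACK GFLOPS) and Rpeak (theoretical peak GFLOPS), and the ratio between them is the efficiency both sides are arguing about. A minimal sketch of that calculation, using illustrative placeholder figures rather than actual Top500 entries:

```python
def efficiency(rmax_gflops, rpeak_gflops):
    """Fraction of theoretical peak performance actually sustained
    (Rmax / Rpeak, as reported on the Top500 list)."""
    return rmax_gflops / rpeak_gflops

# Hypothetical commodity cluster: large theoretical peak, but only the
# ~8% sustained rate Terry describes.
cluster_eff = efficiency(rmax_gflops=800.0, rpeak_gflops=10_000.0)

# Hypothetical traditional supercomputer: sustaining a much larger
# fraction of its peak.
vector_eff = efficiency(rmax_gflops=7_000.0, rpeak_gflops=10_000.0)

print(f"cluster: {cluster_eff:.0%} of peak")
print(f"vector:  {vector_eff:.0%} of peak")
```

Both systems in this sketch have identical peak numbers, which is exactly why peak GFLOPS alone (the "linear sum" Terry criticizes) tells a buyer so little.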
So, what's really going on here? How high can Linux go in HPC? When does an organization really have to choose between the two? What's the difference between Linux clusters, like those offered by Oracle RAC or Linux Networx, and Cray's supercomputers, and when would that difference matter to users?