Posted by: ITKE
botnets, HPC, Linux kernel, Sandia, Security, Thunderbird supercomputing cluster, Virtualization
In a feat of Linux strength, computer scientists at Sandia National Laboratories in Livermore, Calif., announced that they had run more than a million Linux kernels as virtual machines. Previously, researchers had been able to run at most about 20,000 kernels concurrently. The scientists used virtual machine (VM) technology and the lab's Thunderbird supercomputing cluster for the demonstration.
The aim of the project is to model malicious botnets, which are often difficult to analyze because they are geographically spread all over the world, explained Sandia’s Ron Minnich. The more kernels that can be run at once, said Minnich, the more effective cyber security professionals can be in combating the global botnet problem. “Eventually, we would like to be able to emulate the computer network of a small nation, or even one as large as the United States, in order to virtualize and monitor a cyber attack,” he said.
Running a high volume of VMs on one supercomputer — at a scale similar to that of a botnet — would allow researchers to see how botnets work and explore ways to stop them in their tracks. “We can get control at a level we never had before,” said Minnich.
A related use for millions to tens of millions of operating systems, Sandia’s researchers suggest, is to construct high-fidelity models of parts of the Internet.
“The sheer size of the Internet makes it very difficult to understand in even a limited way,” said Minnich. “Many phenomena occurring on the Internet are poorly understood, because we lack the ability to model it adequately. By running actual operating system instances to represent nodes on the Internet, we will be able not just to simulate the functioning of the Internet at the network level, but to emulate Internet functionality.”
To complete the project, Sandia utilized its Albuquerque-based 4,480-node Dell high-performance computing cluster, known as Thunderbird. To arrive at the one-million-kernel figure, Sandia’s researchers ran 250 VMs — one Linux kernel per VM — on each of Thunderbird’s 4,480 physical machines. Dell and IBM both made key technical contributions to the experiments, as did a team at Sandia’s Albuquerque site that maintains Thunderbird and prepared it for the project.
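The headline number follows directly from the cluster layout described above. A quick back-of-the-envelope check (the per-node VM count and node count are the figures reported in the article):

```python
# Sanity check of the reported scale: 250 VMs per node, one Linux
# kernel per VM, across Thunderbird's 4,480 physical nodes.
physical_nodes = 4480
vms_per_node = 250

total_kernels = physical_nodes * vms_per_node
print(f"{total_kernels:,} concurrent kernels")  # 1,120,000 concurrent kernels
```

At 1,120,000 kernels, the run comfortably clears the one-million mark — roughly a 56-fold jump over the previous ceiling of about 20,000 concurrent kernels.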
The capability to run a high number of operating system instances inside virtual machines on a high-performance computing (HPC) cluster can also be used to model even larger HPC machines with millions to tens of millions of nodes that will be developed in the future, said Minnich. This successful demonstration, he asserted, means that development of operating systems, configuration and management tools, and even software for scientific computation can start before the hardware technology is mature.
“Development of this software will take years, and the scientific community cannot afford to wait to begin the process until the hardware is ready,” said Minnich. “Urgent problems such as modeling climate change, developing new medicines, and research into more efficient production of energy demand ever-increasing computational resources. Furthermore, virtualization will play an increasingly important role in the deployment of large-scale systems, enabling multiple operating systems on a single platform and application-specific operating systems.”
Sandia’s researchers plan to take their newfound capability to the next level.
“It has been estimated that we will need 100 million CPUs (central processing units) by 2018 in order to build a computer that will run at the speeds we want,” said Minnich. “This approach we’ve demonstrated is a good way to get us started on finding ways to program a machine with that many CPUs.” Continued research, he said, will help computer scientists to come up with ways to manage and control such vast quantities, “so that when we have a computer with 100 million CPUs we can actually use it.”