A University of Delaware research team has received a three-year, $750,000 grant from the National Science Foundation to purchase a supercomputer — named Chimera — for high-performance scientific computation.
Chimera, which was built by Penguin Computing of California, contains 3,168 processor cores, making it the largest supercomputer on campus. It was delivered Oct. 19 and is expected to be fully operational this semester.
“Parallel computing plays a central role in nearly every science and engineering endeavor,” said Stephen Siegel, assistant professor in UD’s Department of Computer and Information Sciences and principal investigator for the project, who hailed the supercomputer as “a great leap forward for computational science at the University of Delaware.”
Siegel said today’s leading supercomputers comprise hundreds of thousands of processors, a number expected to reach millions in the near future.
“Researchers at the University of Delaware are working to harness this vast computing power to make ground-breaking scientific discoveries,” Siegel said.
The new computing system will be used by UD researchers who share a common interest in scalable parallel computation to explore a variety of scientific applications, Siegel said.
Those applications include the simulation of emulsion microstructure for environmental pollution control and material synthesis; multiscale cloud microphysics and dynamics for more reliable weather and climate prediction; “ensemble analysis” in oceanic forecasting; electronic structure calculations for atoms and molecules; computational electromagnetism; and computational carbon nanoelectronics.
The new computing system will be used to optimize the current algorithms in each application, help identify limitations to scalability, and explore new ideas to enable scaling to hundreds of thousands of processors and to take advantage of hybrid architectures, Siegel said.
Siegel said the system is also expected to foster collaboration between faculty interested in application problems and faculty with expertise in computer science.
The Chimera team
Joining Siegel as co-principal investigators are Peter Monk, Unidel Professor of Mathematical Sciences; Martin Swany, associate professor in the Department of Computer and Information Sciences; and Krzysztof Szalewicz, professor in the Department of Physics and Astronomy.
Senior personnel involved in the project are Branislav Nikolic, associate professor in the Department of Physics and Astronomy; Bruce Lipphardt, assistant professor in the School of Marine Science and Policy; David Saunders, professor and chair of the Department of Computer and Information Sciences; Dennis Kirwan, Mary A.S. Lighthipe Chair in the School of Marine Science and Policy; James T. Kirby, Edward C. Davis Professor in the Department of Civil and Environmental Engineering; Kausik Sarkar, associate professor in the Department of Mechanical Engineering; Lian-Ping Wang, professor in the Department of Mechanical Engineering; Louis F. Rossi, associate professor in the Department of Mathematical Sciences; and Tian-Jian Hsu, associate professor in the Department of Civil and Environmental Engineering.
The funding is provided through the Computing Research Infrastructure program of the NSF Directorate for Computer and Information Science and Engineering.
Chimera research
Siegel said he plans to use Chimera to explore new techniques in the verification of scientific software, while Szalewicz will use the cluster to investigate the weak intermolecular interactions responsible for the structure and properties of condensed phases, for surface phenomena and for life processes.
Hsu conducts research in three-dimensional simulations of convective sedimentation in salt-stratified river plumes, sediment resuspension and transport in the wave-current bottom boundary layer, and various environmental processes related to gravity flows. “These simulations allow us to investigate the main processes determining the fate of terrestrial sediment in the coastal ocean,” he said.
Kirby’s research group is involved in modeling the distribution and volume of air bubbles near the ocean surface in a range of settings. Efforts include the study of the effect of air bubbles on optical visibility in shallow water and the surf zone, the impact of bubbles on underwater acoustic communications, and the contribution of bubbles to gas transfer processes between the atmosphere and the ocean. Chimera will be used to model bubble processes in coupled atmosphere-ocean simulations and will employ models developed in-house and through collaborations with ETH-Zurich, Los Alamos National Laboratory and the National Center for Atmospheric Research.
Nikolic plans to explore novel graphene-based nanostructures. “The 2010 Nobel Prize in Physics has been awarded for the very recent discovery of a new two-dimensional crystal of carbon atoms, termed graphene, whose unusual electronic properties have been mapped via transport measurements on various micron-sized devices,” Nikolic said. “The new frontier in this field was opened in 2008 by the fabrication of sub-10-nanometer-wide graphene wires, which are expected to open new avenues for nanoelectronics, thermoelectrics and spintronics. Our group has been working on first-principles quantum transport simulations of all three classes of such devices via massively parallel codes whose development typically consumes up to a million hours on remote supercomputing facilities.”
Wang will use Chimera to conduct hybrid simulations of the turbulent collision of cloud droplets and particle-resolved simulations of turbulent particle-laden flows, noting that both are multiscale problems that can greatly impact environmental applications such as weather prediction, dust storms, sediment transport and marine snow.
Origins of Chimera project
Siegel said the Chimera project came about following meetings organized by Monk for faculty working in computational science, where it became apparent that many researchers had been limiting their experiments to 32 or 64 processors even though they needed substantially more computing power.
“I thought a large shared supercomputer would not only be a more efficient use of resources, but would also allow the scientists to run their simulations at much larger scales, meaning everyone could do more and better science,” Siegel said. “While some of the faculty have limited access to machines at national centers, I felt strongly that if the University is to become a leader in this field, we needed our own state-of-the-art system. We could tailor it to our own research needs, use it to train graduate students and even use it in some undergraduate classes.”
Siegel said the main goals of the project involve “finding ways to scale computational scientific algorithms to thousands of processors, so they can take advantage of the new hardware trends,” adding that “large parallel computers provide an enormous amount of raw horsepower, but designing algorithms and software that can effectively harness that power is very much an open challenge.”
Article by Neil Thomas
Photos by Ambre Alexander