April 02, 2013
The large-scale classical physics problems that remain unsolved must, for the most part, be run in parallel on high-performance machines like Kraken, which is operated by the National Institute for Computational Sciences (NICS) at the University of Tennessee, Knoxville, with support from the NSF.
Literally millions of variables, culled from billions of particles, combine to make this type of research intractable for ordinary computational physics.
Physicists are using high-performance computers such as Kraken to tackle some of these problems, which also bear on fundamental questions of origin: namely, they are investigating protoplanetary turbulence and how it eventually leads to the formation of planets and stars.
“With HPC resources like Kraken, we can run several of these high-resolution computer simulations simultaneously,” said the University of Colorado’s Jake Simon, principal investigator of the project aiming to understand protoplanetary turbulence. “This speeds up our research considerably.”
Planets form out of protoplanetary disks: rotating objects composed of gas in which mass drifts toward the center while angular momentum (classically defined as the moment of inertia times the angular velocity) increases with distance from the center of the disk. The rotating mix of electrons and ions in the protoplanetary disk generates magnetic fields, which pose computational challenges and can wreak havoc on physical simulations.
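To make the angular-momentum point concrete, here is an illustrative sketch (not from the article): for material on a circular Keplerian orbit, the angular momentum per unit mass is l = r·v = √(G·M·r), so it grows with distance from the central star. The function name and the sample radii are hypothetical, chosen only for illustration.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
AU = 1.496e11        # astronomical unit, m

def specific_angular_momentum(r_m, m_central=M_SUN):
    """Angular momentum per unit mass, l = sqrt(G * M * r),
    for a circular Keplerian orbit at radius r_m (meters)."""
    return math.sqrt(G * m_central * r_m)

# Gas at 30 AU carries sqrt(30) ~ 5.5 times the specific angular
# momentum of gas at 1 AU, even though it orbits more slowly:
inner = specific_angular_momentum(1 * AU)
outer = specific_angular_momentum(30 * AU)
print(outer / inner)  # ~5.48
```

This is why disk material cannot simply fall inward: it must first shed angular momentum, and turbulence and magnetic fields are the leading candidates for how it does so.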
“By understanding the nature of the gases,” Simon said, “we can learn something about how small particles interact with each other, coagulate to become larger particles and then ultimately form planets.”
For example, one of the challenges for which Simon noted that proper algorithms have not been developed is a phenomenon called the Hall effect, which arises in regions of protoplanetary disks where electrons remain tied to the magnetic field while the heavier ions decouple from it.
As such, Simon’s team aims to find numerical algorithms that correctly represent effects like the Hall effect. Encouragingly, they have already used the Kraken supercomputer to understand ambipolar diffusion, in which electrons and ions are dragged relatively uniformly through the magnetic field in the outer regions of the disk.
“The degree to which this happens has been explored with our high-resolution numerical simulations that we have run on the Kraken supercomputer,” Simon said. “We believe we now have a much better understanding of how disks behave in their outer regions, far from the central star.”
Simon’s team has run these calculations, algorithms, and simulations across 4 million compute hours on Kraken, averaging 585 cores per run with a peak of 18,432 cores. The compute time is allocated through the NSF’s XSEDE (Extreme Science and Engineering Discovery Environment).
“In our simulations, we do not need to simulate the entire disk, but only a small patch of it,” Simon said on the determination of ambipolar diffusion. “However, this patch does have to be somewhat large and has to have a certain number of resolution points per unit length. This equates to calculations that can be performed only on the largest computers by distributing parts of the calculation across multiple CPUs.”
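The patch-based approach Simon describes boils down to domain decomposition: a grid with a fixed number of resolution points per unit length is carved into sub-blocks, one per CPU. The sketch below is a hypothetical illustration of that bookkeeping; the grid dimensions and process counts are invented for the example and are not the actual parameters of Simon’s runs.

```python
def split_axis(n_cells, n_ranks):
    """Divide n_cells grid cells along one axis as evenly as possible
    among n_ranks processes; returns the cell count each rank owns."""
    base, extra = divmod(n_cells, n_ranks)
    return [base + (1 if r < extra else 0) for r in range(n_ranks)]

# Hypothetical disk patch: 512 x 2048 x 512 cells distributed over an
# 8 x 16 x 8 process grid (1,024 ranks), so each rank evolves a
# 64 x 128 x 64 sub-block and exchanges only boundary layers with
# its neighbors each timestep.
nx, ny, nz = split_axis(512, 8), split_axis(2048, 16), split_axis(512, 8)
cells_per_rank = nx[0] * ny[0] * nz[0]
print(cells_per_rank)  # 524288 cells per rank
```

Because each process only communicates thin boundary layers with its neighbors, doubling the resolution per unit length multiplies the total cell count eightfold, which is why such runs are feasible only on the largest machines.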
The video below is a visual representation of the fluid dynamics, turbulence, and magnetic field properties of such a patch of protoplanetary matter. The goal is to eventually understand how the whole system works by investigating sizable patches like this.
Their biggest success so far is the finding that, for their results to match observations, a large magnetic field perpendicular to the disk must be present. Determining how that magnetic field forms, and what it implies, could be the next step in understanding the origin of planetary bodies.