September 16, 2010
Trying to fit a model of an entire galaxy inside a computer is even harder than it sounds -- even when that computer is an 800-core cluster with over a terabyte of memory. The researchers at Durham University's Institute for Computational Cosmology (ICC) know this well, because that just happens to be what they're trying to do. An article on silicon.com this week documents how cosmologists have to develop creative modeling strategies to deal with the limitations of HPC machines.
ICC researchers have access to a cluster with 800 AMD processor cores, 1.6 TB of memory, and 300 TB of disk storage. That's a decent-sized machine, but for galaxy formation simulations, the researchers are constantly butting up against hardware limitations. Take disk storage, for instance. A single simulation run on the effect of dark matter on galaxy formation can produce 20 TB of data, which means the scientists are constantly deleting older data or backing it up to tape. And according to the article, the cluster is not big or powerful enough even to handle large-scale models:
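A quick back-of-the-envelope calculation shows why storage fills up so fast. This sketch uses only the figures quoted above; integer division simply counts how many complete runs fit on disk:

```python
# Rough storage arithmetic using the figures quoted in the article.
disk_tb = 300        # total cluster disk capacity, in TB
run_output_tb = 20   # data produced by one dark-matter simulation run, in TB

# At most this many complete runs fit on disk before older
# results must be deleted or archived to tape.
max_runs_on_disk = disk_tb // run_output_tb
print(max_runs_on_disk)  # 15
```

Fifteen runs of headroom is not much for an active research group, which is why the article describes a constant cycle of deleting and taping out old data.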
Physicists have to simplify the cosmological models they use in order to get ones that produce data sets small enough to be accurately processed by the 64-bit chips in the supercomputing cluster, and which can fit into the cluster's available memory.
Nevertheless, this is better than what most cosmologists had available to them even a few years ago. At that time they could only simulate a few thousand particles per galaxy (so each particle had to represent 10,000 to 100,000 stars). Today that granularity is two orders of magnitude better.
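The granularity improvement can be worked through numerically. This sketch takes the article's figures at face value; the exact particle count ("a few thousand") and the choice of the upper stars-per-particle figure are illustrative assumptions:

```python
# Granularity arithmetic based on the article's figures.
old_particles = 5_000             # "a few thousand" particles per galaxy (assumed 5,000)
stars_per_particle_old = 100_000  # upper end of the article's 10,000-100,000 range

# Implied number of stars represented in one simulated galaxy.
stars_represented = old_particles * stars_per_particle_old  # 500 million

# "Two orders of magnitude better" granularity: 100x more particles
# representing the same galaxy.
new_particles = old_particles * 100
stars_per_particle_new = stars_represented / new_particles
print(stars_per_particle_new)  # 1000.0
```

In other words, at the same galaxy size, each simulation particle now stands in for on the order of a thousand stars rather than tens of thousands.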
Better yet, the Institute is getting a new cluster in December that has a lot more compute power, memory and storage than their current setup. The new hardware will enable the researchers to create higher fidelity models and "get a much more realistic calculation".
Full story at silicon.com