September 04, 2013
Members of the BYU Supercomputing team recently posted a tutorial for getting started with SLURM, the scalable resource manager that has been designed for Linux clusters.
SLURM is currently the resource manager of choice for NUDT’s Tianhe-1A, the Anton Machine built by D.E. Shaw Research, and other clusters, including the Cray “Rosa” system at the Swiss National Supercomputer Centre and Tera100 at CEA.
In essence, SLURM functions as an allocation mechanism to divvy up resources on both an exclusive and non-exclusive basis, as well as a framework for starting, executing, and monitoring jobs on a set of designated nodes. It also arbitrates contention for resources by managing the queue of pending jobs.
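That division of labor shows up in an ordinary batch script: the `#SBATCH` directives describe the allocation, and `srun` launches the tasks on the nodes SLURM grants. A minimal sketch (the job name and application binary here are hypothetical):

```shell
#!/bin/bash
# Hypothetical SLURM job script illustrating allocation and launch.
#SBATCH --job-name=demo        # name shown in the queue
#SBATCH --nodes=2              # request an allocation of two nodes
#SBATCH --ntasks=16            # run 16 tasks across the allocation
#SBATCH --time=00:10:00        # wall-clock limit for the job
#SBATCH --exclusive            # take the nodes exclusively (omit to share)

srun ./my_app                  # launch the tasks on the allocated nodes
```

The script is submitted with `sbatch job.sh`; the queue can then be inspected with `squeue` and a job removed with `scancel <jobid>`.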
As Dona Crawford from Lawrence Livermore noted about their use of SLURM for their BlueGene/L and Purple systems, using SLURM reduced "large job launch times from tens of minutes to seconds." She went on to note that "This effectively provides us with millions of dollars worth of additional compute resources without additional cost. It also allows our computational scientists to use their time more effectively. SLURM is scalable to very large numbers of processors, another essential ingredient for use at LLNL. This means larger computer systems can be used than otherwise possible, with a commensurate increase in the scale of problems that can be solved. SLURM's scalability has eliminated resource management as a concern for computers of any foreseeable size. It is one of the best things to happen to massively parallel computing."
One of the advantages that SLURM users point out is that it is relatively simple to get started, and there is a wide array of plugins that extend the core functionality. For those who want a bare-bones setup (like the one described in the accompanying video), it takes well under an hour to get it up and running.
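A bare-bones setup of that kind comes down to a short `slurm.conf` shared by the controller and the compute nodes. As a sketch only, with placeholder hostnames and a deliberately minimal set of documented keys:

```shell
# Minimal slurm.conf sketch (hostnames and node counts are placeholders)
ControlMachine=head-node                 # node running slurmctld
SlurmUser=slurm                          # account the daemons run as
AuthType=auth/munge                      # MUNGE-based authentication
SchedulerType=sched/backfill             # backfill scheduling of the queue
SelectType=select/linear                 # whole-node (exclusive) allocation

NodeName=compute[1-4] CPUs=8 State=UNKNOWN
PartitionName=batch Nodes=compute[1-4] Default=YES MaxTime=INFINITE State=UP
```

With this file in place, starting `slurmctld` on the head node and `slurmd` on each compute node is enough to begin submitting jobs; `sinfo` confirms the partition and node states.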