October 15, 2013
The University of Southampton has just switched on its fourth-generation supercomputer, Iridis4. The £3.2 million HPC cluster is the most powerful academic supercomputer in England and the third largest university-based system in the UK.
Iridis4 is a powerful simulation engine. As one of the main HPC resources at the University of Southampton's supercomputing facility, Iridis4 is available to research students and staff who require computing capability substantially greater than that of a standard PC.
The system is being used for a wide range of problems that directly affect humanity in areas as diverse as engineering, archaeology, medicine and computer science. In the first year of operation, the supercomputer is expected to contribute to 350 projects.
Iridis4 was designed and built on IBM's Intelligent Cluster solution in partnership with HPC integrator OCF plc. Four times more powerful than its predecessor Iridis3, the fourth-generation HPC cluster sports 12,200 Intel Xeon E5-2670 processor cores, 24 Intel Xeon Phi coprocessors, a petabyte of disk space, and 50 terabytes of memory, connected by an InfiniBand network. Iridis4 is one of the first machines in the UK to employ Intel Xeon Phi coprocessors, with each chip adding a teraflop of performance.
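These specifications allow a rough back-of-the-envelope peak-performance estimate. The core count and the ~1 teraflop per Xeon Phi figure come from the article; the 2.6 GHz clock and 8 double-precision FLOPs per cycle (via AVX) are assumed characteristics of the Xeon E5-2670, so treat this as a sketch rather than an official rating:

```python
# Back-of-the-envelope peak-FLOPS estimate for Iridis4.
# Assumptions (not from the article): the Xeon E5-2670 runs at a
# 2.6 GHz base clock and retires 8 double-precision FLOPs per cycle.
cores = 12_200                 # Xeon E5-2670 cores (from the article)
clock_hz = 2.6e9               # assumed base clock
flops_per_cycle = 8            # assumed AVX double-precision rate

cpu_peak_tflops = cores * clock_hz * flops_per_cycle / 1e12
phi_peak_tflops = 24 * 1.0     # 24 coprocessors, ~1 TFLOPS each (article)

total_tflops = cpu_peak_tflops + phi_peak_tflops
print(f"CPU peak ~{cpu_peak_tflops:.0f} TFLOPS, "
      f"system total ~{total_tflops:.0f} TFLOPS")
```

Under those assumptions the CPUs alone contribute roughly 250 theoretical peak teraflops; sustained application performance would of course be lower.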
In a short video highlighting the system's debut, Richard D Sandberg, Professor of Fluid Dynamics and Aeroacoustics, explains how HPC and Iridis4 are advancing turbulence research. The professor employs high-performance computing to study sources of noise generated by aerofoils, such as fan blades in aircraft engines, flaps on airframes, and wind turbine blades.
"Turbulence is incredibly difficult to understand," says Professor Sandberg. "We have a wide range of length-scales and time-scales with big structures and small structures, and all of these must be captured by a simulation. In order to do that, you need large computers to solve your problem. We couldn't do this on a standard desktop computer. We really need supercomputing to tackle any kind of relevant problem."
Professor Hans Fangohr, Head of Computational Modelling in the Faculty of Engineering and the Environment, explains that one way to appreciate the scale of this work is to look at a single running job. He locates one that is using 384 processors to solve a problem – akin to having 384 desktop machines working on one job, notes the professor. Running the same job on a single machine would take two years and seven months to complete. "It's basically a time machine," says Professor Fangohr, referring to the new supercomputer.
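The "time machine" arithmetic can be sketched directly. Assuming ideal (linear) scaling across the 384 processors – a simplification, since real parallel jobs scale sub-linearly – the two-year-seven-month serial runtime shrinks to a few days:

```python
# Ideal-scaling sketch of the speedup described in the article.
# Assumption: perfect linear scaling (speedup == processor count).
processors = 384
serial_days = (2 * 12 + 7) * 30.44   # 2 years 7 months, ~30.44 days/month

parallel_days = serial_days / processors
print(f"~{serial_days:.0f} serial days -> ~{parallel_days:.1f} days "
      f"on {processors} processors")
```

In practice communication overhead and serial sections of the code push the real runtime above this ideal figure, but the order of magnitude – years compressed into days – is what makes the "time machine" description apt.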
Stent research is another key use case for Iridis4. Coronary stents are cylindrical mesh devices that hold open the walls of diseased arteries. Implanting one is a delicate process that requires inflating a tiny balloon inside the artery to re-open its walls. Occasionally the stent does not sit flush against the artery wall, a problem called stent malapposition. Professors Neil Bressloff (in Engineering) and Nick Curzen (in Medicine), together with PhD student Georgios Ragkousis, are using the new supercomputer to run many simulations of this complex problem in order to reduce complications and improve patient outcomes.