July 19, 2011
Much has been written this week about the newest addition to the family of supercomputers at the San Diego Supercomputer Center (SDSC). The new Trestles system bridges the divide between the Dash and the upcoming Gordon systems and has already been reported to support more than 50 research projects.
Weighing in at 10,368 cores with a peak speed of 100 teraflops, 20 terabytes of memory and 39 terabytes of flash memory, the new SDSC- and Appro-designed super could be another proving ground for the future of flash-based memory in large-scale HPC systems.
The Trestles supercomputer is one of a handful of HPC systems making use of flash memory, the same kind of memory used in any number of handheld and tablet devices. Allan Snavely, who serves as associate director of SDSC and co-PI for Trestles, explained the reasoning behind choosing flash over the traditional spinning-disk storage found in most HPC systems.
“Flash disks can read data as much as 100 times faster than spinning disk, write data faster, and are more energy-efficient and reliable…Trestles uses 120GB flash drives in each node and users have already demonstrated substantial performance improvements for many applications compared to spinning disk.”
Another system at SDSC, the upcoming 1,024-node Gordon super, will also incorporate flash into its architecture to make it more adept at solving data-intensive problems with greater speed.
As Matthew Dublin stated this week, part of the reason some supers utilize flash is the improvement in I/O speed and, more significantly, the power savings: flash has no moving parts, unlike disk-based storage with its spinning motorized components.
Richard Moore, deputy director of SDSC, said that the system will support a large, diverse group of TeraGrid and other users with projects contingent on rapid turnaround times. He said in a statement last week that “to respond to user requirements for more flexible access modes, we have enabled pre-emptive on-demand queues for applications which require urgent access in response to unpredictable natural or manmade events that have a societal impact, as well as user-settable reservations for researchers who need predictable access for their workflows.”
Full story at GenomeWeb