December 06, 2011
Gordon, the largest flash memory-based computer on the planet, was officially launched at a ceremony that took place on Monday at the San Diego Supercomputer Center (SDSC). Two years in the making, and backed by a $20 million Track 2 grant from the National Science Foundation (NSF), Gordon represents the first really big purpose-built supercomputer for data-intensive applications.
Mark Seager, formerly of Lawrence Livermore National Laboratory and now Intel's CTO for the HPC Ecosystems group, spoke at the event, saying that the data-intensive technologies that are being pioneered in Gordon are destined to make their way into the wider enterprise market. But, he noted, they have special relevance to the HPC community. "We see big data as a new frontier in high performance computing," said Seager.
The intention of SDSC and the NSF is to draw in data-intensive science codes that have never had a platform this size to push the envelope. This is particularly true of genomics, an application set that was foremost in the minds of the system engineers when the machine was being designed. Genomics is the classic "big data" science problem, and is the one most frequently cited in HPC circles as suffering from the data deluge crisis. Other application areas like graph problems, geophysics, financial market analytics, and data mining are also expected to be important domains for Gordon.
Hardware-wise, the system is a souped-up Appro HPC cluster, using the vendor's third-generation Xtreme-X architecture and outfitted with Intel's new 22nm "Sandy Bridge" Xeon E5 CPUs (which, by the way, are still not generally available). Consisting of 1,024 dual-socket nodes, each with 64 GB of DDR3 memory, Gordon delivers a peak performance of 280 teraflops. That's not exactly top-tier computing in the petascale age, but it was enough to earn the system 48th place on the latest TOP500 list.
But it's the flash memory set-up that makes Gordon a data monster. The system is outfitted with over 300 TB of Intel solid state disks, spread over 64 "I/O nodes." According to SDSC director Mike Norman, that's enough flash capacity to store the entire Netflix movie catalog three times over. It's also enough to hold 100,000 human genomes, which is likely more than the entire set of sequenced human genomes in existence today.
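Those capacity claims hang together on simple arithmetic. A quick sketch, assuming the conventional figure of roughly 3 GB for one uncompressed human genome (about 3 billion bases at one byte each) and decimal units throughout; neither assumption comes from the article:

```python
# Back-of-the-envelope check on Gordon's flash capacity claims.
# Assumptions (not from the article): ~3 GB per uncompressed human
# genome, decimal (base-10) TB and GB.

flash_tb = 300                  # total flash capacity, TB
genome_gb = 3                   # one human genome, GB (assumption)

genomes = flash_tb * 1000 / genome_gb
print(f"Genomes that fit in {flash_tb} TB: {genomes:,.0f}")

# If the Netflix catalog fits three times over, it is at most ~100 TB:
netflix_tb = flash_tb / 3
print(f"Implied upper bound on catalog size: {netflix_tb:.0f} TB")
```

At 3 GB per genome, the 100,000-genome figure falls straight out of the 300 TB total.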
More impressive is the aggregate IOPS performance of the machine. At the ceremony on Monday, Norman cranked up all 64 I/O nodes, demonstrating a peak output of 36 million IOPS. At that rate, you could download 220 movies per second.
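The movies-per-second figure is a bandwidth claim in disguise. A rough sketch of the arithmetic, assuming the common 4 KB transfer size per I/O operation and a movie file of about 650 MB (both assumptions, not figures from the article):

```python
# Rough arithmetic behind the "220 movies per second" claim.
# Assumptions (not stated in the article): 4 KB per I/O operation,
# ~650 MB per movie, decimal units throughout.

iops = 36_000_000            # demonstrated peak IOPS
io_size_kb = 4               # assumed transfer size per op, KB
movie_mb = 650               # assumed size of one movie, MB

bandwidth_gb_s = iops * io_size_kb / 1_000_000   # KB/s -> GB/s
movies_per_s = bandwidth_gb_s * 1000 / movie_mb

print(f"Aggregate bandwidth: {bandwidth_gb_s:.0f} GB/s")
print(f"Movies per second: {movies_per_s:.0f}")
```

Under those assumptions the machine moves on the order of 144 GB/s, which lands within a few percent of the quoted 220 movies per second.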
The other unique aspect to Gordon is its use of ScaleMP's "Versatile SMP" (vSMP) technology. It allows users to run large-memory applications on what they call a "supernode" -- an aggregation of 32 Gordon servers and two I/O servers, providing access to 512 cores, 2 TB of RAM and 9.6 TB of flash. To a program running on a supernode, the hardware behaves as a big cache coherent server. As many as 32 of these supernodes can be carved from the machine at one time. According to ScaleMP founder and CEO Shai Fultheim, Gordon is the largest system in the world that is deployed with its technology.
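The supernode figures aggregate cleanly from the per-node specs quoted above. In the sketch below, 16 cores per dual-socket node is implied by the 512-core total, and 4.8 TB of flash per I/O node is inferred from the 9.6 TB supernode total; both are inferences, not figures stated in the article:

```python
# How the vSMP "supernode" numbers aggregate from Gordon's node specs.
# 64 GB RAM per node is from the article; cores per node and flash per
# I/O node are inferred from the supernode totals.

compute_nodes = 32
io_nodes = 2
cores_per_node = 16           # inferred: 512 cores / 32 nodes
ram_gb_per_node = 64
flash_tb_per_io_node = 4.8    # inferred: 9.6 TB / 2 I/O nodes

cores = compute_nodes * cores_per_node            # 512 cores
ram_tb = compute_nodes * ram_gb_per_node / 1024   # 2 TB (binary units)
flash_tb = io_nodes * flash_tb_per_io_node        # 9.6 TB

print(cores, ram_tb, flash_tb)
```

The inferred 4.8 TB per I/O node is also consistent with the system-wide figure: 64 I/O nodes at that capacity gives just over 300 TB of flash.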
The flash device being employed is Intel's new Solid-State Drive 710, which was launched in September at the Intel Developer Forum in San Francisco. The 710 uses Intel's High Endurance Technology (HET), which is the chipmaker's version of the enterprise multi-level cell (eMLC) flash memory that other flash vendors are now offering. Like eMLC, the HET flash offers performance and resiliency comparable to single-level cell (SLC) flash, but at a much lower cost. SDSC also developed its own flash device drivers to maximize performance of the SSD gear.
Inserting this much flash memory into a supercomputer had never been attempted before, and this aspect was probably the biggest risk for the project. When they began the Gordon effort two years ago, flash memory was just starting to make its way into enterprise storage and was an expensive and unproven technology. The $20 million in funding for a flash-laden supercomputer was predicated on projections that the cost and density of NAND memory would make a multi-hundred terabyte SSD deployment feasible by 2011.
That more or less turned out to be the case, but the global recession and the meteoric rise of smartphones and other mobile computing devices over the last couple of years spiked the price of flash memory as supplies dwindled. The recent commercialization of enterprise-capable MLC flash, as in the Intel SSDs, turned out to be something of a gift for Gordon, allowing SDSC to increase the initial flash capacity of 256 TB to more than 300 TB.
SDSC was also somewhat fortunate to have found a willing partner in Appro, a tier 2 system vendor that was prepared to build a rather unconventional HPC cluster. According to SDSC associate director Allan Snavely, they approached both IBM and Cray about taking on Gordon, but both vendors essentially said they were unwilling to tweak their product roadmaps for a single $20 million contract. Appro, of course, is hoping Gordon is not a one-off machine.
Although the system was officially launched on Monday, it is currently undergoing acceptance testing and is expected to be available for production use by XSEDE users on January 1.