January 21, 2010
Just a quick bit of news on the reconfigurable computing front.
The six-month-old Novo-G supercomputer at the University of Florida is being upgraded, doubling its number of FPGA devices and its RAM capacity. The University of Florida is the lead research institution for the NSF Center for High-Performance Reconfigurable Computing (CHREC), the organization that built Novo-G.
The original installation in July 2009 consisted of 24 quad-FPGA GiDEL boards, providing a total of 96 Altera Stratix III FPGAs and 408 GB of on-board memory. These boards were hosted in 24 servers hooked together with DDR InfiniBand. In a few weeks, an additional 24 boards will be added, bringing the total to 192 FPGAs and 816 GB of memory.
According to CHREC Director Dr. Alan George, who emailed me earlier in the week about the upgrade, the additional FPGA hardware won't impact the rest of the computing infrastructure. "Interestingly, because of the energy efficiency of these FPGA devices, we are doubling the number of FPGAs, with no upgrade needed in the power supplied by the servers, racks, or wall circuits, and no need for any cooling upgrades (i.e., all the defaults on power and cooling can handle the device doubling without upgrade)," he wrote.
By the way, the same quad-FPGA boards used in the Novo-G super are also being deployed at other CHREC research schools, with each institution getting one or two boards. The rationale is that work done by local researchers can then more easily be scaled up to the big Novo-G machine in Florida.
Dr. George also mentioned that a new international research consortium for Novo-G has been formed, called the Novo-G Forum. According to the forum Web site, its purpose is to bring together researchers and vendors to showcase the advantages of large-scale reconfigurable computing on the Novo-G. This year Dr. George expects the forum to attract research apps and tools, which can then be shared with the CHREC membership and presented at SC10 in New Orleans.
Dr. George also indicated that they are looking into how the upgraded Novo-G will handle a new bioinformatics algorithm called ESPRIT. Using the algorithm for large-scale gene sequencing, such as metagenomic studies of microbial communities or epidemiological studies of patients, requires TOP500-level supercomputing to be of practical use. In general, this type of workload is tailor-made for reconfigurable computing, and Dr. George believes the new 192-FPGA Novo-G will be able to outperform any other computer on the planet running these applications, and will do so using hundreds of times less power and cooling than a high-end supercomputer.
Posted by Michael Feldman - January 21, 2010 @ 6:53 PM, Pacific Standard Time
Michael Feldman is the editor of HPCwire.