May 05, 2008
Lawrence Berkeley National Laboratory and Tensilica Inc. have announced a partnership to research exascale supercomputing design. The program will combine LBNL's supercomputing smarts with Tensilica's expertise in microprocessor technology. While not exactly a household name, Tensilica is gaining recognition as a provider of ultra-low-power (i.e., less than one watt) configurable processors for mobile and computing devices and special-purpose computing appliances.
From the announcement:
The team will use Tensilica's Xtensa LX extensible processor cores as the basic building blocks in a massively parallel system design. Each processor will dissipate a few hundred milliwatts of power, yet deliver billions of floating point operations per second and be programmable using standard programming languages and tools. This equates to an order-of-magnitude improvement in floating point operations per watt, compared to conventional desktop and server processor chips. The small size and low power of these processors allows tight integration at the chip, board and rack level and scaling to millions of processors within a power budget of a few megawatts.
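A quick back-of-the-envelope check shows how the announcement's figures hang together. The specific numbers below are illustrative assumptions chosen to match the announcement's qualitative claims ("a few hundred milliwatts," "billions of floating point operations per second," "millions of processors"), not official specifications:

```python
# Back-of-the-envelope check of the power/performance claims.
# All figures are illustrative assumptions, not Tensilica specs.

watts_per_core = 0.4          # "a few hundred milliwatts" per processor
flops_per_core = 2e9          # "billions of floating point ops per second"
num_cores = 5_000_000         # "millions of processors"

total_power_mw = watts_per_core * num_cores / 1e6   # total power in megawatts
total_flops_pf = flops_per_core * num_cores / 1e15  # aggregate petaflops
gflops_per_watt = flops_per_core / watts_per_core / 1e9

print(f"Total power: {total_power_mw:.1f} MW")          # 2.0 MW
print(f"Aggregate performance: {total_flops_pf:.1f} PF") # 10.0 PF
print(f"Efficiency: {gflops_per_watt:.0f} GFLOPS/W")     # 5 GFLOPS/W
```

With these assumptions, millions of sub-watt cores land at roughly 10 petaflops inside a 2-megawatt envelope, consistent with the announcement's "few megawatts" power budget.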
This takes the low-power processor approach, exemplified by the SiCortex and IBM Blue Gene architectures, to the next level in order to overcome the power and cost limitations inherent in applying the current crop of commodity processors to multi-petaflop systems. Tensilica's Xtensa chip makes even the MIPS and ARM processors look massive by comparison.
Berkeley researcher Horst Simon says our present trajectory will "make current approaches for supercomputing unsustainable." As you might suspect, the hardware side of this is actually the simplest part. Because of the massive scalability involved, new software models and tools will have to be invented to make this new paradigm workable.
I wrote an article about Berkeley's interest in Tensilica technology and massively parallel architectures back in February. For more background, check out the NERSC slides [PPT] about the looming power and cost crisis in petascale computing.
Posted by Michael Feldman - May 04, 2008 @ 9:00 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.