June 23, 2009
ARMONK, NY, June 23 -- For a record-setting tenth consecutive time, an IBM system holds the number one position in the ranking of the world's most powerful supercomputers. The IBM computer built for the "roadrunner project" at Los Alamos National Lab -- the first in the world to operate at speeds faster than one quadrillion calculations per second (petaflop) -- remains the world speed champion.
IBM also declared its intent to break the exaflop barrier, and announced that it had created a research 'collaboratory' in Dublin, in partnership with the Industrial Development Agency (IDA) of Ireland, which is focused on both achieving exascale computing and making it useful to business. An exaflop is a million trillion calculations per second, which is 1000 times faster than today's petaflop-class systems.
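The magnitudes involved can be sketched numerically. This is a minimal illustration of the scales the release names (petaflop, exaflop) using only the figures stated above:

```python
# FLOPS scales referenced in the announcement.
petaflop = 10**15   # one quadrillion calculations per second
exaflop = 10**18    # a million trillion calculations per second

# An exaflop is 1000 times a petaflop, as the release states.
print(exaflop // petaflop)  # -> 1000

# Roadrunner's reported performance, expressed in raw flops.
roadrunner = 1.105 * petaflop
print(f"{roadrunner:.3e} flops")  # -> 1.105e+15 flops
```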
The latest semi-annual ranking of the world's TOP500 Supercomputer Sites was released today during the International Supercomputing Conference in Hamburg, Germany. Results show that the IBM system at Los Alamos National Lab, which clocked in at 1.105 petaflops, is nearly three times as energy-efficient as the number two computer at delivering similar levels of petascale computing power. IBM's number one system performs 444.9 megaflops per watt of energy, compared with only 154.2 megaflops per watt for the number two system.
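The "nearly three times" claim follows directly from the two efficiency figures quoted in the ranking. A quick check of the arithmetic:

```python
# Energy-efficiency figures from the TOP500 comparison (megaflops per watt).
number_one = 444.9   # IBM system at Los Alamos
number_two = 154.2   # the number two system

ratio = number_one / number_two
print(f"{ratio:.2f}x")  # -> 2.89x, i.e. nearly three times as efficient
```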
Additional highlights from the list include:
IBM sets sights on Exascale Systems for a Smarter Planet
Having ushered in the petaflop era a year ago, IBM has established a research collaboratory in Dublin, Ireland, in collaboration with the IDA, focused on achieving exascale computing and making it beneficial for businesses through technologies like stream computing, which analyzes massive amounts of real-time data. This is the first collaboratory IBM has announced, and the company intends to create more around the world.
"It's an honor to hold the record for the world's most powerful computer, but what is critical is building supercomputers that help advance the global economy and society at large," said David Turek, vice president, IBM Deep Computing. "IBM was the first to break the petaflop barrier and we will continue to apply lessons learned as we march toward the exaflop barrier."
An IBM collaboratory is a laboratory where IBM Researchers co-locate with a university, government, or commercial partner to share skills, assets, and resources to achieve a common research goal.
IBM Researchers are already at work with government and academic leaders to develop exascale systems that will help solve the complex business and scientific problems of the future. This research collaboratory will enable IBM supercomputing and multidisciplinary experts to work directly with university researchers from Trinity College Dublin, Tyndall National Institute in Cork, National University of Ireland Galway, University College Cork and IRCSET, the Irish Research Council for Science, Engineering and Technology, to develop computing architectures and technologies that can overcome current limitations -- such as space and energy consumption -- in handling massive volumes of real-time data and analysis.
The technical research will explore innovative ways of using new memory architectures, interconnecting technologies and fabric structures, and will evaluate business applications that would benefit from an exascale streaming platform.
While high performance computing today primarily focuses on scientific applications in areas such as physics or medicine, the exascale research in Dublin will also focus on how these new powerful computing systems can be applied to solving complex business problems. The research will include both technical and applications research. For example, the application research for exascale computing will study financial services using real-time, intelligent analysis of a company's valuation developed from business models using data from investor profiles, live market trading and RSS news feeds.
"IBM led the industry in breaking the petaflop barrier last year," continued Turek. "Developing exascale systems challenge space and energy limitations, requiring extremely sophisticated systems management and application software that can take advantage of this computational capability. This new collaboratory is already at work solving some of these issues."
Because future computing platforms are expected to dissipate orders of magnitude more power, researchers believe that efficiently cooling these large systems will be one of the most important factors in next-generation development. Making computing systems and datacenters energy-efficient is a staggering undertaking.
In fact, up to 50 percent of an average air-cooled datacenter's carbon footprint or energy consumption today is not caused by computing but by powering the necessary cooling systems to keep the processors from overheating -- a situation that is far from optimal when looking at energy efficiency from a holistic perspective. IBM has numerous leading edge research projects underway that are addressing these "energy aware" hurdles.
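The 50-percent figure maps onto a simple ratio. As a rough sketch -- the power-usage-effectiveness (PUE) framing and the sample wattages here are illustrative assumptions, not figures from the release -- a datacenter where cooling and overhead consume as much power as the computing itself looks like this:

```python
def pue(total_kw: float, it_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power.
    Hypothetical helper for illustration; lower is better, 1.0 is ideal."""
    return total_kw / it_kw

# If cooling and other overhead account for up to 50 percent of the draw,
# IT equipment accounts for the other half -- a PUE of about 2.0.
print(pue(total_kw=1000.0, it_kw=500.0))  # -> 2.0
```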
Just today, IBM and the Swiss Federal Institute of Technology Zurich unveiled plans to build a first-of-a-kind water-cooled supercomputer that will directly repurpose excess heat to warm the university's buildings. The innovative system is expected to cut its carbon footprint by up to 85 percent, saving an estimated 30 tons of CO2 per year compared with a similar system using today's cooling technologies.
IBM provides a broader portfolio of systems, storage and software technology to the supercomputing market than any other vendor. The company's innovative HPC solutions have created a new scientific force for tackling the world's grand challenges in climate science, the hunt for new sources of energy and the creation of gene-based medicines, and have made significant contributions to basic scientific inquiry in physics and biology.
The "TOP500 Supercomputer Sites" is compiled by Hans Meuer of the University of Mannheim, Germany; Erich Strohmaier and Horst Simon of NERSC/Lawrence Berkeley National Laboratory; and Jack Dongarra of the University of Tennessee, Knoxville.
For more information about IBM supercomputing, visit www.IBM.com/deepcomputing.