October 23, 2013
As the world prepares for supercomputers one hundred times more powerful than today's best machines, speculation abounds as to which nation will be the first to celebrate this milestone. The achievement matters not just for the science and technology advances that will surely ensue, but for its symbolic and aspirational significance. But what if the first party to reach this goal were not a single nation or an economic bloc like the EU, but instead a collection of global interests taking part in a collaborative research endeavor?
As an international science and engineering project focused on building the world's largest radio telescope, the Square Kilometre Array (SKA) certainly fits the bill. In a recent article, Dr. Happy Sithole, director of the Centre for High Performance Computing (CHPC) in Cape Town, South Africa, a key asset in the project, predicts that "the world's next biggest supercomputer will be in South Africa, and it will have exascale capability."
The claim that South Africa will likely host the world's first exaflops-class system rests on the country's central role in the SKA radio telescope project. The outcome of the site bid was revealed on May 25, 2012, when the members of the SKA Organisation announced that the telescope would be split between Africa and Australia, with all of the Phase 2 dishes to be built in Africa.
Marking the occasion, Professor Justin Jonas, Associate Director: Science and Engineering at SKA South Africa, stated: "This is a very significant moment for South Africa, and Africa as a whole. It is also an important milestone in the international SKA project – the overall winner here is global science. For South Africa and our African partner countries this represents a new era, where Africa is seen as a science destination and takes its place as an equal peer in global science."
Since its founding in 2007, the Centre for High Performance Computing has evolved into a first-class supercomputing center, but to keep pace with the huge volumes of complex data coming out of the SKA radio telescope project, the CHPC will require far more processing power, according to Dr. Sithole.
The center's current supercomputer clocks in at 61.4 teraflops (Rmax), just shy of TOP500 territory. However, Dr. Sithole says the center will need to expand its computational capacity soon as the current system is almost maxed out.
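The gap between the center's current capacity and an exascale target is worth making concrete. A quick back-of-envelope calculation (illustrative only, using the 61.4-teraflops Rmax figure from the article):

```python
# Back-of-envelope: how far is the CHPC's current Rmax from an exaflops?
current_rmax = 61.4e12   # flops (61.4 teraflops, per the article)
exaflops = 1e18          # flops (one exaflops)

factor = exaflops / current_rmax
print(f"{factor:,.0f}x")  # roughly 16,000-fold increase
```

In other words, reaching exascale from the center's current system implies scaling computational capacity by more than four orders of magnitude.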
Dr. Sithole remarks that the South African HPC center is no stranger to big science jobs. The supercomputer has been involved in climate change research, mineral beneficiation, bioinformatics and in the creation of 3D animated movies. It also helped design the SKA dish and crunched data for CERN.
The SKA project comes with a new set of challenges for the site. The SKA will be 50 times more sensitive than any previous radio telescope and more than 10,000 times faster than current-generation instruments. It will be so sensitive that it will be able to detect an airport radar on a planet 50 light years away. This sensitivity translates into more data – exabytes per day – all of which must be processed and analyzed.
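The scale of "exabytes per day" becomes clearer when converted to a sustained data rate. A minimal sketch, assuming a round figure of one exabyte per day (actual SKA rates vary by instrument and processing stage):

```python
# Back-of-envelope: sustained bandwidth implied by 1 exabyte/day.
EXABYTE = 10**18          # bytes (decimal definition)
SECONDS_PER_DAY = 86_400

sustained_bps = EXABYTE / SECONDS_PER_DAY   # bytes per second
print(f"{sustained_bps / 10**12:.1f} TB/s")  # about 11.6 TB/s, sustained
```

Even at one exabyte per day, the instrument would need to ingest on the order of tens of terabytes every second, around the clock, which is why the processing pipeline, not just peak flops, dominates the system design.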
But the benefit to science is clear. Once the SKA is completed, it will collect data from deep space dating back more than 13 billion years, to shortly after the Big Bang. The combination of these massive telescopes and next-generation supercomputing power will unlock the mystery of how galaxies have evolved over the eons.
The SKA project is truly a collaborative endeavor with 10 member countries so far and nearly 100 organizations working together to construct and finance the telescopes and related technology. The capital cost alone has been estimated at 1.5 billion euros.
Currently still in the pre-construction design phase, the project will be carried out in two stages. SKA 1, slated to be operational in 2020, is expected to require processing capacity of 100 petaflops and up, while SKA 2, set to debut in 2024, will generate exabytes of raw data per day. This is an extreme data analytics challenge that requires the scale and speed of an exaflops-level computing solution.
While Dr. Sithole is confident that his center will be the first to build this next-generation machine, he did not reveal much in the way of specifics as to how the numerous exascale challenges (hardware, software, applications, power, etc.) would be tackled or funded. There are more clues, however. Earlier this year, the SKA South Africa team announced it had joined ASTRON (the Netherlands Institute for Radio Astronomy) and IBM in a 32.9 million euro, four-year collaboration to research ultra-fast, low-power exascale computer systems capable of handling the massive amounts of data generated by the world's largest and most sensitive telescopes.
An exact timeframe was also not provided, but to support the SKA 2 launch, the IBM-ASTRON-led exaflops-class supercomputer would need to be operational by 2024. Many other nations are vying to hit this major milestone sooner than that, and these nations (China, Japan, the US, the EU) have already assembled multi-petaflops machines, something that South Africa, currently still in teraflops territory, has not yet done.