October 13, 2011
Back in 2009 the Australian government forked over $80 million to fund a critical part of its “Super Science” initiative. Much of this money went towards the establishment of iVEC’s Pawsey Centre Project. This massive undertaking, which should come online in full in 2013, will provide new supercomputing facilities and expertise to support SKA (Square Kilometre Array) research and other high-end science.
The secondary goal of the Project is to demonstrate Australia’s ability to support HPC in order to bolster its bid to host the SKA, which is critically dependent on advanced computing resources.
Among the systems designed to support select research projects and the SKA effort is the University of Western Australia’s iVEC@UWA “big science” supercomputer. The machine is overseen by iVEC, a government-funded organization that encourages the adoption and use of high performance computing and provides access to supercomputing, large-scale data storage and visualization resources. Much of the work is focused on a specific set of research areas, including radioastronomy, high energy physics, oil and gas discovery and urban planning.
The SGI Fornax super, which is part of the Pawsey Centre Project, boasts 96 nodes, each with two six-core Xeon X5650s, an NVIDIA Tesla C2050, 48 GB RAM—and the ability to handle the big data, big science problems that are being hurled from the radioastronomy and geosciences research camps.
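For readers keeping score, the node counts above translate into sizable aggregate figures. The quick back-of-the-envelope calculation below sketches them in Python; note that the 2.66 GHz clock for the Xeon X5650 and the roughly 515 gigaflop double-precision peak for the Tesla C2050 are assumptions based on the published part specifications, not figures from the article itself.

# Rough aggregate numbers for the SGI Fornax system described above.
# Clock speed, flops-per-cycle and GPU peak are assumed part specs,
# not figures quoted in the article.
nodes = 96
cpus_per_node = 2
cores_per_cpu = 6
ram_per_node_gb = 48

cpu_clock_ghz = 2.66          # assumed Xeon X5650 base clock
dp_flops_per_cycle = 4        # assumed SSE double-precision throughput per core
gpu_peak_gflops = 515         # commonly quoted Tesla C2050 double-precision peak

cpu_gflops_per_node = cpus_per_node * cores_per_cpu * cpu_clock_ghz * dp_flops_per_cycle
total_cores = nodes * cpus_per_node * cores_per_cpu
total_ram_tb = nodes * ram_per_node_gb / 1024
total_peak_tflops = nodes * (cpu_gflops_per_node + gpu_peak_gflops) / 1000

print(f"CPU cores:        {total_cores}")        # 1152 cores
print(f"Aggregate RAM:    {total_ram_tb:.1f} TB")  # ~4.5 TB
print(f"Peak (CPU + GPU): {total_peak_tflops:.1f} TFLOPS")  # ~62 TFLOPS under these assumptions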
According to a recent report, however, even though the system is churning away, it is serving as something of a testbed. As Richard Chirgwin reported, “The demands of ‘big science’ are so intensive, and the data sets so diverse across different communities, that even a ‘finished’ project is also a development platform for new techniques and applications.”
Chirgwin says that “Part of the problem posed by the huge datasets that Fornax users create is that different researchers will be asking different questions of the same, or similar, data.” Data movement, data access, and finding ways to make high-end resources easier to use are proving to be challenges with such diverse and large data sets.
Pawsey Centre systems architect Guy Robinson explained some of the challenges to Chirgwin, noting that “the scientist isn’t rewarded for spending six months solving problems of data access issues that might only get him or her to the ‘real’ problem they’re trying to solve. They should be able to devote themselves to the problems in front of them, with the underlying computer facilities as invisible as possible.”
Full story at The Register