April 08, 2013
All around the world, it's a similar story: funding for federal labs is constrained. Op-ed pages are full of pleas from lab managers and scientists across the R&D spectrum highlighting the benefits of a healthy investment strategy. The latest such missive comes from Ian Gibson, chief executive of Intersect, a consortium of New South Wales (NSW) universities in Australia, but it could just as easily have been written by any research head in the US.
The crux of Gibson's message is the importance of a balanced funding strategy. He characterizes the current allocation process as "lumpy," noting that money arrives in waves that don't match the longer-term funding needs of federal supercomputing installations. While the initial system purchase is covered, operational costs such as staffing and maintenance are underestimated.
System managers everywhere will understand the plight of Intersect's supercomputers, which are oversubscribed and struggling to keep pace with demand. In order to maintain cutting-edge science programs, labs need a reliable funding stream.
"Lumps of money come at various times and people can't really plan. IT infrastructure for something like a supercomputer will last for three years and, if you stretch it out, a little bit longer, so having a good solid understanding about where things are going in the future is really important," Gibson tells the Financial Review.
The Australian government signed a $1.1 billion ($1.15 billion US) agreement to fund a big science initiative in 2009. Of that, $80 million went to the Pawsey supercomputer in Perth and the National Computational Infrastructure project in Canberra. Intersect received $1 million to purchase its newest machine, the Orange supercomputer, which is 20 times more powerful than its predecessor.
Gibson welcomes the funding while stressing the need for a multi-year strategy. He says that a million-dollar computer might last four years, but the operational and training costs over that period can run even higher. He also believes that the move toward increasingly complex workloads in domains such as human genetics and climate science will put an even greater burden on resources. "The ongoing human capability is really critical and having a long term vision of how that's funded is quite important to maintaining the capability," states Gibson.
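To see why operating costs dominate Gibson's concern, consider a rough total-cost-of-ownership tally. The figures in this sketch are purely illustrative assumptions, not numbers reported by Intersect or the Financial Review, but they show how a few years of staffing, facilities, and training can easily outstrip a million-dollar capital grant:

    # Back-of-envelope TCO sketch for a small HPC system.
    # All figures are illustrative assumptions, not Intersect's actual costs.
    capital_cost = 1_000_000          # one-off hardware purchase
    lifetime_years = 4                # Gibson's "might last four years"

    annual_staff = 3 * 120_000        # assumed: three support/sysadmin FTEs
    annual_power_cooling = 80_000     # assumed facility costs
    annual_training = 30_000          # assumed user training programs

    annual_opex = annual_staff + annual_power_cooling + annual_training
    lifetime_opex = annual_opex * lifetime_years

    print(f"Capital outlay:            ${capital_cost:,}")
    print(f"Operating cost, {lifetime_years} years: ${lifetime_opex:,}")
    # With these assumptions, lifetime opex (~$1.88M) exceeds the capital
    # outlay, which is why a one-off equipment grant without recurrent
    # funding leaves the "ongoing human capability" unfunded.

Under these assumed numbers, roughly two dollars of operating expense accompany every capital dollar over the machine's life, which is the gap a "lumpy" one-off grant fails to cover.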
He urges decision makers to engage in "holistic planning" that takes into account both "the capital equipment and the operating environment."
Full story at Financial Review