July 06, 2007
A report in last week's issue of HPCwire addresses the current discussion of the supercomputing strategy in Switzerland and how it will be realized and financed over the next four years. The report offers little information about the true issues at stake and misleadingly connects them to past allegations. As co-director of the Swiss Supercomputing Centre (CSCS), I would like to offer a more informed and untainted view of developments within Switzerland.
The planning of the 2008-2011 HPC performance period began in May 2006 with a mandate from the Secretary of State to define the needs of the HPC community and the national role of CSCS. A Strategy Project Group headed by Professor Margaritondo, vice-president of EPF Lausanne, was installed by the ETH board, and the Swiss federal government decided how the national strategy would be financed. A first report of the Project Group was presented to the Secretary of State at the end of April 2007 and then to the Federal Commission for Science and Education (WBK) at the beginning of May. The Steering Committee, including the presidents of the ETH board, ETH Zurich and the University of Lugano (USI), received the final report on June 19th. This report was ratified on July 4th by the ETH board and then forwarded to the Secretary of State.
The plans for the next four years (2008 through 2011), as stated in the message of the Swiss federal government to the Parliament concerning the support of research, education and innovation, officially define CSCS as the national centre for supercomputing. The message also states that CSCS will operate the most powerful supercomputer for research within Switzerland. The budget for the 2008-2011 HPC performance period adds up to 150 million Swiss francs: 70 million for supercomputer investments, 50 million for a new CSCS building to host HPC systems at the petaflop level, and 30 million for furthering education and research and for establishing a national HPC network with capacity computing nodes at universities. The ultimate goal is to enable the establishment of a petascale system at CSCS in 2010-2011 in support of the increasing needs of HPC-dependent research at the universities and the federal schools of technology. In addition to this extra funding, the budget for personnel and operating costs of CSCS will remain at its current level.
CSCS is preparing for the national strategy. The management is finalizing a reorganization of the Centre along operative units to enhance performance and to favor the further development of supercomputing expertise at CSCS as requested by the strategy group.
These developments are based on solid past performance, as documented by the fulfillment of the performance agreement with ETH Zurich for 2004-2007 (see pages 10-13 of the 2006 Annual Report) and by the excellent results of the peer review conducted in summer 2006. All nine points of the performance agreement with ETH Zurich were fulfilled before the end of the four-year period, as stated in the 2006 Annual Report, and the peer review acknowledged that "a considerable turnaround of CSCS has been undertaken" and recognized that "the CSCS is a central resource for Swiss science" (final remarks of the evaluation report, page 37).