December 02, 2005
This year marks the 15th birthday of the Edinburgh Parallel Computing Centre (EPCC). This is, for us, a significant milestone, and I wanted to take this opportunity to review the past decade and a half, not only in terms of the changes we have seen as an organisation, but also to reflect on the revolution that has taken place in the HPC arena more generally.
Although EPCC was founded in 1990 as the University of Edinburgh's flagship for HPC and its application in computational science, our roots go back almost a decade earlier. In 1981 the Department of Physics launched an initiative to buy two ICL Distributed Array Processors and to use these highly parallel computers as a cost-effective alternative to vector computers for computational science. The success of the academic and, latterly, industrial research on these and successor machines showed that there were real opportunities in pulling this activity together into a coherent department - and EPCC was born.
As one of those who came out of the Department of Physics to found EPCC, I can say those were heady days. Not only was parallel computing the "in" computing technology, but many more scientists were realising that computation was emerging as the third methodology of science, complementing theory and experiment. This meant that new opportunities were emerging in exciting new fields - including industrial applications. Embracing the linkage between academia and industry from the outset set EPCC down a road it has followed ever since. Encapsulated in the term 'win-win-win', we have sought to form alliances within and across projects so that the participants obtain more for their investment than would otherwise have been possible. Without this entrepreneurial approach we would not have been able to build up our range of facilities, which is unmatched in any European university, nor would our 70 staff members be able to span such a wide range of activities: from HPC facilities management, through industrial software development, European co-ordination and HPC training, to research into HPC tools and techniques and academic computational science development.
More recently, the need to marry the outputs of HPC research with experimental data has driven much grid research worldwide. EPCC, alongside its sister institute, the National e-Science Centre, has taken a leading role in developing the middleware to support such grid-based research and, beyond that, in helping academia and industry build applications on top of this infrastructure. In recognition of the success of the OGSA-DAI project, one of the largest in which EPCC has ever been involved, we became a founding partner in the Globus Alliance two years ago. It was therefore a source of great satisfaction to hear a few days ago that our OGSA-DAI work will be funded for a further three years.
Looking back can be dangerous, as memory can be a distorting mirror, but I would like to pick out a few milestones along our path. The first was our collaboration with Thinking Machines, which brought the Connection Machine to Edinburgh in 1991. This was the first time that a parallel computer in the UK outperformed the Cray vector supercomputers, then at RAL. Not only did this machine give a boost to Edinburgh researchers, it also raised the visibility of parallel computing and of EPCC on the national and international scene, and led fairly directly to two other key activities that we have carried on ever since: European research, training and co-ordination; and UK national HPC services.
The development of our industrial programme into a slick machine, bringing in clients from blue-chip multinationals to local SMEs, has also been vital to our success. In a portfolio of projects with over 100 clients it is the unusual ones that stick in the mind: I always think of our work on the automated inspection of coated mushrooms, or on monitoring the effectiveness of fishing nets - even if projects such as designing more effective wind turbines, or maximising extraction from oil reservoirs, may have had a more widespread effect.
Although we have had a training activity for a decade, I was particularly pleased when, a few years ago, we started the UK's first MSc in HPC. This has been a real success, attracting many European and international students to the University each year. Recently expanded to include links to computational science PhD students from around the UK, the programme goes from strength to strength, and we see it as essential to maintaining the UK's world-leading position in computational science research. Learning from the experiences of the past, and from other application areas, is vital as the use of HPC in computational science breaks out of its traditional homelands of physics, chemistry and engineering into new areas such as biology, medicine and geology.
If the number of application domains has increased as HPC has become more mainstream, we have seen a corresponding decline in the range of technology options available. When Greg Wilson and I edited a book on the HPC technology marketplace in 1991, we had to be selective to keep it under 400 pages; today, with the market dominated by the big computer companies, we would be hard pressed to fill a long pamphlet. Is this a bad thing? Provided it does not stifle future progress, my answer would be no. Code portability between platforms is better than ever, and the big companies have produced top-end machines at ever more affordable prices - something that would not have been possible without the benefits of scale. The emergence of novel-architecture machines such as the IBM Blue Gene or the home-grown QCDOC, and of machines from niche providers such as ClearSpeed, shows that the HPC technology roadmap remains an exciting one.
Every decade produces its own claims that "Moore's Law is just about to die" and, so far, every decade has been wrong. Equally, it would be foolish to argue that single microprocessors can continue to deliver higher performance through ever-smaller feature sizes alone. One clear consequence is that the parallel computing paradigm, which we adopted originally for its cost-effectiveness, has become a fundamental technology for the future: only such techniques will enable us to overcome the physical limits imposed on the design of ever-faster microprocessors. I believe these pressures can only increase the applicability of our skills and resources in the years to come.

Looking forward 15 years in an area changing as rapidly as leading-edge computing is dangerous. However, I see an exciting road ahead, with new scientific problems and new tools appearing to tackle them. EPCC has all of the skills, drive and ambition to take on those challenges, and I feel privileged to lead such an organisation. EPCC is not hardware and buildings; it is people. Without the dedication of our staff we would be nothing, but with today's highly talented team we are looking forward to another bright 15 years.