October 08, 2013
Oct. 8 -- As the 2013 Nobel Prize in Physics goes to Professor Peter Higgs, along with François Englert, for their prediction of the Higgs boson in the 1960s, computational scientists from across the world will no doubt be celebrating their role in the particle’s discovery, which confirmed the theory and helped the UK win its 120th Nobel Prize.
When Professor Higgs wrote his theory in the 1960s, predicting the existence of the Higgs boson to explain why particular particles have mass, he would have used nothing more than a notebook and a blackboard. But little did he know at the time that the hugely sought-after particle he predicted would be discovered in 2012 by the world’s biggest science experiment, the Large Hadron Collider (LHC), marking a significant breakthrough in our understanding of the fundamental laws that govern the Universe.
One hundred metres below ground, the LHC’s particles travel at 99.9999991% of the speed of light. They circulate the ring 11,245 times per second and collide 600 million times per second, generating roughly 1 MB of data with each collision. That equates to some 600 million MB of raw data every second, an amount it would be impossible to even start to analyse with just chalk and a blackboard.
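As a rough back-of-envelope check on those figures (a purely illustrative calculation using the approximate rates quoted above, not official CERN numbers), the raw data rate works out as follows:

# Back-of-envelope estimate of the LHC's raw collision data rate,
# based on the approximate figures quoted in the article.
collisions_per_second = 600e6   # ~600 million collisions per second
data_per_collision_mb = 1.0     # ~1 MB of data per collision

raw_rate_mb_per_s = collisions_per_second * data_per_collision_mb
raw_rate_tb_per_s = raw_rate_mb_per_s / 1e6   # 1 TB = 1,000,000 MB (decimal units)

print(f"Raw rate: {raw_rate_mb_per_s:,.0f} MB/s (~{raw_rate_tb_per_s:,.0f} TB/s)")
# Prints: Raw rate: 600,000,000 MB/s (~600 TB/s)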
Processing data on this scale requires a massive amount of supercomputing power, and the UK has made a significant contribution, supplying both the supercomputing expertise and the facilities that make sense of the data generated by the LHC, analysis that ultimately led to the discovery of the Higgs boson.
In order to crunch the large quantities of data produced every year by the LHC, CERN turned to grid computing in 2002 with the creation of the Worldwide LHC Computing Grid (WLCG) – an impressive network of computer centres around the world that provides the resources to manage data from the LHC. The WLCG analyses 15 million gigabytes (15 petabytes) of data every year to determine whether the experiments at the LHC are showing anything of particular significance. The Grid runs more than one million jobs per day, allowing over 8,000 physicists near real-time access to LHC data. To be as efficient as possible, the Grid is arranged in tiers, starting with Tier 0 at CERN.
The Science and Technology Facilities Council’s (STFC) Rutherford Appleton Laboratory is home to the UK’s Tier 1 computing centre, one of only 11 in the world large enough to store the LHC data. Managed by the GridPP project, the UK arm of the WLCG, the centre has optical fibre connections to CERN to ensure that data is transferred quickly. It performs large-scale data processing and stores the derived data before distributing it to the Tier 2 centres, which are hosted by universities and institutions across the UK, followed by Tier 3, which comprises individual researchers and university clusters.
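To make that tiered layout concrete, here is a minimal, purely illustrative Python sketch of how a dataset cascades down the WLCG tiers; the TierSite class and the site names are hypothetical stand-ins, not part of any real WLCG software:

# Illustrative model of the tiered data flow described above.
# All names here are hypothetical, not real WLCG software.
from dataclasses import dataclass, field

@dataclass
class TierSite:
    name: str
    tier: int
    children: list = field(default_factory=list)

    def distribute(self, dataset: str, depth: int = 0) -> None:
        # Each site receives the dataset, then fans it out to the tier below.
        print("  " * depth + f"Tier {self.tier} ({self.name}) receives {dataset}")
        for child in self.children:
            child.distribute(dataset, depth + 1)

# Tier 0 at CERN feeds the national Tier 1 centre, which feeds Tier 2
# university sites, which in turn serve Tier 3 research clusters.
tier3 = TierSite("university research cluster", 3)
tier2 = TierSite("UK university site", 2, [tier3])
tier1 = TierSite("RAL Tier 1, GridPP", 1, [tier2])
tier0 = TierSite("CERN", 0, [tier1])
tier0.distribute("LHC collision dataset")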
The University of Glasgow’s Professor David Britton, who leads the GridPP project, said, “This is a triumph for international collaboration between particle physics theorists and experimentalists, engineers of many disciplines, and computer scientists from across the globe who have come together to build the greatest scientific experiment in history. This collaboration has been united by the common goal inspired by the work of Peter Higgs and François Englert half a century ago.”
Professor Adrian Wander, Head of STFC’s Scientific Computing Department, said, “This is an extremely proud day for Professor Higgs and indeed the UK. The UK is a global leader in computing technology and expertise, and this has played a substantial role in the discoveries that have led to the UK winning its 120th Nobel Prize today. This is certainly a day for celebration, not just for the UK but for physicists across the world, and I am very excited to say that the UK’s computational science community will, without a doubt, be joining in the celebrations.”