October 21, 2013
One of the most important tasks of anyone involved in HPC right now (or supercomputing or big data processing or advanced research computing or whatever – let’s not get fussy about names) is to be able to explain just what HPC is to others.
“Others” could be a plethora of different people: politicians, Joe Public, graduates possibly interested in HPC, industry managers trying to see how HPC fits into their IT or R&D programs, or just their family asking for the umpteenth time “what exactly do you do?”
It seems there are a handful of analogies that frequently get called into use by HPC types when trying to explain HPC or one of its concepts. Let’s take a look at these, and at which aspects of each work well and which are flops (oh, who said puns are too geeky?).
The testosterone favorite: Formula 1
This analogy is often used to explain how HPC relates to “normal” IT. Normal IT is your family car (apparently Americans call this an “automobile”). It gets you from A to B (and indeed lots of other places on the map, providing your satnav is playing nicely). It uses commodity components. Most adults can learn to drive it (although the quality of some people’s driving is suspect).
HPC is like Formula 1 (a proper motor racing sport, not sure about NASCAR-lets-drive-in-circles-for-ages). It applies a much higher budget to achieve the highest performance within a set of constraints (rules for F1, power etc. for HPC). It uses specialized components. Few adults can learn to drive F1 cars effectively, and few will ever get the chance to try. The relationship between F1 and the family car is often compared to that between HPC and everyday IT too – F1 (HPC) is at the leading edge of motoring (computing) technology, and successful technologies trickle down to mass-production use in family cars (common IT). However, this analogy focuses on the technology and fails to relate the purpose or benefit of HPC.
It also perpetuates a picture of a niche activity relevant only to a few – which is part of the perception issue that HPC needs to break free of. Overall, I have personally avoided this analogy for these reasons.
The simple yet powerful: A spade
Need to dig a hole? Use the right tool for the job – a spade. Need to dig a bigger hole, or a hole through tougher material like concrete? Use a more powerful tool – a mechanical digger (Cat, JCB, …).
Now instead of digging a hole, consider modeling and simulation. If the model/simulation is too big or too complex – use the more powerful tool: i.e. HPC. It’s nice and simple – HPC is a more powerful tool that can tackle more complex or bigger models/simulations than ordinary computers. There are some great derived analogies too. You should be able to give a spade to almost anyone and they should be able to dig a hole without much further instruction. But hand a novice the keys to a mechanical digger, and it is unlikely they will be able to operate the machine effectively without either training or a lot of on-the-job learning.
Likewise, HPC requires training to be able to use the more powerful tool effectively. Buying a mechanical digger also requires expertise that buying a spade doesn’t. And so on. This analogy neatly focuses on the purpose and benefit of HPC rather than the technology itself. If you’ve heard any of my talks recently, you will know this is one of my favorite HPC analogies at the moment.
The moral high ground: A science/engineering instrument
I’ve occasionally accused the HPC community of being riddled with hypocrites – we make a show of “the science is what matters” and then proceed to focus the rest of the discussion on the hardware (and, if feeling pious or guilty, we mention “but software really matters”). However, there is a critical truth to this – the scientific (or engineering) capability is what matters when considering HPC. I regularly use this perspective, often very firmly, myself: a supercomputer is NOT a computer – it is a major scientific instrument that just happens to be built using computer technology.
Just because it is built from most of the same components as commodity servers does not mean that the modes of usage, operating skills, user expectations, etc. should be the same. This helps to put HPC into the right context in the listener’s mind – compare it to a major telescope, a wind tunnel, or even the LHC at CERN. Again, the derived analogies are effective – expertise in the instrument’s technology is required, not just in the science that uses it. Sure, the skills overlap, but they are distinct and equally important. Like the previous one, this analogy focuses on the purpose and benefit of HPC, while still acknowledging that the instrument is built from a big computer.
Stay tuned later this week for more analogies…