October 18, 2013
Formula 1 racing is all about saving time. The team that completes the specified number of laps in the best time goes home with the trophy. Nowadays, winning on the racetrack begins in the datacenter.
To build the ultimate race car, every single part must be optimized for performance. Improvements in design may translate into tenths of a second being shaved from lap times, but those incremental gains add up. As a science, racing is all about manipulating aerodynamics to reduce drag. Back when supercomputers were the sole purview of government labs, this design work was done using physical prototypes and wind tunnel testing. It was a time-consuming and expensive endeavor. Now a great percentage of this work is done using high-performance computer clusters – it's a faster and more precise discipline.
The Caterham F1 Team relies on a Dell HPC cluster to design its race cars. Thanks to recent enhancements, the cluster at the Leafield Technology Centre in Oxfordshire, England, can perform 10 billion calculations in about 12 hours, according to William Morrison, the Caterham IT infrastructure manager. The average home computer would take four to five months to do a similar amount of math.
The common practice today is for F1 teams to pursue computational fluid dynamics (CFD) in tandem with wind tunnel work. CFD relies on a technique called meshing, which breaks up the car's surface area into a grid of smaller spaces – creating a virtual model of the car. The computer's "wind tunnel" simulator will provide engineers with detailed information, including temperature, pressure, turbulence and velocity, for each unit in the mesh.
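The meshing idea described above can be sketched in a few lines of code. This is a purely illustrative toy model, not taken from any real CFD package: the car's surface is divided into a grid of cells, and the simulator attaches flow quantities (temperature, pressure, turbulence, velocity) to each one. All names and values here are hypothetical placeholders.

```python
from dataclasses import dataclass

@dataclass
class MeshCell:
    """One cell of the surface mesh, holding the solver's outputs."""
    temperature: float  # kelvin
    pressure: float     # pascals
    turbulence: float   # turbulence intensity (dimensionless)
    velocity: float     # metres per second

# A toy 4x4 mesh over a single surface patch, filled with placeholder
# values standing in for one cell's solver results.
mesh = [[MeshCell(293.0, 101325.0, 0.05, 60.0) for _ in range(4)]
        for _ in range(4)]

# Engineers then query the mesh cell by cell, e.g. the peak pressure
# anywhere on the patch:
peak_pressure = max(cell.pressure for row in mesh for cell in row)
print(peak_pressure)
```

In a production solver the mesh would contain millions of cells and far richer per-cell state, but the principle is the same: detailed, per-cell flow data rather than a handful of physical sensor readings.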
The computerized testing process is a tremendous time-saver, allowing the team's engineers to try out hundreds or thousands of virtual parts without having to physically build a thing. Only the best candidates will be manufactured into actual parts.
The Caterham F1 engineering team recently figured out how to decrease the time of a typical job from about 17 hours to 12 hours through enhancements to the model and the solving approach. Shaving those five hours took a few months of stripping the model down and rebuilding it. "We have a little group of about three people dedicated to improving this all the time," notes Morrison.
Each job generates about 800 million pieces of individual data, which through further processing are transformed into a dataset that includes approximately eight videos, several graphs and a couple hundred pictures.
While there was originally some skepticism about virtual prototyping, the results of HPC won over converts. The digital design process is now central to F1 racing, and that's definitely the case with the Caterham F1 Team operation. Without their Dell HPC cluster, design and development would come to a halt. The team's engineers run their HPC cluster 24 hours a day, 365 days a year, and there are always between 10 and 20 jobs awaiting processing.
There are safety mechanisms and power backups in place to prevent downtime. The HPC system is monitored twice a day and individual nodes can be taken offline without affecting other jobs.
For the Caterham F1 team, the HPC system is simply too important to risk losing. "It's not allowed any time off," jokes Morrison.