August 27, 2008
Using computer simulations to design new products has become standard operating procedure at many engineering firms today. Aerospace companies, automakers, and consumer goods manufacturers have been employing HPC for some time. Not everyone is on board though, as the Council on Competitiveness keeps reminding us. But even if every product engineer isn't using HPC in the traditional sense, almost all make use of technical computing on the desktop, either as an end unto itself or as a prelude to larger scale simulations on an honest-to-God supercomputer.
In fact though, there's a false dichotomy between desktop-based HPC and server-based HPC from the widget-maker's point of view. Engineers just want to run their favorite CFD software and get the results back as quickly as possible. Given a choice, though, most would prefer the luxury of a personal workstation versus sharing a cluster with others. The good news here is that desktop systems are becoming much more powerful, not only because CPUs are getting faster, but also because GPUs and Cell processors can now be exploited as floating point accelerators.
Even a high-end PC -- the one your teenage son is using -- has a teraflop of performance under the hood. Of course, tapping into that performance for general-purpose computing is still a work in progress. But with software frameworks such as CUDA (for GPUs), Intel's Threading Building Blocks (for multicore CPUs), and RapidMind's Platform (for both) now available, ISVs have a choice of tools to bring teraflop computing to their desktop customers. In fact, for both software vendors and users, the path to shared memory parallelism on the desktop may be an easier transition and more economical than the path to distributed memory parallelism on HPC clusters.
Of course, if you're Boeing you don't have much choice; you're going to need some big iron to do those wind tunnel simulations for your aircraft designs. I think it's safe to say that firms doing cutting-edge engineering will require cutting-edge computing. But for component-makers who need something less than a digital wind tunnel, a teraflop of compute power may be plenty. Keep in mind that 10 years ago, the top supercomputer in the world was a 1 teraflop system.
The real question is this: What's the market for desktop HPC versus server-based HPC for product engineering? That's a tough one to answer since both applications and computing performance are moving targets. I suspect computing performance is moving faster than the software, if only because it's much harder for ISVs to modify their code than for OEMs to build faster machines. In fact, the software vendors would love to get their simulation tools in a framework that automatically scaled with the underlying hardware. But since multicore CPUs and coprocessor acceleration are still relatively new, the ISVs have yet to catch up.
Certainly there is room for more capability in the current crop of engineering design and visualization tools. Despite advances in the power and sophistication of this software, the final step in the design process is almost always a physical mockup and test. Even Boeing and the Formula One automakers still use wind tunnels -- they just need fewer of them than they used to.
In the latest issue of Product Design & Development, some ink is devoted to the topic of simulation software versus physical testing. The consensus is that simulation, while critical, only takes you so far.
Mike Rainone, co-founder of PCDworks, puts in his two cents on the topic, asking: "Why in the world did we spend bazillions of dollars on these (software) programs, if you are going out to the shop to build the thing out of foam?" Even while recognizing that simulation has become an indispensable tool in the designer's arsenal, Rainone says he's not about to tear down the shop. "Regardless of the veracity of the model, most systems defy true 'understanding' until you get physical," he writes, "until you can put it in your hands, turn it inside out, and make it work to see the interdependencies of the parts in action."
For that you are going to need a holodeck, or something very much like it. Fully-immersive simulations might seem like science fiction today, but Intel, AMD and NVIDIA have been talking up "visual computing" as the next frontier, so these virtual reality applications (perhaps minus the tactile feedback) are definitely in the cards for the post-2010 world. Product designers may never shut down the shop completely, but I imagine they are going to love the holodeck. And by the way, so will your teenage son.
Posted by Michael Feldman - August 26, 2008 @ 9:00 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.