October 04, 2010
Last week at the R Systems-sponsored HPC 360 event in Champaign-Urbana, Illinois, the focus was on the manufacturing sector, with an expected emphasis on the value of modeling and simulation in driving competitiveness and growth. A secondary thread asked how simulation-centered companies can look to utility or on-demand solutions to make better use of computational resources and improve efficiency.
While a number of manufacturing companies were present, only a few were actually making use of virtualized or on-demand resources, although several were weighing their options. Among the host of attendees in the “investigative” category was Matt Dunbar, chief software architect for SIMULIA, the simulation brand of Dassault Systemes, which produces the Abaqus finite element analysis product suite.
Software research and development arms like SIMULIA require vast computational resources to enhance their product lines. But what happens when a company like Dassault Systemes runs out of power and cooling capacity, leaving developers waiting in long queues? And what happens when on-site resources cannot deliver the 24/7 capability needed, forcing architects to put projects on hold while they wait?
Software architects eager to move forward with research and development face a tough choice: wait in a long queue, particularly for post-processing, or weigh the viability of sending at least some workloads off-site.
As Dunbar stated, “doing actual batch simulation in the cloud is reasonably straightforward but doing 3D graphics post-processing is something that remains a question mark for us. There are a number of ways we can do that, but right now we’re trying to decide how best to do that.” The decision is difficult because software architects must either endure long waits for their own workstations or accept what might be a performance hit from using utility resources instead.
Dunbar gave an overview presentation at the HPC 360 conference in which he discussed some of the challenges the company faces as it weighs moving post-processing into the cloud amid growing constraints on its in-house capacity, and he spent a few moments discussing some of his key points with us.
In Matt Dunbar’s view, “you have to come up with performance that’s equivalent to the workstation or come up with a way to handle post-processing” — a sentiment echoed by a number of other companies that rely on 3D processing to drive growth and further development.
Posted by Nicole Hemsoth - October 04, 2010 @ 7:37 AM, Pacific Daylight Time
Nicole Hemsoth is the managing editor of HPC in the Cloud and will discuss a range of overarching issues related to HPC-specific cloud topics in posts.