Running Computational Fluid Dynamics in the Cloud


There are several limitations to performing HPC in a public cloud, a few of them specific to computational fluid dynamics (CFD). Like other parallel scientific computing applications, an intensive CFD code decomposes its problem domain into pieces that are solved in parallel, and those pieces must exchange boundary data with one another many times per run before the final result can be assembled.
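
To make that communication pattern concrete, here is a minimal sketch of the halo (ghost-cell) exchange at the heart of a distributed CFD solver, written in Python with mpi4py. The array size, iteration count, and variable names are illustrative assumptions, not taken from the study; the point is that every solver iteration requires a round of neighbor-to-neighbor messages, which is what makes network latency so critical.

    # Minimal 1-D halo-exchange sketch (illustrative, not the study's code).
    # Run with: mpiexec -n 4 python halo_exchange.py
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    n_local = 1000                 # interior cells owned by this rank (assumed size)
    u = np.zeros(n_local + 2)      # plus one ghost cell on each end
    u[1:-1] = rank                 # dummy interior data

    left = rank - 1 if rank > 0 else MPI.PROC_NULL
    right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

    for step in range(100):        # each solver step needs a fresh exchange
        # Ship the rightmost interior cell right; receive our left ghost cell.
        comm.Sendrecv(sendbuf=u[-2:-1], dest=right, recvbuf=u[0:1], source=left)
        # Ship the leftmost interior cell left; receive our right ghost cell.
        comm.Sendrecv(sendbuf=u[1:2], dest=left, recvbuf=u[-1:], source=right)
        # ...stencil update on u[1:-1] would go here...

On a dedicated cluster interconnect each of those exchanges costs on the order of microseconds; over a virtualized commodity network it can cost far more, and the solver pays that price on every iteration.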

In the cloud, that constant communication travels over commodity interconnects, where latencies can be unacceptably high. Researchers at the University of Bonn in Germany examined those latency issues by benchmarking a common CFD code on Amazon EC2 HPC instance types with both CPU and GPU cores.

On Amazon’s EC2 cluster instances, they found that the application running on 8 CPU cores achieved a parallel efficiency of 70 percent relative to a non-virtualized HPC cluster. “Beyond that limit, we run into network interconnect bandwidth problems if we do not reserve more instances. After an explicit request for more CPU compute instances, we have seen even for up to 256 CPU cores / 32 instances an acceptable parallel efficiency of more than 50 percent.”
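
Parallel efficiency here is the usual ratio of speedup to core count. A quick illustrative calculation (the timings below are assumed, not the paper’s measurements) shows how a 70 percent figure arises:

    # Illustrative parallel-efficiency calculation (assumed timings).
    def parallel_efficiency(t_baseline, t_parallel, n_cores):
        """E = speedup / cores = (T1 / Tp) / p"""
        return (t_baseline / t_parallel) / n_cores

    # A job taking 800 s on the baseline and 143 s on 8 cores:
    print(parallel_efficiency(800.0, 143.0, 8))   # ~0.70, i.e. 70 percent

Note that the researchers measured efficiency relative to a non-virtualized HPC cluster, so their baseline is real hardware rather than a single-core run, but the arithmetic is the same.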

With regard to GPUs, they found similarly acceptable efficiency at 8 GPUs, but performance tailed off with further scaling. “Overall, we expect an acceptable scaling on more than 128 CPU cores or more than 8 GPUs if we pre-request an appropriate number of instances and avoid ECC in the case of GPUs.”

In short, according to the researchers, “we believe that Amazon’s HPC cloud is well prepared for moderately sized parallel CFD problems on up to 64 CPU cores or 8 GPUs.” That bodes well for the future of mid-level scientific HPC performed in the cloud.
