HPCwire

Since 1986 - Covering the Fastest Computers
in the World and the People Who Run Them


Sneak Peek into The SCinet Sandbox


This week at the HPC 360 conference in Champaign-Urbana, Illinois, sponsored by HPC on-demand provider R Systems, I spent some time speaking with Al Stutz, CTO of Avatec, a Springfield, Ohio-based non-profit modeling and simulation research organization with a current emphasis on the aerospace sector. Avatec is involved in an ongoing project examining ways the military can reduce the cost and development time of jet turbine engines, work that meshes with Avatec's broader aim of exploring solutions to improve HPC performance for companies that rely on simulation and modeling.

Avatec is taking part in the DICE program’s sandbox project at SC10 in New Orleans this year, along with several national labs and companies that want to demonstrate the potential of joining geographically distributed InfiniBand clusters into a common InfiniBand mesh, so that the clusters can interoperate, share messages and send data back and forth. One of the key initial findings the cooperative wants to prove is that using Obsidian Longbow products with full encryption has no performance impact.
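Stretching InfiniBand across wide-area links is largely a buffering problem: InfiniBand's credit-based flow control stalls unless enough data can be in flight to cover the round-trip time, which is what range-extension devices such as the Longbow supply. As a rough, illustrative calculation (the 50 ms round-trip time is a hypothetical cross-country figure, not a number from the DICE demo):

```python
def bandwidth_delay_product(bandwidth_gbps: float, rtt_ms: float) -> float:
    """Return the data (in MB) that must be in flight to saturate the link."""
    bits_in_flight = bandwidth_gbps * 1e9 * (rtt_ms / 1000.0)
    return bits_in_flight / 8 / 1e6  # bits -> bytes -> megabytes

if __name__ == "__main__":
    mb = bandwidth_delay_product(10.0, 50.0)
    print(f"~{mb:.1f} MB in flight to fill a 10 Gbps link at 50 ms RTT")
```

At 10 Gbps and 50 ms round trip, roughly 62 MB must be buffered in flight, far more than a standard InfiniBand HCA provides, which is why dedicated range extenders are needed for this kind of mesh.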

Below is part of a brief chat I had with Al Stutz about the sandbox and what it could mean for users looking for large resources to tackle major problems.

What this could mean is that clusters can be brought together to tackle extraordinarily large problems without the cost of a single giant supercomputing system. Stutz claims that “by interconnecting systems across the country with this product from Obsidian, you can extend your Infiniband cluster dramatically by linking together these geographically distributed clusters, scheduling them together and sharing vast resources when you have particularly large problems to address.”

The SCinet “sandbox” demo will be staged at SC10 by members from NASA Goddard, Lawrence Livermore National Lab and others as they test a WAN file system and data transfers using Obsidian ES Encryptors over 10 GbE links, from scattered sites directly to Booth #1149.
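Verifying the "no performance impact" claim comes down to measuring sustained throughput with the encryptors in the path and comparing it against a cleartext baseline. As a bare sketch of that measurement pattern (not the DICE team's actual methodology), the timing loop can be exercised against a local TCP loopback; in a real test the loopback would be replaced by the encrypted and unencrypted WAN paths:

```python
import socket
import threading
import time


def measure_throughput(payload_mb: int = 64) -> float:
    """Send payload_mb megabytes over a local TCP loopback; return MB/s."""
    chunk = b"x" * (1024 * 1024)  # 1 MB send buffer
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))  # let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def sink() -> None:
        conn, _ = srv.accept()
        while conn.recv(1 << 20):  # drain until sender closes
            pass
        conn.close()

    receiver = threading.Thread(target=sink)
    receiver.start()

    cli = socket.create_connection(("127.0.0.1", port))
    start = time.perf_counter()
    for _ in range(payload_mb):
        cli.sendall(chunk)
    cli.close()
    receiver.join()  # include receiver drain time in the measurement
    elapsed = time.perf_counter() - start
    srv.close()
    return payload_mb / elapsed
```

Running the same harness against both paths and comparing the two MB/s figures is the simplest way to show whether line-rate encryption costs anything.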

Posted by Nicole Hemsoth - October 04, 2010 @ 10:21 AM, Pacific Daylight Time




