June 12, 2013
Cloud computing is gaining ground among mid-sized institutions looking to expand their experimental high performance computing resources. These institutions may benefit from a guidebook to help them get started and make the most of their cloud-based HPC facilities.
To that end, IBM has released one of its Redbooks publications, in part to assist institutions in moving high performance computing applications to the cloud. "Inside this book we cover some reference scenarios for a given amount of industries, or reference software architectures," said Rodrigo Ceron Ferreira de Castro of IBM Brazil's Lab Services. Ceron discussed four reference architectures: engineering, financial, business, and life sciences. To start off, below is the diagram from the book illustrating the reference architecture for engineering applications.
Like all four architectures discussed here, the diagram is meant to serve as a reference point, in this case for engineering, which commonly uses cloud-based HPC systems for structural analysis and experimentation. In practice, that means passing through the requisite security layer and then employing several cloud-based Platform-as-a-Service components, including workload management and virtualization.
A few of these architectures, including the one for business analytics, naturally feature IBM systems as part of the reference infrastructure. Here, according to the diagram below, that means placing HDFS under the BigInsights cluster umbrella. Perhaps the most important of those systems is IBM's Symphony, which, as shown in the financial reference architecture, sits across the workload optimizers to improve runtime.
IBM has been pushing its Symphony platform as a workload manager for cloud-based HPC applications in general, and has found some traction, including in the much-discussed life sciences arena.
Genomics, with its high degree of necessary experimentation and scientists' need to share data as seamlessly as possible, has received significant attention with regard to HPC cloud. As such, its architecture requires several layers. One will notice an IBM implementation or two spread across the infrastructure, with Watson handling the nitty-gritty genetic analytics and Vivisimo handling the vast search requirements of a system parsing billions of genes.
The Redbooks aim to guide new users of cloud technology in setting up high-performance applications. IBM hopes they can increase accessibility to these technologies, and perhaps nudge a few of those new users toward its own cloud products, which is not a bad strategy.