April 11, 2013
The Rockhopper Penguin on Demand Cluster made its debut in November 2011 as a collaborative effort to deliver cluster computing as a service in a secure environment. Led by Penguin Computing and Indiana University, the project also included founding user-partners: the University of Virginia, the University of California Berkeley, and the University of Michigan. The cluster has now been in production for over a year, long enough for additional details to emerge on this interesting HPC cloud use case.
Presumably named after a rad-looking species of penguin, Rockhopper is a bona fide HPC cluster, meaning it is not virtualized. The architecture consists of 11 Penguin Computing Altus 1804 servers, each with four 12-core AMD Opteron 6172 processors and 128 GB of RAM. The system is located in Indiana University's Data Center facility and managed through Penguin Computing's Penguin on Demand (POD) service.
As explained in a presentation delivered at the International Plant and Animal Genome Conference in San Diego earlier this year by IU's Barbara Hallock, Rockhopper's services are provided by both the National Center for Genome Analysis Support and the partnership between Indiana University and Penguin Computing. Users benefit from the availability of on-demand cycles on a real HPC cluster at a lower price point than less-performant virtualized offerings, such as Amazon's EC2 cloud.
Penguin lists the fees as follows:
- Core Hour: $0.09 per core-hour
- On-Demand Storage: $0.10 per average GB-month (monitored daily)
- Data Transfer from Disk: $20 per transfer
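The published rates make job costs easy to estimate. The sketch below is a hypothetical calculator using the listed fees; the job parameters (core count, run time, storage footprint) are illustrative and not drawn from any actual Rockhopper workload.

```python
# Hypothetical cost estimate from the published POD rates:
# $0.09 per core-hour, $0.10 per average GB-month of storage,
# and $20 per data transfer from disk.

CORE_HOUR_RATE = 0.09   # USD per core-hour
STORAGE_RATE = 0.10     # USD per average GB-month
TRANSFER_FEE = 20.00    # USD per transfer from disk

def job_cost(cores, hours, storage_gb_months=0.0, transfers=0):
    """Estimate the total charge for a single job."""
    compute = cores * hours * CORE_HOUR_RATE
    storage = storage_gb_months * STORAGE_RATE
    transfer = transfers * TRANSFER_FEE
    return compute + storage + transfer

# Example: 48 cores for 100 hours, 50 GB stored for a month, one transfer
print(f"${job_cost(48, 100, 50, 1):.2f}")  # 432 compute + 5 storage + 20 transfer
```

At these rates, compute dominates for most jobs; the flat transfer fee matters mainly for short runs that move large result sets off the cluster.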
In case you're wondering where to sign up, you should know that the cluster is only available to researchers at US institutions of higher education (with .edu domain names) or Federally Funded Research and Development Centers (FFRDCs).
Rockhopper was conceived with two main purposes in mind: first, as a resource for XSEDE-allocated projects that had used up their award but still required additional computational work, and second, as a launchpad for projects that could potentially scale up to XSEDE in the future. For these reasons, Rockhopper was designed with an "XSEDE-standardized interface" to let researchers spend less time managing compute and more time engaged in core science tasks.
The system supports a wide assortment of applications, including mesoscale atmospheric prediction, genomics, quantum chemistry, and molecular dynamics. Specific packages include COAMPS, GAMESS, Galaxy, GROMACS, HMMER, NAMD, OpenFOAM, OpenMPI, WRF, and many more, as well as developer tools from Intel and the Portland Group.
Additional details about Rockhopper and other POD offerings can be found in this presentation from SC12.