October 06, 2010
This week a tech publication put together a short but descriptive list of notable HPC cloud scientific research efforts, including NASA's Nebula cloud and the use of Amazon's EC2 cloud platform to power the Belle project as an augmentation to the initiative's own grid infrastructure.
The Belle project, whose results contributed to the 2008 Nobel Prize in Physics, began with a group of Japanese researchers at the KEK High Energy Accelerator Research Organization who were investigating the large asymmetry between matter and antimatter. The new and improved “Belle II” effort will target the reason why that asymmetry exists, but this, of course, requires far more computational horsepower.
Dr. Martin Servoir of the University of Melbourne in Australia, one of the 343 researchers from around the world working on the Belle II project, provided some insights to Aaron Tan about the team's use of the Amazon cloud, stating that generating the data required for this new phase of the Belle project demands 50 times more computing power than the research effort had been able to secure, hence the move to Amazon's resources.
While Amazon's cloud offering would certainly not be a perfect fit for many other HPC applications, even those with smaller datasets, this is one example that demonstrates the case for avoiding upfront hardware investments in favor of a pay-as-you-go model that can scale as research demands ebb and flow. The researchers may be taking a performance hit, but since they are probing a "problem" of the universe itself, is time really of the essence on that grand a scale?
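The trade-off between buying a cluster and renting cloud capacity can be framed as a simple break-even calculation. The sketch below uses entirely made-up illustrative figures (the capital cost, hourly running cost, and cloud rental rate are assumptions, not numbers from the Belle II project or Amazon's actual pricing):

```python
def breakeven_hours(cluster_capex, cluster_hourly_opex, cloud_hourly_rate):
    """Return the number of sustained compute hours at which owning a
    cluster becomes cheaper than renting equivalent cloud capacity.

    cluster_capex       -- one-time hardware purchase cost
    cluster_hourly_opex -- power/cooling/admin cost per hour of use
    cloud_hourly_rate   -- rental price per hour for equivalent capacity
    """
    if cloud_hourly_rate <= cluster_hourly_opex:
        raise ValueError("cloud never costs more per hour; no break-even point")
    return cluster_capex / (cloud_hourly_rate - cluster_hourly_opex)

# Hypothetical example: a $200,000 cluster with $5/hour running costs
# versus renting the same capacity for $30/hour.
hours = breakeven_hours(200_000, 5.0, 30.0)
print(f"Break-even after {hours:,.0f} hours of sustained use")
```

Below the break-even point, pay-as-you-go wins; above it, ownership does. The point for bursty research workloads like Belle II's is that utilization rarely stays high enough, long enough, to reach it.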
Most readers who stop by frequently are aware of many specific HPC cloud projects pending in the scientific computing space. But for those who start to lack faith and wonder whether there is a fit for HPC in a cloud (or even whether virtualized HPC undermines the term "high-performance computing" in principle), it's worth reviewing the current case studies in which scientists are tackling big problems on resources that are essentially as vast as one can provision via credit card, rather than via cluster investment. While this type of research does not require the high-speed, high-performance boost of a mission-critical application, sustained success over time could further strengthen the case for the public cloud.
On that note, Tan does go into depth about the security issue and how the researchers have addressed it; it's good reading for more detail on a number of increasingly common workarounds and solutions that researchers resort to in the public cloud.
Full story at TechGoondu