April 12, 2010
When Tom Tabor invited me to join HPC in the Cloud, I saw, on the one hand, a great opportunity to contribute my undying optimism about clouds becoming mainstream (one day) and providing on-demand, scalable computing for everyone via the Internet. On the other hand, it is also a chance to discuss the many roadblocks our community still has to remove before clouds can become mainstream.
Then I remembered our community's many years of hard work to make grids mainstream. Looking back, I see that the roadblocks we faced for grids were very similar to the ones we face with clouds today. To be honest, sometimes I think we barely made progress toward our grid mainstream dream, at least in terms of resolving issues like security, user-friendliness, wider acceptance of the grid paradigm, the disparate methods for handling different administrative domains, releasing control, achieving reliability, and agreeing on the right (set of) standards for interoperability. Sure, if we look closer, the research community has resolved many of these issues to the point that we now see research grids in production around the world (e.g., DEISA, the Distributed European Infrastructure for Supercomputing Applications, with its UNICORE middleware; EGEE; Grid5000; TeraGrid and its successor eXtreme Digital (XD); Naregi; and hundreds of application-specific grids such as NEESgrid, BIRN, GEONgrid, LEAD, nanoHUB, ServoGrid, GeneGrid, and MyGrid). Still, it is worth noting that most of these are special-purpose research grids.
To make a long story short: for HPC in the Cloud, my primary task does not lie in simply extolling the many benefits of clouds (which are already obvious to many); rather, I see my main task as examining the nature of the many roadblocks to mainstream cloud adoption. The best place to start is to search for success stories from you, our fellow community members, that demonstrate how you are clearing these hurdles. A word of caution, however: if you think this task of mine (and of yours) is an easy one, just consider the following issues:
- The process of retrieving data from one cloud, moving it into another cloud, and bringing it back to your desktop system, in a reliable and secure way.
- The fulfilment of (e.g. government) requirements for security, privacy, data protection, and the archiving risks associated with the cloud.
- The compliance with existing legal and regulatory frameworks and current policies (established far before the digital age) that impose antiquated (and sometimes even conflicting) rules about how to correctly deal with information and knowledge.
- The process of setting up a service level agreement.
- Migrating your applications from their existing environments into the cloud.
And for that matter…
- Do we all agree on the same security requirements; do we need a checklist, or do we need a federated security framework?
- Do our existing identity, access management, audit and monitoring strategies still hold for the clouds?
- What cloud deployment model would you have to choose: private, public, hybrid, or federated cloud?
- How much does the virtualization layer of the cloud affect application performance (i.e. trade-off between abstraction versus control)?
- How will clouds affect performance of high-throughput versus high-performance computing applications?
- What type of application needs what execution model to provide useful abstractions in the cloud, such as for data partitioning, data streaming, and parameter sweep algorithms?
- How do we handle large scientific workflows for complex applications that may be deployed as a set of virtual machines, virtual storage and virtual networks to support different functional components?
- What are the common best practices and standards needed to achieve portability and interoperability for cloud applications and environments?
- And last but not least, how can (and will) organizations like DMTF and OGF help us with our cloud standardization requirements?
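Among the execution models mentioned above, the parameter sweep is the one that maps most naturally onto many loosely coupled cloud instances. Here is a minimal sketch of that model; `run_simulation` is a hypothetical stand-in for a real compute kernel, and the thread pool is a placeholder for whatever scheduler dispatches tasks to cloud nodes.

```python
# Sketch of the parameter-sweep execution model: expand the cross
# product of parameter values and run every combination as an
# independent task, which is what makes this model embarrassingly
# parallel and cloud-friendly.
from concurrent.futures import ThreadPoolExecutor
from itertools import product

def run_simulation(params: dict) -> float:
    # Hypothetical stand-in for a real solver; here just a toy formula.
    return params["pressure"] * params["temperature"]

def parameter_sweep(space: dict, worker) -> list:
    """Expand {name: [values]} into every combination and run them all."""
    names = sorted(space)
    combos = [dict(zip(names, values))
              for values in product(*(space[n] for n in names))]
    # Stand-in for dispatching each combination to its own cloud instance.
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(worker, combos))
```

Because each combination is independent, the sweep tolerates node failures (rerun the lost combination) and scales out simply by adding instances, which is exactly the abstraction-versus-control trade-off the questions above are probing.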
So now you can see that all these roadblocks (and the process of barrier removal) will keep us very busy over the next couple of years. The end result of all of this hard work? The realization of the mainstream dream for clouds.
Wolfgang Gentzsch is Advisor to the EU project DEISA, the Distributed European Infrastructure for Supercomputing Applications, a member of the Board of Directors of the Open Grid Forum, and a contributing editor to HPC in the Cloud. Read more thoughts from Wolfgang Gentzsch on the topic of the grid and clouds here.
Posted by Wolfgang Gentzsch - April 11, 2010 @ 9:46 PM, Pacific Daylight Time