January 06, 2011
In 2010, a number of cloud computing research and development initiatives were underway in several European countries, in both academic and enterprise contexts. As the new year dawns, Jose Luis Vazquez-Poletti, PhD and Assistant Professor in Computer Architecture at Universidad Complutense de Madrid, Spain, provides an update on some specific movements in European cloud computing research projects that we will follow in 2011.
The last Christmas carol stopped playing a day or two ago, which reminds us that we are back to work. 2011 looks promising for cloud computing, for HPC and beyond; one look at Twitter over the last month is enough to convince us: countless tweets announcing what the IT trends would be, and surprise… our favorite technology is always among them.
Currently, our research group (DSA-Research.org at Universidad Complutense de Madrid, Spain) is involved in several European cloud computing projects which affect not only the different layers of the cloud service stack, but also interactions with other distributed computing technologies.
For those who are not aware, the DSA Research Group conducts research in the arena of advanced distributed computing and virtualization technologies for large-scale infrastructures and resource provisioning platforms. Much of the research seeks to address the various challenges of Infrastructure as a Service (IaaS) clouds.
Let the following paragraphs be a first-person view of where the European Union is putting its efforts in cloud computing, along with my two cents to give you a perspective on what 2011 will bring in terms of cloud computing research.
A Reservoir of Clouds
The first project won’t last much longer, as it is scheduled to conclude at the end of January, but its legacy will remain for years to come. I’m talking about the RESERVOIR project (REsources and SERvices Virtualisation withOut barriers), which works at the IaaS level, providing a powerful cloud-like ICT infrastructure for the effective and reliable delivery of services as utilities. The final goal is not only to create an infrastructure supporting the setup and deployment of services on demand, at competitive costs, across disparate administrative domains, but also to increase the competitiveness of the European Union economy itself.
This project received 17M Euro in funding, and its latest appearance was at OGF30 in Brussels, where a keynote highlighting the benefits of cloud computing and a training session on how to build and use the RESERVOIR infrastructure were given.
Clouds of Grids or Grids of Clouds?
The StratusLab project started in early 2010 with the clear objective of bringing several benefits to the e-Infrastructure ecosystem by addressing deficiencies found in current computing sites. These benefits come in terms of simplification, added flexibility, increased maintainability, quality, energy efficiency and resilience of the sites.
The toolkit provided by StratusLab is a perfect example of how the cloud should be transparent to other computing paradigms and to the final user. Basically, it complements the existing grid middleware services, which continue to provide the glue to federate the distributed resources and the services for high-level job and data management. About a month ago, the first stable version of the toolkit, aimed at grid and cluster computing, was released, so I really encourage you to give it a try.
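To illustrate the pattern described above, here is a minimal sketch of a grid site that elastically grows its worker-node pool by starting VMs through an IaaS endpoint, while the grid middleware above sees only ordinary worker nodes. All class and method names here are illustrative assumptions for the sake of the sketch, not StratusLab’s actual API.

```python
class IaaSCloud:
    """Stand-in for an IaaS endpoint that can start appliance VMs."""

    def __init__(self):
        self._next_id = 0

    def run_instance(self, appliance):
        # Pretend to boot a VM from a named appliance image.
        self._next_id += 1
        return {"id": self._next_id, "appliance": appliance}


class GridSite:
    """A grid site whose worker nodes are cloud-provisioned VMs."""

    def __init__(self, cloud, appliance="grid-worker-node"):
        self.cloud = cloud
        self.appliance = appliance
        self.workers = []

    def scale_to(self, n):
        # Grow the pool of workers on demand; the grid middleware
        # layered on top needs no changes to use the new nodes.
        while len(self.workers) < n:
            self.workers.append(self.cloud.run_instance(self.appliance))
        return len(self.workers)


site = GridSite(IaaSCloud())
print(site.scale_to(3))  # the site now runs three cloud-backed workers
```

The point of the sketch is the separation of concerns: the cloud layer only knows how to start appliance images, and the grid layer only sees worker nodes, which is the transparency the paragraph above describes.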
“The Network is the Computer”
The following project can be easily introduced using Scott McNealy’s most famous quote. Its name is BonFIRE (Building service testbeds for Future Internet Research and Experimentation), and it started last summer. The idea behind BonFIRE is to provide researchers with access to an experimental facility that enables large-scale experimentation with their systems and applications, with the aim of evaluating the cross-cutting effects of converged service and network infrastructures. These systems target the Internet of Services community within the Future Internet and will be supported by a multi-site cloud facility developed within the project.
Currently, BonFIRE is offering funding contributions of up to 200K Euro (1.34M Euro in total) for innovative service and network experiments that use its facility. The funding will be allocated through a series of open calls and, if you plan to apply, please be aware that the first open call will be published on 19 January 2011 and will close on 2 March 2011.
Last but not least, the 4CaaSt project targets the PaaS layer. With funding of 15M Euro, its objective is to create an advanced PaaS cloud platform which will support the elastic and optimized hosting of Internet-scale and multi-tier applications. Like the previous project, 4CaaSt started last summer, and its first General Assembly took place in Athens a month ago.
Being more directly involved in this project, I have to say that building infrastructures at this level is very challenging due to the heterogeneity of the tasks. Careful orchestration is needed between the different layers (IaaS, NaaS, XaaS, …) and the partners, who come from both academia and industry. Nevertheless, the first scheduled demo is on the way, so stay tuned for more news about this infrastructure. It will embed all the necessary features, easing the programming of rich applications and enabling the creation of a true business ecosystem where applications from different providers can be tailored to different users, mashed up and traded together.
There are a number of other notable projects going on in Europe that involve HPC clouds, including the most popularized example, CERN. Furthermore, several conferences have appeared to support the community, including ISC Cloud, which is run by the same group that presents us with the International Supercomputing Conference each year.
This article presents an overview of how 2011 looks, “cloudily speaking,” from this side. We have accepted many challenges across the different layers of the cloud architecture, which is very interesting from my point of view, as we will always be aware of the big picture. Also, one project’s know-how can be extended to the rest thanks to the similarities of certain components.
Well, that’s all for now. I hope you found this project compendium interesting and that it gave you an idea of how clouds are moving here. Please stay tuned for some of our select research news from Europe at http://dsa-research.org/
I wish you all the best in cloud computing for 2011!
About the Author
Dr. Jose Luis Vazquez-Poletti is Assistant Professor in Computer Architecture at Universidad Complutense de Madrid (Spain), and a Cloud Computing Researcher. He is (and has been) directly involved in EU-funded projects, such as EGEE (Grid Computing) and 4CaaSt (PaaS Cloud), as well as many Spanish national initiatives. His interests lie mainly in how the Cloud benefits real-life applications, especially those pertaining to the High Performance Computing domain.