September 29, 2010
If we listen to all the hype and buzz about cloud computing, it seems that everything is covered, that we do not need to worry or even think about anything. It's just going to happen in the cloud, right?
It is hard to ignore the barrage of emails about 10 ways to fix this cloud issue, 8 ways to solve that cloud issue, or the 7 deadly sins of cloud security. One area that is simply not discussed is how to write next-generation applications that can exploit the underlying server architecture in a manner that is simple, intuitive, and sustainable.
Today's cloud platforms are nothing more than racks of multicore x86 processors, network switches, and storage. But how do you program them so that multiple cores, multiple processors, or even whole racks of servers crunch the application and exploit the underlying parallelism?
Parallel processing, specifically large-scale or massively parallel processing, has long been one of the grand challenges from a programmer's perspective. There is no question that multicore, multi-cluster architectures are here to stay and, compared to the mid-eighties, are very affordable.
In the eighties, a few architectures vied for the performance crown: massively parallel processing (MPP), vector architectures, and symmetric multiprocessing (SMP). The winner was SMP. The main reason, in my mind, is that SMP offered an easily programmable environment compared with the other architectures of that time. Compilers were developed that let programmers just write code while the compiler took care of the rest. It is true that the more you knew about the dataset and its characteristics, the better the outcome; nevertheless, this was far simpler than trying to decompose the application and figure out how to get it to run on 100 or 1,000 processors. The downside of SMP is that it ran out of steam for most applications at about 12 processors, whereas MPP could go big, really big. The problem with MPP, on the other hand, was that you had to re-engineer the software every time the underlying hardware changed.
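To make that programmability gap concrete, here is a minimal sketch of the shared-memory style of parallelism, using Python's standard multiprocessing module as a stand-in (the function names are illustrative, not from any product mentioned here). The programmer writes an ordinary function and lets the runtime farm the work out to whatever cores the machine has:

```python
# Shared-memory-style parallelism: write a plain function and let the
# runtime distribute the work across the available cores.
from multiprocessing import Pool

def crunch(n):
    """Stand-in for a CPU-bound kernel: sum of squares below n."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    inputs = [10_000, 20_000, 30_000, 40_000]
    with Pool() as pool:                 # one worker per available core
        results = pool.map(crunch, inputs)
    print(results)
```

Notice that nothing in `crunch` says how many cores exist; that is the essence of what made SMP-era compilers and runtimes approachable.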
Over the past twenty-five years, not much has changed. Yes, there have been advances in tools such as MPI and PVM, but you still have to roll up your sleeves and mess with the code, and there is nothing worse than trying to get old code written by someone else to work on an MPP architecture. Where's the documentation?
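The contrast with the message-passing style that MPI and PVM codify can be sketched the same way. In the rough illustration below (again plain Python standing in for MPI ranks; `parallel_sum` and `worker` are my own hypothetical names), the decomposition of the data and the shipping of each piece to a worker are explicit in the code, which is exactly the part that must be re-engineered when the hardware layout changes:

```python
# Message-passing-style parallelism: the programmer explicitly
# decomposes the data and explicitly ships each piece to a worker.
from multiprocessing import Process, Queue

def worker(tasks, results):
    # Each "rank" pulls chunks, computes a partial result, sends it back.
    for chunk in iter(tasks.get, None):   # None is the stop sentinel
        results.put(sum(chunk))

def parallel_sum(data, n_workers=4):
    tasks, results = Queue(), Queue()
    procs = [Process(target=worker, args=(tasks, results))
             for _ in range(n_workers)]
    for p in procs:
        p.start()
    # Manual decomposition: hand each worker an interleaved slice.
    for i in range(n_workers):
        tasks.put(data[i::n_workers])
    for _ in range(n_workers):
        tasks.put(None)                   # tell each worker to stop
    total = sum(results.get() for _ in range(n_workers))
    for p in procs:
        p.join()
    return total

if __name__ == "__main__":
    print(parallel_sum(list(range(100))))
```

Every line of bookkeeping here is the programmer's problem, and all of it is wired to a particular decomposition; that is the burden the next paragraph imagines automating away.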
Today's x86 architecture is the foundation, the building block, for the next generation of software development for cloud computing. Imagine a software development environment specifically for cloud computing that would automatically decompose the problem, create the code, document the code, create the process to solve the problem, and create the control or automation of that process. The code would run on a single core or exploit the underlying multicore, multi-server architecture of the available cloud resources without any manual intervention.
Add to this development environment an ecosystem that would track the software that had been developed, who was using it and where, and issue a license payment without the developer even being aware that the code was being used.
This is similar to the music business today: a song is played on radio stations across the country, the artist knows nothing about it, yet the value chain is compensated for the air time. That's what I am talking about here.
This week Massively Parallel Technologies announced Blue Cheetah, an application development ecosystem for cloud computing, the industry's first application ecosystem software for cloud computing environments. The Blue Cheetah application ecosystem is suitable for a wide variety of cloud computing applications such as massive multiplayer gaming, numerically intensive applications, or even business analytics.
"The Blue Cheetah application ecosystem is first to provide a single environment for both creation and monetization of highly optimized modular applications," said Bobbi Hazard, CEO of MPT. "Cloud computing and multi-core processors provide an immense potential for high performance and operational efficiency, but they create new problems for application development and commerce. MPT solves these problems with a new holistic solution that goes well beyond existing products."
Key to this development environment is the ability to monetize the development: enter iCode, the iApp store for cloud computing.
MPT has a rich pedigree of developers and a large IP portfolio to show for it. In addition, the company has surrounded itself with rich talent, including such industry luminaries as John Gustafson, Ph.D., a member of the board of directors and a noted figure in the high performance computing market. Perhaps their biggest luminary is Dr. Gene Amdahl, a member of the scientific board of advisors.
One of the icons of the computing industry, Dr. Amdahl, known for Amdahl's law, is a founder of four companies and one of the original architects of the business mainframe computer. Gene was featured in the company's launch event.
It's clear that someone is paying attention to the developer community and taking a critical look at how to transition to cloud computing solutions. MPT draws from its rich technological foundation to significantly improve developer productivity and enhance the user experience.
Finally, MPT takes this development ecosystem to a higher level by offering cloud computing services for the developer: a one-stop shop for developing, testing, managing, and monetizing software. Nirvana.
Posted by Steve Campbell - September 29, 2010 @ 11:35 AM, Pacific Daylight Time
An HPC industry consultant and cloud evangelist, Steve Campbell is a seasoned senior HPC executive.