November 30, 2007
Nov. 29 -- A group of end users and computer scientists from universities, national laboratories, and industry has restarted the MPI Forum to update the decade-old Message Passing Interface (MPI) standard for highly parallel computing.
The group got together at the recent SC07 meeting in Reno, Nev., and will meet again Jan. 14-16 in Chicago. It will continue meeting every eight weeks over the next two years to create versions 2.1, 2.2, and 3.0 of the standard, which handles data transfer and dynamic-process control for parallel computers.
The MPI standard is the ubiquitous application programming interface (API) for parallel simulations, and its underlying implementations are the enabling technology for running parallel applications. Standards-compliant MPI implementations give users an easy-to-use mechanism for exchanging data between the processes of a parallel job, as well as mechanisms for changing the number of processes a single job uses.
The effort is being coordinated by Rich Graham of Oak Ridge National Laboratory's National Center for Computational Sciences. According to Graham, the group expects to complete the three-step upgrade process by 2010, voting in any agreed-upon corrections and changes as the process proceeds.
The first step will be MPI 2.1, which provides a simple clarification of the current MPI 2.0 standard and corrections to the MPI 2.0 document, with no API changes. The goal is to complete these changes by mid-2008. The second step -- perhaps called MPI 2.2 -- should be completed in early 2009 and will address clear errors and omissions in the standard. The third and most ambitious step -- MPI 3.0 -- will involve a more thorough rethinking of the standard to effectively support current and future applications, and could bring larger changes. Issues that have already been raised include improved one-sided communication as well as support for generalized requests, non-blocking collectives, new language bindings, and fault tolerance. Forum members hope to complete this phase by early 2010, with changes to the standard being voted on annually.
Graham said the group is strongly encouraging anyone who relies on MPI to get involved in the process. This includes end-users, hardware and software vendors, researchers, and MPI implementers.
"MPI has been extremely successful in enabling advances in simulation over the past decade and will continue to play a key role in this arena," Graham said. "However, with a large body of hands-on experience and a rapidly changing computing ecosystem, it is time to take a look at adjusting the standard to meet this ever-changing environment."
For more information, visit the MPI Forum Web site (www.mpi-forum.org), sign up for the mailing list, and get involved in person.
Source: Oak Ridge National Laboratory