May 28, 2013
As we look ahead to the exascale era, many have noted that the MPI programming model faces limitations at extreme scale.
According to researchers who work on the Global Address Space Programming Interface (GASPI), several critical programming elements must be addressed to ensure reliable execution as programmers construct codes that scale to hundreds of thousands of cores and beyond.
One-sided communication in GASPI is based on remote completion and targets highly scalable dataflow implementations on distributed-memory architectures. As such, one-sided communication does not require specific communication epochs for message exchanges. Rather, data is written asynchronously whenever it is produced, and it becomes locally available once a corresponding notification has been flagged by the underlying network infrastructure. Failure-tolerant and robust execution is achieved through timeouts in all non-local procedures of the GASPI API. GASPI also features support for asynchronous collectives.
The GASPI collectives rely on time-based blocking with flexible timeout parameters, ranging from minimal-progress tests to fully synchronous blocking. GASPI also supports passive communication and mechanisms for global atomic operations. The former is unique to GASPI and is most directly comparable to a non-time-critical active message that triggers a corresponding user-defined remote execution. Global atomic operations allow low-level primitives such as compare-and-swap or fetch-and-add to be applied to any data in the RDMA memory segments of GASPI.
The following shows how GASPI segments map to an architecture such as the Intel Xeon Phi.
Just in time for ISC 2013 in Leipzig, the GASPI consortium will release the new GASPI standard. GASPI is a PGAS API for developers who seek high scalability as well as low-level support for fault-tolerant execution.
The creators say the GASPI API is very flexible and offers full control over the underlying network resources and the pre-pinned GASPI memory segments. GASPI allows developers to map the memory heterogeneity (RAM, GPGPU, NVRAM) of modern supercomputers to dedicated memory segments, and also makes it possible for multiple memory management systems (e.g. symmetric and non-symmetric memory management) and/or multiple applications to co-exist in the same partitioned global address space.
The first implementation of GASPI is GPI-2, from Fraunhofer ITWM. GPI-2 implements the GASPI standard and will be released as open source software shortly before ISC 2013 in Leipzig; it will also be shown at the Fraunhofer ITWM booth during the event.