December 01, 2006
Interactive Supercomputing Inc. (ISC) has received a grant from the National Science Foundation (NSF) for a software development project aimed at enabling scientists to run their simulations transparently on parallel architectures. The NSF grant comes on the heels of a similar government research grant awarded this month by Oak Ridge National Laboratory.
The National Science Foundation grant funds a joint project between ISC and Northeastern University called "Commercial grade automatic and manual parallelization and performance tools." ISC and Northeastern will develop toolkits that parallelize the algorithms and models produced in popular desktop Very High Level Language (VHLL) environments such as Python and MATLAB, and will compare the efficiency and code quality of the results against customized codes written in more traditional programming languages such as C and C++. The goal is to enable NSF-funded scientists and engineers not only to tap the capabilities of parallel processing to solve huge computational problems, but to do so while minimizing development time.
Using a suite of high-performance serial applications developed by researchers in the NSF Center for Subsurface Sensing and Imaging Systems, headquartered at Northeastern, the team will identify high-payoff opportunities for semi-automatic parallelization of serial code written in MATLAB, C and C++. The application suite provides a range of algorithms and techniques that help engineers and scientists understand physics-based wave and signal interaction beneath the surfaces of objects, whether the ocean, the ground, human skin or a human cell. A common feature of all of these applications is that they process large image and sensor datasets; consequently, a lack of computational processing power has hindered research on many of these problems.
"Researchers tackling problems on a desktop environment can only use phantom data sets or synthetic data," said David Kaeli, research thrust leader for the NSF Center for Subsurface Sensing and Imaging Systems at Northeastern. "This project will ultimately empower users to scale to exploring real data sets that can be processed on large parallel servers with many processors and large distributed memory systems, all within the comfort of their preferred desktop modeling tool."
The project will use ISC's Star-P software as the testing platform. Star-P is an interactive parallel computing platform that lets users code algorithms and models on their desktops in VHLLs and run them interactively on parallel servers or clusters, eliminating the need to re-program applications in C or FORTRAN with MPI to run on parallel systems.
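The article does not show Star-P's actual syntax, but the workflow it describes — keeping a high-level program on the desktop while a back end distributes the heavy computation across many processors — can be illustrated generically. The sketch below uses plain Python with the standard multiprocessing module (not Star-P's API; the kernel and function names are hypothetical stand-ins) to hand-code the kind of data-parallel map over sensor data that such tools aim to perform automatically:

```python
# Generic illustration (not Star-P's API): a data-parallel map over rows of
# sensor data, the pattern that automatic parallelization tools target.
from multiprocessing import Pool

def intensity(row):
    # Hypothetical stand-in for a per-row signal-processing kernel.
    return sum(x * x for x in row)

def process_image(image, workers=2):
    # Farm the kernel out over rows of the dataset in parallel, one row
    # per task, and gather the results in order.
    with Pool(processes=workers) as pool:
        return pool.map(intensity, image)

if __name__ == "__main__":
    image = [[1, 2], [3, 4]]
    print(process_image(image))  # [5, 25]
```

In a VHLL-based system like the one described, the researcher would write only the serial kernel; the explicit worker-pool plumbing shown here is what the toolkit is meant to supply behind the scenes.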
"If NSF scientists and engineers can interact in real time with huge sensing and imaging datasets through parallel systems, the accelerated research can help address challenges ranging from noninvasive breast cancer detection to underground pollution assessment," said Eckart Jansen, vice president of advanced development at ISC. "We're glad to help the NSF tackle this important research by bridging desktop tools with parallel clusters."