May 14, 2008
With few application programmers well-versed in parallel programming, and with dual- and quad-core processors spreading to all corners of the computing ecosystem, the demand for ready-to-use parallelized software is only going to get larger. That's why numerical libraries from a variety of vendors (e.g., Intel, NAG and Visual Numerics) now come with built-in parallelization.
The MathWorks is following the same path by integrating the company's Parallel Computing Toolbox with two MATLAB optimization tool sets: the Optimization Toolbox and the Genetic Algorithm and Direct Search Toolbox. Both are used to solve optimization problems in typical MATLAB applications -- codes like engine design simulations or financial risk analyses.
The Parallel Computing Toolbox, originally launched as the Distributed Computing Toolbox in 2004, meets the application programmer halfway to the parallel Promised Land. It extends MATLAB with new constructs such as the parallel for-loop (PARFOR), which lets the user distribute code execution across multiple cores, multiple processors, or even a cluster. When executed on a single-core machine, PARFOR acts like a sequential for-loop, so the resulting code is portable across a wide range of hardware setups -- you can run it on different platforms and even share your software with family and friends.
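For a flavor of the construct, here is a minimal sketch with a made-up workload; the matlabpool command for starting local workers reflects the R2008-era syntax. On a single core the loop simply runs sequentially:

    matlabpool open 2            % R2008-era command to start two local workers
    n = 200;
    results = zeros(1, n);
    parfor i = 1:n
        % Each iteration is independent, so MATLAB can farm them out to
        % workers; this eigenvalue computation is just a stand-in workload.
        results(i) = max(abs(eig(rand(50))));
    end
    total = sum(results);
    matlabpool close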
The hard part is figuring out how to apply the parallel loops in the first place. By incorporating PARFOR-enabled code into the optimization solvers of the toolboxes themselves, the MathWorks engineers have done some of the heavy lifting in advance. Customers who use the optimization solvers will automatically get the parallelized versions when they pick up the next release. To get the speedup, the user just has to define the parallel resources they want to apply at execution time.
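As an illustration, and assuming the R2008-era option names, switching an Optimization Toolbox solver like fmincon into parallel mode looks something like this -- open a worker pool, set one option, and call the solver as usual:

    matlabpool open 2                          % define the parallel resources
    opts = optimset('UseParallel', 'always');  % opt in to parallel gradient estimation
    objective = @(x) (x(1)-1)^2 + (x(2)-2)^2;  % toy objective for illustration
    x0 = [0 0];
    lb = [-5 -5]; ub = [5 5];
    [x, fval] = fmincon(objective, x0, [], [], [], [], lb, ub, [], opts);
    matlabpool close

The Genetic Algorithm and Direct Search Toolbox solvers work the same way, with the analogous option set through gaoptimset or psoptimset.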
Users can explicitly switch off the built-in toolbox parallelization for a given session if they believe they can outdo the MATLAB programmers by parallelizing their own code. Theoretically, one could even mix parallelized user code with parallelized toolbox solvers, but according to Loren Dean, the director of engineering for MATLAB Products, that can be tricky.
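A sketch of that opt-out, again assuming R2008-era option names: the user disables the solver's built-in parallelism and puts a PARFOR inside their own (hypothetical) fitness function instead. If both layers tried to use PARFOR, the inner loop would simply fall back to serial execution -- one reason mixing the two can be tricky.

    % In the session: switch off the solver's built-in parallelism and
    % rely on the PARFOR inside the user's objective instead.
    opts = gaoptimset('UseParallel', 'never');
    [x, fval] = ga(@myObjective, 2, [], [], [], [], [], [], [], opts);

    % myObjective.m -- hypothetical user fitness function with its own PARFOR
    function f = myObjective(x)
    partial = zeros(1, 8);
    parfor k = 1:8
        partial(k) = sum((x - k).^2);   % made-up, independent sub-evaluations
    end
    f = sum(partial);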
The real goal here is to make code acceleration as transparent as possible without forcing users to sprinkle a lot of PARFORs throughout their programs. "Most of our users haven't done parallel programming yet," Dean told me. "This is a new area for them. So being able to fully leverage their multicore system or being able to leverage their cluster, without having to change their code, that's the real value for them."
Posted by Michael Feldman - May 13, 2008 @ 9:00 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.