September 20, 2010
MATLAB users with a taste for GPU computing now have a perfect reason to move up to the latest version. Release R2010b adds native GPGPU support that allows users to harness NVIDIA graphics processors for engineering and scientific computing. The new capability is provided within the Parallel Computing Toolbox and MATLAB Distributed Computing Server.
MathWorks released R2010b in early September, and is taking advantage of this week's NVIDIA GPU Technology Conference in San Jose, California, to demonstrate the new GPU computing support. Early adopters, though, have already had a chance to check out the software. A beta version of the GPGPU support was unveiled at SC09 last November, attracting hundreds of customers who wanted to give the new capabilities a whirl.
According to Silvina Grad-Freilich, senior manager for Parallel Computing at MathWorks, that was about five or six times more beta registrations than they were anticipating. They also were somewhat surprised to see such a wide range of users sign up. "We were expecting to receive requests from people in very defined areas like finance or academia," said Grad-Freilich. "Interestingly enough, customers from all of the industries that we sell to registered for the beta."
The initial support for GPUs is confined to NVIDIA gear, and only for those CUDA-supported devices with a compute capability of 1.3 or higher. In the Tesla product line, that equates to the 10-series and 20-series (Fermi) GPUs. The rationale for limiting support to the late-model CUDA GPUs had to do with lack of double-precision floating point support and IEEE compliance in pre-1.3 CUDA GPUs. The MATLAB team felt both were required to make GPU computing a worthwhile capability for its customer base of scientists, engineers, and quantitative analysts.
Access to the GPU can be accomplished in two ways: via invocation of existing CUDA kernels and through high-level programming support that has been incorporated into MATLAB. Using the first method, users who are ahead of the curve GPGPU-wise will be able to leverage already-developed CUDA software, allowing them to call CUDA kernels inside MATLAB applications. But according to Grad-Freilich, they expect most MATLAB users will want to employ the new high-level support to get access to the graphics processors.
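For the first method, the Parallel Computing Toolbox exposes compiled CUDA kernels through a kernel object that can be configured and invoked from the MATLAB prompt. A minimal sketch of that workflow follows; the PTX/CU file names and the kernel's signature here are hypothetical examples, not code shipped with the product:

% Sketch: calling an existing CUDA kernel from MATLAB.
% 'addVectors.ptx' / 'addVectors.cu' are hypothetical files containing
% a kernel that adds two single-precision vectors.
k = parallel.gpu.CUDAKernel('addVectors.ptx', 'addVectors.cu');
k.ThreadBlockSize = 256;                  % threads per block
k.GridSize = ceil(1e6 / 256);             % enough blocks to cover the data
in1 = gpuArray(rand(1e6, 1, 'single'));   % inputs already in GPU memory
in2 = gpuArray(rand(1e6, 1, 'single'));
out = feval(k, in1, in2, zeros(1e6, 1, 'single'));  % launch the kernel
result = gather(out);                     % copy the result back to the host

The appeal of this route is that existing, hand-tuned CUDA code can be reused as-is, with MATLAB handling only the data marshalling around the kernel launch.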
For native MATLAB GPU support, code changes to existing apps should be relatively minor. At minimum, the developer needs to invoke one call (gpuArray) to transfer the data array to the GPU and another call (gather) to transfer it back to the CPU host. The computations in between can use existing MATLAB built-in functions that have been overloaded to work on GPU arrays. GPUs can also be accessed with custom MATLAB functions provided by the user, simply by plugging the GPU array parameters into the function invocation. In the initial release, MathWorks has overloaded over 100 of the most commonly-used mathematical functions for GPU computing. Here is a simple GPU computing code snippet:
>> A = rand(1000, 1000); % Create a matrix on the CPU
>> b = rand(1000, 1); % Right-hand side for the linear solve
>> G = gpuArray(A); % Transfer data to GPU memory
>> F = fft(G); % Computation on the GPU
>> x = G\b; % Computation on the GPU
>> z = gather(x); % Bring the result back to the MATLAB host
The new support also includes the ability to distribute an application across a GPU cluster or a multi-GPU workstation, using MATLAB's parallel for loop (parfor). In this scenario, computations in the parallelized loop are executed on multiple GPUs in the user's setup. Because of the abstraction of MATLAB parallelization, the source code is portable across different types of multi-GPU configurations -- workstations, clusters and grids.
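In practice, spreading work across GPUs looks much like ordinary parfor code. The sketch below assumes a matlabpool with one MATLAB worker per GPU; the variable names (numTrials, inputData) are hypothetical stand-ins for the user's own data:

% Sketch: distributing independent GPU computations with parfor.
% Assumes a pool where each worker has its own GPU attached.
matlabpool open 4                        % one worker per GPU (hypothetical count)
numTrials = 100;
inputData = rand(1024, numTrials);
results = zeros(1024, numTrials);
parfor k = 1:numTrials
    g = gpuArray(inputData(:, k));       % each worker ships its slice to its GPU
    results(:, k) = gather(abs(fft(g))); % the FFT runs on that worker's GPU
end

Because each loop iteration is independent, the same source runs unchanged whether the pool's workers sit on one multi-GPU workstation or are spread across a cluster.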
By offering this simple interface, MATLAB is able to hide all the gritty GPU details of hardware initialization, data transfer and memory management from the user. And since the average MATLAB user is a domain specialist rather than a professional C/C++ programmer, this allows them to remain in their software comfort zone. On the other hand, many MATLAB apps are intended only for prototyping; when they go into production, they are often rewritten as professionally-developed C/C++ programs in order to improve performance.
One of the nice outcomes of GPU acceleration is that some MATLAB codes can be made fast enough for production deployment. The speedups for some algorithms are on par with other GPGPU accelerated apps. In MathWorks' own tests, they were able to demonstrate a 50-fold computational speedup on a GPU versus the CPU implementation. In this case, the program was a spectrogram application using FFT functions, and executed on a 16-node GPU cluster.
However, when the CPU-to-GPU data transfer time was factored in, the measured speedup was just five-fold. That still represents very respectable acceleration, but it illustrates the performance penalty of the data transfers back and forth across the PCIe link (as well as, in this case, the GigE network of the cluster). Perhaps the more salient metric is the number of FFTs that can be managed by the different processors. The CPUs can only process a handful of FFT functions at a time, while the GPUs can handle millions, giving the GPU implementation much greater scalability.
Although GPGPU is a new feature for MATLAB, there is already a lot of capability included for users who happen to have access to the newer NVIDIA hardware. The intention is to grow this functionality across the next several releases. To get a more detailed look at what's available today, check out the MATLAB GPU Support web page.