April 18, 2008
The shift to multicore architectures in commodity microprocessors brings with it the reality that, as Marc Snir said at a recent press conference (http://www.hpcwire.com/hpc/2246496.html), "programming" and "parallel programming" must become synonymous. The processors we are seeing now, and will continue to see for at least the medium term, will offer performance improvements only to those applications that can take advantage of many cores at once. Since software customers generally expect applications to do more in less time, software developers have a strong incentive to parallelize their codes. But developers generally don't have the skills they need to make this change.
Allinea Software sees this as the perfect opportunity for its particular expertise. The company's flagship product, the Distributed Debugging Tool (DDT), has seen wide adoption, and Allinea has recently introduced its Optimization and Porting Tool (OPT). The company is pursuing a business model that has become tried and true in HPC: build a technology that appeals to a mass customer base, and tweak it to capture the HPC niche.
When the company started in 2003, the first step in the business plan was to prove they could build a debugger that people would buy. As Allinea assessed the technology landscape at the time, they realized that the primary competition for any company looking to get into the debugger business isn't another company at all. As Jacques Philouze, the vice president for sales and marketing at Allinea, puts it, "Our competition was printf; it's easy to use, everyone is familiar with it, and it's free."
As a UK-based company, Allinea initially targeted a market that Philouze says wasn't well served at the time: academic institutions. "At the time European academic institutions weren't using many tools at all," says Philouze. "Cost was a real issue for them." The strategy paid off. Allinea demonstrated that it could build a product people would buy, and according to the company, DDT is now used in the majority of the European academic HPC centers on the TOP500 list.
At the low end, the company is still competing with printf, the brute-force method developers use to trace code execution at runtime. Unfortunately, printf has no notion of which thread or process produced a given line of output, and as chip vendors increase the core counts of their offerings, developers will increasingly need a more sophisticated approach to debugging. Allinea sees its DDT product as particularly attractive to these customers.
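For readers who haven't debugged a parallel code this way, here is a minimal sketch (an illustrative program, not taken from any particular application) of what printf debugging looks like in an MPI code: each rank tags its own output by hand, and the interleaved lines are the only record of what happened. At four processes this is merely untidy; at hundreds it is unmanageable.

```c
/* printf_debug.c: a minimal sketch of classic printf debugging in MPI.
 * Compile: mpicc printf_debug.c -o printf_debug
 * Run:     mpirun -np 4 ./printf_debug
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, value;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank tags its trace output by hand; nothing else
     * distinguishes the interleaved lines. */
    printf("[rank %d of %d] entering exchange\n", rank, size);

    /* A trivial ring exchange, just to have something to trace. */
    int next = (rank + 1) % size;
    int prev = (rank + size - 1) % size;
    MPI_Sendrecv(&rank, 1, MPI_INT, next, 0,
                 &value, 1, MPI_INT, prev, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    printf("[rank %d] received %d from rank %d\n", rank, value, prev);

    MPI_Finalize();
    return 0;
}
```

There is no way here to pause a rank, inspect its state, or ask which ranks have reached the exchange; those are exactly the capabilities a parallel debugger adds.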
"Our customers are choosing us not only because we are cost effective, but also because we have a very easy learning curve," says Philouze. "After one hour of familiarization with DDT, you are ready to debug your code."
The company's mass market dreams got a boost last year when Microsoft and Allinea announced that DDT was making its way into Microsoft's Visual Studio development environment for Windows as a plug-in called DDT Lite, which will help users debug threaded applications on multicore processors.
Allinea began partnering with vendors in 2005 with a port of DDT to Linux on IBM's Power line. Today DDT is supported on the Solaris, AIX, and Linux operating systems and on x86, x86_64, Itanium, Power, UltraSparc, and Cell processors. It also supports a variety of compilers and, according to the company's Web site, "all known MPI implementations." This covers HPC platforms ranging from Cray's XT line to HP clusters and Blue Gene (on the way). Recent releases have also added specific features for debugging at large scale, and the ability to debug hybrid MPI/OpenMP codes through the Parallel Stack View.
All this platform diversity, along with the company's success in high-end computing -- first in the European HPC market and then in the U.S. with customers like LLNL and TACC -- has given Allinea insight into the problems inherent in debugging very large-scale parallel applications. Many of the straightforward approaches that work in a parallel debugger aimed at helping the commodity software developer manage eight cores in a single socket simply don't scale to hundreds or thousands of processors. In fact, according to Philouze, standard graphical debugging metaphors often overload programmers beyond 64 processors.
How best to manage the information needed to debug an application running on thousands of processors is an open question actively being pursued by companies and research programs alike, and Allinea is part of these efforts. DDT has features that let programmers dynamically group processes of interest and focus their investigation on only those processes, as well as display options and summary views to help manage information complexity at scale. DDT is also designed to provide as much assistance as possible by default: in low core-count debugging sessions users are presented with detailed information, but at higher core counts the tool automatically switches to summary views to help users manage complexity.
Allinea is also working to advance the state of the art in debugging at scale as it collaborates with major efforts on both sides of the Atlantic. In the U.S., the company has partnered with TACC to investigate best practices for managing complexity on Ranger-scale problems, and in Europe the company is part of the three-year Parallel Programming for Multi-core Architectures (ParMA) effort to improve the state of parallel programming tools.
But where is debugging at scale headed? There are obviously immense challenges for users in managing information about thousands of processors at one time. I talked with Katie Antypas at the National Energy Research Scientific Computing Center (NERSC) about her views on this topic. Antypas authored a study last year that reviewed the two major large-scale parallel debugging solutions, DDT and the TotalView debugger.
Given the information management challenges in debugging applications at large scale, what is it that developers are looking for? According to Antypas, "...users want relatively basic features in a parallel debugger, the ability to set breakpoints, step through code, examine variables and view core files. From this perspective, we encourage parallel debugger tool developers to focus on 'ease of use' and, in particular, a low learning curve GUI interface as many users will only have to use a parallel debugger a handful of times a year."
Centers also face challenges in balancing the needs of users who want to move their applications quickly through batch queues and developers who often need to debug their jobs interactively, sometimes on large processor counts. This tension is compounded by the observation that for many large HPC centers their users may be spread all over the world. "Interactive debugging at high concurrencies can also pose job scheduling difficulties for centers like NERSC, since cores must be idle and available if the user is to debug a job right away and not wait in the queue for resources to become available," says Antypas. "Finally, for a center like NERSC with primarily remote users, running an interactive GUI from across the country can be slow and cumbersome."
Karl Schulz from the Texas Advanced Computing Center agrees that the combination of interactive graphical debugging and batch environments can cause problems. "My experience running on various HPC systems is that often the graphical debugger may have stopped working with a particular MPI upgrade, or requires jumping through several hoops to handle X display management, which can be frustrating for a user who is in debug mode -- and folks revert to printf debugging and are not willing to try much else the next time." Schulz comments that DDT's implementation is helpful in this regard because only the login node requires a working X Windows installation. "This approach helps our integration efforts in that a user only needs to verify a working X functionality to the login node, and nowhere else," he explains.
Antypas suggests that a possible solution for some of these problems is to create a combined batch and interactive debugging environment. Users could submit problem codes as batch jobs, and receive back an analysis with an execution trace, uninitialized variable warnings, and so on. As she says, "This might be enough to solve many user problems, and the remaining users could fire up an interactive debugging session with a better idea of where to begin." An interesting suggestion.
The first problem developers face with a parallel code is getting it to execute properly and produce the right answers. That's what DDT is for. But once the application runs correctly, it has to run faster, and Allinea is working to address that need as well.
Allinea's Optimization and Porting Tool, or OPT, is the company's latest product. OPT incorporates several features aimed at the non-specialist in a product space dominated by some very capable and, in some cases, free applications. OPT includes a call graph display that gives developers immediate insight into where their application is spending its time. Messages are displayed on a timeline so that resources wasted in mismatched send/receive times, or in barriers, can be quickly identified. Interestingly, OPT maintains a database of performance analysis sessions so that users can keep a historical perspective on which changes have helped application performance and which have hurt.
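To make the timeline idea concrete, the sketch below (an illustrative program, not OPT's own code) shows the classic "late sender" pattern that such a view exposes: one rank blocks in a receive while its partner is still busy, and nearly all of that interval is wasted wait time that shows up as a gap on a message timeline.

```c
/* late_sender.c: an illustrative late-sender pattern in MPI.
 * Compile: mpicc late_sender.c -o late_sender
 * Run:     mpirun -np 2 ./late_sender
 */
#include <mpi.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    int rank;
    double data = 0.0, t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        sleep(2);                 /* rank 0 is busy before it sends */
        data = 42.0;
        MPI_Send(&data, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        t0 = MPI_Wtime();
        MPI_Recv(&data, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        t1 = MPI_Wtime();
        /* Nearly all of this interval is wait time, which a message
         * timeline makes visible at a glance. */
        printf("rank 1 waited %.2f seconds in MPI_Recv\n", t1 - t0);
    }

    MPI_Finalize();
    return 0;
}
```

On a timeline view, rank 1's two seconds inside MPI_Recv appear as idle time attributable to rank 0's late send, which is the kind of diagnosis that is very hard to reach with printf alone.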
As multicore processors bring parallelism to all programmers, free tools and ad hoc (but time-tested) approaches to debugging will start to break down, increasing demand for better tools. It is widely recognized that robust tools for parallel programming are the first step toward robust, general-purpose parallel applications, and that recognition is the motivating force behind efforts like the recently announced parallel computing research centers funded by Microsoft and Intel. If HPC continues its push down-market and grows substantial usage on machines with fewer than 64 sockets, then Allinea's strategy of pushing a single debugging solution from the desktop to the high-end supercomputer will put the company in a good position early in the market adoption cycle.