September 18, 2008
Despite the popularity of the Linpack benchmark, the majority of HPC users have already moved past a pure performance mentality. The most popular metrics being bandied about today are price-performance and performance-per-watt. But that still restricts our view of HPC investments to a relatively narrow aspect of system costs and benefits.
Over the past few years, there has been a lot of interest in looking at overall productivity as a way to manage and optimize computing investments. But that's where it gets squishy. Productivity is an abstract concept. Whereas everyone can more or less agree on how many teraflops a given system is capable of, it's much more difficult to measure how productive that system can be on a day-to-day basis. And to be useful, productivity has to incorporate all the facets of the HPC environment: humans, software and hardware. Devising a set of metrics to quantify those elements is the key.
As a principal proponent of the high productivity computing meme, Tabor Research has been evangelizing this approach to HPC since 2007. A handful of HPC vendors, as well as this publication, have jumped on the productivity bandwagon with varying degrees of enthusiasm. [Disclaimer: Tabor Research and Tabor Publications, publisher of HPCwire, are both owned by Tabor Communications, Inc.]
On Thursday, Tabor Research launched its HPC Productivity Analyzer, an online tool designed to help HPC lab directors and datacenter managers evaluate and improve their computing investments. "We're extremely excited about offering this to users," said Addison Snell, vice president and general manager of Tabor Research. "We've talked about productivity for years, but it's always been a hand-wavy kind of concept. This is the first methodology that takes a quantifiable look at how productive your HPC ecosystem is."
The concept for the tool grew from a research project with Microsoft in which the software giant was looking for ways to quantify productivity. Tabor Research took some of the early ideas from that engagement and evolved them into the HPC Productivity Analyzer.
Essentially the tool is a survey that collects information about the nature of your HPC infrastructure and the organization that surrounds it. As you enter the survey, you first fill out a site profile for basic information about the type and size of your organization, as well as the general nature of your IT hardware. At that point, you drop into the survey proper, which guides you through a series of ten questions that are used to capture organizational priorities and the way your HPC systems are being used.
Snell says the most critical information is captured in the first couple of questions, which ask you to choose the three most important metrics that you believe are driving productivity at your site, and to rank the purchase criteria for selecting HPC systems. The remaining questions address cost considerations, standards, software usage, physical prototyping, organizational structure, and funding.
Next comes workflow analysis. Here you estimate the workflow breakdown for three different roles: the end user, the system administrator, and the application developer. Each workflow is role-specific. For example, only the application developer has a coding phase. (If your system admin is spending time writing code, you have more fundamental problems than optimizing productivity.) The workflow analysis component is the slickest part of the tool.
The interface is very intuitive. With the mouse, you drag the edge of the workflow phase boxes to increase or decrease the relative amount of time you think is being spent in each phase. The other boxes adjust auto-magically so that the entire workflow always adds up to 100 percent. Hovering over a phase box lists the tasks mapped to that particular one. And if you click on the box, a secondary set of boxes appears that allows you to specify task allocations under that phase.
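The rebalancing behavior described above can be sketched in a few lines. This is an illustrative approximation, not the tool's actual implementation: when one phase's share changes, the remaining phases are rescaled proportionally so the workflow still totals 100 percent.

```python
def rebalance(phases, changed, new_value):
    """Set `changed` to `new_value` percent and scale the other
    phases proportionally so the allocation still totals 100."""
    others = [p for p in phases if p != changed]
    remaining = 100.0 - new_value
    old_total = sum(phases[p] for p in others)
    updated = {changed: new_value}
    for p in others:
        if old_total:
            # Scale each remaining phase by its share of the old remainder.
            updated[p] = phases[p] / old_total * remaining
        else:
            # Degenerate case: split the remainder evenly.
            updated[p] = remaining / len(others)
    return updated

# Hypothetical developer workflow: dragging "coding" up to 40 percent
# shrinks the other three phases from 25 to 20 each.
workflow = {"design": 25.0, "coding": 25.0, "testing": 25.0, "maintenance": 25.0}
workflow = rebalance(workflow, "coding", 40.0)
```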
The visual aspect of the workflow analysis component makes it easy for the user to get an accurate reflection of time allocation. According to Snell, beta users of the tool found that the exercise of estimating workflow allocation was instructive in itself. (Do we really spend only 10 percent of our maintenance phase implementing bug fixes?) Most of them hadn't fully considered where their time was actually being spent, said Snell.
After completing the questionnaire and workflow analysis, hitting the Submit button will display your productivity results and offer some recommendations. The first set of results compares your workflow allocation to those of your peers in the general sector you occupy (industry, government or academia) and to those of your peers in all HPC sectors. The analysis focuses on workflow allocations that may be out of line relative to your peers, and tells you why this is important to your organization. For example, application development becomes more important if you're relying on in-house code versus ISV codes. Likewise, anything having to do with system administration becomes more important if you consider admin costs to be significant to your TCO.
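The peer-comparison step amounts to flagging allocations that deviate from the sector norm. Here is a minimal sketch of that idea; the phase names, peer figures, and the 10-point threshold are assumptions for illustration, not Tabor Research's actual method.

```python
def out_of_line(mine, peers, margin=10.0):
    """Return phases where my allocation differs from the peer
    average by more than `margin` percentage points, with the delta."""
    return {phase: mine[phase] - peers[phase]
            for phase in mine
            if abs(mine[phase] - peers[phase]) > margin}

# Invented example figures: a site heavy on in-house development
# compared against a hypothetical sector average.
mine = {"development": 45.0, "admin": 15.0, "analysis": 40.0}
peers = {"development": 30.0, "admin": 25.0, "analysis": 45.0}
flagged = out_of_line(mine, peers)
# Only "development" exceeds the margin, at 15 points above the norm.
```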
Based on the results, the recommendations offer ways for you to optimize your workflow. Internally, the tool draws on a library of several dozen recommendations that map to particular scenarios. Snell said the library will continue to grow and become more refined as more data is collected for specific circumstances. "We are picking recommendations based on what phase you're having trouble with and other factors having to do with your type of organization," he explained. "So that is the secret sauce -- that plus the overall methodology on how to evaluate productivity. Nobody's ever quantified this before."
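A library of recommendations keyed on site characteristics might look something like the following. The schema, rule predicates, and advice strings are all invented here for illustration; the article does not disclose the tool's internal format.

```python
# Each entry pairs a predicate on the site profile with advice text.
RECOMMENDATIONS = [
    (lambda s: s["code_source"] == "in-house" and s["dev_share"] > 40,
     "Heavy in-house development: invest in build and test automation."),
    (lambda s: s["admin_cost_significant"] and s["admin_share"] > 25,
     "Admin time dominates TCO: consider cluster-management tooling."),
]

def recommend(site):
    """Return the advice strings whose predicates match this site."""
    return [advice for predicate, advice in RECOMMENDATIONS if predicate(site)]

# Hypothetical site profile: relies on in-house code, spends half its
# developer workflow on coding, but admin time is within bounds.
site = {"code_source": "in-house", "dev_share": 50,
        "admin_cost_significant": True, "admin_share": 20}
advice = recommend(site)  # only the first rule fires
```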
The peer database has been populated by over 100 early access users, and as more people exercise the tool, the database will be updated dynamically. Since this is version 1.0 of the HPC Productivity Analyzer, user feedback is being sought, both to improve the interface and to refine the methodology. The tool is free to use after registration and is available at http://www.HPCproductivity.com if you want to give it a whirl.