April 6, 2007
Newport, Wednesday, day two. Rainy, and even colder than yesterday. Big, fat, wet, cold raindrops that soak right through your clothes. But the New England-iness of it all adds a nice atmosphere. So there's that.
I left before the end-of-day panel discussion started -- had an airplane to catch -- so there's no news from the panel today.
Today's presentations pulled out threads that were started yesterday and developed them into full-fledged themes.
Several of the speakers were concerned about the tendency of large HPC procurements to focus ruthlessly on purchase price and its evil twin, price/performance. Concerns were split between the financial health of an already limited primary vendor pool and the lack of incentive that such a low-margin environment gives vendors to add value anywhere but in raw FLOPS. Tabor Communications' Debra Goldfarb even called on HPC consumers to be better partners with vendors by developing a more complete picture of the value of an HPC solution, and then by being willing to pay for that value. I agree that not doing this could ultimately destabilize our supply chain.
Vendors are notably focused on pushing development of the nascent HPC market at the very low end. This leads them to call for more attention to usability and accessibility, and for HPC customers to recognize these as value they want injected into the HPC technology pipeline. Several in the provider community also called for expanding access to HPC resources to permit adoption by new user communities. Yesterday, Dolores Shaffer's talk on the DARPA HPCS program focused on this topic, as it is a central theme of that effort, but it was also brought out in Debra Goldfarb's talk on HPC markets, Charles Romine's overview of national HPC policy-setting, and my own talk on the General Principles case for expanded use of HPC.
The theme of science and application requirements as drivers of the HPC hardware specification process, which I mentioned yesterday (highlighted by the NSF's Jose Munoz), continued today. Charles Romine of the NCO took an especially pragmatic view, given the theme's impact on driving policy: policy makers want to focus first on the problems HPC can solve, and only then talk about hardware.
Again, today's speakers delivered another strong day of presentations. In the interest of brevity, I'll focus on a few talks I found particularly interesting (as always, selected totally subjectively), with no disrespect meant for anyone left off.
Debra Goldfarb, Tabor Communications Inc.
Debra focused on money and markets in HPC. She pointed out that big consumers of HPC typically have "Ferrari requirements with Yugo cost expectations," and my own experience says she's dead on. The risk here, of course, is that in this environment companies will only add value where it improves their price/performance ratios.
She also made interesting observations about HPC startups with bad economic models. The problem is that if you price low upon entry to capture market share, you set a baseline and may never be able to recover. On the other hand, she feels that very large HPC companies with cross-product subsidies can achieve greater flexibility and more opportunity for market dominance.
The HPC market is becoming democratized as more and more users move in at the low end. This population of users could ultimately become the driving force in the HPC market, particularly if those at the elite end of the consumer spectrum don't adopt a broader view of value and price/reward, as we talked about earlier. Productivity is important for the new-to-HPC low-end user, as are a whole host of measures that the high end of HPC has never focused on -- things like energy efficiency, usability, accessibility, workflow, and metrics based on answers produced rather than FLOPS delivered. If it's true that what gets measured is what gets done, HPC could start to change in ways the high-end consumers don't expect.
Rich Collier, TotalView Technologies (formerly Etnus)
Both Rich Collier and Douglass Post (from day one) focused on software and the need to think seriously about processes for large-scale science and engineering software in HPC. Rich's talk centered on project management, using the "Getting Real" methodology developed in the Web 2.0 world by that community's current darlings, the fellows at 37signals, a company offering web-based applications and services.
I found it very interesting to see those ideas starting to make it into HPC, and even more interesting that they are being baked into new software offerings from TotalView Technologies.
John West, ERDC MSRC / DoD Modernization Program
My talk focused on the case for why HPC is too hard to use, why that matters, and how we can start to change things by developing interfaces that better support user goals.
I won't go into more detail here, but if you're interested check out the separate piece in this week's HPCwire.
Charles Romine, Acting Director, National Coordination Office (NCO)
Charles gave a talk on the national policy landscape in HPC, and on what motivates the directions that technology policy often takes. It was an interesting talk on a topic I hope to learn more about.
He pulled out a couple of threads I think are worth sharing. First, he identified common themes in the three major HPC-related national reports of the recent past: HECRTF's 2004 Federal Plan for High-End Computing, CSIA's National Agenda for Information Security in 2006, and the forthcoming Interagency Task Force for Advanced Networking (ITFAN) report. All three identify usability, efficiency, effectiveness, and user satisfaction as key areas that need more attention in the major computing and computing-support communities.
The second idea is the degree to which policy makers are influenced by certain types of science arguments. The "gold standard" argument for expanding computational science in the national research agenda is to identify the requirements -- the real-world, mission-critical, or important science problems that are difficult or impossible to solve today -- and then identify the subset of those problems that computation can impact.
Even more interesting to me is why resource oversubscription is the weakest argument for more HPC. The rationale is that most HPC cycles aren't consumed in a fee-for-service model; they are "free" to users, and demand for a free resource will always outstrip supply. Oversubscription of free resources, then, isn't a very compelling argument for allocating more of them. Good stuff.