June 16, 2006
"Software bugs are part of the mathematical fabric of the universe. It is impossible with a capital 'I' to detect or anticipate all bugs."
So says Ben Liblit, an assistant professor of computer sciences at the University of Wisconsin-Madison. An article describing his work appears in this week's issue of HPCwire.
Liblit's method for detecting software misbehavior enlists people with real applications to help attack bugs in their natural habitat. He does this by allowing users to define the nature of the bugs themselves -- crashing, hanging, invalid output, etc. -- and then instrumenting the application code accordingly so that it captures the error condition as it occurs. The results are then gathered and analyzed to help identify the bugs and correct the code.
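To make that concrete, here is a minimal sketch of what this kind of predicate instrumentation might look like. The names and the Python rendering are my own invention for illustration -- Liblit's actual system injects checks like these into compiled code automatically and samples them sparsely to keep the overhead low for real users:

```python
import json
import random
import sys

# Counters for simple runtime predicates ("was the divisor zero
# here?", "did this call fail?").  All names here are illustrative.
predicate_counts = {}

def observe(site, predicate):
    """Record whether a predicate held at a given program site.
    Observations are sampled sparsely so the instrumented program
    stays fast enough for everyday use."""
    if random.random() < 0.01:  # sample roughly 1% of observations
        key = (site, bool(predicate))
        predicate_counts[key] = predicate_counts.get(key, 0) + 1

def report(outcome):
    """On exit, ship the counters and the run's outcome (crash, hang,
    bad output, normal) home for analysis across many users' runs."""
    payload = {"outcome": outcome,
               "counts": {f"{site}={value}": n
                          for (site, value), n in predicate_counts.items()}}
    json.dump(payload, sys.stderr)  # stand-in for a network upload

# Example use inside application code:
def divide(a, b):
    observe("divide:b_is_zero", b == 0)  # a candidate bug predictor
    return a / b
```

Aggregated across thousands of runs, predicates that turn up disproportionately in the failing runs become statistical fingerprints pointing at the bug.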
Today Liblit's work is being used by the open source community as a way to do more rigorous post-deployment debugging on a variety of applications. Apparently it has also attracted the attention of IBM and Microsoft.
And me as well. I recently contacted Liblit to get his perspective on why software continues to be such a problematic piece of the information technology puzzle. In high performance computing, we tend to focus on the challenges of injecting parallelism into our code, but HPC also shares the larger problem of overall software quality. And as HPC applications become more complex in order to address multifaceted problems, the challenge to develop quality software will increase.
Liblit illustrates the basic limitation of software using the "halting problem," which can be described as follows: Given a program and its initial input, determine whether the program ever halts or continues to run forever. Seventy years ago, Alan Turing mathematically proved that an algorithm to solve the halting problem cannot exist. Essentially, any program that tries to decide whether other programs halt must itself either give a wrong answer or fail to halt on some inputs. This may seem like just an inconvenient factoid for computer scientists, but it reveals a fundamental problem for anyone who develops software.
"Mathematically it is impossible to take a non-trivial piece of code and prove that it never hangs," says Liblit. "It's not that we haven't been smart enough to figure out how to do it; we're smart enough to have figured out that it can't be done!"
Liblit goes on to characterize software as a chaotic system, with extreme sensitivity to initial conditions. That makes it very hard to predict how a program will behave during execution. And that's why, despite all the software testing methodologies in use today, bugs continue to inhabit our production code.
This got me to thinking about the nature of the hardware-software dichotomy, which seems especially noticeable in high performance computing but exists across the entire IT industry. And that leads to the question: Why is hardware advancing so rapidly while software is not? Processor performance increases every year, yet the code running on those processors is not much better than it was ten years ago. There is no Moore's Law for software.
This is not to suggest that hardware doesn't fail. But hardware failures mostly involve physical breakdowns -- crashing disks, dropping bits, etc. The Mean Time Between Failure (MTBF) characteristic is usually well accounted for during system design. For example, Google's cluster management software expects servers to malfunction on a regular basis and can reroute search engine processing rather transparently. These types of problems are manageable because they're predictable.
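The arithmetic behind that design stance is simple enough to sketch. (The fleet size and MTBF figures below are illustrative assumptions on my part, not Google's actual numbers.)

```python
# Back-of-the-envelope: expected hardware failures per day in a
# large cluster.  Numbers are illustrative, not Google's.
servers = 10000        # machines in the cluster
mtbf_years = 3.0       # mean time between failures, per machine

failures_per_day = servers / (mtbf_years * 365)
print(f"expect ~{failures_per_day:.0f} server failures per day")  # ~9
```

At that scale a machine dies somewhere in the cluster every few hours, so failure stops being an exception and becomes a routine input to the system design -- which is exactly why rerouting around it can be transparent.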
Hardware logic errors are rarer, but they do occur. For example, the famous Pentium floating-point-divide bug of 1994 precipitated a chip recall. But why aren't these types of problems seen more often? There may be a few things at work here. One is that there's far more software logic than hardware logic in the world. For every microprocessor like the Pentium, there are thousands or tens of thousands of applications, and the developers who wrote those applications probably didn't perform the level of testing that Intel applied to its Pentium chip design.
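The divide bug itself was narrow enough that a one-line check became famous. The test case below is the widely circulated example attributed to Tim Coe; on a flawed 1994 Pentium the expression reportedly came out to 256, while a correct FPU yields exactly zero:

```python
# Classic Pentium FDIV test.  On a flawed chip, 4195835/3145727
# returned 1.33373906... instead of 1.33382044..., so this
# difference came out to 256 rather than 0.
x, y = 4195835.0, 3145727.0
print(x - (x / y) * y)  # prints 0.0 on a correctly functioning FPU
```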
Another difference is that many applications are more complex than a typical CPU -- in some cases, much more complex. On my PC at work, the Windows XP OS and some of the associated applications are regularly updated with patches, presumably to fix software problems. To its credit, XP is much more stable than its predecessors in terms of crash frequency, but new bugs are still being discovered weekly. This is not too surprising: XP, along with the applications on a typical PC workstation, represents tens of millions of lines of source code.
Don't make the mistake of thinking processors are getting more complex because the transistor count is going up. Today, the increase in transistors mostly has to do with adding cores and increasing cache size. These don't add logic complexity. The new "Montecito" Itanium microprocessor contains about 1.7 billion transistors, but only about 20 million of them are devoted to CPU logic. In fact, the move to multi-core should actually make the hardware simpler, since each core is expected to do proportionately less work.
Software is heading in the other direction. As users demand more features and functionality from their applications, the code gets ever more complex. Windows NT 3.1 had around 6 million lines of source code; Windows XP contains over 40 million lines. But as programs become more complex, they also become more susceptible to bugs. The public perception is that the hardware makers are heroes, while the software developers have let us down.
Even within the industry, there seems to be a perception that hardware and software are symmetrical elements of a computing system. The expectation is that both technologies should be able to advance in concert. But the symmetry is an illusion. Processors have become multi-core as part of a well-defined technology roadmap. Meanwhile, the corresponding move to application parallelism has become a crisis. Software seems to be much more resistant to engineering than hardware.
"I don't know that we're doing a very good job of communicating that to the public, and maybe to software engineers," says Liblit. "I don't think software engineers appreciate the near impossibility of doing their job right."
But it's not hopeless. Software is getting more robust. Again, just look at XP. Applications don't have to be perfect to be useful. The text editor program I'm using to compose this article occasionally goes a little nutty and adds a bunch of blank characters at the end of the file. I just delete them and go on.
But some users can't afford to be so forgiving. If your application is managing a stock portfolio for thousands of investors or controlling a nuclear warhead, losing track of data can have serious consequences. Code for mission-critical systems must be held to a higher standard -- safety-critical code, even more so. Productivity is one thing, but when someone's money or life is at stake, buggy software is not an option. Software engineering advancements are truly needed. Are any solutions emerging? The answer to that will have to wait for a future article.
As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at firstname.lastname@example.org.
Posted by Michael Feldman - June 15, 2006 @ 9:00 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.