Blog: The Most Serious Programming Error

Yesterday, Nestor Arellano from ITWorld Canada, an online publication, called me and sent an email, seeking comment on a press release about the Top 25 Programming Errors. I glanced at the list of errors that he had sent me, and it seemed to me that they had missed the most important programming error of all: failing to ask if the program does something useful or valuable, and failing to consider what might threaten that value.

Now, as it turns out, the headline oversimplified things. First, it masked an important distinction: the list was actually about the Top 25 Programming Errors that lead to security bugs and that enable cyber espionage and cyber crime. Moreover, it made an astonishing claim: “Agreement Will Change How Organizations Buy Software”.

After a chat on the phone, I prepared a written reply for Nestor. His story appears here. Most of my comments are based on our voice conversation, but I thought that my written remarks might be of general interest.

As a tester, a testing consultant, and a trainer of testers, I think that it’s terrific that this group of experts has come out with a top-25 list of common programming errors.

In the press release to which you provided a link, the paragraphs that leapt to my attention were these:

“What was remarkable about the process was how quickly all the experts came to agreement, despite some heated discussion. ‘There appears to be broad agreement on the programming errors,’ says SANS Director Mason Brown. ‘Now it is time to fix them. First we need to make sure every programmer knows how to write code that is free of the Top 25 errors, and then we need to make sure every programming team has processes in place to find, fix, or avoid these problems and has the tools needed to verify their code is as free of these errors as automated tools can verify.’”

“Until now, most guidance focused on the ‘vulnerabilities’ that result from programming errors. This is helpful. The Top 25, however, focuses on the actual programming errors, made by developers that create the vulnerabilities. As important, the Top 25 web site provides detailed and authoritative information on mitigation. ‘Now, with the Top 25, we can spend less time working with police after the house has been robbed and instead focus on getting locks on the doors before it happens,’ said Paul Kurtz, a principal author of the US National Strategy to Secure Cyberspace and executive director of the Software Assurance Forum for Excellence in Code (SAFECode).”

Yet there’s an issue hidden in plain sight here. The reason that consensus was obtained so quickly is that the leaders in the programming, testing, and security communities at large have known about these kinds of problems for years, and we’ve known about how to fix them, too. Yet most organizations haven’t done much about them. Why not?
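To make the point concrete, here is a minimal sketch of my own (not from the press release) of one error that has been on lists like the Top 25 for years, SQL injection, alongside its long-established fix. The table and function names are invented for illustration; the example uses Python’s standard sqlite3 module:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # The classic error: user input is concatenated straight into the
    # query text, so input such as "x' OR '1'='1" rewrites the query
    # and returns every row in the table.
    query = "SELECT id FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # The fix we have known about for years: a parameterized query.
    # The driver treats the input strictly as data, never as SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()
```

The fix is a one-line change, which is rather the point: the obstacle is rarely technical knowledge, but whether anyone is rewarded for applying it.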

Quality is (in Jerry Weinberg’s words) value to some person. Quality is not a property of software or any other product. Instead, it’s a relationship between the product and some person, and that person’s value set determines the relationship. It’s entirely possible to create a program that is functionally correct, robustly secure, splendidly interoperable, and so forth, but it’s not at all clear that these dimensions of software quality are at the top of the priority list for managers who are responsible for developing or purchasing software. The top priorities, it seems to me, are usually to provide something, anything that solves one instance of the problem with a) the fastest availability or time to market, and b) the lowest cost of developing or purchasing the software. Managers have problems that they want to solve, and they want to solve them right now at the lowest possible cost. This creates enormous pressure on programmers, testers, and other developers to produce something that “works”—that is, something that fulfills some requirement to some degree; something that fulfills some dimension of value but that misses others; something that gets a part of the job done, but which includes problems related to reliability, security, usability, performance, compatibility, or a host of other quality criteria.

Managers often choose to observe progress on a project in terms of “completed” features, where completion means that coding has been done to the degree that the program can perform some task. A more judicious notion of a completed feature is one that has also been thoroughly reviewed, tested, and fixed to provide the value it is intended to provide. This requires some time and a good deal of critical thinking, asking, “What values are we fulfilling? What values might we be ignoring or forgetting?”

Microsoft’s Vista provides a case in point. This is a product for which, in both development and testing, there was a tremendous focus on functional correctness. A huge suite of automated tests was executed against every build, and Microsoft doubled and redoubled its security efforts on the product—it focused on putting locks on the doors before the robbery happens, as Mr. Kurtz puts it above. The hardware driver model was made more robust, but this had the effect of making many older drivers obsolete, along with the printers, cameras, or sound cards that they supported. In addition, the user was prompted for permission to perform certain actions that, according to the operating system’s heuristics, posed some kind of security risk. Yet many people have hardware whose drivers turned out to be incompatible with Vista, and even more people were baffled by questions about security that, as non-experts, they were in no position to answer. Thus for many of its customers, Vista is like living inside a bank vault with a doltish security guard at the front door, asking you if it’s okay to perform an action that you’ve just initiated, and insisting that you have to upgrade your old DVD player. Microsoft, in responding to one set of values, clobbered another, whether inadvertently or intentionally.

According to the headline, “Experts Announce Agreement on the 25 Most Dangerous Programming Errors – And How to Fix Them / Agreement Will Change How Organizations Buy Software”. I doubt that the agreement will change how organizations buy software, because practically all of the problems identified in the agreement are, at the time of purchase, invisible to the typical purchasing organization. There is a way around this. The vendors could manage development to foster systems thinking, critical thinking, multi-dimensional thinking about quality, and the patience and support to allow it all to happen. The purchasing organizations could employ highly skilled testers of their own, and could provide time and the support to review and probe the software from all kinds of perspectives, including the source code—to which the vendors would have to agree. Yet software development has some harsh similarities to meat production: the vendors rarely want to let the customers see how it’s being done, or many customers would swear off the whole business.

So I believe that it’s wonderful that yet another organization has come out with a list of software vulnerabilities, and perhaps this will have some resonance with the greater programming community. Yet in my view, the problem is not with what we do, but with how we think, how we manage our projects, and how we foster our culture. Many programmers are working in environments where thinking broadly and deeply about code quality—never mind overall system quality—goes unrewarded. Many testers are working in groups where testing is viewed as a rote, clerical activity that can be done by unskilled students or failed programmers. As a consultant who works all over the world, I can assure you that most development groups are not strongly encouraged to look outside their own organizations for resources and learning opportunities. If programmers obtain management support for collaboration with the customer on the larger issues of what’s important and why it’s important, then preventing the Top 25 programming errors will eventually become second nature. If testing becomes viewed as a skilled, investigative, and exploratory activity, focused on learning about the product and its value for the people using it, then we’ll make real progress. I applaud the initiative, but preventing the problems will take more work than simply listing them. It will also take a change in values from programmers, testers, and managers alike.

Want to know more? Learn about upcoming Rapid Software Testing classes here.

One response to “The Most Serious Programming Error”

  1. Agile Scout says:

    wow. solid read. you would think that the very first thing would be to understand whether “value” is created.
    we get so bogged down in the details sometimes…
