(I’ve changed a couple of details here to maintain confidentiality. On the other hand, I’ll bet something like this has happened where you work, too.)
A while ago I was working with a tester at a client site, and I observed a problem: one word in a status message dialog box was misspelled. Due to a frozen severity classification system and (worse) to the kind of thinking that accompanies it, the tester initially classified this as a low-severity bug. I believed, as I still do, that there were good reasons to classify it higher, even though it might be easy to fall into thinking of spelling errors as trivial. As Jerry Weinberg suggests, it might be tempting to think of bugs as “important” or “trivial” based on the effort required to fix them, but there are other dimensions worth considering.
The spelling error might be a threat to the image of the development organization. A product that looks sloppy on the outside makes me mistrustful: “If they didn’t fix this immediately apparent bug, are there more subtle and more serious bugs that they didn’t fix?” And if you think that all spelling errors are created equal, consider what the people at the Cleveland Pubic Library might think.
The spelling error might add noise to the testing effort. Bugs that are prominent and easy to find tend to get reported, and often multiple times. Reporting a bug takes time away from test design and execution—and therefore from test coverage and other things that may well be more valuable. Duplicate bug reports are like weeds that have gone to seed in your tracking system. “Why didn’t you find that bug?” One very plausible answer is that we were busy dealing with other—and quite possibly less important—bugs.
The spelling error might hamper testability. I remember a product that reported “Configuration complete… exciting”. Truth in advertising; for that product, the fact that the configuration completed at all was fairly exciting. However, an automation script that looked for the correctly-spelled string never saw it. Humans have tacit knowledge that allows them to deal handily with spelling mistakes, as any reader of Twitter knows. Machines don’t perceive the intention of the communication and repair it the way people do (look at Harry Collins’ The Shape of Actions for a description of his notion of repair). In fact, it took human eyes on the output string to determine why the automation was failing, even though the configuration was being completed successfully. The automation shouldn’t be looking for result strings at the GUI level, you say? The GUI was relaying this string from a lower-level interface. The automation should be looking for error codes instead of result strings? Well, of course, the same kind of problem can happen with invalid error codes too, so look for such problems by observing outputs as directly as possible every now and then.
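The failure mode above can be sketched in a few lines. This is a hypothetical reconstruction, not the client's actual script: `EXPECTED`, `check_exact`, and `check_direct` are names invented for illustration, and the misspelled message stands in for whatever the real product emitted.

```python
# A check that matches an exact result string never "sees" a misspelled
# variant of that string, even when the operation it cares about succeeded.

EXPECTED = "Configuration complete... exiting"

def check_exact(output: str) -> bool:
    # Brittle: any spelling variation in the product's message defeats it,
    # and fixing the spelling later breaks a check tuned to the misspelling.
    return EXPECTED in output

def check_direct(exit_code: int) -> bool:
    # More robust: observe the outcome as directly as possible
    # (here, a process exit code) rather than a display string.
    return exit_code == 0

# The misspelled message the product actually produced:
actual_output = "Configuration complete... exciting"
print(check_exact(actual_output))  # False: the check fails on a passing run
print(check_direct(0))             # True: the direct observation still works
```

Note that the second check only dodges this particular bug; as the paragraph above says, an invalid error code would fool it just as thoroughly, which is why direct observation of outputs matters regardless of which signal the automation consumes.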
In a case like this one, it might be tempting to adapt the automation to work around the problem. I’d push back on that idea. Eventually someone will go to the trouble of fixing the spelling error, such that the automation that now depends on the error will begin to fail again. This is bad enough for testing in-house, but some products also get tested by people further downstream.
As Freud is rumoured to have said, sometimes a cigar is just a cigar. But a bug is never just a bug; a bug is a relationship between the product and some person: end-users, programmers, other testers. It can be easy to classify problems in a certain way based on a default categorization scheme, but for any bug, there may be other factors that simply don’t map to the scheme. Remember to treat your categories as heuristics, not rules. Don’t fall aslep!