Blog Posts from March, 2011

More of What Testers Find

Wednesday, March 30th, 2011

Damn that James Bach, for publishing his ideas before I had a chance to publish his ideas! Now I’ll have to do even more work!

A couple of weeks back, James introduced a few ideas to me about things that testers find in addition to bugs.  He enumerated issues, artifacts, and curios.  The other day I was delighted to find an elaboration of these ideas (to which he added risks and testability issues) in his blog post called What Testers Find.  Delighted, because it notes so many important things that testers learn and report beyond bugs.  Delighted, because it gives me an opportunity and an incentive to dive into James’ ideas more deeply. Delighted, because it gives us all a chance to explore and identify a much richer view of testing than the simplistic notion that “testers find bugs”.

Despite the fact that testers find much more than bugs, let’s start with bugs.  James begins his list of what testers find by saying

Testers find bugs. In other words, we look for anything that threatens the value of the product.

How do we know that something threatens the value of the product?  The fact is, we don’t know for sure.  Quality is value to some person, and different people will have different perceptions of value.  Since we don’t own the product, the project, or the business, we can’t make absolute declarations of whether something is a bug or whether it’s worth fixing.  The programmers, the managers, and the project owner will make those determinations, and often they’re running in different directions.  Some will see a problem as a bug; some won’t.  Some won’t even see a problem. It seems like the only certain thing here is uncertainty.  So what can we testers do?

We find problems that might threaten the value of the product to some person who matters. How do we do that? We identify quality criteria: aspects of the product that provide some kind of value to customers or users that we like, or that help to defend the product from users that we don't like, such as unethical hackers or fraudsters or thieves.  If we're doing a great job, we also account for the fact that users we do like will make mistakes from time to time.  So defending value also means making the product robust to human ineptitude and imperfection.  In the Heuristic Test Strategy Model (which we teach as part of the Rapid Software Testing course), we identify these quality criteria:

  • Capability (or functionality)
  • Reliability
  • Usability
  • Security
  • Scalability
  • Performance
  • Installability
  • Compatibility
  • Supportability
  • Testability
  • Maintainability
  • Portability
  • Localizability

In order to identify threats to the quality of the product, we use oracles.  Oracles are heuristic (useful, fast, inexpensive, and fallible) principles or mechanisms by which we recognize problems.  Most oracles are based on the notion of consistency.  We expect a product to be consistent with

  • History (the product’s own history, prior results from earlier test runs, our experience with the product or other products like it…)
  • Image (a reputation our development organization wants to project, our brand identity,…)
  • Comparable products (products like this one that we develop, competitors’ products, test programs or algorithms,…)
  • Claims (things that important people say about the product, requirements, specifications, user documentation, marketing material,…)
  • User expectations (what reasonable people might anticipate the product could or should do, new features, fixed bugs,…)
  • Product (behaviour of the interface and UI elements, values that should be the same in different views,…)
  • Purpose (explicitly stated uses of the product, uses that might be implicit or inferred from the product’s design, no excessive bells and whistles,…)
  • Standards (relevant published guidelines, conventions for use or appearance for products of this class or in this domain, behaviour appropriate to the local market,…)
  • Statutes (relevant laws, relevant regulations,…)

In addition to these consistency heuristics, there’s an inconsistency heuristic too:  we’d like the product to be inconsistent with patterns of problems that we’ve seen before.  Typically those problems are founded in one of the consistency heuristics listed above. Yet it’s perfectly reasonable to observe a problem and recognize it first by its familiarity. We’ve seen lots of testers do that over the years.
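
For readers who like to see the idea made concrete, here's a minimal sketch in Python. It's my own illustration, not a formal artifact from James's work or from the Rapid Software Testing materials; the names and structure are invented, though the heuristics themselves come from the list above:

```python
# An illustrative sketch (my own invention; the heuristics come from the
# list above) treating the consistency oracles as a checklist that prompts
# a tester to consider each one against an observation.

CONSISTENCY_ORACLES = {
    "History": "consistent with the product's own past behaviour?",
    "Image": "consistent with the image the organization wants to project?",
    "Comparable products": "consistent with similar or competing products?",
    "Claims": "consistent with what important people say about the product?",
    "User expectations": "consistent with what reasonable people would anticipate?",
    "Product": "internally consistent (same values in different views)?",
    "Purpose": "consistent with explicit and implicit uses of the product?",
    "Standards": "consistent with relevant guidelines and conventions?",
    "Statutes": "consistent with relevant laws and regulations?",
    "Familiar problems": "INconsistent with patterns of problems seen before?",
}

def review(observation: str) -> None:
    """Print each oracle as a question to consider against an observation."""
    print(f"Observation: {observation}")
    for name, question in CONSISTENCY_ORACLES.items():
        print(f"  Is it {question}  [{name}]")

review("Scrolling a sorted list updates only after the mouse is released.")
```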

We encourage people to come up with their own lists, or modifications to ours. You don't have to use the Heuristic Test Strategy Model if it doesn't work for you.  You can create your own models for testing, and we actively encourage people who want to become great testers to do that.  Testers find models, ways of looking at the product, the project, and testing itself, in the effort to wrestle down the complexity of the systems we're testing and the approaches that we need to test them.

In your context, do you see a useful distinction between compatibility (playing nice with other programs that happen to co-exist on the system) and interoperability (working well with programs with which your application specifically interacts)?  If so, put interoperability on your quality criteria list.  Is accessibility for disabled users so important for your product that you want to highlight it in a separate quality criterion?  Put it on your list.  Recently, James noticed that explicability is a consistency heuristic that can act as an oracle too:  when we see behaviour we can't explain or make sense of, we have reason to suspect that there might be a problem.  Testers find factors, relevant and material aspects of our models, products, projects, businesses, and test strategies.

When testers see some inconsistency in the product that threatens one or more of the quality criteria, we report.  For the report to be relevant and meaningful, it must link quality criteria, oracles, and risk in ways that are clear, meaningful, and important to our clients.  Rather than simply noticing an inconsistency, we must show why the inconsistency threatens some quality criterion for some person who matters.  Establishing and describing those links in a chain of logic from the test mission to the test result is an activity that James and I call test framing.  So:  Testers find frames, the logical relationships between the test mission, our observations of the product, potential problems, and why we think they might be problems.  James gave an example of a bug ("a list of countries in a form is missing 'France'").  Framed against one quality criterion, that might be a minor usability problem with a simple workaround (the customer is trying to choose a time zone from a list of countries presented as examples, so Spain, which is in the same time zone, will do).  Framed against another criterion, like localizability, we'd perceive a far more serious problem (the customer is trying to choose a language; despite the fact that the Web site has been translated, it won't be presented in French, cutting our service off from a nation of 65 million people).
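
To sketch what a frame might look like as a structure, here's an illustration in Python.  The names and fields are my own invention, not a formal artifact of Rapid Software Testing; the point is only that the same observation, linked to different oracles and criteria, yields very different risks:

```python
# An illustrative sketch (invented names and structure) of a "frame":
# the chain of logic linking an observation to an oracle, a quality
# criterion, and a risk.

from dataclasses import dataclass

@dataclass
class Frame:
    observation: str  # what we saw
    oracle: str       # the consistency heuristic that flagged it
    criterion: str    # the quality criterion that seems threatened
    risk: str         # why it matters, and to whom

# The same observation, framed two different ways:
as_timezone_picker = Frame(
    observation="Country list in a form is missing 'France'",
    oracle="User expectations",
    criterion="Usability",
    risk="Minor: a user choosing a time zone can pick Spain instead",
)

as_language_picker = Frame(
    observation="Country list in a form is missing 'France'",
    oracle="Claims (the site has been translated into French)",
    criterion="Localizability",
    risk="Severe: French users never reach the translation; "
         "the service is cut off from a nation of 65 million people",
)

for frame in (as_timezone_picker, as_language_picker):
    print(f"{frame.criterion}: {frame.risk}")
```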

In finding bugs, testers find many other things too.  Excellent testing depends on our being able to identify and articulate what we find, how we find it, and how we contextualize it. That’s an ongoing process.  Testers find testing itself.

And there’s more, if you follow the link.

You Won’t See It Until You Believe It

Thursday, March 24th, 2011

Not too long ago, I updated my copy of Quicken. I hesitate to say upgrade. I’ve been using Quicken for years, despite the fact that the user interface has never been wonderful and has consistently declined a little in each version.

One of these days, I’ll do a 90-minute session and record some observations about the product. But for now, here’s one.

The default sort order for transactions in an account listing is by date, from earliest to latest.  There are options whereby you can sort by reference number, payee, the amount of money spent or received, or the category.  On the right side, there's a scroll bar.  As with pretty much all scroll bars, there's a thumb, the button-like thing that one drags to make the scrolling happen.  No matter what I've chosen for the sorting order, the tooltip associated with the thumb stubbornly continues to display the date, and the listing doesn't update until I let go of the scroll bar.  So the tooltip is useless, and I can't tell how far I need to scroll.
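
To make the complaint concrete, here's a hypothetical sketch, in Python, of the behaviour I'd expect.  The function and field names are my own invention, since Quicken's code isn't available to me:

```python
# A hypothetical sketch of the behaviour I'd expect (function and field
# names are invented; I have no access to Quicken's code).  The tooltip
# shown while dragging the scroll thumb should reflect the ACTIVE sort
# column, not remain hard-wired to the date.

transactions = [
    {"date": "2011-03-01", "payee": "Grocer", "amount": -42.10},
    {"date": "2011-03-05", "payee": "Employer", "amount": 2500.00},
    {"date": "2011-03-09", "payee": "Utility Co.", "amount": -86.75},
]

def thumb_tooltip(rows, sort_key, top_visible_index):
    """Tooltip for the scroll thumb: the value of the active sort
    column for the first visible row."""
    ordered = sorted(rows, key=lambda row: row[sort_key])
    return str(ordered[top_visible_index][sort_key])

print(thumb_tooltip(transactions, "date", 0))   # -> 2011-03-01
print(thumb_tooltip(transactions, "payee", 0))  # -> Employer
```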

There are a zillion little problems like that in the product that make it unnecessarily hard to use. As I’ve maintained so often before, you can’t tell from the outside that no one tested the scroll bars, but I can guarantee that no one fixed them.

Upon updating the product, I was asked to fill out a survey. Aha! A chance to provide feedback! One of the survey questions was “What was your primary reason for upgrading Quicken?”

I wanted to respond, "Anticipated bug fixes."  I wanted to respond, "I was hoping against hope to see some of the user interface problems in the previous versions sorted out."  The choices I was offered were something like these (I didn't record them at the time, but a later online survey presented this list, which is close to what I remember):

  • I received an email from Quicken/Intuit
  • My previous version was no longer supported
  • I saw it advertised
  • I wanted specific new features
  • I saw a new version in stores
  • Banker/Financial advisor recommended I upgrade
  • I read a news article that mentioned the new Quicken version

In the survey included as part of the product update, there was no “Other” with a text box to indicate why I was really updating. There was no “Other” at all. (There was an “other” option in a subsequent survey form, of which I was notified through email.) This is how marketers get to make the assertion, “No one is interested in bug fixes.” They don’t see the evidence for it. But if you systematically place blinders over your eyes, you won’t see the evidence for much of anything other than what’s right in front of you.

Marshall McLuhan is rumoured to have said, "I wouldn't have seen it if I hadn't believed it."  If you want to observe something, it helps to believe that it's possible.  At least, it helps not to constrain your capacity to observe something that you didn't expect.  For the same reason, test cases with pre-defined and closed outcomes intensify the risk that a tester will be blind to what's going on around them.  For the same reason, certification exams that present exactly four multiple-choice answers will fail to evaluate the nuances and subtleties of what a tester might observe and evaluate.

Managers, please take note!