
More of What Testers Find, Part II

As a follow-up to “More of What Testers Find”, here are some more ideas inspired by James Bach’s blog post, What Testers Find. Today we’ll talk about risk. James noted that…

Testers also find risks. We notice situations that seem likely to produce bugs. We notice behaviors of the product that look likely to go wrong in important ways, even if we haven’t yet seen that happen. Example: A web form is using a deprecated HTML tag, which works fine in current browsers, but may stop working in future browsers. This suggests that we ought to do a validation scan. Maybe there are more things like that on the site.
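
For instance, a quick-and-dirty version of such a validation scan might look like the following minimal sketch in Python. It assumes a locally saved copy of the page; the file name and the list of deprecated tags here are illustrative, not exhaustive:

```python
from html.parser import HTMLParser

# Illustrative subset of tags deprecated in HTML5; a real scan would
# rely on a maintained validator's list, not this hand-picked set.
DEPRECATED_TAGS = {"center", "font", "big", "strike", "tt", "acronym"}

class DeprecatedTagFinder(HTMLParser):
    def handle_starttag(self, tag, attrs):
        if tag in DEPRECATED_TAGS:
            line, offset = self.getpos()
            print(f"deprecated <{tag}> at line {line}, offset {offset}")

# "page.html" is a hypothetical locally saved copy of the web form.
with open("page.html", encoding="utf-8") as f:
    DeprecatedTagFinder().feed(f.read())
```

Even a sketch like this, pointed at a handful of saved pages, can start to tell us whether there are more things like that on the site.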

A long time ago, James developed The Four-Part Risk Story, which we teach in the Rapid Software Testing class that we co-author. The Four-Part Risk Story is a general pattern for describing and considering risk. It goes like this:

  1. Some victim
  2. will suffer loss or harm
  3. due to a vulnerability in the product
  4. triggered by some threat.

A legitimate risk requires all four elements. A problem is only a problem with respect to some person, so if a person isn’t affected, there’s no problem. Even if there’s a flaw in a product, there’s no problem unless some person becomes a victim, suffering loss or harm. If there’s no trigger to make a particular vulnerability manifest, there’s no problem. If there’s no flaw to be triggered, a trigger is irrelevant. Testers find risk stories, and the victims, harm, vulnerabilities, and threats around which they are built.
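
As an illustration (my own sketch, not part of James’s model), the all-four-elements requirement can be made concrete by treating a risk story as a small data structure and checking that no element is missing:

```python
from dataclasses import dataclass

@dataclass
class RiskStory:
    victim: str          # 1. some victim
    harm: str            # 2. will suffer loss or harm
    vulnerability: str   # 3. due to a vulnerability in the product
    threat: str          # 4. triggered by some threat

    def is_legitimate(self) -> bool:
        # A legitimate risk requires all four elements.
        return all([self.victim, self.harm, self.vulnerability, self.threat])

# A hypothetical example for illustration only.
story = RiskStory(
    victim="an online shopper",
    harm="loses the contents of a filled cart",
    vulnerability="session state is dropped on page refresh",
    threat="the shopper refreshes the checkout page",
)
print(story.is_legitimate())  # True; blank any one field and it becomes False
```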

In this analysis, though, a meta-risk lurks: failure of imagination, something at which humans appear to be expert. People often have a hard time imagining potential threats, and discount the possibility or severity of threats they have imagined. People fail to notice vulnerabilities in a product, or having noticed them, fail to recognize their potential to become problems for other people. People often have trouble making the connection between inanimate objects (like nuclear reactor vessels), the commons (like the atmosphere or sea water), or intangible things (like trust) on the one hand, and people who are affected by damage to those things on the other. Excellent testers recognize that a ten-cent problem multiplied by a hundred thousand instances is a ten-thousand-dollar problem (see Chapter 10 of Jerry Weinberg’s Quality Software Management, Volume 2: First Order Measurement). Testers find connections and extrapolations for risks.

In order to do all that, we have to construct and narrate and edit and justify coherent risk stories. To do that well, we must (as Jerry Weinberg put it in Computer Programming Fundamentals in 1961) develop a suspicious nature and a lively imagination. We must ask the basic questions about our products and how they will be used: who? what? when? where? why? how? and how much? We must anticipate and forestall future Five Whys by asking Five What Ifs. Testers find questions to ask about risks.

When James introduced me to his risk model, I realized that people hold at least three different but intersecting notions of risk.

  1. A Bad Thing might happen. A programmer might make a coding error. A programming team might design a data structure poorly. A business analyst might mischaracterize some required feature. A tester might fail to investigate some part of the product. These are, essentially, technical risks.
  2. A Bad Thing might have consequences. The coding error could result in miscalculation that misrepresents the amount of money that a business should collect. The poorly designed data structure might lead to someone without authorization getting access to privileged information. The mischaracterized feature might lead to weeks of wasted work until the misunderstanding is detected. The failure to investigate might lead to an important problem being released into production. These are, in essence, business risks that follow from technical risks.
  3. A risk might not be a Bad Thing, but an Uncertain Thing on which the business is willing to take a chance. Businesses are always evaluating and acting on this kind of risk. Businesses never know for sure whether the Good Things about the product are sufficiently compelling for the business to produce it or for people to buy it. Correspondingly, the business might consider Bad Things (or the absence of Good Things) and dismiss them as Not Bad Enough to prevent shipment of the product.

So: Testers find not only risks, but links between technical risk and business risk. Establishing and articulating those links depend on the related skills of test framing and bug advocacy. Test framing is the set of logical connections that structure and inform a test. Bug advocacy is the skill of determining the meaning and significance of a bug, and reporting the bug in terms of potential risks and consequences that other people might have overlooked. Bug advocacy doesn’t mean jumping up and down and screaming until every bug—or even one particular bug—is fixed. It means providing context for your bug report, helping managers to understand and decide why they might choose to fix a problem right now, later, or never.

In my travels around the world and around the Web, I observe that some people in our craft have some fuzzy notions about risk. There are at least three serious problems that I see with that.

Tests are focused on (documented) requirements. That is, test strategies are centred around making sure that requirements are checked, or (in Agile contexts) that acceptance tests derived from user stories pass. The result is that tests are focused on showing that a product can meet some requirement, typically in a controlled circumstance in which certain stated conditions assumed necessary have been met. That’s not a bad thing on its own. Risk, however, lives in places where necessary conditions haven’t been stated, where stated conditions haven’t been met, or where assumptions have been buried, unfulfilled, or inaccurate. Testing is not only about demonstrating that some instance of a requirement has been satisfied. It’s also about identifying things that threaten the successful fulfillment of that requirement. Testers find alternative ideas about risk.

Tests don’t get framed in terms of important risks. Many organizations and many testers focus on functional correctness. That can often lead to noisy testing—lots of problems reported, where those problems might not be the most important problems. Testers find ways to help prioritize risks.

Important risks aren’t addressed by tests. A focus on stated requirements and functional correctness can leave parafunctional aspects of the product in (at best) peripheral vision. To address that problem, instead of starting with the requirements, start with an idea of a Bad Thing happening. Think of a quality criterion (see this post) and test for its presence or its absence, or for problems that might threaten it. Want to go farther? My colleague Fiona Charles likes to mention “story on the front page of the Wall Street Journal” or “question raised in Parliament” as triggers for risk stories. Testers find ways of developing risk ideas.

James’ post will doubtless trigger more ideas about what testers find. Stay tuned!

P.S. I’ll be at the London Testing Gathering, Wednesday, April 6, 2011 starting at around 6:00pm. It’s at The Shooting Star pub (near Liverpool St. Station), 129 Middlesex St., London, UK. All welcome!

4 replies to “More of What Testers Find, Part II”

  1. Interesting post as always, Michael.

    I’ve tried for a long time to tell other testers about the importance of bug advocacy as a tool/skill for a tester.
    With good bug advocacy comes great power, however, and it can be used for good and bad 😉
    So it has to be used with caution (and thus we should not cry wolf too often).

    Unfortunately, there is sometimes a notion out there among testers that we shouldn’t “stoop to that level”, and that a bug should be posted and then it is up to the project lead/defect board to recognize the importance of defects.

    Nothing could be further from the truth, as our bugs have to “stand on their own” after we have pressed Submit (lesson #58). Rarely will anyone other than the bug report itself make the argument for the bug.

  2. I agree with the line of your blog, but I see that documenting risks is also being used to waive responsibility. And in my opinion it should never be used that way, as Test is responsible for the success of the project as well as any other discipline.

    Michael replies: I have two responses here. 1) What are you seeing or hearing to suggest that “documenting risks is also being used to waive responsibility?” Being used how? Where? By whom? What responsibility in particular? 2) Is Test really responsible for the success of the project “as well as any other discipline?” As much, as, say, the programmers who write the code and have the authority to change it? The designers who design the product? The managers who have the authority over schedule, budget, staffing, prioritization, product scope, bonuses, contracts, marketing choices, and so forth?

    As a tester, I definitely take responsibility for providing the most complete, relevant, and timely information I can to my clients, who include the people that I listed above. I take responsibility for the quality of my own work, certainly. To that degree, and to that degree only, testers share collective responsibility for success of the project. I can’t tell if you’re suggesting that (in which case I agree) or if you’re suggesting more than that (in which case I don’t).

    Therefore a documented risk should meet certain criteria:
    – It’s never a surprise to the client that this risk comes up because it has been analyzed and discussed prior to documenting it.
    – It should address a specific risk to the current or future projects. A general statement like “it’s a risk that defects will be found in production” is not specific.
    – A documented risk should have countermeasures as well: if risk X occurs, then these options are available to address the issue.

    I don’t understand the relevance of “documented” here. Do these points not apply to undocumented risks?

    You’re also stating that a problem is not a problem as long as a person is not affected by it; and of course this is true. But what I currently tend to see, not only in my work as a tester, is that no one feels responsible for the organization or for society; and although these consist of people, the harm done to society can be larger than the harm done to one individual. Would you agree with that?

    Organizations and societies are groups of people. Something that harms an organization or society harms people. So read “at least one person” for “a person” in this context.

    It would be a good thing if testers advertised the knowledge they have built up during a test track to the benefit of the project/client, and the technique of test framing can help them do this in a professional way.

  3. I have read several reports where testers didn’t or couldn’t execute test cases and mentioned them as risks, without taking any action beforehand to make them work. In my opinion, mentioning risks in that way is only being used to waive responsibility, and in that way you’re not contributing to the success of a project.

    Michael replies: Unless I’m making some kind of gross mistake in interpreting what you’re saying, I disagree strongly. If the test can’t be accomplished as described, that’s a test result right there. There are a number of possible interpretations one could take when a test “doesn’t work”. Perhaps someone has misunderstood the intended behaviour of the product, and the test isn’t really a valid test. Fair enough, but what’s behind the misunderstanding? Why was someone designing or proposing an invalid test? That’s a reportable risk. Perhaps the test is a reasonable one, but has been misinterpreted by the person executing it. That misunderstanding represents a reportable risk. Perhaps the test is a great test, and the product has a problem that is preventing execution of the test. That’s a reportable risk. Perhaps there’s insufficient time for the kind of investigation we’d like, compared to the criticality of other tests that we’d like to do. That’s a reportable risk.

    Making a test work is not the goal in testing something. Understanding the test and our interpretation of the test result, and their relationship to how the product actually behaves, are goals in testing something. Communicating information about quality and threats to quality—risks—is an important aspect of testing. There are multiple kinds of risks, though: risks in the product, risks to the project, risks associated with the quality of the testing effort. Reporting on risk in a timely and useful way is, to me, a clear aspect of responsible technical work.

    Now, if you’re suggesting that a tester should investigate unexplained behaviour, well, sure. But there are many context factors that might contribute to a decision to report first, without getting into a detailed investigation. Thoroughness and timeliness are both valuable aspects of a test report. One might be much more important than the other.

    I agree with your opinion about the responsibility for success, especially because you state that you provide the most complete, relevant, and timely information to the client.

    You can discuss the relevance of “documented”; your question makes me think, though: shouldn’t you always document risk? And therefore undocumented risks shouldn’t exist.

    No, you should not always document risk. You’d be better off if you suggested that you should always communicate relevant risk.

    Communication doesn’t always require documentation. It typically requires conversation, but not even always that. For example, at the beginning of my classes, I often do this: I distribute five-by-seven index cards and markers to each participant. I get everyone’s attention by standing at the front of the room and waiting until all eyes are on me. Then I silently hold an index card in front of me, in a landscape orientation. Then I fold it in half along the landscape axis. Then, using a marker, I write my name on it. Everyone in the class does the same, and now there’s a name card in front of everyone. I have neither written nor spoken any instructions, but the desired outcome happens anyway. The communication has happened successfully.

    So you don’t always need documentation to communicate. You’re more likely to need documentation when you are separated in distance and/or time from the people with whom you’re communicating, but as a project manager, I preferred that people talked to me about risks. As we say in the Rapid Testing class, “That should be documented” really means “That should be documented by someone if and how and when it serves our purposes.” If we don’t consider the latter part of that sentence, there’s a good chance that we’ll waste time or effort.

    That’s the communication part. Now, with respect to risks, there are some risks that are not worth communicating or discussing. We could, for example, communicate the risk that the product will cease to work properly if the data centre is hit by a meteorite. That’s a risk, but it’s not a relevant risk.

    Although I don’t want to go into word games, I prefer the “at least one person” formulation.

    I’m not sure what you mean here, so I’ll let that pass.

