
Worthwhile Documentation

In the Rapid Software Testing class, we focus on ways of doing the fastest, least expensive testing that still completely fulfills the mission. That involves doing some things more quickly, and it also involves doing other things less, or less wastefully. One of the prime candidates for radical waste reduction is documentation that’s incongruent with the testing mission.

Medical device projects typically present a high degree of risk. Excellent testing helps teams and product owners to identify risks and problems in the product. The quality of testing is a function of the skill of the tester; one would not set an incapable tester loose on a high-risk project. Yet some managers have told me that they commission people to write test documentation in a particular style. That style is, to me, overly elaborate and specific with respect to actions to perform and observations to make; at the same time, it is remarkably devoid of ideas about motivation or risk.

I sometimes ask managers why they use this style of instruction. They usually answer, “because we want anyone to be able to walk up to this system and test it.”

“Anyone?” I ask. “Why anyone?”

“You know how it is. If we have to test a new revision of this program a year from now, there’s a good chance that we won’t have the same testers.” (Dude. If you’re inflicting on your staff the idea of testing as writing or following instructions for an automaton, I might have an explanation for you.)

“Anyone?” I ask. “How about a cat?”

“Well, Michael, that’s silly. Cats can’t think. Cats can’t read.”

“How about my daughter? She’s seven, and she can read well enough to read that. And she could follow the steps pretty well, too.”

“We don’t hire children here!”

“Okay,” I offer. “Would you hire a completely incompetent tester who needed to be told absolutely everything, in painful detail?”

“We wouldn’t hire anyone like that.”

“Fair enough, and I’d hope not. So, why do you insist that people write instructions for them that way?”

Let me be clear: when the situation calls for skilled testers, you don’t need overly specific instructions for them. On the other hand, if you don’t have skilled testers, you’ve got a problem that scripted testing won’t be able to solve.

Here’s a splendid example of a machete that we believe managers could use to cut through jungles of waste. On a recent project involving FDA-regulated medical devices, James Bach found a huge number of excruciatingly overspecified, low-value test cases aimed at “anyone”. The following two paragraphs replaced 50 pages of waste.

3.0 Test Procedures

3.1 General Testing Protocol

In the test descriptions that follow, the word “verify” is used to highlight specific items that must be checked. In addition to those items, the tester shall at all times be alert for any unexplained or erroneous behaviour of the product. The tester shall bear in mind that, regardless of any specific requirement for any specific test, there is the overarching general requirement that the product shall not pose an unacceptable risk of harm to the patient, including an unacceptable risk due to reasonably foreseeable misuse.

Test personnel requirements: The tester shall be thoroughly familiar with the Generator and Workstation Function Requirement Specifications, as well as the working principles of the devices themselves. The tester shall also know the workings of the power test jig and associated software, including how to configure and calibrate it and how to recognize when it is not working correctly. The tester shall have sufficient skill in data analysis and measurement theory to make sense of statistical test results. The tester shall be sufficiently familiar with test design to complement this protocol with exploratory testing in the event that anomalies appear that require investigation. The tester shall know how to keep test records to a credible professional standard.

To me, that’s something worth writing down. Follow those instructions, and your team will save time, save work, and put the emphasis in the right places: on risk, and on meeting and mitigating that risk with skills.

9 replies to “Worthwhile Documentation”

  1. And there are some other reasons for not writing over-detailed test steps:
    – It’s expensive;
    – It’s hard to maintain;
    – Even when following a rather detailed script, different testers will tend to find different kinds of bugs or to discover different kinds of information; that’s true even for the same tester testing at different times;
    – However detailed your test scripts are, they cover only a small part of what’s worth testing, so don’t count on the existing test scripts too much. They are not the only valuable test cases.

  2. Unfortunately, last week I was in a situation in which I had to write loads of waste (documenting automated test cases, amongst other things…). I was trying to explain to my boss why this task was useless, and at the end, after so many facts, the reply was “because the client wants it, and that’s it”, along with all of the usual claims about the “good sides” of an oversized test specification document.

    So in the end I just decided to let it go, and to write what I considered the minimum, as well as I could, given the task. I guess sometimes one just has to build up some more influence in order to change things, and then try trimming this again.

    Michael replies: If I were in your situation, I’d create one more piece of documentation: a lightweight journal. In it, I’d make sure to track the hours that I spent in documentation and the discoveries I made; then compare that with the time spent focused on investigating the product and the discoveries made.

    BTW, while I was busy doing documentation, a bug was discovered. When asked what I thought about it, I said “dunno, I was documenting, not testing”.

    Did you discover the bug, or did someone else? If you discovered the bug, you might like to point out that you found the bug without using the documentation that you were generating. If someone else found the bug, they probably weren’t using your documentation either.

    But I’d also point out a missed opportunity. You could have said, “I was testing while I was documenting. That’s pretty much inevitable. The trouble is, I wasn’t focused on testing. Imagine the problems that I could have found had I been focused on testing!”

  3. Right to the point.
    Now all that is left is to find the right blend, which will still:
    1. Verify and validate the requirements document while writing.
    2. Enable other stakeholders to brainstorm expected tests while they review our document.
    3. Specify just enough that the things which must not be neglected will be tested.

    I tried writing only test names and purposes. It served #1 relatively well, but developers couldn’t find flaws or suggest enough missing tests in the pre-testing stage based on it.
    I did find that elaborating one example test case per test, with all the details of the parameters that might be used, enabled me to improve my focus and, more than that, enabled our developers to connect with what I had planned and to offer much more feedback.

    Another thing that helped was a tabular presentation of the expected testing scope: define the dimensions as parameters and, once these are agreed on, elaborate their possible values (preferably marking them with a three-level priority). For instance, something like the sketch below.
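
    (This is a hypothetical illustration of such a table; the dimensions, values, and priorities are invented for the example, not taken from any real project.)

    Dimension (parameter)    Possible values (with priority)
    File format              PDF (high), PNG (medium), TIFF (low)
    User role                Administrator (high), Technician (medium), Guest (low)
    Connection               USB (high), Wi-Fi (medium), offline (low)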

    Kobi

    Michael replies: Thanks for writing, Kobi, and for telling us a bit about your experience.

    I suggest that you continue doing what you’re doing: little experiments that produce some results and some experience. If people like the effects, great; if some people have problems, tune things until they work reasonably well for everyone.

    One thing I’d be careful about: verifying and validating is a reasonable focus for programmers, but testers have to go much, much farther by thinking critically about risks. Try this: for each story or example or element of a description, try adding

    “…unless…”
    “…but maybe…”
    “…and also…”
    “…except when…”
    “…although sometimes…”

    (Hmmm… I smell another blog post coming. Thanks for prompting the idea.)

  4. Nice idea about a journal. Will try that for the next delivery.

    The bug was discovered by someone else, doing something else – obviously not following the documentation I wrote. The point is that we are 90% sure nobody will ever check or follow the documentation I am writing. It’s recognized as writing for the sake of writing, as a way to show that we test. And we don’t, since we are busy writing. Written test specs are no proof of testing done; they’re just proof that the Word file compiles. It takes just a bit of common sense and brains to understand that, but it seems there is some serious brainwashing being done in the industry.

  5. It’s funny, because this is exactly the way we test! We write test scripts that testers must follow.

    Michael replies: Test? This is exactly the way you confirm presuppositions, which is a very weak form of testing.

    These test scripts are written in a standard way; this reduces the overhead of training a new person to test the system.

    Compared to having them interact with the system with minimal guidance and coaching? Educational research, my experience in teaching people, and my own experience in learning all suggest the opposite. People don’t learn very well or very quickly from following scripts that other people wrote. They appear to be able to accomplish the task, but the script remains in charge. If you want people to learn, you have to give them problems to solve and things to explore, not steps to follow. This study refers to kids, but the findings are consistent with what we observe in adults too.

    The downside to this system is that it doesn’t catch all the bugs, but it does catch most of them (or the big ones)…

    …as far as you know, now, in your environment.

    Here are a couple of things to try. One, try a few chartered and well-managed sessions using an exploratory approach. Give testers coaching and feedback, and give them time to practice and develop the skills. Two, observe and debrief your testers. Find out what they discovered by following the script, versus how much they learned by deviating from it.

  6. I really like this concept. I currently work in a medical device company and see this waste on an almost daily basis. Has your suggested approach been run past a Regulatory Affairs group in a medical device company? Have you consulted anyone with FDA experience?

    I’ve worked with imaging companies that recognize no contradiction between what I advocate (skilled testing and appropriately concise documentation) and what the FDA requires. The key is that the FDA’s assessment is based on what it calls the “pivotal study”, which is something that I would call a demonstration.

    There is a growing sub-group in the context-driven testing community that has experience in exactly this. Griffin Jones (@griff0jones on Twitter), Ben Yaroch (whom I believe to be a board member of the Association for Software Testing as of this writing), and Fredrik Ryberg in Sweden are among the people I know to be leading this. The context-driven testing mailing list (http://groups.yahoo.com/group/software-testing) is a good place to connect with such people. James has done extensive work in this area with a client for almost two years now, and has given conference talks on the subject. If we ask him nicely, he may post some of them in more public places.

    Given the heavyweight approach I have lived with over the last four years, I feel there would be a lot of pushback from the groups that present the processes, and the subsequent proof of adherence, to a regulating body (i.e., the FDA).

    The FDA is not opposed to exploratory testing. In fact, as a birthday present to me this year, the FDA produced a draft guidance document (see Section 5) that James describes here. The keys are to work collaboratively with your auditor, and to note that, in general, the FDA wants to see a demonstration that your product works. But to me, that is at best like confirmatory testing. If you want to avoid lawsuits, or killing or injuring people, or messing with their data, demonstration won’t do; you’re going to have to test.

  7. “…it doesn’t catch all the bugs, but it does catch most of them…”
    How do you know it catches most of them? Have you got a sheet somewhere that details all the bugs, so that you can cross-reference it against the ones you found? How do you know that ‘the big ones’ have all been found?
    Wouldn’t it be better, if you want to reduce the overheads of training someone, to actually write some training documentation instead?

    Michael replies: Yes—and wouldn’t it be even better to train them? There’s a blog post coming about tacit knowledge, but let me summarize: you can give people documentation, but they will learn much more quickly and much more deeply if you give them experiences. Just as with testing, when it comes to training, I’d emphasize loops of interaction with products and trainers, and de-emphasize documentation that’s not helpful and powerful.

    Thanks for bringing this up, Joe.

  8. Let me be the devil’s advocate for a post…

    Michael replies: This comment triggered a reply that was long enough for its own blog post. You can read it here.

