
Questioning Test Cases, Part 1

Over the years, LinkedIn seems to have replaced comp.software.testing as the prime repository for wooly thinking and poorly conceived questions about testing.

Recently I was involved in a conversation with someone who, at least, seemed to be more articulate than most of the people on LinkedIn. Alas, I’ve since lost the thread, and after some searching I’ve been unable to find it. No matter: the points of the discussion are, to me, still worth addressing. She was asking a question about how much time to allocate to writing test cases before starting testing, and I questioned the usefulness of doing that. The first part of her reply went like this:

I would like to point out that I doubt anyone wants to write something that they don’t need to write.

I agree that most people probably don’t want to write things they don’t need to write. But they often feel compelled, or are compelled, to write things they don’t need to write.

I find value in writing test cases for a number of reasons. One is that I train more junior engineers in testing, and it is a good method to have them execute tests that I have written so they learn how a good test plan is put together.

If that were so, wouldn’t your junior engineers learn even more from writing test cases themselves, and getting feedback on their design and their writing? There’s a feedback loop in the design of a test, the execution of a test, the interpretation of a test result, and the learning that happens between them; wouldn’t it be a good idea to keep the feedback loop—and the learning—as rapid as possible? Wouldn’t your junior engineers learn still more from actually testing—under your close supervision, at first, and then with the freedom and responsibility to act more independently as they gain skill? You might want to have a look at this article: http://www.developsense.com/articles/2008-06-KnowWhereYourWheelsAre.pdf.

There’s a common misconception that testing happens in the characters of a written test case. It doesn’t. Testing happens in the mind and actions of the tester. It happens in the design of the test, in the execution of the test, in the observation and interpretation of the outcome of the test. Testing happens in the discovery of problems, in the investigation of those problems, and the learning about those problems and the product. At most, a fraction of this can be written down.

A test is far less something you execute, and far more a line of inquiry that you follow. To me, a good test case is idea-stuff; it’s a question that we want to ask of the program, based on some motivating idea about discovery or risk. In my observation, in writing test cases, people generally write down the least important stuff. They appear to be trying to program the tester’s actions, rather than trying to prime the tester’s thinking and observation.

Moreover, a test plan is something quite different from a pile of test cases.

Secondly, [writing test cases] communicates the testing coverage with everyone involved in developing the software. If you are a contractor, this is very important, since you want to leave with the client feeling like you did your job and they have the documentation to prove that they have done due diligence if they shop their company around or look for VC money.

Were you, as a contractor, given the mission to produce test scripts specifically, or is that a mission that you have inferred? Bear in mind, I’ve been witness to many takeovers: as a program manager at a company where our senior managers were ambitiously acquiring technologies, products, and companies, and as a consultant to several companies that were taken over by larger ones. In no case did anyone ever ask to see any test case documentation. Those who are investigating the company to be acquired typically don’t go to that level of detail. In my experience, alas, due diligence largely doesn’t happen at all. I’m puzzled, too, by the appeal to the least likely instances in which people might interact with test documentation, rather than the everyday ones.

Meanwhile, there are many ways to communicate test coverage. (For example, see http://www.developsense.com/articles/2008-10-GotYouCovered.pdf, http://www.developsense.com/articles/2008-11-CoverOrDiscover.pdf, and http://www.developsense.com/articles/2008-11-AMapByAnyOtherName.pdf.) There are also many ways to fool yourself (and others) into believing that the more documentation, the more coverage, or that the more specific the documentation, the more coverage—especially when that documentation is prospective, rather than retrospective. In Rapid Testing, we don’t encourage people to eliminate documentation. We encourage people to reduce documentation to what is actually necessary for the mission, and to eliminate wasteful documentation.

We focus on documenting test ideas concisely; on producing coverage outlines and risk lists that can be used to guide, rather than control, a tester and her line of inquiry; on producing records of the tester’s thought process, observations, risk ideas, motivations, and so forth (see below). The goal is to capture things far more important than the tester’s mechanical actions. If someone wants a record of those actions, we recommend video capture software (tools like BB TestAssistant or Camtasia). An automatic log allows the tester to focus on testing the product and recording the ideas, rather than splitting focus between operating the product and writing about operating the product.

You can find examples of the kind of test documentation I’m talking about here, in the appendices to the Rapid Software Testing class, starting at page 47. Note that the examples vary in their degree of polish and formal structure. Sometimes the documentation is highly informal, used to aid memory, to frame a conversation, or to trigger ideas. Sometimes it’s more formal and polished, when the audience is outside the test group. The over-riding point is to fit the documentation to the mission and to the task at hand.

More to come…

7 replies to “Questioning Test Cases, Part 1”

  1. I’ve seen the same issues here. However, after a while I realized why it’s happening: people don’t know any better. And people who do know better can’t explain it well enough to others.
    The logic of “write test case – execute test case – result pass/fail is testing” is much easier to understand, so people accept it more readily. And they believe it is better. That’s a problem, because it’s more a matter of belief than of logical thinking. And changing beliefs is one of the hardest things to do.

  2. Hi Michael!

    I am currently working on a project that puts my testing knowledge and experience to a great test. I would like to thank you for the following three links:
    http://www.developsense.com/articles/2008-10-GotYouCovered.pdf
    http://www.developsense.com/articles/2008-11-CoverOrDiscover.pdf
    http://www.developsense.com/articles/2008-11-AMapByAnyOtherName.pdf

    Those links are a great help to me in coping with my testing challenge!

    Thank you very much for those resources!

    Regards, Karlo.

  3. I wouldn’t say it’s a matter of belief, and it’s not only a matter of people not knowing any better. I think that testers themselves should be the ones explaining, selling themselves and their approaches to testing. Shouldn’t a Test Manager (or whatever you call that role) be like a car salesman who loudly and convincingly tells you why his car is the best and explains the reasoning behind it?
    If I buy a car, I don’t know most of the things the car can or can’t do, but I still have expectations of it. What the car can and can’t do is explained to me by the salesman in a fashion that I understand, and he explains why the car suits exactly the needs I have and the needs I don’t know I have (e.g. ABS, traction control, seven airbags, and so on).
    Shouldn’t a tester do the same? Explain what information he can provide by testing, how he can meet the real needs of his clients, and what else testing can offer them?
    So, for me, the problem is also how testers communicate the things that they, and testing, can do.

  4. “The over-riding point is to fit the documentation to the mission and to the task at hand”

    A simple sentence often overlooked.

    The same goes for how you frame your dialogue for your audience when you are talking about test results. That is, when I’m speaking with a development team, I speak and show items in a different way than I do with a sales or executive team. While developers oftentimes glean useful knowledge from detailed bug reports, the eyes of those without intimate knowledge of the project will glaze over if you dive into bug reports.

  5. And I should add, managers and executives do not want to see the details of test cases, they want an overall sign of the health of the system. We can make our job sound impressive by saying that we have executed thousands of documented tests, but the important point is that we can demonstrate that we have covered the system either through test cases or scenarios.

