Over the years, LinkedIn seems to have replaced comp.software.testing as the prime repository for wooly thinking and poorly conceived questions about testing.
Recently I was involved in a conversation with someone who, at least, seemed to be more articulate than most of the people on LinkedIn. Alas, I’ve since lost the thread, and after some searching I’ve been unable to find it. No matter: the points of the discussion are, to me, still worth addressing. She was asking a question about how much time to allocate to writing test cases before starting testing, and I questioned the usefulness of doing that. The first part of her reply went like this:
I would like to point out that I doubt anyone wants to write something that they don’t need to write.
I agree that most people probably don’t want to write things they don’t need to write. But they often feel compelled, or are compelled, to write things they don’t need to write.
I find value in writing test cases for a number of reasons. One is that I train more junior engineers in testing and it is a good method to have them execute tests that I have written so they learn how a good test plan is put together.
If that were so, wouldn’t your junior engineers learn even more from writing test cases themselves, and getting feedback on their design and their writing? There’s a feedback loop in the design of a test, the execution of a test, the interpretation of a test result, and the learning that happens between them; wouldn’t it be a good idea to keep the feedback loop—and the learning—as rapid as possible? Wouldn’t your junior engineers learn still more from actually testing—under your close supervision, at first, and then with the freedom and responsibility to act more independently as they gain skill? You might want to have a look at this article: http://www.developsense.com/articles/2008-06-KnowWhereYourWheelsAre.pdf.
There’s a common misconception that testing happens in the characters of a written test case. It doesn’t. Testing happens in the mind and actions of the tester. It happens in the design of the test, in the execution of the test, in the observation and interpretation of the outcome of the test. Testing happens in the discovery of problems, in the investigation of those problems, and the learning about those problems and the product. At most, a fraction of this can be written down.
A test is far less something you execute, and far more a line of inquiry that you follow. To me, a good test case is idea-stuff; it’s a question that we want to ask of the program, based on some motivating idea about discovery or risk. In my observation, in writing test cases, people generally write down the least important stuff. They appear to be trying to program the tester’s actions, rather than trying to prime the tester’s thinking and observation.
Moreover, a test plan is something quite different from a pile of test cases.
Secondly, [writing test cases] communicates the testing coverage with everyone involved in developing the software. If you are a contractor, this is very important since you want to leave with the client feeling like you did your job and they have the documentation to prove that they have done due diligence if they shop their company around or look for VC money.
Were you, as a contractor, given the mission to produce test scripts specifically, or is that a mission that you have inferred? Bear in mind, I’ve been witness to many takeovers, as a program manager at a company where our senior managers were ambitiously acquiring technologies, products, and companies, and as a consultant to several companies that were taken over by larger companies. In no case did anyone ever ask to see any test case documentation. Those who are investigating the company to be acquired typically don’t go to that level of detail. In my experience, alas, due diligence largely doesn’t happen at all. I’m puzzled, too, by the appeal to the least likely instances in which people might interact with test documentation, rather than the everyday ones.
Meanwhile, there are many ways to communicate test coverage. (For example, see here, here, and here.) There are also many ways to fool yourself (and others) into believing that more documentation means more coverage, or that more specific documentation means more coverage—especially when that documentation is prospective, rather than retrospective. In Rapid Testing, we don’t encourage people to eliminate documentation. We encourage people to reduce documentation to what is actually necessary for the mission, and to eliminate wasteful documentation.
We focus on documenting test ideas concisely; on producing coverage outlines and risk lists that can be used to guide, rather than control, a tester and her line of inquiry; on producing records of the tester’s thought process, observations, risk ideas, motivations, and so forth (see below). The goal is to capture things far more important than the tester’s mechanical actions. If someone wants a record of those actions, we recommend video capture software (tools like BB Test Assistant or Camtasia). An automatic log allows the tester to focus on testing the product and recording the ideas, rather than splitting focus between operating the product and writing about operating the product.
You can find examples of the kind of test documentation I’m talking about here, in the appendices to the Rapid Software Testing class, starting at page 47. Note that the examples show varying degrees of polish and formal structure. Sometimes the documentation is highly informal, used to aid memory, to frame a conversation, or to trigger ideas. Sometimes it’s more formal and polished, when the audience is outside the test group. The overriding point is to fit the documentation to the mission and to the task at hand.
More to come…
Want to know more? Learn about Rapid Software Testing classes here.