
Breaking the Test Case Addiction (Part 12)

In previous posts in this series, I made a claim about the audience for a test report:

They almost certainly don’t want to know about when the testing is going to be done (although they might think they do).

It’s true that managers frequently ask testers when the testing will be done. That’s a hard question to answer, but maybe not for reasons that you—or they—might have considered.

By definition, testers who are working for clients do not work independently. We are providing services to our clients. We gain experience with the product, exploring it and experimenting with it so that our clients can determine the status of the product. Knowledge of the status of the product allows our clients to decide whether the product is ready to ship, or whether there is more development work to do.

Whatever testing we may have performed, we could always perform more; but once the client decides more development work won’t be worthwhile, development stops, and testing stops along with it. (At least, pre-release testing stops. Live-site monitoring and other forms of information gathering begin when the product is released, presenting an opportunity for learning about the quality of the product and about the quality of the testing that’s been done on it. Sometimes that learning comes with a big price tag.) The real question on the table, then, is not when testing work will be done, but when the development work will be done.

So, brace yourself: the fact is that no one really cares when testing will be done, because testing is never done; it only stops. Testing stops when the client determines that there is no more development work worth doing. The client—not the tester—decides when development is done. And how does the client decide that?

The client decides based on economics, reasoning, politics, and emotion. This is a complex decision, and here comes a long sentence that illustrates just how complex the decision is.

The client will decide to ship the product when she believes that

  • she knows enough about the product, the actual known problems about it, and the potential for unknown problems about it, such that…
  • the product provides sufficient benefits—that is, the product will help its users to accomplish a task, or some set of tasks; and
  • the product has a sufficiently small number of known bad problems about it; and
  • the product is sufficiently unlikely to have unknown bad problems; and
  • more development work—adding new features and fixing problems—will not be worthwhile, because
  • the benefits from the product outweigh the known problems to a sufficient degree that customers will obtain the value they want; and
  • the business can deal with known problems about the product, sufficiently inexpensively for the business to sustain the product and the business; and
  • the business can deal with whatever unknown problems may still exist; and
  • the client will not be in political trouble with her social group (including the team, management, and society at large) if she turns out to be wrong about any or all of this; and
  • she feels okay about all of these things.

So when will testing be done? The client can declare testing to be done at any moment when the client is satisfied that all of these conditions have been fulfilled. So when the client asks “When will testing be done?”, that question amounts to “When will I be satisfied that development work is done?” And how can you, the tester, predict when someone else will be satisfied by work being done by other people?

You can’t. So I would recommend that you don’t, and that you don’t try. Instead, I’d suggest that you negotiate your role and your commitments. At first, this may look like a long conversation.

Try something like this:

“I understand that you want to know when testing will be done, because you want to know when development will be done; that is, when you will be satisfied that the product is ready to ship. I don’t know how to make a reliable prediction about when you will be satisfied, but here’s something that I can propose in return.

“I will start testing right now; that is, I will start obtaining experience with the product, exploring it, performing experiments on it, analyzing it. I’ll learn rapidly about the technology, the clients for the product, and the contexts in which the product will be used. As a tester, my special focus will be on evaluating it like a good critic: finding problems that threaten the value of the product to people who matter—especially you.

“Things will tend to go better if I’m able to help find problems early on—in the design of the product, or in our understanding of how its users might get value from it, or in the context that surrounds it. I don’t presume to be the manager or designer of the product, but I may have some suggestions for it—especially in terms of how to make the product more practically testable.

“As the product is being built, I’ll work closely with you and with the developers to help everyone make sure that the product we’re building is reasonably close to the product we think we’re building. The testing we need for that tends to be relatively shallow, focusing on quick feedback that doesn’t slow down or interrupt the pace of development. I’d recommend that you give the developers time and support to do their work in a disciplined way, as good craftspeople do. That discipline includes review, testing, and checking their work as they go, so that easy-to-find problems don’t get buried and cause trouble for everyone later. I can offer help with that, to the degree that the developers welcome it.

“The more that the developers can cover that quick, shallower testing, the more I’ll be able to focus on deep testing to find rare, hidden, subtle, intermittent, platform-dependent, emergent, elusive problems that matter. Deep testing requires a different mindset from the builder’s mindset, and changing mental gears to do deep testing can really disrupt the developers’ flow. So I’ll try to do deep testing as much as I can in parallel with the shallower testing that the developers are doing all the way along.

“At every step, I’ll let you know about any problems that I see in the product. I’ll be giving you bug reports, of course. I’ll also let you know about how the testing is going—what has been covered and what hasn’t. I’ll use coverage outlines in some form to help illustrate that, and I’m happy to offer you a variety of formats for them so you can choose one that works for you.

“If I notice a lot of bugs that seem like they should have been easy to find, I’ll let you know right away. For one thing, when there are lots of shallow bugs, deep testing becomes harder and slower, because I’m obliged to pause to investigate and report those bugs. More significantly, though, lots of shallow bugs might indicate that the developers are working too fast, or are under too much pressure. When people are pressed, they tend to have a hard time maintaining discipline and mental control over their work. In software, that’s a Severity 0 project risk; it leads to bugs, and some of those bugs may be deep enough that they’ll get past us—especially if we’re investigating and reporting the shallower bugs.

“I’m prepared to test or review anything you give me at any time; I’ll let you know how that influences the pace of other work that you’ve asked me to do.

“If there is testing that must be done formally—that is, in a specific way, or to check specific facts—I can certainly do that. I’ll provide you (and the auditors, if necessary) with evidence to support claims about all of the testing that has been done, both formal and informal. I’ll also let you know about extra costs associated with formal work—the time and effort it takes—and how it might affect our ability to find problems that matter.

“Apropos of that, I’ll keep track of anything that might threaten the on-time, successful completion of whatever work we’re doing. If you like, I’ll help to maintain product and project risk lists. (I’d recommend that the project manager be responsible for those, though.)

“I’ll keep track of where my own time is going, so that I’ll be able to produce a credible account of anything that is slowing down my work or making it harder. I’ll let you know what I need or recommend to make testing go as quickly and as easily as possible, and I invite you to ask for anything that helps make the product status or the testing work more legible—visible, readable, or understandable—to you.

“My goal is to help you to be immediately aware of everything you need to know to anticipate and inform a shipping decision.

“I know that this doesn’t directly answer the question of when testing will be done; but testing ends when we know the development work is done. So perhaps the best thing is for us to go together to the designers and developers. You can ask them when they anticipate that the development work will be done, and when the problems we encounter along the way will be fixed. I will help them to identify problems and risks, and to remember to include time and resources for testability as they give their estimate. As we’re working together to build and test the product, we can develop and refine our understanding about it, and we can be continually aware of its status. When that’s the case, you’ll be able to decide quickly whether there’s more development work to do, or whether you believe the product is ready for release.”

That’s a fairly thorough description of testing work. It’s a pretty long statement, isn’t it? Reading it aloud takes me just over five minutes. In real life, it would probably be interrupted by questions from time to time, too. So let’s imagine that the whole conversation might take 15 minutes, or even half an hour. But let me leave this post—and this series of posts—with these questions:

In a project that can take weeks or months, wouldn’t one relatively short conversation describing the testing role and affirming the tester’s commitments be worthwhile?

In that thorough description of testing work, did you notice that the expression “test cases” didn’t come up?


5 replies to “Breaking the Test Case Addiction (Part 12)”

  1. Thanks, I have enjoyed this series tremendously. But I wanted to comment here about one contextual detail about stopping testing.

    The testing might not stop when the development (presumably) stops to enable shipping. In that phase, delivery-process testing, or maybe “productization testing”, begins, and all too often many aspects of the product (quality dimensions) are tested meaningfully for the first time. And meaningfully here means in a way that actually brings valuable information for completing the release. It’s often different work from the building and integrating done during development. And the people doing this work rarely identify it as testing, or themselves as testers. They have plans, but they don’t have many of the testing skills that would help with that work. I know it’s context-dependent and not an issue in some circumstances, but in others (not to speak in riddles: I’m thinking of embedded products) it’s a can of worms worth questioning. Of course, it should be possible to cover these types of testing earlier too, but that’s often either too expensive or, for example, unfeasible resource-wise; that isn’t to say nothing can be done to make it less expensive or more feasible.

    Michael replies: Thank you for the comment.

    There are different ways of looking at this, for sure. My perspective is that testing always happens with the intention of informing a decision about something yet to be decided, or yet to be done; so testing relative to that decision stops when that thing is done. More testing, relative to the next set of decisions, begins. Testing, like everything else, is subject to The Unsettling Rule: nothing is ever settled.

    If people are doing live-site testing (which is our term for it in the Rapid Software Testing namespace), I agree that it would be well for them to have the skills (critical thinking and analytical skills) and the tools to do it.

    Apropos of that, it would be a good idea to remember that “too expensive” or “too resource-intensive” are not properties of things; they’re relationships between one thing and other things. That means it’s important to talk about cost, value, and risk associated with our decisions.

