Blog Posts from April, 2018

How Long Will the Testing Take?

Friday, April 27th, 2018

Today yet another tester asked “When a client asks ‘How long will the testing take for our development project?’, how should I reply?”

The simplest answer I can offer is this: if you’re testing as part of a development project, testing will take exactly as long as development will take. That’s because effective, efficient testing is not separate from development; it is woven into development.

When we develop a software product or service, we build things: designs, prototypes, functions, modules, components, interfaces, integrated components, services, complete systems… There is some degree of risk—the possibility of problems—associated with everything we build. To be successful, each element of a system must fit with other elements of the system, and must satisfy the people who are using and building the system.

Despite our best intentions, we’re likely to make some mistakes and experience some misunderstandings along the way. If we review things as we go, we’ll eliminate many problems before we turn them into code. Still, there’s risk in using elements that we haven’t built, configured, operated, observed, and evaluated. Unless we actually try to use what we’ve built, look for problems in it, and develop a well-grounded, empirical understanding of it, we risk fooling ourselves about how it really works—and doesn’t.

So it’s a good idea to start testing things as developers are creating or assembling them—and to test how they fit with other elements of the system—from the moment we start to put them together.

Testing, for a given element or system, stops when we have a reasoned belief that there is no more development work to be done. Testing stops when people are satisfied that they know enough about the system and its elements to have discovered and resolved all the important problems. That satisfaction will be easier to establish relatively quickly, for any given element or system, when the product is designed to be easy to test. If you’ve been testing each part of the product deeply, investigating risk, and addressing problems throughout development, the whole product will be easier to test.

For the project to be successful, it’s important for the whole team to keep discovering and identifying ways in which people might be dissatisfied. That includes not only finding problems in the product, but also finding problems in our concept of what it might mean to be “done”. It’s not the tester’s job to build confidence in the product, but to reveal where confidence is unwarranted. That’s central to testing work, even though it’s difficult, disappointing, and to some degree socially disruptive.

“How long will the testing take?” is an odd question for managers to ask, because it’s just like asking how long management will take. Management of a development project starts as development starts, and ends as development ends. Testing enables awareness about the product and problems in it, so that managers and developers can make decisions about it. So testing starts when the project starts, and testing stops when managers and developers have made their final decisions about it. Testing doesn’t stop until development and management of the project are done.

People often stop testing a product after it’s been put into production. Now: it might be a good idea to monitor and test some aspects of the system in production, too. (We call that live-site testing.) Live-site testing is often a very good idea, but like all other forms of testing, it’s ultimately optional. Here’s one good reason to continue it: when you believe that there will be more development work done on the system.

So: development and management and testing go hand in hand. When a manager asks “How long will the testing take?”, it seems to me that the appropriate answer is “Testing will take just as long as development will take.” When people are satisfied that there’s no more important development work to do, they will also be satisfied that there’s no more important testing work to do either.

It’s important to remember, though: determining when people will be satisfied is something that we can guess, but cannot calculate. Satisfaction is a feeling, not a finish line.

Further reading:

How Long Will The Testing Take?
Project Estimation and Black Swans (Part 1)
Project Estimation and Black Swans (Part 5): Test Estimation

Very Short Blog Posts (35): Make Things Visible

Tuesday, April 24th, 2018

I hear a lot from testers who discover problems late in development, and who get grief for bringing them up.

On one level, the complaints are baseless, like holding an investigative journalist responsible for a corrupt government. On another level, there’s a way for testers to anticipate bad news and reduce the surprises. Try producing a product coverage outline and a risk list.

A product coverage outline is an artifact (a mind map, or list, or table) that identifies factors, dimensions, or elements of a product that might be relevant to testing it. Those factors might include the product’s structure, the functions it performs, the data it processes, the interfaces it provides, the platforms upon which it depends, the operations that people might perform with it, and the way the product is affected by time. (Look here for more detail.) Sketches or diagrams can help too.

As you learn more through deeper testing, add branches to the map, or create more detailed maps of particular areas. Highlight areas that have been tested so far. Use colour to indicate the testing effort that has been applied to each area—and where coverage is shallower.
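
To make that concrete, here’s one minimal sketch of how a fragment of a coverage outline might be kept as a simple data structure, annotated with how deeply each area has been tested so far. The product areas and depth labels are invented for illustration; a mind map or a table can serve just as well.

    # A fragment of a hypothetical product coverage outline (Python).
    # Each area maps to sub-areas; each sub-area records how deeply
    # it has been tested so far: "none", "shallow", or "deep".
    coverage_outline = {
        "Structure": {"installer": "shallow", "plugin architecture": "none"},
        "Functions": {"search": "deep", "report generation": "shallow"},
        "Data":      {"input validation": "deep", "import/export": "none"},
        "Platform":  {"browser versions": "shallow", "operating systems": "none"},
    }

    # Summarize where coverage is shallower, so the gaps stay visible.
    for area, subareas in coverage_outline.items():
        gaps = [name for name, depth in subareas.items() if depth != "deep"]
        if gaps:
            print(f"{area}: shallower coverage in {', '.join(gaps)}")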

A risk list is a list of bad things that might happen: Some person(s) will experience a problem with respect to something desirable that can be detected in some set of conditions because of a vulnerability in the system. Generate ideas on that, rank them, and list them.
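
Here’s a similarly minimal sketch of a risk list, ranked by a rough likelihood-times-impact score. The risks and the scores are invented for the example; the point is the visible ranking, not the particular scoring scheme.

    # A hypothetical risk list: who might be affected, what might go
    # wrong, and rough 1-5 scores for likelihood and impact.
    risks = [
        ("new users", "signup rejects valid email addresses", 3, 5),
        ("support staff", "search times out on large accounts", 4, 3),
        ("administrators", "audit log drops entries under concurrent writes", 2, 5),
    ]

    # Rank by likelihood times impact, highest first, and list them.
    for who, problem, likelihood, impact in sorted(risks, key=lambda r: r[2] * r[3], reverse=True):
        print(f"[score {likelihood * impact:2d}] {who}: {problem}")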

At the beginning of the project, or as early as possible, post your coverage outline and risk list in places where people will see and read them. Update them daily. Invite questions and conversations. This can help you change “why didn’t you find that bug?” to “why didn’t we find that bug?”

Very Short Blog Posts (34): Checking Inside Exploration

Monday, April 23rd, 2018

Some might believe that checking and exploratory work are antithetical. Not so.

In our definition, checking is “the algorithmic process of operating and observing a product, applying decision rules to those observations, and reporting the outcome of those decision rules”.

We might want to use some routine checks, but not all checks have to be rote. We can harness algorithms and tools to induce variation that can help us find bugs. Both during development of a feature and when we’re studying and evaluating it, we can run checks that use variable input; in randomized sequences; at varying pace; all while attempting to stress out or overwhelm the product.
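
As one minimal sketch of that idea: a check driven by randomized, variable input, run in a randomized sequence. The parse_amount function here is an invented stand-in for whatever you’re actually checking.

    import random

    # Invented stand-in for the function under test: parses "dollars.cents" to cents.
    def parse_amount(text):
        dollars, _, cents = text.partition(".")
        return int(dollars) * 100 + int(cents or 0)

    # Generate varied inputs instead of the same canned values every run.
    def random_case():
        dollars, cents = random.randint(0, 10**6), random.randint(0, 99)
        return f"{dollars}.{cents:02d}", dollars * 100 + cents

    seed = random.randrange(2**32)
    random.seed(seed)
    print(f"seed: {seed}")  # report the seed so a surprising run can be replayed

    cases = [random_case() for _ in range(1000)]
    random.shuffle(cases)  # randomize the sequence as well as the data

    for text, expected in cases:
        actual = parse_amount(text)
        assert actual == expected, f"parse_amount({text!r}) -> {actual}, expected {expected}"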

For instance, consider introducing variation and invalid data into checks embedded in a performance test while turning up the stress. When we do that, we can discover how much load brings the product to its knees—how the product fails, and what happens next. That in turn affords the opportunity to find out whether the product copes with the overhead associated with error handling—which may result in feedback loops and cascades of stress and performance problems.
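
A toy sketch of that combination, with an invented handle_request function standing in for the system under test: mix invalid payloads in with valid ones, turn up the concurrency, and watch both the timing and how errors are handled.

    import concurrent.futures, random, time

    # Invented stand-in for the system under test.
    def handle_request(payload):
        if not isinstance(payload, dict) or "id" not in payload:
            raise ValueError("bad request")   # expected rejection of bad input
        time.sleep(0.001)                     # simulate a little work
        return {"id": payload["id"], "status": "ok"}

    def one_check(i):
        # Roughly one request in five carries invalid data.
        payload = {"id": i} if random.random() < 0.8 else random.choice([None, {}, "garbage"])
        start = time.perf_counter()
        try:
            handle_request(payload)
            outcome = "ok"
        except ValueError:
            outcome = "rejected"                  # graceful handling
        except Exception as exc:
            outcome = f"unexpected: {exc!r}"      # a problem worth investigating
        return outcome, time.perf_counter() - start

    # Turn up the stress and watch what degrades first.
    for workers in (1, 10, 50, 100):
        with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
            results = list(pool.map(one_check, range(500)))
        slowest = max(elapsed for _, elapsed in results)
        surprises = sum(1 for outcome, _ in results if outcome.startswith("unexpected"))
        print(f"{workers:3d} workers: slowest check {slowest:.3f}s, unexpected failures: {surprises}")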

We can use checks as benchmarks, too. If a function takes significantly and surprisingly more or less time to do its work after a change, we have reason to suspect a problem.
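
A minimal sketch of a check used that way, flagging a surprise in either direction; the baseline and the tolerance here are invented, and would normally come from measurements on a known-good build.

    import time

    # Invented stand-in for the function whose timing we're watching.
    def generate_report(rows):
        return sorted(rows)

    BASELINE_SECONDS = 0.05   # illustrative value from a known-good build
    TOLERANCE = 2.0           # flag runs more than 2x faster or slower

    rows = list(range(200_000, 0, -1))
    start = time.perf_counter()
    generate_report(rows)
    elapsed = time.perf_counter() - start

    ratio = elapsed / BASELINE_SECONDS
    if ratio > TOLERANCE or ratio < 1 / TOLERANCE:
        # Surprisingly slower or surprisingly faster: either is a reason to suspect a problem.
        print(f"benchmark surprise: {elapsed:.3f}s vs. baseline {BASELINE_SECONDS}s")
    else:
        print(f"within expected range: {elapsed:.3f}s")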

We could run checks in confirmatory ways, which, alas, is the only way most people seem to use them. But we can also design and run checks in a disconfirmatory and exploratory way, affording the discovery of bugs and problems. Checking is always embedded in testing, which is fundamentally exploratory to some degree. If we want to find problems, it would be a good idea to explore the test space, not just tread the same path over and over.