Rapid Software Testing in Toronto, June 4-6

May 3rd, 2018

I don’t usually give upcoming events an entire blog post; normally they go in the panel on the right. This one’s special, though.

I’m presenting a three-day Rapid Software Testing class, organized by TASSQ in Toronto, June 4-6, 2018.

Rapid Software Testing is focused on doing the fastest, least expensive testing that still completely fulfills testing’s mission: evaluating the product by learning about it through exploration and experimentation. Developers and managers—including those from organizations where there are few testers or none at all—are welcome and warmly invited. It’s the only public RST class being offered in Canada in 2018. Early bird pricing is available until Friday, May 4. Register here.

How Long Will the Testing Take?

April 27th, 2018

Today yet another tester asked “When a client asks ‘How long will the testing take for our development project?’, how should I reply?”

The simplest answer I can offer is this: if you’re testing as part of a development project, testing will take exactly as long as development will take. That’s because effective, efficient testing is not separate from development; it is woven into development.

When we develop a software product or service, we build things: designs, prototypes, functions, modules, components, interfaces, integrated components, services, complete systems… There is some degree of risk—the possibility of problems—associated with everything we build. To be successful, each element of a system must fit with other elements of the system, and must satisfy the people who are using and building the system.

Despite our best intentions, we’re likely to make some mistakes and experience some misunderstandings along the way. If we review things as we go, we’ll eliminate many problems before we turn them into code. Still, there’s risk in using elements when we haven’t built, configured, operated, observed, and evaluated them. Unless we actually try to use what we’ve built, look for problems in it, and develop a well-grounded, empirical understanding of it, we risk fooling ourselves about how it really works—and doesn’t.

So it’s a good idea to start testing things as developers are creating or assembling them—and to test how they fit with other elements of the system—from the moment we start to build them or put them together.

Testing, for a given element or system, stops when we have a reasoned belief that there is no more development work to be done. Testing stops when people are satisfied that they know enough about the system and its elements to have discovered and resolved all the important problems. That satisfaction will be easier to establish relatively quickly, for any given element or system, when the product is designed to be easy to test. If you’ve been testing each part of the product deeply, investigating risk, and addressing problems throughout development, the whole product will be easier to test.

For the project to be successful, it’s important for the whole team to keep discovering and identifying ways in which people might be dissatisfied. That includes not only finding problems in the product, but also finding problems in our concept of what it might mean to be “done”. It’s not the tester’s job to build confidence in the product, but to reveal where confidence is unwarranted. That’s central to testing work, even though it’s difficult, disappointing, and to some degree socially disruptive.

Asking how long testing will take is an odd question for managers to ask, because it’s just like asking how long management will take. Management of a development project starts as development starts, and ends as development ends. Testing enables awareness of the product and of problems in it, so that managers and developers can make decisions about it. So testing starts when the project starts, and testing stops when managers and developers have made their final decisions about it. Testing doesn’t stop until development and management of the project are done.

People often stop testing a product after it’s been put into production. It might be a good idea, though, to monitor and test some aspects of the system in production too. (We call that live-site testing.) Live-site testing is often a very good idea, but like all other forms of testing, it’s ultimately optional. Here’s one good reason to continue it: when you believe that there will be more development work done on the system.

So: development and management and testing go hand in hand. When a manager asks “How long will the testing take?”, it seems to me that the appropriate answer is “Testing will take just as long as development will take.” When people are satisfied that there’s no more important development work to do, they will also be satisfied that there’s no more important testing work to do either.

It’s important to remember, though: determining when people will be satisfied is something that we can guess, but cannot calculate. Satisfaction is a feeling, not a finish line.

Further reading:

How Long Will The Testing Take?
Project Estimation and Black Swans (Part 1)
Project Estimation and Black Swans (Part 5): Test Estimation

Very Short Blog Posts (35): Make Things Visible

April 24th, 2018

I hear a lot from testers who discover problems late in development, and who get grief for bringing them up.

On one level, the complaints are baseless, like holding an investigative journalist responsible for a corrupt government. On another level, there’s a way for testers to anticipate bad news and reduce the surprises: try producing a product coverage outline and a risk list.

A product coverage outline is an artifact (a mind map, or list, or table) that identifies factors, dimensions, or elements of a product that might be relevant to testing it. Those factors might include the product’s structure, the functions it performs, the data it processes, the interfaces it provides, the platforms upon which it depends, the operations that people might perform with it, and the way the product is affected by time. (Look here for more detail.) Sketches or diagrams can help too.

As you learn more through deeper testing, add branches to the map, or create more detailed maps of particular areas. Highlight areas that have been tested so far. Use colour to indicate the testing effort that has been applied to each area—and where coverage is shallower.
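
You don’t need special tooling for this. Here’s a minimal sketch in Python of a coverage outline kept as plain data; the product factors, areas, and depth ratings are all invented for illustration.

```python
# A minimal sketch of a product coverage outline kept as plain data.
# The factors, areas, and depth ratings here are invented examples;
# a real outline reflects your own product and your own judgment.
coverage_outline = {
    "Structure": {"installer": "deep", "plugin API": "shallow"},
    "Functions": {"search": "deep", "export to PDF": "none"},
    "Data": {"Unicode input": "shallow", "very large files": "none"},
    "Interfaces": {"REST API": "deep", "command line": "shallow"},
    "Platform": {"Windows 10": "deep", "macOS": "none"},
}

def report(outline):
    """Print the outline, flagging areas where coverage is still shallow."""
    for factor, areas in outline.items():
        print(factor)
        for area, depth in areas.items():
            flag = "" if depth == "deep" else "  <-- shallow or untested"
            print(f"  {area}: {depth}{flag}")

report(coverage_outline)
```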

A risk list is a list of bad things that might happen: Some person(s) will experience a problem with respect to something desirable that can be detected in some set of conditions because of a vulnerability in the system. Generate ideas on that, rank them, and list them.
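
Again, plain data can be enough to get started. Here’s a minimal sketch following the pattern above; every entry, and the simple impact-times-likelihood ranking, is invented for illustration.

```python
# A minimal sketch of a ranked risk list: some person, some problem,
# some conditions, some vulnerability. All entries are invented examples.
risks = [
    {"who": "account holders", "problem": "payments charged twice",
     "conditions": "retry after a network timeout",
     "vulnerability": "no idempotency check", "impact": 3, "likelihood": 2},
    {"who": "support staff", "problem": "cannot trace a failed transfer",
     "conditions": "any production incident",
     "vulnerability": "sparse logging", "impact": 2, "likelihood": 3},
]

# Rank and list: highest impact x likelihood first.
for r in sorted(risks, key=lambda r: r["impact"] * r["likelihood"], reverse=True):
    print(f"[{r['impact'] * r['likelihood']}] {r['who']}: {r['problem']} "
          f"({r['conditions']}; {r['vulnerability']})")
```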

At the beginning of the project, or as early as possible, post your coverage outline and risk list in places where people will see and read them. Update them daily. Invite questions and conversations. This can help you change “why didn’t you find that bug?” to “why didn’t we find that bug?”

Very Short Blog Posts (34): Checking Inside Exploration

April 23rd, 2018

Some might believe that checking and exploratory work are antithetical. Not so.

In our definition, checking is “the algorithmic process of operating and observing a product, applying decision rules to those observations, and reporting the outcome of those decision rules”.

We might want to use some routine checks, but not all checks have to be rote. We can harness algorithms and tools to induce variation that can help us find bugs. Both during development of a feature and when we’re studying and evaluating it, we can run checks that use variable input; in randomized sequences; at varying pace; all while attempting to stress out or overwhelm the product.
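
Here’s a minimal sketch in Python of a check with induced variation: it randomizes its input on every run rather than repeating one fixed, rote input. The apply_discount function under test and the decision rule are hypothetical, invented for illustration.

```python
import random

# A sketch of a check with induced variation: randomized inputs rather
# than one fixed, rote input. apply_discount(price, code) is hypothetical.
def check_discount_stays_in_range(apply_discount, trials=1000, seed=None):
    rng = random.Random(seed)  # pass a seed to reproduce a failing run
    codes = ["SAVE10", "save10", "", " SAVE10 ", "SAVE10" * 100]
    for _ in range(trials):
        price = rng.uniform(0.0, 1_000_000.0)   # varied magnitude
        code = rng.choice(codes)                # varied (and odd) inputs
        total = apply_discount(price, code)
        # Decision rule: a discounted total must stay within [0, price].
        assert 0 <= total <= price, (price, code, total)
```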

As another example, consider introducing variation and invalid data into checks embedded in a performance test while turning up the stress. When we do that, we can discover how much load it takes before the product falls to its knees—how the product fails, and what happens next. That in turn affords the opportunity to find out whether the product deals with the overhead associated with error handling—which may result in feedback loops and cascades of stress and performance problems.
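
Here’s a minimal sketch of that kind of check, assuming a hypothetical send_request client that returns an HTTP-style status code.

```python
import random
import time
from concurrent.futures import ThreadPoolExecutor

# A sketch of a stress check that mixes invalid data into rising load.
# send_request(payload) is a hypothetical client for the system under
# test, assumed to return an HTTP-style status code.
def stress_with_invalid_data(send_request):
    rng = random.Random(42)

    def make_payload():
        payload = {"amount": rng.uniform(1, 1000)}
        if rng.random() < 0.2:                 # 20% deliberately invalid
            payload["amount"] = "not-a-number"
        return payload

    def timed(payload):
        start = time.perf_counter()
        return send_request(payload), time.perf_counter() - start

    for workers in (1, 8, 32, 64):             # turn up the stress stepwise
        payloads = [make_payload() for _ in range(200)]
        with ThreadPoolExecutor(workers) as pool:
            results = list(pool.map(timed, payloads))
        latencies = sorted(t for _, t in results)
        p95 = latencies[int(0.95 * len(latencies)) - 1]
        errors = sum(1 for status, _ in results if status >= 500)
        # Watch how (not just whether) the product degrades under load.
        print(f"{workers:>2} workers: p95 = {p95:.3f}s, 5xx errors = {errors}")
```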

We can use checks as benchmarks, too. If a function takes significantly and surprisingly more or less time to do its work after a change, we have reason to suspect a problem.
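
A minimal sketch of such a benchmark check; fn, the baseline, and the tolerance heuristic are placeholders for whatever makes sense in your context.

```python
import statistics
import time

# A sketch of a timing check used as a benchmark. fn is the function
# under test; baseline_seconds comes from runs made before the change.
def check_timing(fn, baseline_seconds, runs=30, tolerance=2.0):
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    median = statistics.median(samples)
    # Flag surprising speedups as well as slowdowns; either is a reason
    # to investigate, not proof of a bug.
    surprising = not (baseline_seconds / tolerance
                      <= median
                      <= baseline_seconds * tolerance)
    if surprising:
        print(f"timing changed: median {median:.4f}s "
              f"vs baseline {baseline_seconds:.4f}s")
    return not surprising
```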

We could run checks in confirmatory ways, which, alas, is the only way most people seem to use them. But we can also design and run checks taking a disconfirmatory and exploratory approach, affording discovery of bugs and problems. Checking is always embedded in testing, which is fundamentally exploratory to some degree. If we want to find problems, it would be a good idea to explore the test space, not just tread the same path over and over.

Interview on Rapid Software Testing

March 27th, 2018

Not too long ago, I had a conversation with Joe Colantonio. Joe asks questions that prompt answers about the nature of testing, accountability, and telling the testing story.

Enjoy!

Very Short Blog Posts (33): Insufficient Information and Insufficient Time

March 19th, 2018

Here’s a question I get from testers quite a lot:

“What do I do when the developers give me something to test with insufficient information and time to test it?”

Here’s my quick answer: test it.

Here’s my practical answer: test it with whatever time and information you have available. (Testing is evaluating a product by learning about it through exploration and experimentation.) When your time is up, provide a report on what you have learned about the product, with particular focus on any problems you have found.

Identify the important risks and product factors of which you are aware, and which you have covered. (A product factor, or product element, is something that can be examined during a test, or that could influence the outcome of a test.) Identify important risks and product factors that you’re aware of and that you haven’t covered. Note the time and sources of information that you had available to you.

If part of the product or feature is obscure to you because you perceive that you have had insufficient information, time, or testability to learn about it, include that in your report.

(I’ll provide a deep answer to the question eventually, too.)

Related posts:

How Is the Testing Going?
Testability
Testing Problems Are Test Results

Four (and More) Questions for Testers to Ask

March 11th, 2018

Testers investigate problems and risk. Other people manage the project, design the product, and write the code. As testers, we participate in that process, but in a special way and from a special perspective: it’s our primary job to anticipate, seek, and discover problems.

We testers don’t prevent problems; we don’t design or build or fix the product. We may help to prevent existing problems from going any further, by discovering bugs, misunderstandings, issues, and risks and bringing them to light. With our help, the people who build and manage the project can address the problems we have revealed, and prevent worse problems down the line.

Over the last while, I’ve been working with clients that are “shifting left”, “going Agile”, “doing DevOps”, or “getting testers involved early”. Typically this takes the form of having a tester present for design discussions, planning meetings, grooming sessions, and the like.

This is usually a pretty good idea; if there is no one in the testing role, people tend not to think very deeply about testing—or about problems or risk. That’s why, even if you don’t have someone called “tester” on the team, it’s an awfully good idea to have someone in the testing role and the testing mindset. Here, I’ll call that person “tester”.

Alas, I’ve sometimes observed that, once invited to the meetings, testers are sometimes uncertain about what they’re doing there.

A while back, I proposed at least four things for testers to do in planning meetings: learning; advocating for testability; challenging what we’re hearing; and establishing our roles as testers. These activities help to enable sensemaking and critical thinking about the product and the project. How can testers do these things successfully? Here’s a set of targeted questions.

What are we building? Part of our role as testers is to come to a clear understanding of the system, product, feature, function, component, or service that we’re being asked to test. (I’ll say “product” from here on, but remember I could be referring to anything in the list.) We could be talking about the product itself or a representation of it. We could be looking at a diagram of it; reviewing a document or description of it; evaluating a workflow; playing with a prototype. Asking for any of these can help if we don’t have them already. A beneficial side effect is helping to refine everyone’s understanding of the product—and of how we’d achieve successful completion of the project or task.

So we might also ask: What will be there when we’ve built it? What are the bits and pieces? (Can we see a diagram?) What are the functions that the product offers; what should the product do? What gets input, processed, and output? (Do we have a data dictionary?) What does the product depend upon? What depends on the product? (Has someone prepared a list of dependencies? A list of what’s supported and what isn’t?)

For whom are we building it? If we’re building a product, we’re ultimately building it for people to use. Sometimes we make the mistake of over-focusing on a particular kind of user: the person who is immediately encountering the product, with eyes on screen and fingers on keyboard, mouse, or glass. Often, however, that person is an agent for someone else—for a bank teller’s application, think of the bank teller, but also think of the customer on the other side of the counter; the bank’s foreign exchange traders; the bank teller’s manager. Beyond using the product, there are other stakeholders: those who support it, connect to its APIs, test it, document it, profit from it, or defend it in court.

So we might also ask: Who else is affected by this product? Who do they work for, or with? What matters to them? (These questions are targeted towards value-related testability.) Who will support the product? Maintain it? Test it? Document it?

What could go wrong? The most important questions for testers to raise are questions about problems and risks. Developers, designers, business people, or others might discuss features or functions, but people who are focused on building a product are not always focused on how things could go badly. Switching from a builder’s mindset to a tester’s mindset is difficult for builders. For testers, it’s our job.

So we might also ask: What Bad Things could happen? What Good Things could fail to happen? Under what conditions might they happen or not happen? What might be missing? What might be there when it shouldn’t be there? And for whom are we not building this product—like hackers or thieves?

When something goes wrong, how would we know? Once again, this is a question about testability, and also a question about oracles. As James Bach has said, “software testing is the infinite art of comparing the invisible to the ambiguous to prevent the unthinkable from happening to the anonymous”. For any non-trivial program, there’s a huge test space to cover, and bugs and failures don’t always announce themselves. Part of our job is to think of the unthinkable and to help those invisible things to become visible so that we can find problems—ideally in the lab before we ship. Some problems might escape the lab (or our continuous deployment checks, if we’re doing that).

So we might also ask: How might we miss something going wrong? What do we need for intrinsic testability—at the very least, log files, scriptable interfaces, and code that has been reviewed, tested, and fixed as it’s being built? And what about subjective testability? Do we have the domain knowledge to recognize problems? What help might we need to obtain that? Do we have the specialist skills—in (for example) security, performance, or tooling—on the team? Do we need help there? If we’re working in a DevOps context, doing live site testing or testing in production, how would we detect problems rapidly?
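
As one concrete illustration of intrinsic testability, here’s a minimal sketch of structured logging in Python; the transfer function and its fields are hypothetical.

```python
import json
import logging

logger = logging.getLogger("transfers")

# A sketch of intrinsic testability through structured logging. The
# transfer() function and its fields are hypothetical; the point is that
# each decision leaves a machine-readable trace that testers (and
# production monitoring) can observe and query.
def transfer(source, target, amount):
    logger.info(json.dumps({"event": "transfer_requested",
                            "source": source, "target": target,
                            "amount": amount}))
    if amount <= 0:
        logger.warning(json.dumps({"event": "transfer_rejected",
                                   "reason": "non-positive amount"}))
        raise ValueError("amount must be positive")
    # ... perform the transfer here ...
    logger.info(json.dumps({"event": "transfer_completed",
                            "source": source, "target": target}))
```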

In sprint planning meetings, or design discussions, or feature grooming sessions, questions like these are important. Questions focused on problems don’t come naturally to many people, but asking such questions should be routine for testers. While everyone else is envisioning success, it’s our job to make sure that we’re anticipating failure. When everyone else is focused on how to build the product, it’s important for us to keep an eye on how the entire team can study and test it. When everyone else is creatively optimistic, it’s important for us to be pragmatically pessimistic.

None of the activities in planning and review replace testing of the product that is being built. But when we participate in raising problems and risks early on, we can help the team to prevent those problems—including problems that make testing harder or slower, allowing more bugs to survive undetected. Critical thinking now helps to enable faster and easier testing and development later.

Now a word from our sponsor: I help testers, developers, managers, and teams through consulting and training in Rapid Software Testing (RST). RST is a skill set and a mindset of testing focused on sharpening critical thinking, eliminating waste, and identifying problems that threaten the value of the product or the project, and the principles can be adapted to any development approach. If you need help with testing, please feel free to get in touch.

Signing Off

March 1st, 2018

Testers ask: “I’m often given a product to test, but not enough time to test it. How am I supposed to sign off on the release when I haven’t tested enough?”

My reply goes like this:

If you’re a tester, it seems profoundly odd to me that you are responsible for signing off on a release. The decision to release a product is a business decision, not a technical one. The decision is, of course, informed by technical considerations, so it’s entirely reasonable for you to provide a report on those considerations. It would be foolish for someone in a business role to ignore your report. But it would be just as foolish for someone in a business role to abdicate the business decision to technical people. We serve the business; we don’t run it, and technical people often aren’t privy to things in the business’ domain.

The idea that testers can either promote or prevent a release can be tested easily. Try refusing to sign off until you’re entirely happy with what you know about the product, and with the extent of what you know. You’ll get a fairly quick result.

Perhaps managers will go along with you. You’ll test the product until you are satisfied that you have covered everything important, that you have discovered all the significant problems, that those problems have been fixed, and that the discovery of any more problems that matter would be a big surprise to you. Your client (that is, the person to whom you report) will give you all the time and all the resources you need until the product meets your standards. Yes, that seems unlikely to me, too.

More likely, at some point you will be overruled. Management will decide that your concerns are not important enough to block the release. Then you will be able to say, “So, the decision to approve or prevent the release is not really my decision? Cool.” Then, perhaps silently, “I’m glad that’s your job after all. I’m happy not being a fake product owner.”

Without a product owner’s authority (and title, and salary), I’m not willing to sign—or even make—a go/no-go decision. That decision is not my responsibility, but the responsibility of people with the authority to make it. If they ask for my recommendation, I’ll provide a list of problems that I know about, and any reasons to believe that there might be more problems lurking. Then I will politely recommend that they weigh these against the needs and objectives of everyone else involved—development, operations, support, sales, marketing, finance, and so forth.

So what if someone asks you to sign something? I am willing to sign an account of what I know about the status of the product, and what I’ve done to learn about it. That account can be fairly concise, but in expanded form, it will look like this:

I typically start with “I’ve tested the product, and I’ve found these problems in it.” I then provide a list of a few problems that I believe to be most important to inform management’s decisions. For the rest, I’ll provide a summary, including patterns or classes of general problems, or pointers to problem reports. My special focus is on the problems; as newspaper reporters will tell you, “if it bleeds, it leads”. I’ll welcome requests for more information. If there’s any doubt about the matter, I emphasize that the decision to ship or not rests with the person responsible for making the release decision.

(Some people claim “the whole team decides when to release and when not to”. That’s true when the whole team agrees, or when disagreements are tractable. When they’re not, in every business situation I’ve been in, there is a single person who is ultimately responsible for the release decision.)

If I haven’t found any problems—which is rare—I won’t sign anything claiming that there are no problems in the product. I’m willing to assert that I’m not aware of any problems. I cannot responsibly say that there are no problems, or that I’m capable of finding all problems. To say that there are no problems is only an inference; to say that I’m not aware of any problems is a fact.

Whether I’ve found problems or not, I’m willing to sign a statement like this: “I have covered these conditions to this degree, and I have not covered these other conditions.” The conditions include specific product factors and quality criteria, like those found in the Heuristic Test Strategy Model, or others that are specific to the project’s context. This gives warrant to my statement that there are problems (or that I’m not aware of them), and identifies why management should be appropriately trusting and appropriately skeptical about my evaluation. For an informed release decision, management needs to know about things I haven’t covered, and my perception of the risk associated with not covering them.

Happy news about the product might be worth mentioning, but it takes second place to reporting the problems and risks. I want to make sure that any concerns I have are prominent and not buried in my report.

I’m also willing to sign a statement saying “Here are some of the things that helped me, and here are some of the things that didn’t help; things that slowed my testing down, made it more difficult, reduced the breadth and depth of coverage I was able to obtain.” Whether I sign such a statement or not, I want to make sure I’ve been heard. I also want to offer some ideas that address the obstacles, and note that with management help, maybe we can reduce or remove some of them so that I can provide more timely, more valuable coverage of the product. When I can do that, I can find deeper, or more subtle, or more intermittent, and possibly more dangerous bugs.

Of course, I don’t run the project. There may be business considerations that prevent management from helping me to address the obstacles. If I’ve been heard, I’ll play the hand I’ve been dealt; I’ll do my best to address any problems I’ve got, using any resources I can bring to bear. It’s my job to make management aware of any risks associated with not dealing with the obstacles—on paper or in a quick email, if I’m worried about accountability. After that, decisions on how to manage the project belong with management.

In other words: I’m prepared to sign off on a three-part testing story. As a tester, I’m prepared to accept responsibility for my story about the quality of the product, but the product does not have to adhere to my quality standards. I’ll sign off on a report, but not on a decision. The business doesn’t need my permission to ship.

Finding the Happy Path

February 28th, 2018

In response to yesterday’s post on The Happy Path, colleague and friend Albert Gareev raises an important issue:

Until we sufficiently learned about the users, the product, and the environment, we have no idea what usage pattern is a “happy path” and what would be the “edge cases”.

I agree with Albert. (See more of what he has to say here.) This points to a kind of paradox in testing and development: some say that we can’t test the product unless we know what the requirements are—yet we don’t know what many of the requirements are until we’ve tested! Testing helps to reduce ambiguity, uncertainty, and confusion about the product and about its requirements—and yet we don’t know how to test until we’ve tried to test!

Here’s how I might address Albert’s point:

To test a whole product or system means more than demonstrating that it can work, based on the most common or optimistic patterns of its use. We might start testing the whole system there, but if we wanted to develop a comprehensive understanding of it, we wouldn’t stop at that.

On the other hand, the whole system consists of lots of sub-systems, elements, components, and interactions with other things. Each of those can be seen as a system in itself, and studying those systems contributes to our understanding of the larger system.

We build systems, and we build ideas on how to test them. At each step, considering only the most capable, attentive, well-trained users; preparing only the most common or favourable environments; imagining only the most optimistic scenarios; performing only happy-path testing on each part of the product as we build it; all of these present the risk of misunderstanding not only the product but also the happy paths and edge cases for the greater system. If we want to do excellent testing, all of these things—and our understanding of them—must not only be demonstrated, but must be tested as well. This means we must do more than creating a bunch of high-level, automated, confirmatory checks at the beginning of the sprint, and then declaring victory when they all “pass”.

Quality above depends on quality below; excellent testing above depends on excellent testing below. It’s testing all the way down—and all the way up, too.

Very Short Blog Posts (32): The Happy Path

February 26th, 2018

“Happy path testing” isn’t really testing at all. Following the “happy path” is a demonstration.

Here’s the role demonstration plays in testing: it’s nice to know that your product can achieve the happy path before you start to test it. To the degree that a demonstration is a test, it’s a very shallow test.

If you’re building something new and non-trivial that matters to people, or that could harm people, there’s a risk that you might not entirely understand it or what it affects. To develop your understanding, you’ll probably want to test it; to learn about it; to investigate it; to interact with it directly; to probe it with tools; to stress it out. You’ll probably want to explore it and experiment with it; to evaluate it. That’s testing.

If you can’t even achieve the happy path, you’re not ready for testing.

Related posts:
Finding the Happy Path
Testing and Checking Refined
Why Checking is Not Enough
Acceptance Tests: Let’s Change the Title, Too
More of What Testers Find
Why We Do Scenario Testing