Blog Posts for the ‘Strategy’ Category

To Go Deep, Start Shallow

Wednesday, October 13th, 2021

Here are two questions that testers ask me pretty frequently:

How can I show management the value of testing?
How can I get more time to test?

Let’s start with the second question first. Do you feel overwhelmed by the product space you’ve been assigned to cover relative to the time you’ve been given? Are you concerned that you won’t have enough time to find problems that matter?

As testers, it’s our job to help to shine light on business risk. Some business risk is driven by problems that we’ve discovered in the product—problems that could lead to disappointed users, bad reviews, support costs… More business risk comes from deeper problems that we haven’t discovered yet, because our testing hasn’t covered the product sufficiently to reveal those problems.

All too often, managers allocate time and resources for testing based on limited, vague, and overly optimistic ideas about risk. So here’s one way to bring those risk ideas to light, and to make them more vivid.

  • Start by surveying the product and creating a product coverage outline that identifies what is there to be tested, where you’ve looked for problems so far, and where you could look more deeply for them. If you’ve already started testing, that’s okay; you can start your product coverage outline now.
  • As you go, develop a risk list based on bugs (known problems that threaten the value of the product), product risks (potential deeper, unknown problems in the product in areas that have not yet been covered by testing), and issues (problems that threaten the value of the testing work). Connect these to potential consequences for the business. Again, if you’re not already maintaining a risk list, you can start now.
  • And as you go, try performing some quick testing to find shallow bugs.

By “quick testing”, I mean performing fast, inexpensive tests that take little time to prepare and little effort to perform. As such, small bursts of quick testing can be done spontaneously, even when you’re in the middle of a more deliberative testing process. Fast, inexpensive testing of this nature often reveals shallow, easy-to-find bugs.

In general, in a quick test, we rapidly encounter some aspect of the product, and then apply fast and easy oracles. Here are just a few examples of quick testing heuristics. I’ve given some of them deliberately goofy and informal names. Feel free to rename them, and to create your own list.

Blink. Load the same page in two browsers and switch quickly between them. Notice any significant differences?
Instant Stress. Overload a field with an overwhelming amount of data (consider PerlClip, BugMagnet or similar lightweight tools; or just use a text editor to create a huge string by copying and pasting, or a short script like the sketch after this list); then try to save or complete the transaction. What happens?
Pathological Data. Provide data to a field that should trigger input filtering (reserved HTML characters, emojis…). Is the input handled appropriately?
Click Frenzy. Click in the same (or different) places rapidly and relentlessly. Any strange behaviours? Processing problems (especially at the back end)?
Screen Survey. Pause whatever you’re doing for a moment and look over the screen; see anything obviously inconsistent?
Flood the Field. Try filling each field to its limits. Is all the data visible? What were the actual limits? Is the team okay with them—or surprised to hear about them? What happens when you save the file or commit the transaction?
Empty Input. Leave “mandatory” fields empty. Is an error message triggered? Is the error message reasonable?
Ooops. Make a deliberate mistake, move on a couple of steps, and then try to correct it. Does the system allow you to correct your “mistake” appropriately, or does the mistake get baked in?
Pull Out the Rug. Start a process, and interrupt or undermine it somehow. Close the laptop lid; close the browser session; turn off wi-fi. If the process doesn’t complete, does the system recover gracefully?
Tug-of-War. Try grabbing two resources at the same time when one should be locked. Does a change in one instance affect the other?
Documentation Dip. Quickly open the spec or user manual or API documentation. Are there inconsistencies between the artifact and the product?
One Shot Stop. Try an idempotent action—doing something twice that should effect a change the first time, but not subsequent times, like upgrading an account status to the top tier and then trying to upgrade it again. Did a change happen the second time?
Zoom-Zoom. Grow or shrink the browser window (remembering that some people don’t see too well, and others want to see more). Does anything disappear?
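Several of the quick tests above (Instant Stress, Pathological Data, Flood the Field, Empty Input) depend on having interesting data at hand. Here’s a minimal, throwaway sketch of generating that kind of data; it’s in Python purely as an illustration, the names and sizes are assumptions rather than prescriptions, and tools like PerlClip or BugMagnet offer much richer options.

```python
# A throwaway sketch for generating quick-test inputs.
# The specific sizes and values here are illustrative assumptions, not prescriptions.

def quick_test_inputs():
    return {
        "instant_stress": "A" * 1_000_000,          # an overwhelming amount of data for one field
        "reserved_html": "<b>&\"'</b><script>",     # characters that input filtering should handle
        "emoji": "🧪🐞💥" * 50,                      # multi-byte characters
        "whitespace_only": " \t\n",                  # looks filled in, but isn't really
        "empty": "",                                 # for "mandatory" fields
    }

if __name__ == "__main__":
    for name, value in quick_test_inputs().items():
        print(f"{name}: {len(value)} characters")
```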

It might be tempting for some people to dismiss shallow bugs. “That’s an edge case.” “No user will do that.” “That’s not the right way to use the product.” “The users should read the manual.” Sometimes those things might even be true. Dismissing shallow bugs too casually, without investigation, could be a mistake, though.

Quick, shallow testing is like panning for gold: you probably won’t make much from the flakes and tiny nuggets on their own, but if you do some searching farther upstream, you might hit the mother lode. That is: shallow bugs should prompt at least some suspicion about the risk of deeper, more systemic problems and failure patterns in the product. In the coverage outline and risk list you’re developing, highlight areas where you’ve encountered those shallow bugs. Make these part of your ongoing testing story.

Now: you might think you don’t have time for quick testing, or to investigate those little problems that lead you to big problems. “Management wants me to finish running through all these test cases!” “Management wants me to turn these test cases into automated checks!” “Management needs me to fix all these automated checks that got out of sync with the product when it changed!”

If those are your assignments from management, you may feel like your testing work is being micromanaged, but is it? Consider this: if managers were really scrutinizing your work carefully, there’s a good chance that they would be horrified at the time you’re spending on paperwork, or on fighting with your test tools, or on trying to teach a machine to recognise buttons on a screen, only to push them repeatedly to demonstrate that something can work. And they’d probably be alarmed at how easily problems can get past these approaches, and they’d be surprised at the volume of bugs you’re finding without them—especially if you’re not reporting how you’re really finding the bugs.

Because managers are probably not observing you every minute of every day, you may have more opportunity for quick tests than you think, thanks to disposable time.

Disposable time, in the Rapid Software Testing namespace, is our term for time that you can afford to waste without getting into trouble; time when management isn’t actually watching what you’re doing; moments of activity that can be invested to return big rewards. Here’s a blog post on disposable time.

You almost certainly have some disposable time available to you, yet you might be leery about using it.

For instance, maybe you’re worried about getting into trouble for “not finishing the test cases”. It’s a good idea to cover the product with testing, of course, but structuring testing around “test cases” might be an unhelpful way to frame testing work, and “finishing the test cases” might be a kind of goal displacement, when the goal is finding bugs that matter.

Maybe your management is insisting that you create automated GUI checks, a policy arguably made worse by intractable “codeless” GUI automation tools that are riddled with limitations and bugs. This is not to say that automated checking is a bad thing. On the contrary: it’s a pretty reasonable idea for developers to automate low-level output checks that give them fast feedback about undesired changes. It might also be a really good idea for testers to exercise the product using APIs or scriptable interfaces for testing. But why should testers be recapitulating developers’ lower-level checks while pointing machinery at the machine-unfriendly GUI? As my colleague James Bach says, “When it comes to technical debt, GUI automation is a vicious loan shark.”

If you feel compelled to focus on those assignments, consider taking a moment or two, every now and again, to perform a quick test like the ones above. Even if your testing is less constrained and you’re doing deliberative testing that you find valuable, it’s worthwhile to defocus on occasion and try a quick test. If you don’t find a bug, oh well. There’s still a good chance that you’ll have learned a little something about the product.

If you do find a bug and you only have a couple of free moments, at least note it quickly. If you have a little more time, try investigating it, or looking for a similar bug nearby. If you have larger chunks of disposable time, consider creating little tools that help you to probe the product; writing a quick script to generate interesting data; popping open a log file and scanning it briefly. All focus and no defocus makes Jack—or Jill—a dull tester.
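As one illustration of that kind of disposable-time tooling, here’s a minimal sketch of the “pop open a log file and scan it briefly” idea. It’s written in Python purely for illustration; the default file name and the list of suspicious keywords are assumptions that you’d adjust to your own product.

```python
# A disposable sketch: scan a log for lines that might deserve a second look.
# The default path and the keyword list are assumptions; adjust them to your product.
import re
import sys

SUSPICIOUS = re.compile(r"error|exception|fail|timeout|warn", re.IGNORECASE)

def scan(path):
    with open(path, encoding="utf-8", errors="replace") as log:
        for number, line in enumerate(log, start=1):
            if SUSPICIOUS.search(line):
                print(f"{path}:{number}: {line.rstrip()}")

if __name__ == "__main__":
    scan(sys.argv[1] if len(sys.argv) > 1 else "application.log")
```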

Remember: almost always, the overarching goal of testing is to evaluate the product by learning about it, with a special focus on finding problems that matter to developers, managers, and customers. How do we get time to do that in the most efficient way we can? Quick, shallow tests can provide us with some hints on where to suspect risk. Once found, those problems themselves can help to provide evidence that more time for deep testing might be warranted.

Several years ago, I was listening while James Bach was teaching a testing workshop. “If I find enough important problems quickly enough,” he said, “the managers and developers will be all tied up in arguing about how to fix them before the ship date. They’ll be too busy to micromanage me; they’ll leave me alone.”

You can achieve substantial freedom to captain your own ship of testing work when you consistently bring home the gold to developers and managers. The gold, for testers, is awareness and evidence of problems that make managers say “Ugh… but thank heavens that the tester found that problem before we shipped.”

If you’re using a small fraction of your time to find problems and explore more valuable approaches to finding them, no one will notice on those rare occasions when you’re not successful. But if you are successful, by definition you’ll be accomplishing something valuable or impressive. Discovering shallow bugs, treating them as clues that point us towards deeper problems, finding those, and then reporting responsibly can show how productive spontaneous bursts of experimentation can be. The risks you expose can earn you more time and freedom to do deeper, more valuable testing.

Which brings us back to the first question, way above: “How can I show management the value of testing?”

Even a highly disciplined and well-coordinated development effort will result in some bugs. If you’re finding bugs that matter—hidden, rare, elusive, emergent, surprising, important, bone-chilling problems that have got past the disciplined review and testing that you, the designers and the developers have done already—then you won’t need to do much convincing. Your bug reports and risk lists will do the convincing for you. Rapid study of the product space builds your mental models and points to areas for deeper examination. Quick, cheap little experiments help you to learn the product, and to find problems that point to deeper problems. Finding those subtle, startling, deep problems starts with shallow testing that gets deeper over time.


Rapid Software Testing Explored for Europe and points east runs November 22-25, 2021. A session for daytime in the Americas and evenings in Europe runs January 17-20, 2022.

If We Do Sanity Testing Before Release, Do We Have To Do Regression Testing?

Monday, December 3rd, 2018

Here is an edition of the reply I offered to a question that someone asked on Quora. Bear in mind that it might be a good idea to follow the links for context.

If we do sanity testing before release, do we have to do regression testing?

What if I told you Yes? What if I told you No?

Some questions shouldn’t be answered. That is: some questions shouldn’t be answered with a Yes or a No without addressing the context first. No one can give you a good answer to your question unless they know you, your product, and your project’s context.

Even after that problem is addressed, people outside your context may not know what you mean by regression testing or sanity testing, and you can’t be sure of knowing what they mean. That applies to other terms in the conversation, too; maybe they’ll talk about “manual testing”; I don’t believe there’s such a thing as “manual testing”. Maybe you agree with them now; maybe you’ll agree with me after you’ve read the linked post. Or maybe after you read this one.

Some people will suggest that regression testing and sanity testing are fundamentally different somehow; I’d contend that a sanity test may be a shallow form of regression testing, when the sanity test is what I’ve talked about here, and when regression testing is testing focused on regression- or change-related risk. In order to sort that out, you’d have to talk it through to make sure that you’re not in shallow agreement.

Nonetheless, some people will try to answer your question. To prepare you for some of those answers: it’s probably not very helpful to think about needing to do one kind of testing or the other. It’s probably more helpful to think in terms of what you and your organization want to do, and choosing what to do based on what (you believe) you know about your product, and what (you believe) you want to know about it, given the situation.

While this is not an exhaustive list, here are a few factors to consider:

  • Do you and the developers already have a lot of experience with your product?
  • Is your product being developed in a careful, disciplined way?
  • Are the developers making small, simple, incremental changes whose risks they comprehend well?
  • Is the product relatively well insulated from dependencies on platforms (hardware, operating systems, middleware, browsers…) that vary a lot?
  • Are there already plenty of unit-level checks in place, such that the developers are likely to be aware of low-level coding errors early and easily?
  • Is it unusual to do a shallow pass through the features of the product and find bugs that are sticking out like a sore thumb?
  • Do you and the developers feel like you’re working at a sustainable pace?

If the answer to all of those questions is Yes, then maybe your regression testing can afford to be more focused on deep, rare, hidden, subtle, emergent problems, which are unlikely to be revealed by a sanity test. Or maybe your product (or a given feature, or a given change, or whatever you’re focused on) entails relatively low risk, so deep regression testing isn’t necessary and a sanity test will do. Or maybe your product is poorly-understood and has changed a lot, so both sanity checking and deep regression testing could be important to you.

I can offer things for you to think about, but I don’t think it’s appropriate for me or anyone else to answer your question for you. The good news is that if you study testing seriously, practice testing, and learn to test, you’ll be able to make this determination in collaboration with your team, and answer the question for yourself.

James Bach and I teach Rapid Software Testing to help people to become smart, powerful, helpful, independent testers, with agency over their work. If you want help with learning about Rapid Software Testing for yourself or for your team, find out how you can attend a public class, live or on line, or request one in-house.

Exploratory Testing on an API? (Part 2)

Tuesday, July 17th, 2018

Summary:  Loops of exploration, experimentation, studying, modeling, and learning are the essence of testing, not an add-on to it. The intersection of activity and models (such as the Heuristic Test Strategy Model) helps us to perform testing while continuously developing, refining, and reviewing it. Testing is much more than writing a bunch of automated checks to confirm that the product can do something; it’s an ongoing investigation in which we continuously develop our understanding of the product.

Last time out, I began the process of providing a deep answer to this question:

Do you perform any exploratory testing on APIs? How do you do it?

That started with reframing the first question

Do you perform any exploratory testing on APIs?

into a different question

Given a product with an API, do you do testing?

The answer was, of course, Yes. This time I’ll turn to addressing the question “How do you do it?” I’ll outline my thought process and the activities that I would perform, and how they feed back on each other.

Note that in Rapid Software Testing, a test is an action performed by a human; neither a specific check nor a scripted test procedure. A test is a burst of exploration and experiments that you perform. As part of that activity, a test might include thousands of automated checks within it, or just one, or none at all. Part of the test may be written down, encoded as a specific procedure. Testing might be aided by tools, by documents or other artifacts, or by process models. But the most important part of testing is what testers think and what testers do.

(Note that when I say “testers” here, I mean any person who is either permanently or temporarily in a testing role. “Tester” applies to a dedicated tester; a solo programmer switching from the building mindset to the tester mindset; or a programmer or DevOps person examining the product in a group without dedicated testers.)

It doesn’t much matter where I start, because neither learning nor testing happen in straight lines. They happen in loops, cycles, epicycles; some long and some short; nested inside each other; like a fractal. Testing and learning entail alternation between focusing and defocusing; some quick flashes of insight, some longer periods of reflection; smooth progress at some times, and frequent stumbling blocks at others. Testing, by nature, is an exploratory process involving conversation, study, experimentation, discovery, investigation that leads to more learning and more testing.

As for anything else I might test, when I’m testing a product through an API, I must develop a strategy. In the Rapid Software Testing namespace, your strategy is the set of ideas that guide the design, development, and selection of your tests.

Having the Heuristic Test Strategy Model in my head and periodically revisiting it helps me to develop useful ideas about how to cover the product with testing. So as I continue to describe my process, I’ll annotate what I’m describing below with some of the guideword heuristics from the HTSM.
The references will look like this.

A word of caution, though:  the HTSM isn’t a template or a script.  As I’m encountering the project and the product, test ideas are coming to me largely because I’ve internalized them through practice, introspection, review, and feedback.  I might use the HTSM generatively, to help ideas grow if I’m having a momentary drought; I might use it retrospectively as a checklist against which I review and evaluate my strategy and coverage ideas; or I might use it as a means of describing testing work and sharing ideas with other people, as I’m doing here.

Testing the RST way starts with evaluating my context. That starts with taking stock of my mission, and that starts with the person giving me my mission. Who is my client—that is, to whom am I directly answerable? What does my client want me to investigate?

I’m helping someone—my client, developers, or other stakeholders—to evaluate the quality of the product. Often when we think about value, we think about value to paying customers and to end users, but there are plenty of people who might get value from the product, or have that value threatened. Quality is value to some person who matters, so whose values do we know might matter? Who might have been overlooked?
Project Environment/Mission

Before I do anything else, I’ll need to figure out—at least roughly—how much time I’ll have to accomplish the mission. While I’m at it, I’ll ask other time-related questions about the project: are there any deadlines approaching? How often do builds arrive? How much time should I dedicate to preparing reports or other artifacts?
Project Environment/Schedule

Has anyone else tested this product? Who are they? Where are they? Can I talk to them? If not, did they produce results or artifacts that will help me? Am I on a team? What skills do we have? What skills do we need?
Project Environment/Test Team

What does my client want me to provide? A test report, almost certainly, and bug reports, probably—but in what form? Oral conversations or informally written summaries? I’m biased towards keeping things light, so that I can offer rapid feedback to clients and developers. Would the client prefer more formal approaches, using particular reporting or management tools? As much as the client might like that, I’ll also note whenever I see costs of formalization.

What else might the client, developers, and other stakeholders want to see, now or later on? Input that I’ve generated for testing? Code for automated checks? Statistical test results? Visualizations of those results? Tools that I’ve crafted and documentation for them? A description of my perception of the product? Formal reports for regulators and auditors?
Project Environment/Deliverables

I’ll continue to review my mission and the desired deliverables throughout the project.

Having checked on my mission, I proceed to simple stuff so that I can start the process of learning about the product. I can start with any one of these things, or with two or more of them in parallel.

So what is this thing I’m about to test? What is there to know?
Project Environment/Test Item
Product Elements

I talk to the developers, if they’re available. Even better, I participate in design and planning sessions for the product, if I can. My job at such meetings is to learn, to advocate for testability, to bring ideas and ask questions about problems and risks. I ask about testing that the developers have done, and the checking that they’ve set up.
Project Environment/Developer Relations

If I’ve been invited to the party late or not at all, I’ll make a note of it. I want to be as helpful as possible, but I also want to keep track of anything that makes my testing harder or slower, so that everyone can learn from that. Maybe I can point out that my testing will be better-informed the earlier and the more easily I can engage with the product, the project, and the team.

I examine the documentation for the API and for the rest of the product.
Project Environment/Information

I want to develop an understanding of the product: the services it offers, the means of controlling it, and its role in the systems that surround it. I annotate the documentation or take separate notes, so that I can remember and discuss my findings later on. As I do so, I pay special attention to things that seem inconsistent or confusing.

If I’m confused, I don’t worry about being confused. I know that some of my confusion will dissipate as I learn about the product. Some of my confusion might suggest that there are things that I need to learn. Some of my confusion might point to the risk that the users of the product will be confused too. Confusion can be a resource, an oracle, and a motivator, as long as I don’t mind being confused for a while.

As I’m reading the documentation, I ask myself “What simple, ordinary, normal things can I do with the product?” If I have the product available, I’ll do sympathetic testing by trying a few basic requests, using a tool that provides direct interaction with the product through its API. Perhaps it’s a tool developed in-house; perhaps it’s a tool crafted for API testing like Postman or SOAPUI; or maybe I’ll use an interpreter like Ruby’s IRB along with some helpful libraries like Net::HTTP or HTTParty.
Project Environment/Equipment and Tools
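
As a purely illustrative aside: the tools named above (Postman, SOAPUI, Ruby’s IRB with Net::HTTP or HTTParty) are the ones mentioned in the post; a comparably lightweight first sympathetic request, sketched here in Python with only the standard library against a hypothetical endpoint, might look something like this.

```python
# A minimal, sympathetic first request against a hypothetical API endpoint.
# The base URL and path are assumptions; substitute your product's real API.
import json
from urllib import request

BASE = "https://api.example.test/v1"   # hypothetical base URL

def get(path):
    with request.urlopen(BASE + path) as response:
        print(response.status, response.headers.get("Content-Type"))
        body = response.read().decode("utf-8", errors="replace")
        try:
            print(json.dumps(json.loads(body), indent=2))   # pretty-print JSON responses
        except json.JSONDecodeError:
            print(body[:500])                               # otherwise, show a brief excerpt

if __name__ == "__main__":
    get("/widgets")   # a simple, ordinary, normal request to start learning the product
```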

I might develop a handful of very simple scripts, or I might retain logs that the tool or the interpreter provides. I’m just as likely to throw this stuff away as I am to keep it. At this stage, my focus is on learning more than on developing formal, reusable checks. I’ll know better how to test and check the product after I’ve tried to test it.

If I find a bug—any kind of inconsistency or misbehaviour that threatens the value of the product—I’ll report it right away, but that’s not all I’ll report. If I have any problems with trying to do sympathetic testing, I’ll report them immediately. They may be usability problems, testability problems, or both at once. At this stage of the project, I’ll bias my choices towards the fastest, least expensive, and least formal reporting I can do.

My primary goal at this point, though, is not to find bugs, but to figure out how people might use the API to get access to the product, how they might get value from it, and how that value might be threatened. I’m developing my models of the product; how it’s intended to work, how to use it, and how to test it. Learning about the product in a comprehensive way prepares me to find better bugs—deeper, subtler, less frequent, more damaging.

To help the learning stick, I aspire to be a good researcher: taking notes; creating diagrams; building lists of features, functions, and risks; making mind maps; annotating existing documentation. Periodically I’ll review these artifacts with programmers, managers, or other colleagues, in order to test my learning.

Irrespective of where I’ve started, I’ll iterate and go deeper, testing the product and refining my models and strategies as I go. We’ll look at that in the next exciting installment.