Blog Posts for the ‘Testability’ Category

Four (and More) Questions for Testers to Ask

Sunday, March 11th, 2018

Testers investigate problems and risk. Other people manage the project, design the product, and write the code. As testers, we participate in that process, but in a special way and from a special perspective: it’s our primary job to anticipate, seek, and discover problems.

We testers don’t prevent problems; we don’t design or build or fix the product. We may help to prevent existing problems from going any farther, by discovering bugs, misunderstandings, issues, and risks and bringing them to light. With our help, the people who build and manage the project can address the problems we have revealed, and prevent worse problems down the line.

Over the last while, I’ve been working with clients that are “shifting left”, “going Agile”, “doing DevOps”, or “getting testers involved early”. Typically this takes the form of having a tester present for design discussions, planning meetings, grooming sessions, and the like.

This is usually a pretty good idea; if there is no one in the testing role, people tend not to think very deeply about testing—or about problems or risk. That’s why, even if you don’t have someone called “tester” on the team, it’s an awfully good idea to have someone in the testing role and the testing mindset. Here, I’ll call that person “tester”.

Alas, I’ve observed that, once invited to the meetings, testers are sometimes uncertain about what they’re doing there.

A while back, I proposed at least four things for testers to do in planning meetings: learning; advocating for testability; challenging what we’re hearing; and establishing our roles as testers. These activities help to enable sensemaking and critical thinking about the product and the project. How can testers do these things successfully? Here’s a set of targeted questions.

What are we building? Part of our role as testers is to come to a clear understanding of the system, product, feature, function, component, or service that we’re being asked to test. (I’ll say “product” from here on, but remember I could be referring to anything in the list.) We could be talking about the product itself or a representation of it. We could be looking at a diagram of it; reviewing a document or description of it; evaluating a workflow; playing with a prototype. Asking for any of these can help if we don’t have them already. A beneficial side effect is helping to refine everyone’s understanding of the product—and how we’d achieve successful completion of the project or task.

So we might also ask: What will be there when we’ve built it? What are the bits and pieces? (Can we see a diagram?) What are the functions that the product offers; what should the product do? What gets input, processed, and output? (Do we have a data dictionary?) What does the product depend upon? What depends on the product? (Has someone prepared a list of dependencies? A list of what’s supported and what isn’t?)

For whom are we building it? If we’re building a product, we’re ultimately building it for people to use. Sometimes we make the mistake of over-focusing on a particular kind of user: the person who is immediately encountering the product, with eyes on screen and fingers on keyboard, mouse, or glass. Often, however, that person is an agent for someone else—for a bank teller’s application, think of the bank teller, but also think of the customer on the other side of the counter; the bank’s foreign exchange traders; the bank teller’s manager. Beyond using the product, there are other stakeholders: those who support it, connect to its APIs, test it, document it, profit from it, or defend it in court.

So we might also ask: Who else is affected by this product? Who do they work for, or with? What matters to them? (These questions are targeted towards value-related testability.) Who will support the product? Maintain it? Test it? Document it?

What could go wrong? The most important questions for testers to raise are questions about problems and risks. Developers, designers, business people, or others might discuss features or functions, but people who are focused on building a product are not always focused on how things could go badly. Switching from a builder’s mindset to a tester’s mindset is difficult for builders. For testers, it’s our job.

So we might also ask: What Bad Things could happen? What Good Things could fail to happen? Under what conditions might they happen or not happen? What might be missing? What might be there when it shouldn’t be there? And for whom are we not building this product—like hackers or thieves?

When something goes wrong, how would we know? Once again, this is a question about testability, and also a question about oracles. As James Bach has said, “software testing is the infinite art of comparing the invisible to the ambiguous to prevent the unthinkable from happening to the anonymous”. For any non-trivial program, there’s a huge test space to cover, and bugs and failures don’t always announce themselves. Part of our job is to think of the unthinkable and to help those invisible things to become visible so that we can find problems—ideally in the lab before we ship. Some problems might escape the lab (or our continuous deployment checks, if we’re doing that).

So we might also ask: How might we miss something going wrong? What do we need for intrinsic testability? At the very least: log files, scriptable interfaces, and code that has been reviewed, tested, and fixed as it’s being built. And what about subjective testability? Do we have the domain knowledge to recognize problems? What help might we need to obtain that? Do we have the specialist skills—in (for example) security, performance, or tooling—on the team? Do we need help there? If we’re working in a DevOps context, doing live site testing or testing in production, how would we detect problems rapidly?
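
As one concrete illustration, here’s a minimal sketch (in Python) of one of the cheapest oracles that log files afford: scan the log for ERROR-level events and tally activity by level and component. The log path and line format here are hypothetical; adapt them to whatever your product actually emits.

```python
# A minimal log-scanning oracle, assuming the product writes timestamped,
# levelled lines such as:
#   2018-03-11 09:15:02 ERROR payment: gateway timeout after 30s
# The path and format are hypothetical stand-ins.
import re
from collections import Counter

LOG_LINE = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) "
    r"(?P<level>DEBUG|INFO|WARN|ERROR) "
    r"(?P<component>\w+): (?P<message>.*)$"
)

def scan_log(path):
    """Tally events by (level, component) and collect the ERROR lines."""
    counts, errors = Counter(), []
    with open(path, encoding="utf-8") as log:
        for line in log:
            match = LOG_LINE.match(line.strip())
            if not match:
                continue  # unstructured lines are themselves worth a look
            counts[(match["level"], match["component"])] += 1
            if match["level"] == "ERROR":
                errors.append(line.strip())
    return counts, errors

if __name__ == "__main__":
    counts, errors = scan_log("product.log")
    for (level, component), n in sorted(counts.items()):
        print(f"{level:5} {component:12} {n}")
    print(f"{len(errors)} ERROR line(s) found")
```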

In sprint planning meetings, or design discussions, or feature grooming sessions, questions like these are important. Questions focused on problems don’t come naturally to many people, but asking such questions should be routine for testers. While everyone else is envisioning success, it’s our job to make sure that we’re anticipating failure. When everyone else is focused on how to build the product, it’s important for us to keep an eye on how the entire team can study and test it. When everyone else is creatively optimistic, it’s important for us to be pragmatically pessimistic.

None of the activities in planning and review replace testing of the product that is being built. But when we participate in raising problems and risks early on, we can help the team to prevent those problems—including problems that make testing harder or slower, allowing more bugs to survive undetected. Critical thinking now helps to enable faster and easier testing and development later.

Now a word from our sponsor: I help testers, developers, managers, and teams through consulting and training in Rapid Software Testing (RST). RST is a skill set and a mindset of testing focused on sharpening critical thinking, eliminating waste, and identifying problems that threaten the value of the product or the project, and the principles can be adapted to any development approach. If you need help with testing, please feel free to get in touch.

(At Least) Four Things for Testers To Do in Planning Meetings

Wednesday, October 18th, 2017

There’s much talk these days of DevOps, and Agile development, and “shift left”. Apparently, in these process models, it’s a revelation that testers can do more than test a built product, and that testers can and should be involved at every step of development.

In Rapid Software Testing, that’s not exactly news. From the beginning, we’ve rejected the idea that the product has to be complete, or has to pass some kind of “quality gate” or meet “acceptance criteria” before we start testing. We welcome the opportunity to test anything that anyone is willing to give us. We’ll happily do testing at any time from the moment someone has an idea for a product until long after the product has been released.

When testers are invited to planning meetings, there’s clearly no product to test. So what are we there for?

We’re there to learn. Testing is evaluating a product by learning about it through exploration and experimentation. At the meeting, there is a product to test. Running code is not the only kind of product we can test—not by a long shot. Ideas, designs, documents, drawings, and prototypes are products too. We can explore them, and perform thought experiments on them—and we can learn about them and evaluate them.

At the meeting, we’re there to learn about the product; to learn about the technology; to learn about the contexts in which the product will be used; to learn about plans for building the product. Our role is to become aware of all of the sources of information that might aid in our testing, and in development of the product generally. We’re there to find out about risks that threaten the value of the product in the short and long term, and about problems that might threaten the on-time, successful completion of the product.

We’re there to advocate for testability. Testability might happen by accident, without our help. It’s the role of a responsible tester to make sure that testability happens intentionally, by design. Note that testability is not just about stuff that’s intrinsic to the product. There are factors in the project, in our notions of value, and in our understanding of the risk gap that influence testability. Testability is also subjective with respect to us, our knowledge and skills, and our relationship to the team. So part of our jobs during preparation for development is to ask for the help we’ll need to make ourselves more powerful testers.

We’re there to challenge. Other people are in roles oriented towards building the product. They are focused on synthesis, and envisioning success. If they’re designers, they might be focused on helping the user to accomplish a task, on efficiency, or effectiveness, or on esthetics. If they’re business people, they might be focused on accomplishing some business goal, or meeting a deadline. Developers are often focused more on the details than on the big picture. All of those people may be anxious to declare and meet a definition of “done”.

The testing role is to think critically about the product and the project; to ask how we might be fooling ourselves. We’re tilted towards asking good questions instead of getting “the right answer”; towards analysis more than synthesis; towards skepticism and suspicion more than optimism; towards anticipating problems more than seeking solutions. We can do those other things, but when we do, we pop for that moment out of a testing role and into a building role.

As testers, we’re trying to notice problems in what people are talking about in the meetings. We’re trying to identify obstacles that might hinder the user’s task; ways in which the product might be ineffective, inefficient, or unappealing. We’re trying to recognize how the business goal might not be met, or how the deadline could be blown. We’re alternating between small details and the big picture. We’re trying to figure out how our definition of done might be inadequate; how we might be fooling ourselves into believing we’re done when we’re not. We’re here to challenge the idea that something is okay when it might not be okay.

We’re there to establish our roles as testers. A role is a heuristic that helps in managing time, focus, and responsibility. The testing role is a commitment to perform valuable and necessary services: to focus on discovering problems, ideally early when they’re small, so that they can be prevented from turning into bigger problems later; to build a product and a project that affords rapid, inexpensive discovery and learning; and to challenge the ideas and artifacts that represent what we think we know about the product and its design. These tasks are socially, psychologically, emotionally, and politically difficult. Unless we handle them gracefully, our questioning, problem-focused role will be seen as merely disruptive, rather than an essential part of the process of building something excellent.

In Rapid Software Testing, we don’t claim that someone must be in the testing role, or must have the job title “tester”. We do believe that having someone responsible for the testing role helps to put focus on the task of providing helpful feedback. This should be a service to the project, not an obstacle. It requires us to keep social distance short while maintaining a good deal of critical distance.

Of course, the four things that I’ve mentioned here can be done in any development model. They can be done not only in planning meetings, but at any time when we are engaging with others, at any time in the product’s development, at any level of granularity or formality. DevOps and Agile and “shift left” are context. Testing is always testing.

Some related posts:

What Exploratory Testing Is Not (Part 2): After-Everything-Else Testing

Exploratory Testing and Review

Exploratory Testing is All Around You

Testers Don’t Prevent Problems

What Is A Tester?

Testing is…

A Context-Driven Approach to Automation in Testing

Sunday, January 31st, 2016

(We interrupt the previously-scheduled—and long—series on oracles for a public service announcement.)

Over the last year James Bach and I have been refining our ideas about the relationships between testing and tools in Rapid Software Testing. The result is this paper. It’s not a short piece, because it’s not a light subject. Here’s the abstract:

There are many wonderful ways tools can be used to help software testing. Yet, all across industry, tools are poorly applied, which adds terrible waste, confusion, and pain to what is already a hard problem. Why is this so? What can be done? We think the basic problem is a shallow, narrow, and ritualistic approach to tool use. This is encouraged by the pandemic, rarely examined, and absolutely false belief that testing is a mechanical, repetitive process.

Good testing, like programming, is instead a challenging intellectual process. Tool use in testing must therefore be mediated by people who understand the complexities of tools and of tests. This is as true for testing as for development, or indeed as it is for any skilled occupation from carpentry to medicine.

You can find the article here. Enjoy!

Very Short Blog Posts (20): More About Testability

Monday, July 14th, 2014

A few weeks ago, I posted a Very Short Blog Post on the bare-bones basics of testability. Today, I saw a very good post from Adam Knight talking about telling the testability story. Adam focused, as I did, on intrinsic testability—things in the product itself that make it more testable. But testability isn’t just a product attribute. In Heuristics of Testability (material we developed in a session of Rapid Software Testing Intensive Online), James Bach shows that testability is a set of relationships between product (“intrinsic testability”); project (“project-related testability”); tester (“subjective testability”); what we want from the product (“value-related testability”); and how we know what we know and what we need to know (“epistemic testability”).

Be sure of this: anything that makes testing harder or slower gives bugs more time or more opportunities to hide. In telling an expert and compelling story of our testing, it’s essential to identify and address things that make it harder to understand the product we’ve got—things that help to increase the risk that it won’t be the product our clients want.

Very Short Blog Posts (18): Ask for Testability

Saturday, May 3rd, 2014

Whether you’re working in an Agile environment or not, one of the tester’s most important tasks is to ask and advocate for things that make a product more testable. Where to start? Think about visibility—in its simplest form, log files—and controllability in the form of scriptable application programming interfaces (APIs).

Logs aren’t just for troubleshooting. Comprehensive log files can help to identify the data that was processed and the functions that were covered during testing. Logs can be parsed to gather statistics or processed with visualization tools to reveal interesting patterns of behaviour. Ask for consistent structure, precise time stamps, and configurable levels of logging.
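
For instance, here’s a minimal sketch (Python; the log format and function names are hypothetical) of the statistics-gathering described above: tally which functions the log says were exercised during a test run, and flag any that never appear.

```python
# Mining a log for coverage statistics, assuming each line names the
# function that handled a request, e.g.
#   2014-05-03T10:02:41Z INFO handle_transfer ok 12ms
# The format and the function inventory are hypothetical stand-ins.
from collections import Counter

KNOWN_FUNCTIONS = {"handle_login", "handle_transfer", "handle_logout"}

def coverage_from_log(path):
    """Count how often each function name appears in the log."""
    seen = Counter()
    with open(path, encoding="utf-8") as log:
        for line in log:
            fields = line.split()
            if len(fields) >= 3:
                seen[fields[2]] += 1  # third field names the function
    return seen

if __name__ == "__main__":
    seen = coverage_from_log("app.log")
    for fn in sorted(KNOWN_FUNCTIONS):
        print(f"{fn:16} exercised {seen.get(fn, 0)} time(s)")
    untouched = KNOWN_FUNCTIONS - set(seen)
    if untouched:
        print("never exercised:", ", ".join(sorted(untouched)))
```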

A scriptable API affords the opportunity for testers to drive the program at high speed or high volume, in well-ordered, variable, or randomized sequences. A scripting interface can allow testers to observe the program’s data structures, query its internal states, or adjust its configuration quickly and easily. Use APIs and tools for more than functional checking; use them for sophisticated, automation-assisted exploration. As a bonus, an API can add to the value of your product by making its functions more accessible to your customers.
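
To make that concrete, here’s a minimal sketch of automation-assisted exploration through a scriptable interface. Since no particular product is in view, the Accumulator class below is a stand-in for whatever API your product exposes; the point is the high-volume, randomized driving and the consistency check against queryable internal state.

```python
# Automation-assisted exploration via a scriptable API. Accumulator is a
# stand-in for the product under test; everything else is the technique.
import random

class Accumulator:
    """Stand-in product API: add/undo operations plus a state query."""
    def __init__(self):
        self.total, self.history = 0, []
    def add(self, n):
        self.total += n
        self.history.append(n)
    def undo(self):
        if self.history:
            self.total -= self.history.pop()
    def state(self):
        return self.total

random.seed(20140503)           # fixed seed, so any failure can be replayed
product = Accumulator()
for step in range(10_000):      # a high-volume, randomized call sequence
    if random.random() < 0.7:
        product.add(random.randint(-5, 5))
    else:
        product.undo()
    # Oracle: the queryable state must always equal the sum of the history.
    assert product.state() == sum(product.history), f"inconsistent at step {step}"
print("10,000 randomized steps; state stayed consistent throughout")
```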

You can’t depend on getting log files and APIs without asking for them. So, starting with your current sprint, ask early and ask often.

Testability

Monday, July 6th, 2009

On Twitter, Kindly Reader @jrl7 (in real life, John Lambert at Microsoft) asks “Is there an example of testability that doesn’t involve improving ability to automate? (improved specs?)”

(Update, June 5 2014: For a fast and updated answer, see Heuristics of Software Testability.)

Yup. If testing is questioning a product in order to evaluate it, then testability is anything that makes it easier to question or evaluate that product. So testability is anything that makes the program faster or easier to test on some level. Anything that slows down testing or makes it harder reduces testability, which gives bugs an opportunity to hide for longer, or to conceal themselves better.

To me, testability is enabled, extended, enhanced, accelerated or intensified by initiating or improving on some of the things below, either on their own or in combination with others. That suggests that testability ideas are media, in the McLuhan sense. Thus each idea comes with a cautionary note.  As McLuhan pointed out, when a medium is stretched beyond its original or intended capacities, it reverses into the opposite of the intended effect. So the following ideas are heuristic, which means that any of these things could help, but might fail to help or might make things worse if considered or applied automatically or unwisely.

In accordance with Joel Spolsky’s Law of Leaky Abstractions, I’ve classified them into a few leaky categories.

Product Elements

  • Scriptable interfaces to the product, so that we can drive it more easily with automation.
  • Logging of inputs, outputs, or activities within the program. Structure helps; time stamps help; a variety of levels of detail help.
  • Real-time monitoring of the internals of the application via another window, a debug port, or output over the network—anything like that. Printers, for example, often come with displays that can tell us something about what’s going on.
  • Internal consistency checks within the program. For example, if functions depend on network connectivity, and the connection is lost, the application can let us know that instead of simply failing a function. (See the sketch after this list.)
  • Overall simplicity and modularity of the application, especially in the separation of user interface code from program code. This one needs to be balanced with the knowledge that simpler modules tend to mean more numerous modules, which leads to growth in the number of interfaces, which in turn may mean more interface problems. There are no free lunches, but maybe there are less expensive lunches. Note also that simplicity and complexity are not attributes of a program; they’re relationships between the program and the person observing it. What looks horribly complex to a tester might look simple and elegant to a programmer; what looks frightfully complex to the programmer might look straightforward to a marketer.
  • Use of resource files for localization support, rather than hard-coding of location-dependent strings, dialogs, currencies, time formats, and the like.
  • Readable and maintainable code, thanks to pairing or other forms of technical review, and to refactoring.
  • Restraint in platform support. Very generally, the fewer computers or operating systems or browsers or application framework versions or third-party library versions that we have to support, the easier the testing job will be.
  • Restraint in feature support. Very generally, and all other things being equal, the more features, the longer it takes to test a program.
  • Finally, but perhaps most importantly, an application that’s in good shape before the testers get to it. That can be achieved, at least to some degree, by diligent testing by programmers. That testing can be based on unit tests or (perhaps better yet) a test-first approach such as test- or behaviour-driven development. Why does this kind of testing make a program more testable? If we’re investigating and reporting bugs that we find or (worse) spending time trying to work around blocking bugs, we slow down, and we’re compromised in our ability to obtain test coverage.
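
Here’s the sketch promised above for internal consistency checks, in Python. The host, port, and error identifier are hypothetical; the point is that a network-dependent function verifies its precondition up front and reports an identifiable problem rather than failing mysteriously.

```python
# An internal consistency check: verify a network dependency before use
# and report plainly when it is missing. Host, port, and the NET-001
# identifier are hypothetical stand-ins.
import socket

class ConnectivityError(RuntimeError):
    pass

def require_connection(host="example.com", port=443, timeout=3.0):
    """Raise a clear, identifiable error if the network dependency is gone."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return
    except OSError as err:
        raise ConnectivityError(
            f"NET-001: cannot reach {host}:{port} ({err}); "
            "sync is unavailable until the connection is restored"
        ) from err

def sync_accounts():
    require_connection()           # fail fast, loudly, and diagnosably
    print("syncing accounts...")   # the real work would happen here

if __name__ == "__main__":
    try:
        sync_accounts()
    except ConnectivityError as problem:
        print(problem)
```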

Usability

Things that make a program faster or easier to use tend to make it faster or easier to test, especially when testing at the user interface level. Any time you see a usability problem in an application, you may be seeing a testability problem too.

  • Ease of learning—that is, the extent to which the application allows the user to achieve expertise in its use.
  • Ease of use—that is, the extent to which the application supports the user in completing a task quickly and reliably.
  • Affordance—that is, the extent to which the application advertises its available features and functions to the user.
  • Clearer error and/or exception messages. This could include unique identifiers to help us to target specific points in the code, a notion of what the problem was, or which file was not found, thank you. (A sketch follows this list.)
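
Here’s the sketch: a clearer, identifiable error message in Python. The CFG-404 identifier scheme is hypothetical; the point is one unique string to search for in the code and the logs, plus the name of the actual file involved.

```python
# An error message with a unique identifier and the specific file name,
# rather than a bare "file not found". The CFG-404 scheme is hypothetical.
from pathlib import Path

def load_config(path):
    p = Path(path)
    if not p.exists():
        # CFG-404 gives testers and support staff one unique string to
        # search for; the message names the file the program wanted.
        raise FileNotFoundError(f"CFG-404: configuration file not found: {p.resolve()}")
    return p.read_text(encoding="utf-8")

if __name__ == "__main__":
    try:
        load_config("settings.ini")
    except FileNotFoundError as problem:
        print(problem)
```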

Oracles

An oracle is a principle or mechanism by which we might recognize a problem. Information about how the system is intended to work is a source of oracles.

  • Better documentation. Note that, for testability purposes, “better” documentation doesn’t necessarily mean “more complete” or “more elaborate” or “thicker”. It might mean “more concise” or “more targeted towards testing” or “more diagrams and illustrations that allow us to visualize how things happen”.
  • Clear reference to relevant standards, and information as to when and why those standards might be relevant or not.
  • “Live oracles”—people who can help us in determining whether we’re seeing appropriate behaviour from the application, when that’s the most efficient mode of knowledge transfer. Programmers, business analysts, product owners, technical support people, end-users, more experienced testers—all are candidates for being live oracles.
  • Programs that give us a comparable result for some feature, function, or state within our product. Such programs may have been created within our organization or outside; they may have been created for testing or for some other purpose; they may be products that are competitors to our own. (A sketch of this kind of comparison follows this list.)
  • Availability of old versions is a special case of the comparable program heuristic. Having an old version of a product around for comparison may help to make the current version of our program easier to test.
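
Here’s the sketch promised above of a comparable-result oracle, in Python. Both functions are stand-ins: in practice the product side would be the feature under test, and the reference might be an old version, a competitor’s product, or a simple model trusted for this purpose.

```python
# A comparable-result oracle: check the product's answers against an
# independent reference over many inputs. Both implementations here are
# stand-ins for illustration.
import math

def product_sqrt(x):
    """Stand-in for the feature under test (Newton's method)."""
    guess = x if x > 1 else 1.0
    for _ in range(50):
        guess = (guess + x / guess) / 2
    return guess

def reference_sqrt(x):
    """Stand-in for the comparable program."""
    return math.sqrt(x)

mismatches = [
    x for x in range(1, 100_000)
    if abs(product_sqrt(x) - reference_sqrt(x)) > 1e-9 * reference_sqrt(x)
]
print(f"{len(mismatches)} mismatch(es) against the reference oracle")
```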

Equipment and Tools

  • Access to existing ad hoc (in the sense of “purpose-built”, not sloppy) test tools, and help in creating them where needed. Note that a test tool is not merely a program that probes the application under test. It might be a data-generation tool, an oracle program that supplies us with a comparable result, or a program that allows us to set up a new platform with minimal fuss.
  • Availability of test environments. In big organizations and on big projects, I’ve never worked with a test organization that believed it had sufficient isolated platforms for testing.

Build, Setup, and Configuration

  • More rapid building and integration of the product, including automated smoke tests that help us to determine if the program has been built correctly. (See the sketch after this list.)
  • Simpler or accelerated setup of the application.
  • The ability to change settings or configuration of the application on the fly.
  • Access to source control logs helps us to identify where a newly-discovered problem or a regression might have crept into the code.
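
Here’s the sketch promised above of a minimal smoke check. So that the example runs anywhere, the commands use the Python interpreter itself as a stand-in for your build artifact; substitute the product’s own command line.

```python
# A minimal post-build smoke check: start the product, ask it something
# trivial, and fail loudly if the answer is wrong. The checks below use
# the Python interpreter as a stand-in for a real build artifact.
import subprocess
import sys

CHECKS = [
    ([sys.executable, "--version"], "Python"),      # stand-in: product --version
    ([sys.executable, "-c", "print(2 + 2)"], "4"),  # stand-in: a trivial request
]

def smoke():
    for command, expected in CHECKS:
        result = subprocess.run(command, capture_output=True, text=True, timeout=30)
        output = result.stdout + result.stderr
        if result.returncode != 0 or expected not in output:
            print(f"SMOKE FAIL: {' '.join(command)} -> {output.strip()!r}")
            return 1
    print("smoke checks passed; the build is worth deeper testing")
    return 0

if __name__ == "__main__":
    sys.exit(smoke())
```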

Project and Process

  • Availability of modules separately for early testing, especially at the integration level.
  • Information about what has already been tested, so we can leverage the information or avoid repeating someone else’s efforts.
  • Access to source code for those of us who can read and interpret it.
  • Proximity of testers, programmers, and other members of the project community.
  • Project community support for testing. Testing is often made much longer and more complicated by constraints imposed by forces that are external to the project. IT managers, for good reasons of their own, are often reluctant to grant autonomy to test groups.
  • Tester skill is inter-related with testability. It might not make sense to put a scriptable interface into your product if you’re not going to use it yourself and you don’t anticipate your testers having the skill to use it either. That might sound undesirable, yet over the years much great software has been produced without test automation assistance. Still, it’s usually worthwhile to have at least some members of the test team skilled in automation, and to give them a program for which those skills are useful.
  • Stability, or an absence of turbulence, both in the product and in the team that’s producing it. Things that are changing all the time are usually harder to test.

Want more ideas? Have a look at James Bach’s Heuristics of Testability. But in general, ask yourself, “What’s slowing me down in my ability to test this product, and how might I solve that problem?”

Postscript: Bret Pettichord contacted me with a link to this paper, at the end of which he surveys several different definitions of testability.