Blog Posts from March, 2018

Interview on Rapid Software Testing

Tuesday, March 27th, 2018

Not too long ago, I had a conversation with Joe Colantonio. Joe asks questions that prompt answers about the nature of testing, accountability, and telling the testing story.

Enjoy!

Very Short Blog Posts (33): Insufficient Information and Insufficient Time

Monday, March 19th, 2018

Here’s a question I get from testers quite a lot:

“What do I do when the developers give me something to test with insufficient information and time to test it?”

Here’s my quick answer: test it.

Here’s my practical answer: test it with whatever time and information you have available. (Testing is evaluating a product by learning about it through exploration and experimentation.) When your time is up, provide a report on what you have learned about the product, with particular focus on any problems you have found.

Identify the important risks and product factors that you’re aware of and that you have covered. (A product factor, or product element, is something that can be examined during a test, or that could influence the outcome of a test.) Identify the important risks and product factors that you’re aware of but that you haven’t covered. Note the time and sources of information that you had available to you.

If part of the product or feature is obscure to you because you perceive that you have had insufficient information, time, or testability to learn about it, include that in your report.

(I’ll provide a deep answer to the question eventually, too.)

Related posts:

How Is the Testing Going?
Testability
Testing Problems Are Test Results

Four (and More) Questions for Testers to Ask

Sunday, March 11th, 2018

Testers investigate problems and risk. Other people manage the project, design the product, and write the code. As testers, we participate in that process, but in a special way and from a special perspective: it’s our primary job to anticipate, seek, and discover problems.

We testers don’t prevent problems; we don’t design or build or fix the product. We may help to prevent existing problems from going any farther, by discovering bugs, misunderstandings, issues, and risks and bringing them to light. With our help, the people who build and manage the project can address the problems we have revealed, and prevent worse problems down the line.

Over the last while, I’ve been working with clients that are “shifting left”, “going Agile”, “doing DevOps”, or “getting testers involved early”. Typically this takes the form of having a tester present for design discussions, planning meetings, grooming sessions, and the like.

This is usually a pretty good idea; if there is no one in the testing role, people tend not to think very deeply about testing—or about problems or risk. That’s why, even if you don’t have someone called “tester” on the team, it’s an awfully good idea to have someone in the testing role and the testing mindset. Here, I’ll call that person “tester”.

Alas, I’ve observed that, once invited to the meetings, testers are sometimes uncertain about what they’re doing there.

A while back, I proposed at least four things for testers to do in planning meetings: learning; advocating for testability; challenging what we’re hearing; and establishing our roles as testers. These activities help to enable sensemaking and critical thinking about the product and the project. How can testers do these things successfully? Here’s a set of targeted questions.

What are we building? Part of our role as testers is to come to a clear understanding of the system, product, feature, function, component, or service that we’re being asked to test. (I’ll say “product” from here on, but remember I could be referring to anything in the list.) We could be talking about the product itself or a representation of it. We could be looking at a diagram of it; reviewing a document or description of it; evaluating a workflow; playing with a prototype. Asking for any of these can help if we don’t have them already. A beneficial side effect is helping to refine everyone’s understanding of the product—and how we’d achieve successful completion of the project or task.

So we might also ask: What will be there when we’ve built it? What are the bits and pieces? (Can we see a diagram?) What are the functions that the product offers; what should the product do? What gets input, processed, and output? (Do we have a data dictionary?) What does the product depend upon? What depends on the product? (Has someone prepared a list of dependencies? A list of what’s supported and what isn’t?)

For whom are we building it? If we’re building a product, we’re ultimately building it for people to use. Sometimes we make the mistake of over-focusing on a particular kind of user: the person who is immediately encountering the product, with eyes on screen and fingers on keyboard, mouse, or glass. Often, however, that person is an agent for someone else—for a bank teller’s application, think of the bank teller, but also think of the customer on the other side of the counter; the bank’s foreign exchange traders; the bank teller’s manager. Beyond using the product, there are other stakeholders: those who support it, connect to its APIs, test it, document it, profit from it, or defend it in court.

So we might also ask: Who else is affected by this product? Who do they work for, or with? What matters to them? (These questions are targeted towards value-related testability.) Who will support the product? Maintain it? Test it? Document it?

What could go wrong? The most important questions for testers to raise are questions about problems and risks. Developers, designers, business people, or others might discuss features or functions, but people who are focused on building a product are not always focused on how things could go badly. Switching from a builder’s mindset to a tester’s mindset is difficult for builders. For testers, it’s our job.

So we might also ask: What Bad Things could happen? What Good Things could fail to happen? Under what conditions might they happen or not happen? What might be missing? What might be there when it shouldn’t be there? And for whom are we not building this product—like hackers or thieves?

When something goes wrong, how would we know? Once again, this is a question about testability, and also a question about oracles. As James Bach has said, “software testing is the infinite art of comparing the invisible to the ambiguous to prevent the unthinkable from happening to the anonymous”. For any non-trivial program, there’s a huge test space to cover, and bugs and failures don’t always announce themselves. Part of our job is to think of the unthinkable and to help those invisible things to become visible so that we can find problems—ideally in the lab before we ship. Some problems might escape the lab (or our continuous deployment checks, if we’re doing that).
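
To make the idea of an oracle slightly more concrete, here’s a minimal sketch in Python. Everything in it is invented for illustration: product_convert stands in for some behaviour of the product, and the independent reference computation acts as a partial oracle. Agreement doesn’t prove the product correct, but disagreement makes an otherwise invisible inconsistency visible, and worth investigating.

    # A minimal sketch of a comparison oracle. The names here are
    # invented for the example; product_convert stands in for some
    # real behaviour of the product under test.
    from decimal import Decimal

    def reference_convert(amount, rate):
        # An independent, deliberately simple computation to compare against.
        return (Decimal(str(amount)) * Decimal(str(rate))).quantize(Decimal("0.01"))

    def check_conversion(product_convert, amount, rate):
        expected = reference_convert(amount, rate)
        actual = product_convert(amount, rate)
        if Decimal(str(actual)) != expected:
            # A mismatch doesn't tell us which side is wrong; it tells
            # us there's something here worth investigating.
            print(f"possible problem: {amount} @ {rate}: "
                  f"product={actual}, reference={expected}")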

So we might also ask: How might we miss something going wrong? What do we need for intrinsic testability—at the very least, log files, scriptable interfaces, and code that has been reviewed, tested, and fixed as it’s being built? And what about subjective testability? Do we have the domain knowledge to recognize problems? What help might we need to obtain that? Do we have the specialist skills—in (for example) security, performance, or tooling—on the team? Do we need help there? If we’re working in a DevOps context, doing live site testing or testing in production, how would we detect problems rapidly?
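
As one small illustration of intrinsic testability, consider a sketch like this one in Python (the feature, names, and fields are invented for the example). A function that emits a structured log event is easier to observe and question than one that works silently; and because it’s plainly callable, it’s scriptable too: a test harness can drive it directly, and a production monitor can watch the log stream for trouble.

    import json
    import logging
    import time

    logging.basicConfig(level=logging.INFO)  # so the demo actually prints
    log = logging.getLogger("orders")

    def apply_discount(order_total, discount_code):
        # Hypothetical feature logic, simplified for illustration.
        rate = {"SPRING10": 0.10, "VIP20": 0.20}.get(discount_code, 0.0)
        discounted = round(order_total * (1 - rate), 2)
        # A structured log event: a tester, a developer, or a production
        # monitor can see what the product decided and why, without a debugger.
        log.info(json.dumps({
            "event": "discount_applied",
            "ts": time.time(),
            "code": discount_code,
            "rate": rate,
            "total_in": order_total,
            "total_out": discounted,
        }))
        return discounted

    # Driving the feature directly, as a harness (or a tester) would:
    print(apply_discount(100.00, "SPRING10"))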

In sprint planning meetings, or design discussions, or feature grooming sessions, questions like these are important. Questions focused on problems don’t come naturally to many people, but asking such questions should be routine for testers. While everyone else is envisioning success, it’s our job to make sure that we’re anticipating failure. When everyone else is focused on how to build the product, it’s important for us to keep an eye on how the entire team can study and test it. When everyone else is creatively optimistic, it’s important for us to be pragmatically pessimistic.

None of the activities in planning and review replace testing of the product that is being built. But when we participate in raising problems and risks early on, we can help the team to prevent those problems—including problems that make testing harder or slower, allowing more bugs to survive undetected. Critical thinking now helps to enable faster and easier testing and development later.

Now a word from our sponsor: I help testers, developers, managers, and teams through consulting and training in Rapid Software Testing (RST). RST is a skill set and a mindset of testing focused on sharpening critical thinking, eliminating waste, and identifying problems that threaten the value of the product or the project, and the principles can be adapted to any development approach. If you need help with testing, please feel free to get in touch.

Signing Off

Thursday, March 1st, 2018

Testers ask: “I’m often given a product to test, but not enough time to test it. How am I supposed to sign off on the release when I haven’t tested enough?”

My reply goes like this:

If you’re a tester, it seems profoundly odd to me that you are responsible for signing off on a release. The decision to release a product is a business decision, not a technical one. The decision is, of course, informed by technical considerations, so it’s entirely reasonable for you to provide a report on those considerations. It would be foolish for someone in a business role to ignore your report. But it would be just as foolish for someone in a business role to abdicate the business decision to technical people. We serve the business; we don’t run it, and technical people often aren’t privy to things in the business’s domain.

The idea that testers can either promote or prevent a release can be tested easily. Try refusing to sign off until you’re entirely happy with what you know about the product, and with the extent of what you know. You’ll get a fairly quick result.

Perhaps managers will go along with you. You’ll test the product until you are satisfied that you have covered everything important, that you have discovered all the significant problems, that those problems have been fixed, and that the discovery of any more problems that matter would be a big surprise to you. Your client (that is, the person to whom you report) will give you all the time and all the resources you need until the product meets your standards. Yes, that seems unlikely to me, too.

More likely, at some point you will be overruled. Management will decide that your concerns are not important enough to block the release. Then you will be able to say, “So, the decision to approve or prevent the release is not really my decision? Cool.” Then, perhaps silently, “I’m glad that’s your job after all. I’m happy not being a fake product owner.”

Without a product owner’s authority (and title, and salary), I’m not willing to sign—or even make—a go/no-go decision. That decision is not my responsibility, but the responsibility of people with the authority to make it. If they ask for my recommendation, I’ll provide a list of problems that I know about, and any reasons to believe that there might be more problems lurking. Then I will politely recommend that they weigh these against the needs and objectives of everyone else involved—development, operations, support, sales, marketing, finance, and so forth.

So what if someone asks you to sign something? I am willing to sign an account of what I know about the status of the product, and what I’ve done to learn about it. That account can be fairly concise, but in expanded form, it will look like this:

I typically start with “I’ve tested the product, and I’ve found these problems in it.” I then provide a list of a few problems that I believe to be most important to inform management’s decisions. For the rest, I’ll provide a summary, including patterns or classes of general problems, or pointers to problem reports. My special focus is on the problems; as newspaper reporters will tell you, “if it bleeds, it leads”. I’ll welcome requests for more information. If there’s any doubt about the matter, I emphasize that the decision to ship or not rests with the person responsible for making the release decision.

(Some people claim “the whole team decides when to release and when not to”. That’s true when the whole team agrees, or when disagreements are tractable. When they’re not, in every business situation I’ve been in, there is a single person who is ultimately responsible for the release decision.)

If I haven’t found any problems—which is rare—I won’t sign anything claiming that there are no problems in the product. I’m willing to assert that I’m not aware of any problems. I cannot responsibly say that there are no problems, or that I’m capable of finding all problems. To say that there are no problems is only an inference; to say that I’m not aware of any problems is a fact.

Whether I’ve found problems or not, I’m willing to sign a statement like this: “I have covered these conditions to this degree, and I have not covered these other conditions.” The conditions include specific product factors and quality criteria, like those found in the Heuristic Test Strategy Model, or others that are specific to the project’s context. This gives warrant to my statement that there are problems (or that I’m not aware of them), and identifies why management should be appropriately trusting and appropriately skeptical about my evaluation. For an informed release decision, management needs to know about things I haven’t covered, and my perception of the risk associated with not covering them.

Happy news about the product might be worth mentioning, but it takes second place to reporting the problems and risks. I want to make sure that any concerns I have are prominent and not buried in my report.

I’m also willing to sign a statement saying “Here are some of the things that helped me, and here are some of the things that didn’t help: things that slowed my testing down, made it more difficult, or reduced the breadth and depth of coverage I was able to obtain.” Whether I sign such a statement or not, I want to make sure I’ve been heard. I also want to offer some ideas that address the obstacles, and note that with management help, maybe we can reduce or remove some of them so that I can provide more timely, more valuable coverage of the product. When I can do that, I can find deeper, or more subtle, or more intermittent, and possibly more dangerous bugs.

Of course, I don’t run the project. There may be business considerations that prevent management from helping me to address the obstacles. If I’ve been heard, I’ll play the hand I’ve been dealt; I’ll do my best to address any problems I’ve got, using any resources I can bring to bear. It’s my job to make management aware of any risks associated with not dealing with the obstacles—on paper or in a quick email, if I’m worried about accountability. After that, decisions on how to manage the project belong with management.

In other words: I’m prepared to sign off on a three-part testing story. As a tester, I’m prepared to accept responsibility for my story about the quality of the product, but the product does not have to adhere to my quality standards. I’ll sign off on a report, but not on a decision. The business doesn’t need my permission to ship.