Blog Posts from May, 2016

The Honest Manual Writer Heuristic

Monday, May 30th, 2016

Want a quick idea for a burst of activity that will reveal both bugs and opportunities for further exploration? Play “Honest Manual Writer”.

Here’s how it works: imagine you’re the world’s most organized, most thorough, and—above all—most honest documentation writer. Your client has assigned you to write a user manual, including both reference and tutorial material, that describes the product or a particular feature of it. The catch is that, unlike other documentation writers, you won’t base your manual on what the product should do, but on what it does do.

You’re also highly skeptical. If other people have helpfully provided you with requirements documents, specifications, process diagrams or the like, you’re grateful for them, but you treat them as rumours to be mistrusted and challenged. Maybe someone has told you some things about the product. You treat those as rumours too. You know that even with the best of intentions, there’s a risk that even the most skillful people will make mistakes from time to time, so the product may not perform exactly as they have intended or declared. If you’ve got use cases in hand, you recognize that they were written by optimists. You know that in real life, there’s a risk that people will inadvertently blunder or actively misuse the product in ways that its designers and builders never imagined. You’ll definitely keep that possibility in mind as you do the research for the manual.

You’re skeptical about your own understanding of the product, too. You realize that when the product appears to be doing something appropriately, it might be fooling you, or it might be doing something inappropriate at the same time. To reduce the risk of being fooled, you model the product and look at it from lots of perspectives (for example, consider its structure, functions, data, interfaces, platform, operations, and its relationship to time; and business risk, and technical risk). You’re also humble enough to realize that you can be fooled in another way: even when you think you see a problem, the product might be working just fine.

Your diligence and your ethics require you to envision multiple kinds of users and to consider their needs and desires for the product (capability, reliability, usability, charisma, security, scalability, performance, installability, supportability…). Your tutorial will be based on plausible stories about how people would use the product in ways that bring value to them.

You aspire to provide a full accounting of how the product works, how it doesn’t work, and how it might not work—warts and all. To do that well, you’ll have to study the product carefully, exploring it and experimenting with it so that your description of it is as complete and as accurate as it can be.

There’s a risk that problems could happen, and if they do, you certainly don’t want either your client or the reader of your manual to be surprised. So you’ll develop a diversified set of ways to recognize problems that might cause loss, harm, annoyance, or diminished value. Armed with those, you’ll try out the product’s functions, using a wide variety of data. You’ll try to stress out the product, doing one thing after another, just like people do in real life. You’ll involve other people and apply lots of tools to assist you as you go.
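To make “a wide variety of data” a little more concrete, here is a minimal sketch in Python. The product, its parse_quantity() function, and the oracle are hypothetical stand-ins, and the hypothesis library is just one example of a tool that can generate varied input; the exercise doesn’t prescribe any particular tool or technique.

```python
# A hedged sketch: exercising a (hypothetical) function with a wide
# variety of generated data, using the hypothesis library as one example
# of a tool that can assist with this kind of exploration.
from hypothesis import given, strategies as st


def parse_quantity(text):
    """Hypothetical function under test: parse a string like '12 kg'."""
    number, unit = text.split(" ", 1)
    return float(number), unit


# Oracle: for any input at all, the function either returns a
# (number, unit) pair or rejects the input with ValueError;
# it never fails with some other, surprising exception.
@given(st.text())
def parsing_never_fails_surprisingly(text):
    try:
        parse_quantity(text)
    except ValueError:
        pass  # an explicit rejection is acceptable behaviour


if __name__ == "__main__":
    parsing_never_fails_surprisingly()
```

Run it standalone or under a test runner; either way, the point is the attitude: generate data you wouldn’t have thought to type by hand, and notice any behaviour that your oracles flag as surprising.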

For the next 90 minutes, your job is to prepare to write this manual (not to write it, but to do the research you would need to write it well) by interacting with the product or feature. To reduce the risk that you’ll lose track of something important, you’ll probably find it a good idea to map out the product, take notes, make sketches, and so forth. At the end of 90 minutes, check in with your client. Present your findings so far and discuss them. If you have reason to believe that there’s still work to be done, identify what it is, and describe it to your client. If you didn’t do as thorough a job as you could have done, report that forthrightly (remember, you’re super-honest). If anything got in the way of your research or made it more difficult, highlight that; tell your client what you need or recommend. Then have a discussion with your client to agree on what you’ll do next.

Did you notice that I’ve just described testing without using the word “testing”?

Testers Don’t Prevent Problems

Wednesday, May 4th, 2016

Testers don’t prevent errors, and errors aren’t necessarily waste.

Testing, in and of itself, does not prevent bugs. Platform testing that reveals a compatibility bug provides a developer with information. That information prompts him to correct an error in the product, which prevents that already-existing error from reaching and bugging a customer.

Stress testing that reveals a bug in a function provides a developer with information. That information helps her to rewrite the code and remove an error, which prevents that already-existing error from turning into a bug in an integrated build.

Review (a form of testing) that reveals an error in a specification provides a product team with information. That information helps the team rewrite the spec correctly, which prevents that already-existing error from turning into a bug in the code.

Transpection (a form of testing) reveals an error in a designer’s idea. The conversation helps the designer to change his idea to prevent the error from turning into a design flaw.

You see? In each case, there is an error, and nothing prevented it. Just as smoke detectors don’t prevent fires, testing on its own doesn’t prevent problems. Smoke detectors direct our attention to something that’s already burning, so we can do something about it and prevent the situation from getting worse. Testing directs our attention to existing errors. Those errors will persist—presumably with consequences—unless someone makes some change that fixes them.

Some people say that errors, bugs, and problems are waste, but they are not in themselves wasteful unless no one learns from them and does something about them. On the other hand, every error that someone discovers represents an opportunity to take action that prevents the error from becoming a more serious problem. As a tester, I’m fascinated by errors. I study errors: how people commit errors (bug stories; the history of engineering), why they make errors (fallible heuristics; cognitive biases), where we might find errors (coverage), how we might recognize errors (oracles). I love errors. Every error that is discovered represents an opportunity to learn something—and that learning can help people to change things in order to prevent future errors.

So, as a tester, I don’t prevent problems. I play a role in preventing problems by helping people to detect errors. That allows those people to prevent those errors from turning into problems that bug people.

Still squeamish about errors? Read Jerry Weinberg’s e-book, Errors: Bugs, Boo-boos, Blunders.