Blog Posts for the ‘Learning’ Category

Exploratory Testing on an API? (Part 2)

Tuesday, July 17th, 2018

Summary: Loops of exploration, experimentation, studying, modeling, and learning are the essence of testing, not an add-on to it. The intersection of activity and models (such as the Heuristic Test Strategy Model) helps us to perform testing while continuously developing, refining, and reviewing it. Testing is much more than writing a bunch of automated checks to confirm that the product can do something; it’s an ongoing investigation in which we continuously develop our understanding of the product.

Last time out, I began the process of providing a deep answer to this question:

Do you perform any exploratory testing on APIs? How do you do it?

That started with reframing the first question

Do you perform any exploratory testing on APIs?

into a different question

Given a product with an API, do you do testing?

The answer was, of course, Yes. This time I’ll turn to addressing the question “How do you do it?” I’ll outline my thought process and the activities that I would perform, and how they feed back on each other.

Note that in Rapid Software Testing, a test is an action performed by a human; it is neither a specific check nor a scripted test procedure. A test is a burst of exploration and experiments that you perform. As part of that activity, a test might include thousands of automated checks within it, or just one, or none at all. Part of the test may be written down, encoded as a specific procedure. Testing might be aided by tools, by documents or other artifacts, or by process models. But the most important part of testing is what testers think and what testers do.

(Note that when I say “testers” here, I mean any person who is either permanently or temporarily in a testing role. “Tester” applies to a dedicated tester; to a solo programmer switching from the building mindset to the testing mindset; or to a programmer or DevOps person examining the product in a group without dedicated testers.)

It doesn’t much matter where I start, because neither learning nor testing happens in straight lines. They happen in loops, cycles, epicycles; some long and some short; nested inside each other; like a fractal. Testing and learning entail alternation between focusing and defocusing; some quick flashes of insight, some longer periods of reflection; smooth progress at some times, and frequent stumbling blocks at others. Testing, by nature, is an exploratory process involving conversation, study, experimentation, discovery, and investigation that leads to more learning and more testing.

As for anything else I might test, when I’m testing a product through an API, I must develop a strategy. In the Rapid Software Testing namespace, your strategy is the set of ideas that guide the design, development, and selection of your tests.

Having the Heuristic Test Strategy Model in my head and periodically revisiting it helps me to develop useful ideas about how to cover the product with testing. So as I continue to describe my process, I’ll annotate what I’m describing below with some of the guideword heuristics from the HTSM. The references will look like this.

A word of caution, though:  the HTSM isn’t a template or a script.  As I’m encountering the project and the product, test ideas are coming to me largely because I’ve internalized them through practice, introspection, review, and feedback.  I might use the HTSM generatively, to help ideas grow if I’m having a momentary drought; I might use it retrospectively as a checklist against which I review and evaluate my strategy and coverage ideas; or I might use it as a means of describing testing work and sharing ideas with other people, as I’m doing here.

Testing the RST way starts with evaluating my context. That starts with taking stock of my mission, and that starts with the person giving me my mission. Who is my client—that is, to whom am I directly answerable? What does my client want me to investigate?

I’m helping someone—my client, developers, or other stakeholders—to evaluate the quality of the product. Often when we think about value, we think about value to paying customers and to end users, but there are plenty of people who might get value from the product, or have that value threatened. Quality is value to some person who matters, so whose values might matter here? Who might have been overlooked? Project Environment/Mission

Before I do anything else, I’ll need to figure out—at least roughly—how much time I’ll have to accomplish the mission. While I’m at it, I’ll ask other time-related questions about the project: are there any deadlines approaching? How often do builds arrive? How much time should I dedicate to preparing reports or other artifacts? Project Environment/Schedule

Has anyone else tested this product? Who are they? Where are they? Can I talk to them? If not, did they produce results or artifacts that will help me? Am I on a team? What skills do we have? What skills do we need? Project Environment/Test Team

What does my client want me to provide? A test report, almost certainly, and bug reports, probably—but in what form? Oral conversations or informally written summaries? I’m biased towards keeping things light, so that I can offer rapid feedback to clients and developers. Would the client prefer more formal approaches, using particular reporting or management tools? As much as the client might like that, I’ll also take note whenever I see formalization imposing costs.

What else might the client, developers, and other stakeholders want to see, now or later on? Input that I’ve generated for testing? Code for automated checks? Statistical test results? Visualizations of those results? Tools that I’ve crafted and documentation for them? A description of my perception of the product? Formal reports for regulators and auditors? Project Environment/Deliverables I’ll continue to review my mission and the desired deliverables throughout the project.

So what is this thing I’m about to test? Project Environment/Test Item Having checked on my mission, I proceed to simple stuff so that I can start the process of learning about the product. I can start with any one of these things, or with two or more of them in parallel.

I talk to the developers, if they’re available. Even better, I participate in design and planning sessions for the product, if I can. My job at such meetings is to learn, to advocate for testability, and to bring ideas and ask questions about problems and risks. I ask about testing that the developers have done, and the checking that they’ve set up. Project Environment/Developer Relations

If I’ve been invited to the party late or not at all, I’ll make a note of it. I want to be as helpful as possible, but I also want to keep track of anything that makes my testing harder or slower, so that everyone can learn from that. Maybe I can point out that my testing will be better-informed the earlier and the more easily I can engage with the product, the project, and the team.

I examine the documentation for the API and for the rest of the product. Project Environment/Information I want to develop an understanding of the product: the services it offers, the means of controlling it, and its role in the systems that surround it. I annotate the documentation or take separate notes, so that I can remember and discuss my findings later on. As I do so, I pay special attention to things that seem inconsistent or confusing.

If I’m confused, I don’t worry about being confused. I know that some of my confusion will dissipate as I learn about the product. Some of my confusion might suggest that there are things that I need to learn. Some of my confusion might point to the risk that the users of the product will be confused too. Confusion can be a resource, a motivator, as long as I don’t mind being confused.

As I’m reading the documentation, I ask myself “What simple, ordinary, normal things can I do with the product?” If I have the product available, I’ll do sympathetic testing by trying a few basic requests, using a tool that provides direct interaction with the product through its API. Perhaps it’s a tool developed in-house; perhaps it’s a tool crafted for API testing like Postman or SoapUI; or maybe I’ll use an interpreter like Ruby’s IRB along with some helpful libraries like HTTParty. Project Environment/Equipment and Tools
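
To make that concrete, here’s a minimal sketch of the kind of throwaway probe I might type into IRB, using HTTParty. The endpoint and token are invented placeholders, not features of any real product.

```ruby
# A throwaway probe: send one ordinary request to the API and look around.
# The endpoint and token below are invented placeholders.
require 'httparty'

response = HTTParty.get(
  'https://api.example.com/v1/orders/1001',
  headers: { 'Authorization' => 'Bearer TEST_TOKEN' }
)

puts response.code                    # 200? 404? something surprising?
puts response.headers['content-type'] # is it really JSON, as the docs say?
puts response.parsed_response         # what does a normal record look like?
```

A few lines like these answer simple questions quickly: does the product respond at all, and does the response look anything like what the documentation promises?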

I might develop a handful of very simple scripts, or I might retain logs that the tool or the interpreter provides. I’m just as likely to throw this stuff away as I am to keep it. At this stage, my focus is on learning more than on developing formal, reusable checks. I’ll know better how to test and check the product after I’ve tried to test it.

If I find a bug—any kind of inconsistency or misbehaviour that threatens the value of the product—I’ll report it right away, but that’s not all I’ll report. If I have any problems with trying to do sympathetic testing, I’ll report them immediately. They may be usability problems, testability problems, or both at once. At this stage of the project, I’ll bias my choices towards the fastest, least expensive, and least formal reporting I can do.

My primary goal at this point, though, is not to find bugs, but to figure out how people might use the API to get access to the product, how they might get value from it, and how that value might be threatened. I’m developing my models of the product; how it’s intended to work, how to use it, and how to test it. Learning about the product in a comprehensive way prepares me to find better bugs—deeper, subtler, less frequent, more damaging.

To help the learning stick, I aspire to be a good researcher: taking notes; creating diagrams; building lists of features, functions, and risks; making mind maps; annotating existing documentation. Periodically I’ll review these artifacts with programmers, managers, or other colleagues, in order to test my learning.

Irrespective of where I’ve started, I’ll iterate and go deeper, testing the product and refining my models and strategies as I go. We’ll look at that in the next installment.

Are you a tester—solo or in a group?  Or are you a developer, manager, business person, documenter, support person, or someone in DevOps who wants to get very good at testing?  Attend Rapid Software Testing in Seattle, presented by James Bach and me, September 26-28, 2018.  Sign up!

Finding the Happy Path

Wednesday, February 28th, 2018

In response to yesterday’s post on The Happy Path, colleague and friend Albert Gareev raises an important issue:

Until we sufficiently learned about the users, the product, and the environment, we have no idea what usage pattern is a “happy path” and what would be the “edge cases”.

I agree with Albert. (See more of what he has to say here.) This points to a kind of paradox in testing and development: some say that we can’t test the product unless we know what the requirements are—yet we don’t know what many of the requirements are until we’ve tested! Testing helps to reduce ambiguity, uncertainty, and confusion about the product and about its requirements—and yet we don’t know how to test until we’ve tried to test!

Here’s how I might address Albert’s point:

To test a whole product or system means more than demonstrating that it can work, based on the most common or optimistic patterns of its use. We might start testing the whole system there, but if we wanted to develop a comprehensive understanding of it, we wouldn’t stop at that.

On the other hand, the whole system consists of lots of sub-systems, elements, components, and interactions with other things. Each of those can be seen as a system in itself, and studying those systems contributes to our understanding of the larger system.

We build systems, and we build ideas on how to test them. At each step, considering only the most capable, attentive, well-trained users; preparing only the most common or favourable environments; imagining only the most optimistic scenarios; performing only happy-path testing on each part of the product as we build it; all of these present the risk of misunderstanding not only the product but also the happy paths and edge cases for the greater system. If we want to do excellent testing, all of these things—and our understanding of them—must not only be demonstrated, but must be tested as well. This means we must do more than create a bunch of high-level, automated, confirmatory checks at the beginning of the sprint and then declare victory when they all “pass”.

Quality above depends on quality below; excellent testing above depends on excellent testing below. It’s testing all the way down—and all the way up, too.

It’s Not A Factory

Tuesday, April 19th, 2016

One model for a software development project is the assembly line on the factory floor, where we’re making a buhzillion copies of the same thing. And it’s a lousy model.

Software is developed in an architectural studio with people in it. There are drafting tables, drawing instruments, good lighting, pens and pencils and paper. And erasers, and garbage cans that get full of coffee cups and crumpled drawings. Good ideas become better ideas as they are sketched, analysed, criticised, and revised. A lot of bad ideas are discovered and rejected before the final plans are drawn.

Software is developed in a rehearsal hall with people in it. The room is also filled with risers and chairs and other temporary staging elements, and with substitute props that stand in for the finished products. There’s a piano to accompany the singers while the orchestra is being rehearsed in another hall. Lighting, sound, costumes and makeup are designed and folded into the rehearsal process as we experiment with different ways of bringing the show to life. Everyone tries stuff that doesn’t work, or doesn’t fit, or doesn’t sound right, or doesn’t look good at first. Frustration arises, feelings get bruised, and then breakthroughs happen and problems get solved. Lots of experiments lead to that joyful and successful opening night.

Software is developed in a workshop with people in it; skilled craftspeople who build tools and workspaces for themselves and each other, as part of the process of crafting products for people to buy. Even though they try to keep the shop clean, there’s occasional sawdust and smoke and spilled glue and broken machinery. Work in progress gets tested, and weaknesses are exposed—sometimes late in the game—and get fixed.

In all of these places, variation is encouraged. Designs are tinkered with. Discoveries are celebrated. Learning happens. Most importantly, skill and tacit knowledge are both applied and developed.

The Lean model for software development might seem a more humane step forward from the older days, but it’s still based on the factory. Ideas aren’t widgets whose delivery you can schedule just in time. Failed experiments aren’t waste when you learn from them, and if you know it won’t be waste from the outset, it’s not really an experiment. Everything that makes it into the product should represent something that the customer values, but when we’re creating something novel (which we’re always doing to some degree as we’re building software), we’re exploring and trying things out to help refine our understanding of what the customer actually values.

If there is any parallel between software and manufacturing, it is this: the “software development” part of manufacturing happens before the assembly line—in the design studio, where the prototypes are being developed, refined, and selected for mass production. The manufacturing part? That’s the copy command that deploys a copy of the installation package to all the machines in the enterprise, or the disk duplicator that stamps out a million DVDs with copies of the golden master on it, or the Web server that delivers a copy of the product to anyone who requests it. Getting to that first copy, though? That’s a studio thing, not an assembly-line thing.

The primary inspiration for this post is a conversation I had with Cem Kaner in 2008. Another is the book Artful Making by Robert Austin and Lee Devin, which I first read around the same time. Yet another is Christopher Alexander’s A Pattern Language. One more: my long-ago career in theatre, which prepared me better than you can imagine for a life in software development.

On Scripting

Saturday, July 4th, 2015

A script, in the general sense, is something that constrains our actions in some way.

In common talk about testing, there’s one fairly specific and narrow sense of the word “script”—a formal sequence of steps that are intended to specify behaviour on the part of some agent—the tester, a program, or a tool. Let’s call that “formal scripting”. In Rapid Software Testing, we also talk about scripts as something more general, in the same kind of way that some psychologists might talk about “behavioural scripts”: things that direct, constrain, or program our behaviour in some way. Scripts of that nature might be formal or informal, explicit or tacit, and we might follow them consciously or unconsciously. Scripts shape the ways in which people behave, influencing what we might expect people to do in a scenario as the action plays out.

As James Bach says in the comments to our blog post Exploratory Testing 3.0, “By ‘script’ we are speaking of any control system or factor that influences your testing and lies outside of your realm of choice (even temporarily). This does not refer only to specific instructions you are given and that you must follow. Your biases script you. Your ignorance scripts you. Your organization’s culture scripts you. The choices you make and never revisit script you.” (my emphasis, there)

When I’m driving to a party out in the country, the list of directions that I got from the host scripts me. Many other things script me too. The starting time of the party—combined with cultural norms that establish whether I should be very prompt or fashionably late—prompts me to leave home at a certain time. The traffic laws and the local driving culture condition my behaviour and my interactions with other people on the road. The marked detour along the route scripts me, as do the weather and the driving conditions. My temperament and my current emotional state script me too. In this more general sense of “scripting”, any activity can become heavily scripted, even if it isn’t written down in a formal way.

Scripts are not universally bad things, of course. They often provide compelling advantages. Scripts can save cognitive effort; the more my behaviour is scripted, the less I have to think, do research, make choices, or get confused. In my driving example, a certain degree of scripting helps me to get where I’m going, to get along with other drivers, and to avoid certain kinds of trouble. Still, if I want to get to the party without harm to myself or other people, I must bring my own agency to the task and stay vigilant, present, and attentive, making conscious and intentional choices. Scripts might influence my choices, and may even help me make better choices, but they should not control me; I must remain in control. Following a script means giving up engagement and responsibility for that part of the action.

From time to time, testing might include formal testing—testing that must be done in a specific way, or that checks specific facts. On those occasions, formal scripting—especially the kind of formal script followed by a machine—might be a reasonable approach, enabling certain kinds of tasks to be performed and managed successfully. A highly scripted approach could be helpful for rote activities like operating the product by following explicitly declared steps and then checking for specific outputs. A highly scripted approach might also enable or extend certain kinds of variation—randomizing data, for example. But there are many other activities in testing: learning about the product, designing a test strategy, interviewing a domain expert, recognizing a new risk, investigating a bug—and dealing with problems in formally scripted activities. In those cases, variability and adaptation are essential, and an overly formal approach is likely to be damaging, time-consuming, or outright impossible. Here’s something else that is almost never formally scripted: the behaviour of normal people using software.
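
As an illustration (an invented one, not drawn from any particular product), here’s what a minimal machine-followable script of that rote kind might look like in Ruby, with a little randomized data thrown in:

```ruby
# A rote, formally scripted check: explicitly declared steps and one
# specific expected output. The endpoint and payload are invented.
require 'httparty'
require 'json'

quantity = rand(1..10) # scripting can still vary the data
response = HTTParty.post(
  'https://api.example.com/v1/orders',
  body:    { item: 'widget', quantity: quantity }.to_json,
  headers: { 'Content-Type' => 'application/json' }
)

# The script checks exactly what it was told to check, and nothing more.
abort "FAIL: expected 201, got #{response.code}" unless response.code == 201
puts 'PASS'
```

Note what the script does not do: it won’t notice anything it wasn’t explicitly told to look for.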

Notice on the one hand that formal testing is, by its nature, highly scripted; most of the time, scripting restricts or even prevents exploration by limiting variation. On the other hand, if you want to make really good decisions about what to test formally, how to test formally, and why to test formally, it helps enormously to learn about the product in unscripted and informal ways: conversation, experimentation, investigation… So excellent scripted testing and excellent checking are rooted in exploratory work. They begin with exploratory work and depend on exploratory work. To use language as Harry Collins might, scripted testing is parasitic on exploration.

We say that any testing worthy of the name is fundamentally exploratory. We say that to test a product means to evaluate it by learning about it through experimentation and exploration. To explore a product means to investigate it, to examine it, to create and travel over maps and models of it. Testing includes studying the product, modeling it, questioning it, making inferences about it, operating it, observing it. Testing includes reporting, which itself includes choosing what to report and how to contextualize it. We believe these activities cannot be encoded in explicit procedural scripting in the narrow sense that I mentioned earlier, even though they are all scripted to some degree in the more general sense. Excellent testing—excellent learning—requires us to think and to make choices, which includes thinking about what might be scripting us, and deciding whether to control those scripts or to be controlled by them. We must remain aware of the factors that are scripting us so that we can manage them, taking advantage of them when they help and resisting them when they interfere with our mission.

Very Short Blog Posts (5): Understanding the Requirements

Monday, October 28th, 2013

People often suggest that “understanding the requirements” is an essential step before you can begin testing. This may be true for checking or formal testing—examining a product in a specific way, or checking specific facts. But understanding the requirements is not a necessary precursor to testing, which is learning about a product through experimentation (a larger activity that might include checking) and creating the conditions to make that activity possible. Indeed, you may need to test in order to develop an understanding of the requirements, which in turn triggers more and better testing, yielding even better understanding of the requirements—and so on.

More generally, when thinking about testing, think more about loops, and less about lines.

Heuristics for Understanding Heuristics

Friday, April 20th, 2012

This conversation is fictitious, but it’s also representative of several chats that I’ve had with testers over the last few weeks.

Tony, a tester friend, approached me recently, and told me that he was having trouble understanding heuristics and oracles. I have a heuristic approach for solving the problem of people not understanding a word:

Give ’em a definition.

So, I told him:

A heuristic is a fallible method for solving a problem or making a decision.

After I tried the “Give ’em a definition” heuristic, I tested to see if Tony seemed to understand. His eyes were a little glazed over. I applied a heuristic for making the decision, “Did he get it?”

When someone’s eyes glaze over, they don’t get it.

Heuristics aren’t guaranteed to work. For example, sometimes the general “Give ’em a definition” heuristic solves the problem of people not understanding something, and sometimes it doesn’t. In the latter case, I apply another heuristic:

Give ’em an explanation.

So I told him:

“When you know how to solve a problem, you might follow a rule. When you’re not so sure about how to solve the problem, following a rule won’t help you. Not knowing how to solve a problem means not knowing which rule to apply, or whether there’s a rule at all. When you’re in uncertain conditions, or dealing with imperfect or incomplete information, you apply heuristics—methods that might work, or that might fail.

“As an adjective, ‘heuristic’ means ‘serving to discover’ or ‘helping to learn’. When Archimedes realized that things that sink displace their volume of water, and things that float displace their mass, he ran naked through the streets of Syracuse yelling, ‘Eureka!’ or ‘I’ve discovered it!’ ‘Eureka’ and ‘heuristic’ come from the same root word in Greek.”

Tony was listening thoughtfully, but his brow was still furrowed. So I applied another teaching heuristic:

Give ’em something to compare.

I said, “Here’s one way of understanding heuristics: compare ‘heuristic’ with ‘algorithm’. An algorithm is a method for solving a problem that’s guaranteed to produce a right answer. So an algorithm is like a rule that you follow; a heuristic is like a rule of thumb that you apply. Rules of thumb usually work, but not always.”

Sometimes providing a comparable idea solves the problem of understanding something, and sometimes it doesn’t. Tony nodded, but still looked a little puzzled. I wasn’t sure I had solved the problem, so I applied a new heuristic:

Point ’em to a book.

I suggested that he read George Polya’s book How to Solve It. “In that book, Polya presents a set of ideas and questions you can ask yourself that can help you to solve math problems.”

“Wait… I thought you always solved math problems with algorithms,” Tony said.

“That’s when you know how to solve the problem. When you don’t, Polya’s suggestions—heuristics—can get you started. They don’t always work, but they tend to be pretty powerful, and when one doesn’t work, you try another one. You never know which questions or ideas will help you solve the problem most quickly. So you practice this cycle: apply a heuristic, and if you’re still stuck, try another one. After a while, you develop judgement and skill, which is what you need to apply heuristics well. Polya talks about that a lot. He also emphasizes just how fallible and context-dependent heuristics are.”

Mind you, neither Tony nor I had a copy of Polya’s book right handy, and Tony wanted to understand “heuristics” better now. The “point ’em to a book” heuristic had failed this time, even though it might have worked in a different context. So I tried yet another heuristic to solve the problem:

Point ’em to another book.

I suggested that he read Gut Feelings by Gerd Gigerenzer. “In that book, Gigerenzer emphasizes that heuristics tend to be fast and frugal (that is, quick and inexpensive). That’s important, he says: humans need heuristics because they’re typically dealing with bounded rationality.”

Uh-oh. Tony’s eyes had glazed over again at the mention of “bounded rationality”. So I applied a heuristic:

Even when it’s a deep concept, a fast and frugal explanation might do.

After all, Polya says that a heuristic isn’t intended to be perfect. Instead, heuristics are provisional and context-dependent. So in order to provide a quick understanding of “bounded rationality”, I said, “In a nutshell, bounded rationality is a situation in which you have incomplete knowledge, imperfect understanding, and limited time.”

He grinned, and said, “What, like when you’re testing? Like most of the time in life?”

“Yes. Billy Vaughan Koen, in another book, Discussion of the Method, says that the engineering method is ‘to cause the best change in a poorly understood situation within the available resources.'”

“So he’s saying that engineers apply heuristics?” Tony asked. “I guess that makes sense, since engineers solve problems in ways that usually work, but sometimes there are failures.”

He seemed to be getting it. But I wanted to test that, so I applied a heuristic for making the decision, “Does he get it?”

Ask the student to provide an example.

So I said, “I think you might have it. But can you provide me with an example of a heuristic?”

He said, “Okay. I think so.” He paused. “Here’s a heuristic for solving the problem of opening a door: ‘Pull on the handle; push on the plate.’ That’s what you do when you get to a door, right? It’s a heuristic that usually works. Well… it might fail. It could be one of those annoying doors that have handles on both sides, where you have to push the handle or pull the handle to open the door. It might be one of those doors that opens both ways, like the doors for restaurant kitchens, so there’s no handle. The door might not even have a handle or a plate; it might have a knob. In that case, you apply another heuristic: ‘Turn the knob’. That’s a solution for the problem of opening a door that doesn’t have a handle or a plate. But that heuristic might fail too. The door might be locked, even though the knob turns. It might be one of those fancy doors that have dead-bolt locks and knobs that don’t turn. It might not have a knob at all; it might have one of those old-fashioned latches. So none of those heuristics guarantees a solution, but each one might help to solve the problem of getting through the door.”

“Great! I think you’ve got it.”

“To be precise about it,” he said, “you can’t be sure, so you’re applying heuristics that help you to make the decision that I get it.”

I laughed. “Right. So what’s the difference,” I asked, “between an oracle and a heuristic?”

He paused.

(to be continued…)

Confusion as an Oracle

Monday, October 17th, 2011

A couple of weeks back, Sylvia Killinen (@skillinen on Twitter) tweeted:

“Seems to me that much of #testing relies on noticing when one is confused rather than accepting it as Something Computer Programs Do.”

That’s a beautiful observation, near and dear to my heart since 2007 at least. The night I read Sylvia’s tweet, I wanted to blog more on the subject, but sometimes blog posts go in a different direction from where I intend them to go. At the time, I went here. And now I’m back.

Sylvia’s tweet reminded me of a story that Jon Bach tells about learning testing with his brother James. Jon had been working in a number of less-than-prestigious jobs. James suggested that Jon become a tester, and offered to train him to be an excellent one. Jon agreed to the idea. Everything went fine for the first couple of weeks, but one day Jon slumped into James’ office looking dejected and demoralized. The conversation went something like this.

“What’s the problem?” asked James.

“I dunno,” said Jon. “I don’t think this whole becoming-a-tester thing is going to work out.”

“Not work out? But you’re doing great!” said James.

“Well, it might look that way to you, but…” Jon paused.

“So what’s the problem?”

“Well, you gave me this program to test,” Jon began. “But I’m just so confused.”

James peered over his glasses. “When you’re confused,” he said, “that’s a sign that there’s something confusing going on. I gave you a confusing product to test. Confusion might not be fun, but it’s a natural consequence when you’re dealing with a confusing product.” James was tacitly suggesting that Jon’s confusion could be used as an oracle—a heuristic principle or mechanism by which we recognize a problem.

This little story suggests and emphasizes a number of serious and important points.

As I mentioned before, here, feelings don’t tell us what they’re about. Confusion doesn’t come with an arrow that points directly back to its source. Jon felt confused, and thought that the confusion was about him. But that confusion wasn’t just about Jon’s internal state; it was also about the state of the product and how Jon felt about it. Feelings—internal, non-specific, and ambiguous—don’t tell us what’s going on; they tell us to pay attention to what’s happening around us. When you’re a novice, you might be inclined to believe that your feelings are telling you about yourself, but that’s likely not the whole story, since emotions don’t happen in isolation from everything else. It’s more probable that your feelings are telling you about the relationship between you and something else, or someone else, or the situation.

Which reminds me of another story. It happened at Jerry Weinberg’s Problem Solving Leadership workshop in 2008. PSL is full of challenging and rich and tricky exercises, and one day, one team had fallen into a couple of traps and had done rather badly. During the debrief, Jerry remarked on it. “You guys handled a much harder problem than this yesterday, you know. What happened this time?”

One of the participants answered, “The problem screwed us up.”

With only the briefest pause, Jerry peered at the fellow and replied in a gently admonishing way, “Your reaction to the complexity of the problem screwed you up.”

Methodologists and process enthusiasts regularly ignore the complex human and emotional aspects of testing, and so don’t take them into account or use them as a resource. Some actively reject feelings as a rich source of information. One colleague reports that she told her boss about a presentation of mine in which I had discussed the role of emotions in software testing.

“There’s no role for emotions in software testing,” he said quietly.

“I’m not sure I agree,” she said. “I think there might be. I think at least it’s worth considering.”

Abruptly he shouted, “THERE’S NO ROLE FOR EMOTIONS IN SOFTWARE TESTING!”

She remarked that he had seemed agitated—a strange reaction, considering the mismatch between what he was saying and what he appeared to be feeling. What more might we learn by noting his feelings and considering possible interpretations? What inferences might we draw about the differences between his reaction and hers?

As we increasingly emphasize in the Rapid Software Testing course, recognizing and dealing with your feelings is a key self-management skill. Indeed, for testers, feelings are a kind of first-order measurement. It’s okay to be confused. The confusion is valuable and even desirable if it leads you to the right control action, which is to investigate what your emotions might be telling you and why. If we’re willing to acknowledge our feelings, we can use them productively as cues to start looking for oracles and problems in the product that trigger the feelings—before those problems lead our customers to distress.

In my article Testing Without a Map, I discuss some of the oracles that we present in the Rapid Software Testing class and methodology.

Thanks to Sylvia for the inspiration.

I’ll be bringing Rapid Testing to the Netherlands (October 31-November 2), London (November 28-30), and Oslo (December 14-16). See the right-hand panel for registration details. Join us! Spread the word! Thank you!

Testing: Difficult or Time-Consuming?

Thursday, September 29th, 2011

In my recent blog post, Testing Problems Are Test Results, I noted a question that we might ask about people’s perceptions of testing itself:

Does someone perceive testing to be difficult or time-consuming? Who? What’s the basis for that perception? What assumptions underlie it?

The answer to that question may provide important clues to the way people think about testing, which in turn influences the cost and value of testing.

As an example, a pseudonymous person (“PM Hut”) who is evidently associated with project management in some sense (s/he provides the URL http://www.pmhut.com) answered my questions above.

Just to answer your question “Does someone perceive testing to be difficult or time-consuming?” Yes, everyone, I can’t think of a single team member I have managed who doesn’t think that testing is time consuming, and they’d rather do something else.

This, alas, isn’t an unusual response. To someone like me who offers help in increasing the value and reducing the cost of testing, it triggers some questions that might prompt reframes or further questions.

  • What do the team members think testing is? Do they think that it’s something ancillary to the project, rather than an essential and integrated aspect of software development? To me, testing is about gathering information and raising awareness that’s essential for identifying product risks and steering the project. That’s incredibly important and valuable.

    So when the team members are driving a car, do they perceive looking out the windshield to be difficult or time-consuming? Do they perceive looking at the dashboard to be difficult or time-consuming? If so, why? What are the differences between the way they obtain awareness when they’re driving a car, versus the way they obtain awareness when they’re contributing to the development of a product or service?

  • Do the team members think testing is the mindless repetition of actions and observation of specific outputs, as prescribed by someone else? If so, I’d agree with them that it’s an unpalatable activity—except I don’t call that testing. I call it checking, and I’d rather let a machine do it. I’d also ask whether checking is being done automatically by the programmers at lower levels, where it tends to be fast, cheap, easy, useful, and timely—or manually at higher levels, where it tends to be slower, more expensive, more difficult, less useful, less timely, and more tedious. (There’s a sketch of a low-level check after this list.)
  • Is testing focused mostly on confirmation of things that we already know or hope to be true? Is it mostly focused on the functional aspects of the program (which are amenable to checking)? People tend to find this dull and tedious, and rightly so. Or is testing an active search for new information, problems, and risks? Does it include focus on parafunctional aspects of the product—the things that provide important perceptions of real value to real people? Are the testers given the freedom and responsibility to manage a good deal of their own investigation? Testers tend to find this kind of approach a lot more engaging and a lot more interesting, and the results are typically more wide-ranging, informative, and valuable to programmers and managers.
  • Is testing overburdened by meaningless and valueless paperwork, bureaucracy, and administrivia? How did that come to pass? Are team members aware that there are simple, lightweight, rapid, and highly effective ways of planning, recording, and reporting testing work and project status?
  • Are there political issues? Are testers (or people acting temporarily in a testing role) routinely blown off (as in this example)? Are the nuggets of information revealed by testing habitually dismissed? Is that because testing is revealing trivial information? If so, is there a problem with specific testing skills like modeling the test space, determining coverage, determining oracles, recording, or reporting?
  • Have people been trained on the basis of testing as a skilled, sophisticated thinking art? Or is testing something for which capability can be assessed by a trivial, 40-question multiple choice exam?
  • If testing is being done well (which given people’s attitudes expressed above would be a surprise), are programmers or managers afraid of having to deal with the information that testing reveals? Does that lead to recrimination and conflict?
  • If there’s a perception that testing is by its nature dull and slow, are the testers aware of the quick testing approaches in our Rapid Software Testing class (PDF, pages 97-99), in the Black Box Software Testing course offered by the Association for Software Testing, or in James Whittaker’s How to Break Software? Has anyone read and absorbed Lessons Learned in Software Testing?
  • If there’s a perception that technical reviews are slow, have the testers, programmers, or managers read Perfect Software and Other Illusions About Testing? Do they recognize the ways in which careful observation provides us with “instant reviews” (see Perfect Software, page 143)? Has anyone on the team read any other of Jerry Weinberg’s books on software management and measurement?
  • Have the testers, programmers, and managers recognized the extent to which exploratory testing is going on all the time? Do they recognize that issues revealed by testing might be even more important than bugs? Do they understand that every test result and every testing problem points to meta-information that can be extremely valuable in managing the project?
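
Here’s the sketch promised in the list above: the kind of small, fast, low-level check that programmers can run close to the code, cheaply and often. The parsing function is an invented example, not anyone’s real code.

```ruby
# A low-level check that runs in milliseconds, close to the code.
# parse_quantity stands in for some real unit under check.
require 'minitest/autorun'

def parse_quantity(text)
  Integer(text, 10) # strict parsing; rejects junk input
end

class ParseQuantityCheck < Minitest::Test
  def test_accepts_plain_integers
    assert_equal 42, parse_quantity('42')
  end

  def test_rejects_non_numeric_text
    assert_raises(ArgumentError) { parse_quantity('forty-two') }
  end
end
```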

On PM Hut’s own Web site, there’s an article entitled “Why Project Managers Fail”. The author, Jim Benson, lists five common problems, each of which could be quickly revealed by looking at testing as a source of information, rather than by simply going through the motions. Take it from the former program manager of a product that, in its day, was the best-selling piece of commercial software in the world: testers, testing, and the information they reveal are a project manager’s best friends and most valuable assets—when you have the awareness to recognize them.

Testing need not be difficult, tedious or time-consuming. A perception that it is so, or that it must be so, suggests a problem with testing as practised or testing as perceived. Astute managers and teams will investigate that important and largely mistaken perception.

At Least Three Good Reasons for Testers to Learn to Program

Tuesday, September 20th, 2011

There is a common claim, especially in the Agile community, that suggests that all testers should be able to write programs. I don’t think that’s the case. In the Rapid Software Testing class, James Bach and I say that testing is “questioning a product in order to evaluate it”. That’s the short form of my personal definition of testing, “investigation of people, software, computers, and related goods and services, and the relationships between them”. Most people who use computer programs are not computer programmers, and there are many ways in which a tester can question and investigate a product without programming.

Yet there are at least three very good reasons why it might be a good idea for a tester to learn to program.

Tooling. Computer programs extend our capacity to sense what’s happening, to make decisions, and to perform actions. There are many wonderful packaged tools, in dozens of categories, available to us testers. Every tool offers some level of control through a variety of affordances. Some provide restrictive controls, like text fields or drop-down lists or check boxes. Other tools provide macros so that we can string together sequences of actions. Some tools come with full-blown scripting languages that provide the capacity for the tester to sense, decide, and act through the tool in very flexible and specific ways. There are also, of course, general-purpose programming and scripting languages in their own right. When we can use programming concepts and programming languages, we have a more powerful and adaptable tool set for our investigations.
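
As a quick illustration, here’s the sort of five-minute tool a programming tester might improvise; the URL and the one-second threshold are invented for the example. It senses by fetching a status page, decides whether the response looks slow or broken, and acts by logging anything suspicious.

```ruby
# Sense, decide, act: a small improvised monitoring tool.
# The URL and the one-second threshold are invented for the example.
require 'net/http'
require 'uri'

uri = URI('https://example.com/status')
60.times do
  started = Time.now
  code    = Net::HTTP.get_response(uri).code # sense
  elapsed = Time.now - started
  if code != '200' || elapsed > 1.0          # decide
    puts "#{Time.now} HTTP #{code} in #{format('%.3f', elapsed)}s" # act
  end
  sleep 5
end
```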

Insight. When we learn to program, we develop understanding about the elements and construction of programs and the computers on which they run. We learn how data is represented inside the computer, and how bits can be interpreted and misinterpreted. We learn about flow control, decision points, looping, and branching—and how mistakes can be made. We might even be able to read the source code of the programs that we’re testing, which can be valuable in review, troubleshooting, or debugging. Even if we never see our programs’ source code, when we learn about how programs work, we gain insight into how they might not work.
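
Two tiny, classic examples of the kind of representation insight I mean: binary floating point can’t hold most decimal fractions exactly, and the same bytes mean different characters under different encodings.

```ruby
# Binary floats can't represent 0.1 exactly, so arithmetic surprises await.
puts 0.1 + 0.2 == 0.3           # => false
puts format('%.20f', 0.1 + 0.2) # => 0.30000000000000004441

# The same two bytes decode to different text under different encodings.
bytes = "é".bytes               # UTF-8 encodes "é" as two bytes
p bytes                         # => [195, 169]
puts bytes.pack('C*').force_encoding('ISO-8859-1').encode('UTF-8') # => "Ã©"
```

Either of these, encountered first-hand, suggests fruitful test ideas: what happens to this product’s calculations, or to its text handling, at those boundaries?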

Humility. Dan Spear, a great programmer at Quarterdeck and a good friend, once pointed out to me that programming a computer is one of the most humbling experiences available to us. When you program a computer to perform some task, it will reflect—often immediately and often dramatically—any imprecision in your instructions and your underlying ideas. When we learn to program, we get insight not only into how a program works, but also into how difficult programming can be. This should trigger respect for programmers, but it should also trigger something else: empathy, which—as Jerry Weinberg says—is the first and most important aspect of the emotional set and setting for the tester.

The Best Tour

Thursday, June 30th, 2011

Cem Kaner recently wrote a reply to my blog post Of Testing Tours and Dashboards. One way to address the best practice issue is to go back to the metaphor and ask “What would be the best tour of London?” That question should give rise to plenty of other questions.

  • Are you touring for your own purposes, or in support of someone else’s interests? To what degree are other people interested in what you learn on the tour? Are you working for them? Who are they? Might they be a travel agency? A cultural organization? A newspaper? A food and travel show on TV? The history department of a university? What’s your information objective? Does the client want quick, practical, or deep questions answered? What’s your budget?
  • How well do you know London already?  How much would you like to leave open the possibility of new discoveries?  What maps or books or other documentation do you have to help to guide or structure your tour?  Is updating those documents part of your purpose?
  • Is someone else guiding your tour? What’s their reputation? To what extent do you know and trust them? Are they going to allow you the opportunity and the time to follow your own lights and explore, or do they have a very strict itinerary for you to follow? What might you see—or miss—as a result?
  • Are you traveling with other people? What are they interested in? To what degree do you share your discovery and learning?
  • How would you prefer to get around? By Tube, to get around quickly? By a London Taxi (which often includes some interesting information from the cabbie)? By bus, so you can see things from the top deck? On foot? By tour bus, where someone else is doing all the driving and all the guiding (that’s scripted touring)?
  • What do you need to bring with you? Notepad? Computer? Mobile phone? Still camera? Video camera? Umbrella? Sunscreen? (It’s London; you’ll probably need the umbrella.)
  • How much time do you have available?   An afternoon?  A day?  A few days? A week?  A month?
  • What are you (or your clients) interested in? Historical sites? Art galleries? Food? Museums? Architecture? Churches? Shopping? How focused do you want your tour to be? Very specialized, or a little of this and a little of that? What do you consider “in London”, and what’s outside of it?
  • How are you going to organize your time? How are you going to account for time spent in active investigation and research versus moving from place to place, breaks, and eating? How are you going to budget time to collect your findings, structure and summarize your experience, and present a report?
  • How do you want to record your tour? If you’re working for a client, what kind of report do they want? A conversation? Written descriptions? Pictures? Do they want things in a specific format?

(Note, by the way, that these questions are largely structured around the CIDTESTD guidewords in the Heuristic Test Strategy Model (Customer, Information, Developer Relations, Test Team, Equipment and Tools, Schedule, Test Item, and Deliverables)—and that there are context-specific questions that we can add as we model and explore the mission space and the testing assignment.)

There is no best tour of London; every tour has its strengths and weaknesses. Reasonable people who think about it for a moment realize that the “best” tour of London is a) relative to some person; b) relative to that person’s purposes and interests; c) relative to what the person already knows; and d) relative to the amount of time available. And such a reasonable person would be able to apply that metaphor to software testing tours too.