Blog Posts for the ‘Time’ Category

Breaking the Test Case Addiction (Part 8)

Monday, December 9th, 2019

Throughout this series, we’ve been looking at an alternative to artifact-based approaches to performing and accounting for testing: an activity-based approach.

Frieda, my coaching client, and I had been discussing how to manage testing without dependence on formalized, scripted, procedural test cases. Part of any approach to making work accountable is communication between a manager or test lead and the person who has done the work. In session-based test management, one part of this communication is a conversation that we call a debrief, and that’s what we talked about last time.

One of the important elements of a debrief is accounting for the time spent on the work. And that’s why one of the most important questions in the debrief is What did you spend your time doing in this session?

“Ummm… That would be ‘testing’, presumably, wouldn’t it?” Frieda asked.

“Well,” I replied, “there’s testing, and then there’s other work that happens in the session. And there are pretty much inevitably interruptions of some kind.”

“For sure,” Frieda agreed. “I’m getting interrupted every day, all the time: instant messages, phone calls, other testers asking me for help, programmers claiming they can’t reproduce the bug on their machines…”

“Interruptions are a Thing, for sure,” I said. “Let’s talk about those in a bit. First, though, let’s consider what you’d be doing during a testing session in which you weren’t interrupted. Or if we didn’t talk about the interruptions, for a moment. What would you be doing?”

“Testing. Performing tests. Looking for bugs,” said Frieda.

“Right. Can you go deeper? More specific?”

“OK. I’d be learning about the product, exercising test conditions, increasing test coverage. I’d be keeping notes. If I were making a mind map, I’d be adding to it, filling in the empty areas where I hadn’t been before. Each bit of testing I performed would add to coverage.”

“‘Each bit of testing,'” I repeated. “All right; let’s imagine that you set up a 90-minute session where you could be uninterrupted. Lock the office door…”

“…the one that I don’t have…”, Frieda said.

“Natch. It’s cubicle-land where you work. But let’s say you put up a sign that said “Do not disturb! Testing is in Session!” Set the phone to Send Calls, shut off Slack and Skype and iMessage and what-all… In that session, let’s just say that you could do a bunch of two-minute tests, and with each one of those tests, you could learn something specific about the product.”

“That’s not how testing really works! That sounds like… test cases!” Frieda said.

“I know,” I grinned. “You’re right. I agree. But let’s suspend that objection for a bit while we work through this. Imagine that 90-minute session rendered as a nine-by-five table of 45 little microbursts of test activity. The kind of manager that you’ve been role-playing here thinks this will happen.”

A Manager's Fantasy of an Ideal Test Session

Frieda chuckled. “Manager’s Fantasy Edition. That’s about right.”

“Indeed,” I said. “But why?”

“Well, obviously, when I’m testing, I find bugs. When I do, I start investigating. I start figuring out how to reproduce the bug, so I can write it up. And then I write it up.”

“Right,” I said. “But even though it’s part of testing, it’s got a different flavour than the learning-focused stuff, doesn’t it?”

“Definitely,” said Frieda. “When I find a bug, I’m not covering new territory. It’s like I’m not adding to the map I’m making of the product. It’s more like I’m staying in the same place for a while as I investigate.”

“Is that a good thing to do?”

“Well…, yes,” Frieda replied. “Obviously. Investigating bugs is a big part of my job.”

“Right. And it takes time. How much?”

“Well,” Frieda began, “A lot of the time I repeat the test to make sure I’m really seeing a bug. Then I try to find out how to reproduce it reliably, in some minimum set of steps, or with some particular data. Sometimes I try some variations to see if I can find other problems around that problem. Then I’ve got to turn all that into a bug report, and log it in the tracking system. Even if I don’t write it up formally, I have to talk to the developer about it.”

“So, quite a bit of time,” I said.

“Yep,” she said. “And another thing: some bugs block me and prevent me from getting to part of the product I want to test. Trying to work around the blockers takes time too. So… like I said, while I’m doing all those things, I’m not covering new ground. It’s like being stuck in the mud on a flooded road.”

“If I were your manager, and if I were concerned about your productivity, I’d want to know about stuff like that,” I said. “That’s why, in session-based test management, we keep track of several kinds of testing time. Let’s start with two: test design and execution, in which we’re performing tests, learning about the product, gaining a better understanding of it. Of course, our focus is on activity that will either find a bug, or help us to find a bug. We call that T-time, for short, and distinguish it from bug investigation and reporting—B-time—which includes the stuff that you were just talking about. The key thing is that B-time interrupts T-time.”

Frieda’s brow furrowed. “Or, to put it another way, investigating bugs reduces test coverage.”

“Yes. And when it does, it’s important for managers to know about it. As a manager, I don’t want to be fooled about coverage—that is, how much of the product we’ve examined with respect to some model.

“You start a session with a charter that’s intended to cover something we want to know about. In a 90-minute session, it’s one thing if a tester spends 80 minutes covering some product area with testing and only ten minutes investigating bugs. It’s a completely different thing if the tester spends 80 minutes investigating bugs, and only ten minutes on tests that produced new coverage. If you only spend ten percent of the time addressing the charter, and the rest on investigating a bug that you’ve found, I’d hope you’d report that you hadn’t accomplished your charter.”

“Wait… what if I were nervous about that?” Frieda asked. “Doesn’t it look bad if I haven’t achieved the goal for the session?”

“Not necessarily,” I replied. “We can have the best of intentions and aspirations for a session before it starts. But the product is what it is, and whatever happens, happens. Whatever the charter suggests, there’s an overarching mission for every session: investigate the product and report on the problems in it. If you’re having to report lots of bugs because they’re there, and you’re doing it efficiently, that shouldn’t be held against you. Testers don’t put the bugs in. If there are problems to report, that takes time, and that’s going to reduce coverage time. If you’re finding and investigating a lot of bugs, there’s no shame in not covering what we might hope you’d cover. Plus, bug investigation helps the developers to understand what they’re dealing with, so that’s a service to the team.”

Frieda looked concerned. “Not very many managers I’ve worked with would understand that. They’d just say, ‘Finish the test cases!’ and be done with it.”

“That can be an issue, for sure. But a key part of testing work these days is to help managers to learn how to become good clients for testing. That sometimes means spelling out certain things explicitly. For instance: if you find a ton of bugs during a session, that’s bad enough, in that you’ve got a lot less than a session’s worth of test coverage. But there’s something that might be even worse on top of that: you have found only the shallowest bugs. By definition, the bugs you’ve found already were the easiest bugs to find. A swarm of shallow bugs is often associated with an infestation of deeper bugs.”

“So, in that situation, I’m going to need a few more sessions to obtain the coverage we intended to achieve with the first one,” said Frieda.

“Right. And if you’re concerned about risk, you may want to charter more, deeper testing sessions, because—again, by definition—deeper bugs are harder to find.”

Frieda paused. “You said there were several kinds of testing time. You mentioned T-time and B-time. That’s only two.”

“Yes. At the very least, there’s also Setup time, S-time. While you’re setting up for a test, you aren’t obtaining coverage, and you’re not investigating or reporting a bug. Actually, setting up is only one of the things covered by our notion of “Setup”. S-time is a kind of catch-all for time within the session in which you couldn’t have found a bug. Maybe you’re configuring the product or some tool; maybe you’re resetting the system after a problem; maybe you’re tidying up your notes.”

“Or reading about the product? Or talking with somebody about it?”, Frieda asked.

“Right. Anything that’s necessary to get the work done, but that isn’t T-time or B-time. So instead of that Manager’s Fantasy Version of the session, a real session often looks like this:”

A More Plausible Test Session

“Or even this.”

A Common Test Session

“Wow,” said Frieda. “I mean, that second one is totally realistic to me. And look at how little gets covered, and how much doesn’t get covered.”

“Yeah. When we visualize it like this, it makes an impression, doesn’t it? Trouble is, not very many testers help managers connect those dots. As you said, if you want to achieve the coverage that the manager hoped for in the Fantasy Edition, this helps to show that you’ll need something like four sessions to get it, not just one. Plus the bugs that you’ve found in that one session are by definition the shallowest bugs, the ones closest to the surface. Hidden, rare, subtle, intermittent, emergent bugs… they’re deeper.”
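(An aside, as an illustration of that arithmetic: here is a minimal sketch in Python. The minute counts and the sessions_needed function are invented for the example, not data from any real session.)

    # A minimal sketch of session accounting; the minute counts are hypothetical.
    SESSION_MINUTES = 90

    def sessions_needed(t_minutes: int, b_minutes: int, s_minutes: int) -> float:
        """Roughly how many real sessions it takes to match one session of pure T-time."""
        assert t_minutes + b_minutes + s_minutes == SESSION_MINUTES, "times should fill the session"
        if t_minutes == 0:
            return float("inf")  # no new coverage obtained at all
        return SESSION_MINUTES / t_minutes

    # Say a session yields 25 minutes of T-time, 50 of B-time, and 15 of S-time...
    print(sessions_needed(25, 50, 15))  # -> 3.6: roughly four real sessions, not one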

Frieda still had a few more questions, which we’ll get to next time.

Very Short Blog Posts (27): Saving Time

Wednesday, April 29th, 2015

Instead of studying and learning from every bug, you can save a lot of time by counting and aggregating bug reports.

That’s a good thing in its way, because if you don’t study and learn from every bug, you’ll need all the time you can get to deal with problems that seem to keep happening over and over again.

Very Short Blog Posts (1): “We Don’t Have Time for Testing”

Sunday, September 8th, 2013

When someone says “We don’t have time for testing”, try translating that into “We don’t have time to think critically about the product, to experiment with it, and to learn about ways in which it might fail.” Then ask if people feel okay about that.

Where Does All That Time Go?

Tuesday, October 30th, 2012

It had been a long day, so a few of the fellows from the class agreed to meet at a restaurant downtown. The main courses had been cleared off the table, some beer had been delivered, and we were waiting for dessert. Pedro (not his real name) was complaining, again, about how much time he had to spend doing administrivial tasks—meetings, filling out forms, time sheets, requisitions, and the like. “Everything takes so long. I want a pad of paper to take notes, I have to fill out a form for it. God help me if I run out of forms!”

“How much time do you spend on this kind of stuff each week?” I asked.

Pedro replied, “An hour a day. Maybe two, some days. Meetings…let’s say an hour and a half, on average.”

Wow, I thought—that’s a pretty good chunk of the week. I had an idea.

“Let’s visualize this,” I said. I took out my trusty Moleskine notebook. I prefer the version with the graph paper in it, for occasions just like this one. I outlined a grid, 20 squares across by two down.

Empty Week

“So you spend, on average, an hour and a half each day on compliance stuff. One-point-five times five, or 7.5 hours a week. Let’s make it eight. Put a C in eight squares.” He did that.

Compliance

“Okay,” I said. “You were griping today about how much time you spend wrestling with your test environments.”

Pedro’s eyes lit up. “Yes!” he said. “That’s the big one. See, it’s mobile stuff. We have a server component and a handset component to what we do, and the server stuff is a real bear.”

“Tell me more.”

“It’s a big deal. We’ve got one environment that models the production system. The software we’re developing has been so buggy that we can’t tell whether a given problem is general, or specific to the handset, so we have another one that we set up to do targeted testing every time we add support for a new handset. That’s the one I work with. Trouble is, setting it up takes ages and it’s really finicky. I have to do everything really carefully. I’ve asked for time to do scripting to automate some of it, but they won’t give that to me, because they’re always in such a rush. So, I do it by hand. It’s buggy, and I make the odd mistake. Either way, when I find out it doesn’t work, I have to troubleshoot it. That means I have to get on instant messaging or the phone to the developers, and figure out what’s wrong; then I have to figure out where to roll back to. And usually that’s right from the start. It wastes hours. And it’s every day.”

“Okay. Show me that on our little table, here. Use an S to represent each hour you spend each day.”

Whereupon Pedro proceeded to fill in squares. Ten of them. Ten more. And then, eight more.

Setup

“Really?!” I said. “28 hours a week divided by five days—that’s more than five hours a day. Seriously?”

“Totally,” said Pedro. “It’s most of the day, every day, honestly. Never mind the tedium. What’s really killing me is that I don’t feel like I’m getting any real testing work done.”

“No kidding. There’s no time for it. There are only four squares left in the week. Plus, something you said earlier today about tons of bugs that aren’t related to setting up?”

“Right. When it comes to the stuff that I’m actually being asked to test, there’s lots of bugs there too. So my ‘testing time’ isn’t really testing. It’s mostly taken up with trying to reproduce and document the bugs.”

“Yes. In session-based test management, that’s bug investigation and reporting—B-time. And it does interrupt test design and execution—T-time—which is what produces actual test coverage, learning about what’s actually going on in the product. So, how much B-time?” He filled in three of the squares with Bs.

Bug Investigation and Reporting

“And T-time?”

He had room left to put in one lonely little T in the lower right corner.

Testing Time

“Wow,” I laughed. “One-fortieth of your whole week is spent in getting actual test coverage. The rest is all overhead. Have you told them how it affects you?”

“I’ve mentioned it,” he said.

“So look at this,” I suggested. “It’s even more clear when we use colour for emphasis.”

With Colour

“Whoa. I never looked at it that way. And then,” he paused. “Then they ask me, ‘Why didn’t you find that bug?'”

“Well,” I said, “considering the illusion they’re probably working under, it’s not an unreasonable question.”

“What do you mean?” Pedro asked.

“What does it say on your business card?”

“‘Software Testing’.”

“And what does it say on the door of the test lab?”

“‘Test Lab’,” said Pedro.

“And they call you…?”

“Pedro.”

“No,” I laughed. “They say you’re a… what?”

“Oh. A tester.”

“So since you’re a tester, and since the door on the test lab says ‘Test Lab’, and your business card says ‘Testing’, they figure that’s all you do. The illusion is what Jerry Weinberg calls the Lumping Problem. All of those different activities—administrative compliance, setup, bug investigation and reporting, and test design and execution—are lumped into a single idea for them.” And I drew it for him.

Management's Dream

“That’s management’s illusion, there. Since, in their imagination, you’ve got forty hours of testing time in a week, it’s not unreasonable for them to wonder why you didn’t find that bug.”

“Hmmm. Right,” said Pedro.

“When in fact, what they’re getting from you is this.” And I drew it for him.

Testing Reality

“For testing—actual interaction with the product, looking for problems—you’ve got one-fortieth of the time they think you’ve got. One lonely little T. Is that part of your test report?”

“Oy,” he said. “Maybe I should show them something like this.”

“Maybe you should,” I said.

A couple of nights later, I showed that page of my notebook to James Bach over Skype. “Wow,” he said. “That guy could be forty times more productive!”

“Forty?”

“Well, no, not really, of course. But suppose the programmers checked their work a little more carefully, or suppose the testers practiced writing more concise bug reports and sharpened their investigating skill. One of those two things could cut the bug investigation time by a third. That would give more time for testing, when they’re not being interrupted by other stuff. What if they cut the setup time by a half, and that administrivia by half?”

“Four, fourteen…” I said. “That would give eighteen more hours for testing and bug investigation, for a total of 22 hours. And even if they’re still doing two hours of bug investigation for every one hour of testing time… well, that’s seven times more productive, at least.”

“Seven times the test coverage if they get some of those issues worked out, then,” said James.
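(For anyone who wants to check that arithmetic, here is a small sketch using the numbers from the story. The halving scenario is James’s what-if, not a measurement.)

    # Pedro's 40-hour week, de-lumped, plus James's what-if about halving the overhead.
    week = {"compliance": 8, "setup": 28, "bug_investigation": 3, "testing": 1}
    assert sum(week.values()) == 40

    # Halve setup and compliance time, as in the what-if above.
    improved = dict(week, compliance=week["compliance"] // 2, setup=week["setup"] // 2)
    freed_hours = sum(week.values()) - sum(improved.values())  # 18 hours freed
    testing_and_bugs = week["bug_investigation"] + week["testing"] + freed_hours  # 22 hours

    # Even at two hours of bug investigation for every hour of testing...
    t_time = testing_and_bugs / 3  # about 7.3 hours of actual coverage
    print(round(t_time / week["testing"], 1))  # about 7x the single hour Pedro gets now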

“Maybe de-lumping is the kind of thing lots of testers would want to do in their test reports,” I said.

How about you?

Time, Coverage, and Maps

Monday, October 15th, 2012

Over the last few years, people have become increasingly enthusiastic about the idea of mind mapping to help them describe or illustrate or otherwise consider test coverage. For me, Darren McMillan was the one who really got the ball rolling here, here, and here. More recently there have been other examples of using mind maps to present coverage ideas. Colleague Adam Goucher has weighed in here. But there’s another thing you can do, something that James Bach and I have been talking about in the Rapid Software Testing class for a couple of years now: You can use a mind map to help you decide how you might allocate your time when you’re dealing with an uncertain situation. You can do this with a functional or structural diagram, too. Let’s try this with an example.

  • There is a given number of hours in a typical week; let’s say 40.
  • There are some testers on the team; let’s say four.
  • Each tester can accomplish a certain amount of uninterrupted testing time in the course of a day. For this exercise, let’s say that it’s three 90-minute sessions per tester per day. That means that each tester could accomplish 15 sessions per week, so our team of four could pull off 60 sessions per week.

Now, most sessions are not entirely productive in terms of test coverage. That is, sessions are not typically dedicated entirely to on-charter test design and execution (that’s called testing time, or T-time, in session-based test management). T-time is regularly interrupted by other activities. Apart from test design and execution, whereby we obtain test coverage, there’s usually some setup time (S-time), and there’s almost always some bug investigation and reporting time (B-time). We can’t predict how well any given session is going to go, but over time we can learn to develop a sort of first-order, back-of-the-envelope, finger-in-the-air, heuristic, probably-wrong-but-right-enough kind of guess. We are talking about predicting the future, here. Let’s say that our general experience with this development group is that, between them, B-time and S-time tend to cost us about a third of our time as we’re testing.

So, in order to figure out how we’re going to spend our time this week, we can’t say that we’re going to get 60 sessions’ worth of test coverage. Our effective testing time is more like 40 idealized sessions. Let’s represent those sessions with sticky notes—one idealized session per note. For the team we’re imagining here, we’d have 40 sticky notes to work with.
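(If it helps to see that budget as a calculation, here is a back-of-the-envelope sketch using the assumptions above. The numbers, including the one-third overhead, are the guesses we just made, not facts about any particular team.)

    # Back-of-the-envelope session budget for the imagined team.
    testers = 4
    sessions_per_tester_per_day = 3  # 90-minute sessions
    days_per_week = 5
    overhead = 1 / 3  # our guess at the share lost to B-time and S-time

    raw_sessions = testers * sessions_per_tester_per_day * days_per_week  # 60 per week
    effective_sessions = round(raw_sessions * (1 - overhead))  # about 40 sticky notes
    print(raw_sessions, effective_sessions)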

Different sessions usually have different themes—tasks, activities, or approaches. As we engage with a brand-new feature, we might perform an “intake” or “survey” or “reconnaissance” session, with the goal of identifying what’s there to be tested. “Analysis” sessions might help us to decide on where certain risks are, what we want to cover, or how we want to cover it. As we get deeper into the testing of a particular feature (“deep coverage” sessions), we might want a given session to be focused on a particular kind of test coverage—straightforward capability testing, data- or domain-focused testing, or testing on a specific platform. Maybe we want to cover a feature of the product while focusing on a particular parafunctional quality criterion, like performance or usability. Perhaps we want to allocate some sessions to design or coding of test oracles. Maybe we could dedicate a session or two to exploring the product based on problem reports from the help desk. If we’d like to highlight specific dimensions of activities or coverage, we can decorate our sticky notes with icons, one or two key words, or a dot—or we can use different colours for the notes, or some combination of these things. In this example, we’ll use little icons to represent classes of activities.

Now get the team together in front of a whiteboard or flip chart to look at your structural diagram, flowchart, or mind map. Place a sticky note (perhaps with a few words of explanatory text) on each node (functional area) or line (interface) on the map you’d like to cover with an idealized session. Keep putting sticky notes on the diagram until you’ve used them up.

By the time you’re out of sticky notes, you will have begun to develop some ideas about what you might or might not be able to accomplish given a week to do it. Are some areas not covered at all? Pick up a sticky note from somewhere else, and move it around. Should certain risky or complex areas receive more attention than others? If so, they might be worthy of more nodes and more than one sticky note. Not enough sticky notes—that is, not enough time, given the people you have available—to cover the whole diagram as well as you’d like? In that case, something has to change, but if all the assumptions above still hold, the catch is that you only have 40 sticky notes to work with and to redeploy.

There’s another catch, too. Diagrams are models. Models are simplifications of reality, and so they leave stuff out. In this kind of exercise, things that don’t appear on your diagrams can be easy to forget. Some essential aspects of test coverage might not fit very well on the diagram, or indeed on any diagram. As you notice missing items or missing ideas, put them on the diagram or on a list in one corner of the space. Keep asking what might be missing from the diagram or the list. Each element on the list is a potential candidate for its own sticky note—or maybe you can cover two or three list items within a single session.

Once the diagram has been covered with general ideas, we can choose to write a more specific and refined charter for each sticky note.
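(One way to keep a running tally of the sticky notes against the budget is a simple table like the sketch below. The area names and counts are invented for illustration.)

    # Hypothetical allocation of idealized sessions (sticky notes) to areas of the map.
    budget = 40
    allocation = {
        "accounts (survey)": 2,
        "payments (deep coverage)": 8,
        "reporting (deep coverage)": 6,
        "mobile client (platform)": 7,
        "performance (parafunctional)": 5,
        "help-desk problem reports": 0,
        "oracle and tool development": 4,
        "unallocated / to be decided": 8,
    }

    used = sum(allocation.values())
    print(f"{used} of {budget} sessions allocated")
    for area, sessions in allocation.items():
        if sessions == 0:
            print(f"note: {area} gets no sessions at all this week")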

Maybe our sessions won’t be as productive as we thought. If, in the course of testing, we determine that our assumptions aren’t meshing well with reality, we can revisit the diagram and the sticky notes to re-evaluate as soon as we have any information that might threaten the schedule or the anticipated test coverage. We’d typically look at a new diagram, or look at an old diagram in a very different way, every week or two in any case.

This approach could be adapted to mesh very well with the ideas that Paul Holland outlines in this article.

Making things visible provides a point of departure for conversations about strategy, logistics, and timing. It’s important for us to have the skill of telling a story about what there is to test, how we could test it, what we could cover, and what our constraints might be. Some simple visual aids can help us to illustrate that story.

Braiding The Stories (Test Reporting Part 2)

Friday, February 24th, 2012

We were in the middle of a testing exercise at the Amplifying Your Effectiveness conference in 2005. I was assisting James Bach in a workshop that he was leading on testing. He presented the group with a mysterious application written by James Lyndsay—an early version of one of the Black Box Test Machines. “How many test cases would you need to test this application?” he asked.

Just then Jerry Weinberg wandered into the room. “Ah! Jerry Weinberg!” said James. “One of the greatest testing experts in the world! He’ll know the answer to this one. How many test cases would you need to test this application, Jerry?”

Jerry looked at the screen for a moment. “Three,” he said, firmly and decisively.

James knew to play along. “Three?!“, he said, in a feigned combination of amazement, uncertainty, and curiosity. “How do you know it’s three? Is it really three, Jerry?”

“Yes,” said Jerry. “Three.” He paused, and then said drily, “Why? Were you expecting some other number?”

In yesterday’s post, I was harshly critical of pass vs. fail ratios, a very problematic yet startlingly common way of estimating the state of the product and the project. When I point out the mischief of pass vs. fail ratios, some people object. “In the real world,” they say, “we have to report pass vs. fail ratios to our managers, because that’s what they want.” Yet bogus reporting is antithetical to the “real world”. Pass vs. fail ratios come from the fake world, a world where numbers have magical properties to soothe troubled and uncertain souls. Still, there’s no question that managers want something. It’s our mandate to give them something of value.

Some people say that managers want numbers because they want to know that we’re measuring. I’ve found two ways of thinking about measurement that have been very useful to me. One is the definition from Kaner and Bond’s splendid paper “Software Engineering Metrics: What Do They Measure and How Do We Know?”: “Measurement is the empirical, objective assignment of numbers, according to a rule derived from a model or theory, to attributes of objects or events with the intent of describing them.” I think that’s a superb definition of quantitative measurement, and the paper includes a set of probing questions to test the validity of a quantitative measurement. Pass vs. fail ratios fall down badly when they’re subjected to those tests.

Jerry Weinberg offers another definition of measurement that I think is more in line with what managers really want: “Measurement is the art and science of making reliable (and significant) observations.” (The main part of the definition comes from Quality Software Management, Vol. 2: First-Order Measurement; the parenthetical comes from recent correspondence over Twitter.) That’s a more general, inclusive definition. It incorporates Kaner and Bond’s notion of quantitative measurement, but it’s more welcoming to qualitative, first-order approaches. First-order measurement, as Jerry describes it, provides answers to questions like “What seems to be happening?” and “What should I do now?” It entails a minimum of fuss, and tends to be direct, unobtrusive, inexpensive, and qualitative, leading either to immediate action or a decision to seek more information. It’s a common, misleading, and often expensive mistake in software development to leap over first-order measurement and reporting in favour of second-order measurement—less direct, more quantified, more abstract, and based on more elaborate and vulnerable models.

My experience, as a tester, a programmer, a program manager, and a consultant, tells me that to manage a project well, you need a good deal of immediate and significant information. “Immediate” here doesn’t only mean timely; it also means unmediated, without a bunch of stuff getting in between you and the observation. In particular, managers need to know about problems that threaten the value of the product and the on-time, successful completion of the project. That knowledge requires more than abstract data; it requires information. So, as testers, how can we inform the decision-makers? In our Rapid Software Testing class, James Bach and I have lately taken to emphasizing this: We must learn to describe and report on the product, our testing, and the quality of our testing. This involves constructing, editing, narrating, and justifying a story in three lines that weave around each other like a braid. Each line, or level, is its own story.

Level 1: Tell the product story. The product story is a qualitative report on how the product can work, how it fails, and how it might fail in ways that matter to our clients. “Working”, “failure”, and “what matters” are all qualitative evaluations. Quality is value to some person; in a business setting, quality is value to some person who matters to the business. A qualitative report about a product requires us to relate the nature of the product, the people who matter, and the presence or absence of value, risks, and problems for those people. Qualitative information makes it possible for our clients to make informed decisions about quality.

Level 2: To make the product story credible, tell the testing story. The testing story is about how we configured, operated, observed, and evaluated the product; what we actually did and what we actually saw. The testing story gives warrant to the product story; it helps our clients understand why they should believe and trust the product story we’re giving. The testing story is centred around the coverage that we obtained and the oracles that we applied. Coverage is the extent to which we’ve tested the program; it’s about where we’ve looked and how we’ve looked, and it’s also about what’s uncovered—where we might not have looked yet, and where we don’t intend to look. Oracles are central to evaluation; they’re the principles and mechanisms that allow us to recognize a problem. The product story will likely feature problems in the product; the testing story, where necessary, includes an account of how we knew they were problems, for whom they would be problems, and inferences about how serious the problems might be. We can make inferences about the significance of problems, but not ultimate conclusions, since the decision of what matters and what constitutes a problem lies with the product owner. The product story and our clients’ reactions to it will influence the ongoing testing story, and vice versa.

Level 3: To make the testing story credible, tell a story about the quality of the testing. Just as the product story needs warrant, so too does the testing story. To tell a story about the quality of testing requires us to describe why the testing we’ve done has been good enough, and why the testing we haven’t done hasn’t been so important so far. The quality-of-testing story includes details on what made testing harder or slower, what made the product more or less testable, what the risks and costs of testing are, and what we might need or recommend in order to provide better, more accurate, more timely information. The quality-of-testing story will shape and be shaped by the other two stories.

Develop skills to tell and frame stories. People sometimes justify presenting invalid numbers in lieu of stories by saying that numbers are “efficient”. I think they mean “fast”, since efficiency of communication depends not only on speed, but also on value, relevance, validity, and the level of detail your client needs. In order to frame stories appropriately and hit the right level of detail…

Don’t think data feed; think the daily news. Testing is like investigative journalism, researching and delivering stories to people. The newspaper business knows how to direct attention efficiently to the stories in which we’re interested, such that we get the level of detail that we seek. Some of those strategies include:

  • Headlines. A quick glance over each page tells us immediately what, in the editors’ judgement, are the most salient aspects of any given story. Headlines come in different sizes, relative to the editors’ assessment of the importance of the story.
  • Front page. The paper comes folded. The stories that the paper deems most important to its reader are on the front page, above the fold. Other important stories are on the front page below the fold. The page is laid out to direct our attention to what we find most relevant, and to allow us to focus and refocus on items of interest.
  • Continuation. When an entire story is too long to fit on the front page, it’s abbreviated and the story continues elsewhere. This gives the reader the option of following the story or looking at other items on the front page.
  • Coverage areas. The newspaper is organized into sections (hard news, business, sports, life and leisure, arts, real estate, cars, travel, and so forth). Each section comes with its own front page, which generally includes headlines and continuations of its own.
  • Structured storytelling. Newspaper stories tend to be organized in spiralling levels of detail, such that the story is set up to follow the inverted pyramid (the link is well worth reading). The story typically begins with the most newsworthy information, usually immediately addressing the five W questions—who, what, where, why, and when, plus how—and the story builds from there. The key is that the reader can absorb information to the level of detail she seeks, continuing to the end of the story or jumping out when she’s satisfied.
  • Identifying who is involved and who is affected. Reporters and editors contextualize their stories. Just as in testing, people are the most important element of the context. A story is far more compelling when it affects the reader or people that the reader cares about. A good story often helps to clarify why the reader should care.
  • Varying approaches to delivering information. Newspapers often use a picture to help illustrate or emphasize an important aspect of a story. In the business or sports sections, where quantitative data is often crucial, information may be organized in tables, or trends may be illustrated with charts. Notice that the stories—first-order reports—are always given greater prominence than the tables of stock quotes, league standings, and line scores.
  • Sidebars. Some stories are illuminated by background information that might break the flow of the main story. That information is presented in parallel; in another thread, as we might say.
  • Daily (and in the world of the Web, continuous) delivery of information. My newspaper arrives at a regular time each day, a sort of daily heartbeat for the news cycle. The paper’s Web site is updated on a continuous basis. Information is available both on a supply and a demand basis; both when I expect it and when I seek it.
  • Identifiable sources. Well-researched stories gain credibility by identifying how, where, when, and from whom the information was obtained. This helps to set up degrees of trust and skepticism in the reader.

One important note: These approaches apply to more than text. Testers need to extend these patterns not only to written or mechanical forms, but to oral discourse.

I’ll have more suggestions and additional parallels between test reporting and newspapers in the next post in this series.

What Exploratory Testing Is Not (Part 2): After-Everything-Else Testing

Friday, December 16th, 2011

Exploratory testing is not “after-everything-else-is-done” testing. Exploratory testing can (and does) take place at any stage of testing or development.

Indeed, TDD (test-driven development) is a form of exploratory development. TDD happens in loops, in which the programmer develops a check, then develops the code to make the check pass (along with all of the previous checks), then fixes any problems that she has discovered, and then loops back to implementing a new bit of behaviour and inventing a new check. The information obtained from each loop feeds into the next; and the activity is guided and structured by the person or people involved in the moment, rather than in advance. The checks themselves are scripted, but the activity required to produce them and to analyze the results is not. Compared to the complex cognitive activity—exploratory, iterative—that’s going on as code is being developed, the checks themselves—scripted, linear—are trivial.
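(As a tiny, invented illustration of such a loop: the check below is scripted and trivial, while the thinking that produced it, and the code that satisfies it, is exploratory. The greet function and its check are mine, not from any particular product.)

    # One TDD micro-loop: write a failing check, then just enough code to make it pass.
    def test_greeting_includes_name():
        assert greet("Ada") == "Hello, Ada!"  # the check, written first

    def greet(name: str) -> str:
        return f"Hello, {name}!"  # just enough code to satisfy the check

    test_greeting_includes_name()  # the loop closes; the next check starts another loop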

Requirement review is an exploratory activity too. Review of requirements (or specifications, or user stories, or examples) tends to happen early on in a development cycle, whether it’s a long or a short cycle. While review might be guided by checklists, the people involved in the activity are making decisions on the fly as they go through loops of design, investigation, discovery, and learning. The outcome of each loop feeds back into the next activity, often immediately.

Code review can also be done in a scripted way or an exploratory way. When humans analyze the code, it’s an unscripted, self-directed activity that happens in loops; so it is exploratory. We call it review, but it’s gathering information with the intention of informing a decision; so it is testing. There is a way to review code that involves the application of scripted processes, via tools that people generally call “static testing tools”. When a machine parses code and produces a report, by definition it’s a form of checking, and it’s scripted. Yet using those tools productively requires a great deal of exploratory activity. Parsing and interpreting the report and responding to it is polymorphic, human action—unscripted, open-ended, iterative, and therefore exploratory.

Learning about a new product or a new feature is an exploratory activity if you want to do it or foster it well. Some suggest that test scripts provide a useful means of training testers. Research into learning shows that people tend to learn more quickly and more deeply when their learning is based on interaction and feedback; guided, perhaps, but not controlled. If you really want to learn about a product, try creating a mind map, documenting some aspect of the program’s behaviour, or creating plausible scenarios in which people might use—or misuse—the product. All of these activities promote learning, and they’re all exploratory activities. There’s far more information that you can use, apply, and discover than a script can tell you about. Come to think of it… where does the script come from?

Developing a test procedure—even developing a test script, whether for a machine or a human to follow, or developing the kind of “test” that skilled testers would call a demonstration—is an exploratory activity. There is no script that specifies how to write a new script for a particular purpose. Heard about a new feature and pondering how you might test it? You’ve already begun testing; you’re doing test design and you’re probably learning as you go. To the extent that you use the product or interact with it, bounce ideas off other people, or think critically about your design, you’re testing, and you’re doing it in an unscripted way. Some might suggest that certain tools create scripts that can perform automatic checks. Yet reviewing those checks for appropriateness, interpreting the results, and troubleshooting unexpected outcomes are all exploratory activities.

Suppose that a programmer, midway through a sprint, decides that she’d like some feedback on the work that she’s done so far on a new module. She hands you a bit of code to look at. You might interact with the code directly through a test tool that she provided, or (say) via the Ruby interpreter, or you might write some script code to exercise some of the functions in the module. In any event, you find some problems in it. In order to investigate a problem that you’ve discovered, you must explore. You must explore whether your recognition of the problem was triggered by your own interaction with the program or by a mechanically executed script. You’re in control of the activity; each new test around the problem feeds back into your choice of the next activity, and into the story that you’re going to tell about the product.

All of the larger activities that I’ve described above are exploratory, and they all happen before you have a completed function or story or sprint. Exploratory testing is not a stage or phase of testing to be performed after you’ve performed your other test techniques. Exploratory testing is not an “other” test technique, because it’s not a technique at all. Exploratory testing is not a thing that you do, but rather a way that you work (and think, and act), the hallmarks being who (or what) is in control, and the extent to which your activity is part of a loop, rather than a straight line. Any test technique can be applied in a scripted way or in an exploratory way. To those who say “we do exploratory testing after our acceptance tests are all running green”, I would suggest looking carefully and observing the extent to which you’re doing exploratory testing all the way along.

Next in the series: What Exploratory Testing Is Not (Part 3): Tool-Free Testing

And, of course, in the face of all these instances of what exploratory testing is not, you might want to know our current take on what exploratory testing is.

Should Testers Play Planning Poker?

Wednesday, October 26th, 2011

My colleague and friend Eric Jacobson, who recently (as I write) did a bang-up job on his first conference presentation at STAR West 2011, asks a question in response to this blog post from 2006. (I like it when people reflect on an issue for a few years.) Eric asks:

You are suggesting it may not make sense for testers to give time-based estimates to their teams, but what about relative estimates? Let’s say a Rapid Software Tester is asked to participate in Planning Poker (relative-based story estimation) on an Agile Scrum team. I’ve always considered this a golden opportunity. Are you suggesting said tester may want to refuse to participate in the Planning Poker?

Having observed Planning Poker in action, I’m conflicted. Estimating anything is always a bit of a dodgy business, even at the best of times. That’s especially true for investigation and in particular for discovery. (I’ve written about some of the problems with estimation here and in subsequent posts, and with how those problems pertain to testing here.) Yet Planning Poker may be one way to get a good deal closer to the best of times. I like the idea of testers hearing what’s going on in planning sessions, and of offering perspective on the possible implications of work or change. On the other hand, at Planning Poker sessions I’ve observed or participated in, testers are often pressured to lower their numbers. In an environment where there’s trust, there tends to be much less pressure; in an environment where there’s less trust, I’d take pressure to lower the estimate as a test result with several possible interpretations. (I leave those interpretations as an exercise for the reader, but don’t stop until you get to five, at least.)

In any case, some fundamental problems remain: First, testing is oriented towards discovering things, not building things. At the root of it all, any estimate of how long it will take to test something is like estimating how long it will take you to evaluate someone’s ability to speak Spanish (which I wrote about here), and discovering problems in their ability to express themselves. If you already know something or can reasonably anticipate it, that helps a lot, and the Planning Poker approach (among many others) can help with that to some degree.

The second problem is that there’s not necessarily symmetry between the effort in creating something and the effort in testing it. A function or feature that takes very little effort to program might take an enormous amount of effort to test. What kinds of variation could we put into data, workflow, timing, platform dependencies and interactions, scenarios, and so forth? Meanwhile, a feature that takes significant amounts of programming effort could take almost no time to test (since “programming effort” could include an enormous amount of testing effort). There are dozens of factors involved, including the amount of testing the programmers do as they code; what kind of review is being done; what the scope of the change is; when particular discoveries get made (during “development time” or “testing time”); the skill of the parties involved; the testability of the product under test; how buggy the finished feature is (in which case there will be more time needed for investigation and reporting)… Planning Poker doesn’t solve the asymmetry problem, but it provides a venue for discussing it and getting started on sorting it out.

The third problem, closely related to the second, is this idea that all testing work associated with developing something must and shall happen within the same iteration. Testing never ends; it only stops. So it’s folly to think that all testing for a given amount of programming work can always fit into the same iteration in which the work is done. I’d argue that we need a more nuanced perspective and more options than that. The decision as to how much testing we’ll need is informed by many factors. Paradoxically, we’ll need some testing to help reveal and inform our notions of how much testing we’ll need.

I understand the desire to close the book on a development story within the sprint. I often—even usually—share that desire. Yet many kinds of testing work must respond to development work, and in such cases the development work has to be complete in some lesser sense than “fully tested”. Many kinds of confirmatory checking work, it seems to me, can be done within the same sprint as the programming work; no problem there. Yet it seems to me that other kinds of testing can reasonably wait for subsequent sprints—indeed, must wait for subsequent sprints, unless we’d like to have programmers stop all programming work altogether after a certain day in the sprint. Let me give you an example: in big banks, some kinds of transactions take several days to wend their way through batch processes that are run overnight. The testing work associated with that can be simulated, for sure (indeed, one would hope that most of such work would be simulated), but only at the expense of some loss of realism. For the test, whether the realism is important or not is always an open question with a fallible answer. Instead of making sure that there’s NO testing debt, consider reasonable, small, and sustainable amounts of testing debt that spans iterations. Agile can be about actual agility, instead of dogma.

So… If playing Planning Poker is part of the context, go for it. It’s a heuristic approach to getting people to consider testing more consciously and thoughtfully, and there’s something to that. It’s oriented towards estimating things in a more comprehensible time frame, and in digestible chunks of task and effort. Planning Poker is fallible, and one approach among many possible approaches. Like everything else, its usefulness depends mostly on the people using it, and how they use it.

Testing: Difficult or Time-Consuming?

Thursday, September 29th, 2011

In my recent blog post, Testing Problems Are Test Results, I noted a question that we might ask about people’s perceptions of testing itself:

Does someone perceive testing to be difficult or time-consuming? Who? What’s the basis for that perception? What assumptions underlie it?

The answer to that question may provide important clues to the way people think about testing, which in turn influences the cost and value of testing.

As an example, a pseudonymous person (“PM Hut”) who is evidently associated with project management in some sense (s/he provides the URL http://www.pmhut.com) answered my questions above.

Just to answer your question “Does someone perceive testing to be difficult or time-consuming?” Yes, everyone, I can’t think of a single team member I have managed who doesn’t think that testing is time consuming, and they’d rather do something else.

This, alas, isn’t an unusual response. To someone like me who offers help in increasing the value and reducing the cost of testing, it triggers some questions that might prompt reframes or further questions.

  • What do the team members think testing is? Do they think that it’s something ancillary to the project, rather than an essential and integrated aspect of software development? To me, testing is about gathering information and raising awareness that’s essential for identifying product risks and steering the project. That’s incredibly important and valuable.

    So when the team members are driving a car, do they perceive looking out the windshield to be difficult or time-consuming? Do they perceive looking at the dashboard to be difficult or time-consuming? If so, why? What are the differences between the way they obtain awareness when they’re driving a car, versus the way they obtain awareness when they’re contributing to the development of a product or service?

  • Do the team members think testing is the mindless repetition of actions and observation of specific outputs, as prescribed by someone else? If so, I’d agree with them that testing is an unpalatable activity—except I don’t call that testing. I call it checking, and I’d rather let a machine do it. I’d also ask if checking is being done automatically by the programmers at lower levels where it tends to be fast, cheap, easy, useful and timely—or manually at higher levels, where it tends to be slower, more expensive, more difficult, less useful, and less timely—and tedious?
  • Is testing focused mostly on confirmation of things that we already know or hope to be true? Is it mostly focused on the functional aspects of the program (which are amenable to checking)? People tend to find this dull and tedious, and rightly so. Or is testing an active search for new information, problems, and risks? Does it include focus on parafunctional aspects of the product—the things that provide important perceptions of real value to real people? Are the testers given the freedom and responsibility to manage a good deal of their own investigation? Testers tend to find this kind of approach a lot more engaging and a lot more interesting, and the results are typically more wide-ranging, informative, and valuable to programmers and managers.
  • Is testing overburdened by meaningless and valueless paperwork, bureaucracy, and administrivia? How did that come to pass? Are team members aware that there are simple, lightweight, rapid, and highly effective ways of planning, recording, and reporting testing work and project status?
  • Are there political issues? Are testers (or people acting temporarily in a testing role) routinely blown off (as in this example)? Are the nuggets of information revealed by testing habitually dismissed? Is that because testing is revealing trivial information? If so, is there a problem with specific testing skills like modeling the test space, determining coverage, determining oracles, recording, or reporting?
  • Have people been trained on the basis of testing as a skilled, sophisticated thinking art? Or is testing something for which capability can be assessed by a trivial, 40-question multiple choice exam?
  • If testing is being done well (which given people’s attitudes expressed above would be a surprise), are programmers or managers afraid of having to deal with the information that testing reveals? Does that lead to recrimination and conflict?
  • If there’s a perception that testing is by its nature dull and slow, are the testers aware of the quick testing approaches in our Rapid Software Testing class (PDF, pages 97-99), in the Black Box Software Testing course offered by the Association for Software Testing, or in James Whittaker’s How to Break Software? Has anyone read and absorbed Lessons Learned in Software Testing?
  • If there’s a perception that technical reviews are slow, have the testers, programmers, or managers read Perfect Software and Other Illusions About Testing? Do they recognize the ways in which careful observation provides us with “instant reviews” (see Perfect Software, page 143)? Has anyone on the team read any other of Jerry Weinberg’s books on software management and measurement?
  • Have the testers, programmers, and managers recognized the extent to which exploratory testing is going on all the time? Do they recognize that issues revealed by testing might be even more important than bugs? Do they understand that every test result and every testing problem points to meta-information that can be extremely valuable in managing the project?

On PM Hut’s own Web site, there’s an article entitled “Why Project Managers Fail”. The author, Jim Benson, lists five common problems, each of which could be quickly revealed by looking at testing as a source of information, rather than by simply going through the motions. Take it from the former program manager of a product that, in its day, was the best-selling piece of commercial software in the world: testers, testing, and the information they reveal are a project manager’s best friends and most valuable assets—when you have the awareness to recognize them.

Testing need not be difficult, tedious or time-consuming. A perception that it is so, or that it must be so, suggests a problem with testing as practised or testing as perceived. Astute managers and teams will investigate that important and largely mistaken perception.

Testing Problems Are Test Results

Tuesday, September 6th, 2011

I often do an exercise in the Rapid Software Testing class in which I ask people to catalog things that, for them, make testing harder or slower. Their lists fit a pattern I hear over and over from testers (you can see an example of the pattern in this recent question on Stack Exchange). Typical points include:

  • I’m a tester working alone with several programmers (or one of a handful of testers working with many programmers).
  • I’m under enormous time pressure. Builds are coming in continuously, and we’re organized on one- or two-week development cycles.
  • The product(s) I’m testing is (are) very complex.
  • There are many interdependencies between modules within the product, or between products.
  • I’m seeing a consistent pattern of failures specifically related to those interdependencies; the tiniest change here can have devastating impact there—or anywhere.
  • I believe that I have to run a complete regression test on every build to try to detect those failures.
  • I’m trying to cope by using automated checks, but the complexity makes the automation difficult, the program’s testing hooks are minimal at best, and frequent product changes make the whole relationship brittle.
  • The maintenance effort for the test automation is significant, at a cost to other testing I’d like to do.
  • I’m feeling overwhelmed by all this, but I’m trying to cope.

On top of that,

  • The organization in which I’m working calls itself Agile.
  • Other than the two-week iterations, we’re actually using at most two other practices associated with Agile development: typically, daily scrums or Kanban boards.

Oh, and for extra points,

  • The builds that I’m getting are very unstable. The system falls over under the most basic of smoke tests. I have to do a lot of waiting or reconfiguring or both before I can even get started on the other stuff.

How might we consider these observations?

We could choose to interpret them as problems for testing, but we could think of them differently: as test results.

Test results don’t tell us whether something is good or bad, but they may inform a decision, or an evaluation, or more questions. People observe test results and decide whether there are problems, what the problems are, what further questions are warranted, and what decisions should be made. Doing that requires human judgement and wisdom, consideration of lots of factors, and a number of possible interpretations.

Just as for automated checks and other test results, it’s important to consider a variety of explanations and interpretations for testing meta-results—observations about testing. If we don’t do that, we risk missing important problems that threaten the quality of testing effort, and the quality of the product, too.

As Jerry Weinberg points out in Perfect Software and Other Illusions About Testing, whatever else something might be, it’s information. If testing is, as Jerry says, gathering information with the intention of informing a decision, it seems a mistake to leave potentially valuable observations lying around on the floor.

We often run into problems when we test. But instead of thinking of them as problems for testing, we could also choose to think of them as symptoms of product or project problems—problems that testing can help to solve.

For example, when a tester feels outnumbered by programmers, or when a tester feels under time pressure, that’s a test result. The feeling often comes from the programmers generating more work and more complexity than the tester can handle without help.

Complexity, like quality, is a relationship between some person and something else. Complexity on its own isn’t necessarily a problem, but the way people react to it might be. When we observe the ways in which people react to perceived complexity and risk, we might learn a lot.

  • Do we, as testers, help people to become conscious of the risks—especially the Black Swans—that typically accompany complexity?
  • If people are conscious of risk, are they paying attention to it? Are they panicking over it? Or are they ignoring it and whistling past the graveyard? Or…
  • Are people reacting calmly and pragmatically? Are they acknowledging and dealing with the complexity of the product?
  • If they can’t make the product or the process that it models less complex, are they at least taking steps to make that product or process easier to understand?
  • Might the programmers be generating or modifying code so quickly that they’re not taking the time to understand what’s really going on with it?
  • If someone feels that more testers are needed, what’s behind that feeling? (I took a stab at an answer to that question a few years back.)

How might we figure out answers to those questions? One way might be to look at more of the test results and test meta-results.

  • Does someone perceive testing to be difficult or time-consuming? Who?
  • What’s the basis for that perception? What assumptions underlie it?
  • Does the need to investigate and report bugs overwhelm the testers’ capacity to obtain good test coverage? (I wrote about that problem here.)
  • Does testing consistently reveal consistent patterns of failure?
  • Are programmers consistently surprised by such failures and patterns?
  • Do small changes in the code cause problems that are disproportionately large or hard to find?
  • Do the programmers understand the product’s interdependencies clearly? Are those interdependencies necessary, or could they be eliminated?
  • Are programmers taking steps to anticipate or prevent problems related to interfaces and interactions?
  • If automated checks are difficult to develop and maintain, does that say something about the skill of the tester, the quality of the automation interfaces, or the scope of checks? Or about something else?
  • Do unstable builds get in the way of deeper testing?
  • Could we interpret “unstable builds” as a sign that the product has problems so numerous and serious that even shallow testing reveals them?
  • When a “stable” build appears after a long series of unstable builds, how stable is it really?

Perhaps, with the answers to those questions, we could raise even more questions.

  • What risks do those problems present for the success of the product, whether in the short term or the longer term?
  • When testing consistently reveals patterns of failures and attendant risk, what does the product team do with that information?
  • Are the programmers mandated to deliver code? Or are the programmers mandated to deliver code with a warrant that the code does what it should (and doesn’t do what it shouldn’t), to the best of their knowledge? Do the programmers adamantly prefer the latter mandate?
  • Is someone pressuring the programmers to make schedule or scope commitments that they can’t really fulfill?
  • Are the programmers and the testers empowered to push back on scope or schedule pressure when it adds to product or project risk?
  • Do the business people listen to the development team’s concerns? Are they aware of the risks that testers and programmers bring to their attention? When the development team points out risks, do managers and business people deal with them congruently?
  • Is the team working at a sustainable pace? Or is the product and the project being overwhelmed by complexity, interdependencies, fragility, and problems that lurk just beyond the reach of our development and testing effort?
  • Is the development team really Agile, in the sense of the precepts of the Agile Manifesto? Or is “agility” being used in a cargo-cult way, using practices or artifacts to mask over an incoherent project?

Testers often feel that their role is to find, investigate, and report on bugs in a running software product. That’s usually true, but it’s also a pretty limited view of what testers could test. A product can be anything that someone has produced: a program, a requirements document, a diagram, a specification, a flowchart, a prototype, a development process model, a development process, an idea. Testing can reveal information about all of those things, if we pay attention.

When seen one way, the problems that appear at the top of this article look like serious problems for testing. They may be, but they’re more than that too. When we remember Jerry’s definition of testing as “gathering information with the intention of informing a decision”, then everything that we notice or discover during testing is a test result.

Here’s a follow-up to this post. (See also this discussion for an example of looking beyond the test result for possible product and project risks.)

This post was edited in small ways, for clarity, on 2017-03-11.