Blog Posts from December, 2008

Are Testers Still Needed on Agile Projects?

Wednesday, December 17th, 2008


Please don’t misunderstand. I love the principles in the Agile Manifesto (including Robert Martin’s proposal for a fifth: “Craftsmanship over crap”). I have great enthusiasm and respect for those who have advocated the role of testing in Agile projects. But in order to push these ideas forward, let’s test our own stuff, so that it doesn’t accidentally bring discredit to the ideas.

What Should A Test Plan Contain?

Wednesday, December 3rd, 2008

In response to this posting, Clive asks, “So in your opinion what should a test plan contain?”

First, Clive, thank you for asking.

Let’s consider first what we might mean by “plan”. The way James Bach and I talk about planning (and the way we teach it in the Rapid Software Testing course) is that a plan is the sum or intersection of strategy and logistics. Strategy is the set of ideas that guide your test design. Logistics is the set of ideas that guide your application of resources. Put those things together, and you have a plan. The most important thing to note about this is that a plan is not a physical thing; it’s a set of ideas. Thus, it’s important to keep clear the difference between the plan and the planning documents—that is, the documents that contain some information about the plan.

It’s possible to interpret your question in at least two ways—first, as a question about the plan, and second as a question about the planning document. Let’s start with the plan.

For ideas on strategy, I consult the Heuristic Test Strategy Model (download it and have a look). Heuristics are fallible methods for solving a problem or making a decision; “heuristic” as an adjective means “(fallibly) conducive to learning”. “Heuristic” here does triple duty, modifying “model” (all models are heuristic), “strategy” (all strategies are heuristic too), and “test” (all tests are heuristic). The HTSM is a set of guideword heuristics and associated questions; you can find it here. I’ve memorized the guidewords. They’re not hard to remember, using the mnemonics found in the course notes and a little practice.

Having the guidewords in my head makes it fast and easy to come up with lots of questions to ask, ideas to follow, and risks to evaluate in four broad categories.

  • Project environment—questions about context (customer, information sources, developer relations, test team, equipment and tools, schedule, test item (what we’re testing), and deliverables (or work products));
  • Product elements and their dimensions, important in evaluating coverage (structure, function, data, platforms, operations, and time);
  • Quality criteria—questions about the characteristics of the product that appeal to the users we like and discourage users we don’t like (capability, reliability, usability, security, scalability, performance, installability, compatibility, supportability, testability, maintainability, portability, and localizability); and
  • Test techniques by which we might interact with the product (function tests, domain tests, stress tests, flow tests, scenario tests, claims-based tests, user tests, risk-based tests, and automatic tests).
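
The four categories above can be carried around as a simple data structure. Here’s a minimal sketch (my own illustration, not part of the HTSM itself) that stores the guidewords listed in this post and turns each one into a generic prompt question; the question template is an assumption of mine, not something the model prescribes:

```python
# The category and guideword names below come from the lists in this post;
# the question template is invented for illustration.

HTSM_GUIDEWORDS = {
    "Project Environment": [
        "customer", "information sources", "developer relations",
        "test team", "equipment and tools", "schedule",
        "test item", "deliverables",
    ],
    "Product Elements": [
        "structure", "function", "data", "platforms", "operations", "time",
    ],
    "Quality Criteria": [
        "capability", "reliability", "usability", "security", "scalability",
        "performance", "installability", "compatibility", "supportability",
        "testability", "maintainability", "portability", "localizability",
    ],
    "Test Techniques": [
        "function tests", "domain tests", "stress tests", "flow tests",
        "scenario tests", "claims-based tests", "user tests",
        "risk-based tests", "automatic tests",
    ],
}

def prompts(category):
    """Turn each guideword in a category into a generic prompt question."""
    return [f"What does '{word}' suggest about this product or project?"
            for word in HTSM_GUIDEWORDS[category]]

# Print a few prompts for one category.
for question in prompts("Quality Criteria")[:3]:
    print(question)
```

The point of the sketch is the same as the point of memorizing the guidewords: a short, stable list of labels from which an arbitrarily large set of context-specific questions can be generated on demand.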

For ideas on application of resources, I consider the Heuristic Test Planning Context Model (download this, too), which helps me think about who my clients are, the mission that I’m being asked to accomplish and the givens that I’ve got. Then I ask myself (and, if necessary, my client) questions about the things that might be missing from the givens. Based on risk, the value of the information we seek, and the cost of discovering it, we might decide to go with what we’ve got and exploit whatever resources we have, or we might decide to seek and apply more resources.

With these two tools, I have lots of ideas in my head, and the sum of those ideas constitutes the plan. The guidewords lend structure and stability to the plan; the ever-changing context and choices constantly focus and re-focus the plan. The planning ideas must be diversified, risk-focused, specific to the product or system that we’re testing, and practical and achievable. So, in one sense, the plan contains all of these ideas.

Note that the plan itself is entirely thought-stuff, so in a literal physical sense the plan doesn’t actually contain anything. Planning documents contain representations—literally, re-presentations—of elements of the plan. Planning documents are highly audience- and purpose-dependent. They might include

  • a subset of the ideas that might be in my head;
  • mass storage for things that I might forget;
  • a means of co-ordinating the test team’s work;
  • ideas about risk, including ideas about knowns, known unknowns, and potential unknown unknowns;
  • a checklist of known problems in our application;
  • a general set of ideas about test coverage;
  • a specific set of ideas on co-ordinating risk ideas with tasks that we believe will help us to investigate and understand them;
  • notes on how we intend to assign people to certain tasks;
  • a detailed schedule for specific activities that we anticipate;
  • a very broad schedule for activities that we haven’t hammered out yet; or
  • anything else that might be useful to impart to some person in some context.

The audience for the document might include

  • me;
  • my test team;
  • my manager;
  • someone else who might be advising me on strategy ideas;
  • her manager, and so forth up to the CEO;
  • my client’s clients;
  • other test teams (say, an outsource test team, or my client’s test team);
  • the programmers;
  • regulators or auditors;
  • or other people or agencies.

A test planning document might take the form of

  • a sketch on the back of a napkin or hotel stationery to help guide a conversation;
  • a few lines in a pocket Moleskine notebook;
  • several pages in a large Moleskine notebook;
  • an email message with some suggestions from some person to some tester;
  • a diagram on a whiteboard;
  • a mind map;
  • a hierarchical text outline;
  • a Gantt chart;
  • a specific set of test ideas or test cases in some ginormous requirements and test management tool;
  • a lengthy but rough Word document, supplemented with tables of data in Excel spreadsheets;
  • a highly formal, polished, formatted document for presentation to a customer or a regulatory agency;
  • or anything else you can think of that describes or models some part of your intentions.

In the Appendices to the Rapid Software Testing course, there’s an excellent set of examples of test planning documents, from the very sketchy to the broadly general to the highly specific. See pages 47 – 166. There’s also an example of a general procedure here. Note the comment on the page that links to it:

I produced this procedure for Microsoft to help them do a better job of assuring that applications that claim to be Windows 2000 compatible really are compatible. The procedure itself is documented in 6 pages. As far as I know it is the first published exploratory testing procedure. It’s used along with a second non-exploratory procedure (which is 400 pages long!) to perform the certification test. What’s interesting about that is the fact that my 6 pages represent about one third of the total test effort.

So what should a test planning document look like? The default answer, to me, is nothing, because the test plan is idea-stuff, and translating ideas into some other form costs something. However, there may be value associated with communicating the idea to someone else, so the default answer is not the final answer. The final answer is “the very least expensive representation of the idea that can sufficiently store or communicate the idea.” That might be vague—especially that “sufficiently” business. Yet there are a few important things to remember.

  • Planning comes with opportunity cost attached. In general, the more time we spend planning, the less time we spend interacting with the product. This is not to advocate no planning, but rather the least amount of planning that nonetheless sufficiently guides the accomplishment of the mission.
  • Writing plans—translating thoughts into some other format—tends to take more time than planning—thinking those thoughts.
  • Writing things down, sketching them, outlining them, can be a productive form of working, learning, and remembering for many people. So writing things down usually has some value.
  • Writing things down tends to become more important when people are separated in distance, time, and knowledge and when the knowledge is valuable. Documents can be useful in transmitting specific and detailed information to others. Again, in this context, writing things down can have some value.
  • Writing things down may help people to learn something, or it may give people the illusion of having learned it. It may even prevent people from learning things on their own. (This is not a new problem; see Plato’s Phaedrus, and the dialog between the King of Egypt and the god Theuth).
  • Conversation (or an internal questioning process) may be more useful in a circumstance where things are not known, but must be explored.
  • Aspects of the two approaches can supplement each other.
  • One useful heuristic, which I learned from Cem Kaner, is to ask whether your test planning document is intended to be a product, generally designed to supply someone else with something that they want, or a tool, generally designed to supply ourselves with something that helps us to accomplish our own mission. These are not distinct categories, but thinking about them might help to set priorities.
  • New information prompts us to change our plans. As testers, discovering new information is arguably the most important part of our role. We should be careful to limit our investments in planning, in light of the fact that we’ll know more this afternoon than we did this morning—and, for any given cycle in time, we’ll tend to know significantly more at the end of this cycle than we do now.
  • Our context drives the choices we make, and both evolve over time.