A few months ago, James Bach introduced me to the idea of test framing. He identified it as a testing skill, and did some work in developing the concept by field-testing it with some of his online students. We’ve been refining it lately. I’ll be giving a brief talk on it at the Kitchener-Waterloo Software Quality Association on Thursday, September 30, 2010, and I’ll be leading a half-day workshop on it at EuroSTAR. Here’s our first public cut at a description.
The basic idea is this: in any given testing situation
- You have a testing mission (a search for information, and your mission may change over time).
- You have information about requirements (some of that information is explicit, some implicit; and it will likely change over time).
- You have risks that inform the mission (and awareness of those risks will change over time).
- You have ideas about what would provide value in the product, and what would threaten it (and you’ll refine those ideas as you go).
- You have a context in which you’re working (and that context will change over time).
- You have oracles that will allow you to recognize a problem (and you’ll discover other oracles as you go).
- You have models of the product that you intend to cover (and you’ll extend those models as you go).
- You have test techniques that you may apply (and choices about which ones you use, and how you apply them).
- You have lab procedures that you follow (that you may wish to follow more strictly, or relax).
- You configure, operate, and observe the product (using test techniques, as mentioned above), and you evaluate the product (by comparing it to the oracles mentioned above, in relation to the value of the product and threats to that value).
- You have skills and heuristics that you may apply.
- You have issues related to the cost versus the value of your activities that you must assess.
- You have time (which may be severely limited) in which to perform your tests.
- You have tests that you (may) perform (out of an infinite selection of possible tests that you could perform).
Test framing involves the capacity to follow and express a direct line of logic that connects the mission to the tests. Along the way, the line of logical reasoning will typically touch on elements between the top and the bottom of the list above. The goal of framing the test is to be able to answer questions like “Why are you running this test?”, “Why are you running this test rather than some other test?”, and “How does this test relate to your mission?”
The form of the framing is a line of propositions and logical connectives that relate the test to the mission. A proposition is a statement that expresses a concept that can be true or false. We could think of these as affirmative declarations or assumptions. Connectives are words or phrases (“and”, “not”, “if”, “therefore”, “and so”, “unless”, “because”, “since”, “on the other hand”, “but maybe”, and so forth) that link or relate propositions to each other, generating new propositions by inference. This is not a strictly formal system, but one that is heuristically and reasonably well structured. Here’s a fairly straightforward example:
GIVEN: (The Mission:) Find problems that might threaten the value of the product, such as program misbehaviour or data loss.
Proposition: There’s an input field here.
Proposition: Upon the user pressing Enter, the input field sends data to a buffer.
Proposition: Unconstrained input may overflow a buffer.
Proposition: Buffers that overflow clobber data or program code.
Proposition: Clobbered data can result in data loss.
Proposition: Clobbered program code can result in observable misbehaviour.
Connecting the propositions: IF this input field is unconstrained, AND IF it consequently overflows a buffer, THEREFORE there’s a risk of data loss OR program misbehaviour.
Proposition: The larger the data set that is sent to this input field, the greater the chance of clobbering program code or data.
Connection: THEREFORE, the larger the data set, the better the chance of triggering an observable problem.
Connection: IF I put an extremely long string into this field, I’ll be more likely to observe the problem.
Conclusion: (Test:) THEREFORE I will try to paste an extremely long string in this input field AND look for signs of mischief such as garbage in records that I observed as intact before, or memory leaks, or crashes, or other odd behaviour.
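The long-string test above could be sketched in code. This is a minimal illustration, not the authors’ method: `process_input` is a hypothetical stand-in for the input field’s handler, and the oracles are simplified to “no crash” and “the stored data is a prefix of what was sent”.

```python
# Hypothetical sketch of the long-string test described above.
# `process_input` stands in for the input field's handler; in a real
# product this would be the function or UI element under test.

def process_input(text, buffer_size=256):
    # A deliberately naive stand-in handler: it truncates silently at the
    # buffer size, simulating the constraint (or lack of it) we want to probe.
    return text[:buffer_size]

def test_extremely_long_string():
    # Build an extremely long input, far beyond any plausible buffer.
    payload = "A" * 1_000_000
    result = process_input(payload)
    # Oracles: no exception was raised (no crash), and the data we get back
    # is a prefix of what we sent (no garbage injected into the record).
    assert payload.startswith(result), "stored data is not a prefix of input"
    return len(result)

print(test_extremely_long_string())  # → 256 with this stand-in handler
```

In a real product the oracles would be richer: previously intact records, memory consumption, and overall responsiveness would all be checked after the paste.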
Now, to some, this might sound quite straightforward and, well, logical. However, in our experience, some testers have surprising difficulty with tracing the path from mission down to the test, or from the test back up to mission—or with expressing the line of reasoning immediately and cogently.
Our approach, so far, is to give testers something to test and a mission. We might ask them to describe a test that they would choose to run, and then to explain their reasoning. As an alternative, we might ask them why they chose to run a particular test, and to explain that choice by tracing a logical path back to the mission.
If you have an unframed test, try framing it. You should be able to do that for most of your tests, but if you can’t frame a given test right away, that might be okay. Why? Because as we test, we not only apply information; we also reveal it. Therefore, we think it’s usually a good idea to alternate between focusing and defocusing approaches. After you’ve been testing very systematically using well-framed tests, mix in some tests that you can’t immediately or completely justify. One possible justification for an unframed test is that we’re always dealing with hidden frames. Revealing hidden or unknown frames is a motivation behind randomized high-volume automated tests, stress tests, galumphing, or any other test that might (but not certainly) reveal a startling result. The fact that you’re startled provides a reason, in retrospect, to have performed the test. So you might justify unframed tests in terms of plausible outcomes or surprises, rather than known theories of error. You might encounter a predictable problem, or one that surprises you. In that case, better that you should say “Who knew?!” than a customer.
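A randomized high-volume test of the kind mentioned above might look like the following sketch. Everything here is hypothetical: `fragile_target` is an invented stand-in with a hidden frame (it chokes on a particular control character), and the fixed random seed is just one way of making any surprise reproducible.

```python
import random
import string

def fuzz_once(target, rng):
    # Generate a random printable string of random length and feed it to the target.
    length = rng.randint(0, 1_000)
    payload = "".join(rng.choice(string.printable) for _ in range(length))
    try:
        target(payload)
        return None
    except Exception as exc:
        # A surprise: keep the payload and exception so the failure can be studied.
        return (payload, exc)

def fuzz(target, iterations=500, seed=0):
    rng = random.Random(seed)  # fixed seed so surprising results can be replayed
    results = (fuzz_once(target, rng) for _ in range(iterations))
    return [failure for failure in results if failure is not None]

# A stand-in target with a hidden frame: it fails on a vertical-tab character,
# something no deliberately framed test would be likely to aim at.
def fragile_target(text):
    if "\x0b" in text:
        raise ValueError("unexpected control character")

failures = fuzz(fragile_target)
print(len(failures))  # how often did the random walk hit the hidden frame?
```

No single one of these inputs is framed in advance; the justification is retrospective, in terms of the surprises the run turns up.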
To test is to tell two parallel stories: a story of the product, and the story of our testing. James and I believe that test framing is a key skill that helps us to compose, edit, narrate, and justify the story of our testing in a logical, coherent, and rapid way. Expect to hear more about test framing, and please join us (or, if you like, argue with us) as we develop the idea.
See http://www.developsense.com/resources/TestFraming.pdf for current updates.