Blog Posts from May, 2014

Scenarios Ain’t Just Use Cases

Thursday, May 15th, 2014

How do people use a software product? Some development groups model use through use cases. Typically, use cases are expressed in terms of the user performing a set of step-by-step behaviours: 1, then 2, then 3, then 4, then 5. In those groups, testers may create test cases that map directly onto the use cases. Sometimes, that gets called a scenario, and the testing of it is called a scenario test.

According to Cem Kaner, a scenario is a “hypothetical story, used to help a person think through a complex problem or system.” He also says that a scenario test has several characteristics: it is motivating, in that stakeholders would push to fix problems that the test revealed; credible, in that it not only could happen, but that things like it could probably happen; and complex, in terms of use, environments, or data. (Read his paper on scenario testing here.)

Taking the steps directly from a use case and then calling it a scenario limits your view of what a scenario is, which in turn limits your testing. People do not do 1, 2, 3, 4, and 5 in real life. Instead, they

  • do 1
  • start 2
  • respond to one email, and delete a bunch of get-rich-quick offers
  • resume 2
  • take a phone call from the dog grooming studio; Fluffy will be ready at 4:30
  • realize they’ve lost track of what they were doing in 2
  • go back to 1
  • restart 2
  • look up some figures in Excel
  • place a pizza order for the lunchtime meeting
  • finish 2
  • go to 3
  • accidentally paste the pizza order into some field in 3
  • dismiss the error message, after a fruitless attempt to decipher what it means
  • finish 3
  • forget to save their work; thank heaven for the auto-save feature
  • get called to an all-hands meeting
  • return to find that the machine has entered sleep mode
  • wake up the machine
  • dismiss a dialog saying that it’s now safe to remove the device, even though they didn’t want to remove the device; the message is due to an operating-system bug related to sleep mode
  • discuss rumours raised from the all-hands meeting on Instant Messaging
  • start 4
  • realize they got something wrong in step 2
  • go back through 3 to 2
  • go to lunch
  • wake up the damned machine again
  • dismiss the damned dialog again
  • correct 2
  • go forward to 3
  • accept the values that were left there from (auto-)saving the first time through (but which, due to the changes in 2, are now invalid)
  • go into 4
  • get confused about an element of the user interface in 4
  • realize something looks wrong because of the inappropriately saved value from 3
  • go back to 3
  • fix the values and save the page
  • go to 4
  • look away from the computer, notice there’s a new plant in the corner, under the picture—when did that get there, anyway?
  • complete 4
  • start 5
  • get invited for coffee
  • come back
  • wake up the damned machine again
  • dismiss the damned dialog again
  • worry irrationally that they didn’t complete 4
  • open the app in a second window to confirm that they have in fact completed 4, inadvertently jostling 4’s state
  • restart 5
  • take a phone call in the middle of trying to do 5; “Fluffy appears to be sick and could you show up half an hour earlier?”
  • change their minds about something in 4
  • go back and fix it
  • get tapped on the shoulder by the boss
  • start 5
  • almost finish 5
  • forget to save their work
  • the program crashes; thank heaven for the auto-save feature
  • find out that auto-save mode doesn’t actually save every time.

If you want to show that the system can work, by all means check the system by following the procedure that the use case prescribes. That’s something we call sympathetic testing, and it’s a reasonable place to start: a way to learn about the feature, to find out how people might derive value from it, and to begin building your models of the product and of where there might be problems in it.

But if you want to discover problems that matter to people before those people find them, test the system by introducing lots of variation, pauses, distractions, concurrent actions, and galumphing.
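
By way of illustration, here is a minimal sketch (in Python) of a test driver that galumphs through a use case instead of marching from 1 to 5. The `app` object, its methods, and the probabilities are all hypothetical; substitute whatever scriptable interface and tuning your product and context call for.

```python
# A minimal, hypothetical sketch of a "galumphing" scenario driver: it
# interleaves a use case's steps with pauses, stray input, and backtracking.
# The `app` object and its methods are stand-ins, not a real interface.
import random
import time

def interrupt(app):
    """Simulate a real-world distraction between steps."""
    random.choice([
        lambda: time.sleep(random.uniform(0.1, 2.0)),  # phone call, coffee, meeting
        lambda: app.paste("pizza order for the lunchtime meeting"),  # stray input
        lambda: app.sleep_and_wake(),      # machine suspends; dialogs on resume
        lambda: app.open_second_window(),  # concurrent access jostles state
    ])()

def galumphing_run(app, steps, max_interruptions=10):
    """Walk the use-case steps with variation, rather than 1-2-3-4-5 in order."""
    position = 0
    remaining = max_interruptions
    while position < len(steps):
        steps[position](app)
        if remaining > 0 and random.random() < 0.4:
            interrupt(app)
            remaining -= 1
            if random.random() < 0.3:
                # Lose track of where we were and revisit an earlier step.
                position = random.randint(0, position)
                continue
        position += 1
```

The point is not this particular harness; it is that the sequence, timing, and noise vary on every run, the way real use does.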

Related post: Why We Do Scenario Testing

Very Short Blog Posts (19): Testing By Percentages

Sunday, May 4th, 2014

Every now and then, in some forum or another, someone says something like “75% of the testing done on an Agile project is done by automation”.

Whatever else might be wrong with that statement, it’s a very strange way to describe a complex, cognitive process of learning about a product through experimentation, and seeking to find problems that threaten the value of the product, the project, or the business. Perhaps the percentage comes from quantifying testing by counting test cases, but that’s at least as feeble as quantifying programming by counting lines of code; more so, probably, as James Bach and Aaron Hodder point out in “Test Cases Are Not Testing: Toward a Culture of Test Performance”.

But let me put this in an even simpler way: if someone said “management in an Agile project is 40% manual and 60% automated” (because managers spend 60% of their time in front of their computers), most of us would regard that as reflecting a very peculiar model of what it means to manage a project. If someone said that programming in an Agile project is “30% manual and 70% automated” (because most of the work of programming, that business of translating human instructions into machine language, is done by the compiler), we’d shake our heads over that person’s confusion about what it means to do programming.

Why don’t people have the same reaction when it comes to testing?

Very Short Blog Posts (18): Ask for Testability

Saturday, May 3rd, 2014

Whether you’re working in an Agile environment or not, one of the tester’s most important tasks is to ask and advocate for things that make a product more testable. Where to start? Think about visibility—in its simplest form, log files—and controllability in the form of scriptable application programming interfaces (APIs).

Logs aren’t just for troubleshooting. Comprehensive log files can help to identify the data that was processed and the functions that were covered during testing. Logs can be parsed to gather statistics or processed with visualization tools to reveal interesting patterns of behaviour. Ask for consistent structure, precise time stamps, and configurable levels of logging.
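
As a rough illustration, here is a minimal sketch of that kind of logging, using Python’s standard logging module. The pipe-delimited format and the key=value message fields are conventions assumed for the example, not a standard.

```python
# A minimal sketch of test-friendly logging with Python's standard
# logging module: consistent structure, precise timestamps, and a
# level that can be reconfigured without changing the code.
import logging

logging.basicConfig(
    level=logging.DEBUG,  # configurable: DEBUG during testing, INFO in production
    format="%(asctime)s.%(msecs)03d|%(levelname)s|%(name)s|%(message)s",
    datefmt="%Y-%m-%dT%H:%M:%S",
)
log = logging.getLogger("orders")

# Consistent key=value structure makes the log easy to parse for
# coverage statistics, or to feed into a visualization tool.
log.info("event=submit_order order_id=%s items=%d", "A-1021", 3)
log.debug("event=validate_field field=%s value=%r", "quantity", "3")
```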

A scriptable API affords the opportunity for testers to drive the program at high speed or high volume, in well-ordered, variable, or randomized sequences. A scripting interface can allow testers to observe the program’s data structures, query its internal states, or adjust its configuration quickly and easily. Use APIs and tools for more than functional checking; use them for sophisticated, automation-assisted exploration. As a bonus, an API can add to the value of your product by making its functions more accessible to your customers.
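
Here is a minimal sketch of what such exploration might look like; the `product` module and every function on it are hypothetical placeholders for whatever interface your product actually exposes.

```python
# A minimal, hypothetical sketch of automation-assisted exploration
# through a scriptable API: high volume, randomized sequences, and
# direct observation of internal state.
import random

import product  # hypothetical scripting interface to the application

def explore(iterations=10_000, seed=None):
    rng = random.Random(seed)  # a recorded seed makes any surprise reproducible
    actions = [product.create_record, product.update_record, product.delete_record]
    for step in range(iterations):
        # Drive the program at speed, in a randomized sequence.
        rng.choice(actions)(record_id=rng.randint(0, 99))
        # Visibility: query internal state instead of guessing from the UI.
        state = product.query_internal_state()
        assert state.is_consistent(), f"inconsistent state after step {step}: {state!r}"
```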

You can’t depend on getting log files and APIs without asking for them. So, starting with your current sprint, ask early and ask often.
