Quick testing is another approach to testing that can be done in a scripted way or an exploratory way. A tester using a highly exploratory approach is likely to perform many quick tests, and quick tests are often key elements in an exploratory approach. Nonetheless, quick testing and exploratory testing aren’t the same.
Quick tests are inexpensive tests that require little time or effort to prepare or perform. They may not even require a great deal of knowledge about the application being tested or its business domain, but they can help to develop new knowledge rapidly. Rather than emphasizing comprehensiveness or integrity, quick tests are designed to reveal information in a hurry at a minimal cost.
A quick test can be a great way to learn about the product, or to identify areas of risk, fragility, or confusion. A tester can almost always sneak a quick test or two into other testing activity. A burst of quick tests can help as the first activities during a smoke or sanity test. Cycles of relatively unplanned, informal quick tests may help you to discover or refine a more comprehensive or formal plan.
James Bach and I provide examples of many kinds of quick tests in the Rapid Software Testing class. You’ll notice that some of them are called tours. Note that not all tours are quick, and not all quick tests are tours. Here’s a catalog.
Happy Path
Perform a task, from start to finish, that an end-user might be expected to do. Use the product in the simplest, most expected, most straightforward way, just as the most optimistic programmer or designer might imagine users to behave. Look for anything that might confuse, delay, or irritate a reasonable person. Cem Kaner sometimes calls this “sympathetic testing”. Lean towards learning about the product, rather than finding bugs. If you do see obvious problems, it may be bad news for the product.
Variability Tour
Tour a product looking for anything that is variable, and vary it. Vary it as far as possible, in every dimension possible. If you’re using quick tests for learning, seek and catalog the important variables. Look for potential relationships between them. Identifying and exploring variations is part of the basic structure of our testing when we first encounter a product.
Sample Data Tour
Employ any sample data you can, and all that you can. For one kind of quick tests, prefer simple values whose effects are easy to see or calculate. For a different kind of quick test, choose complex or extreme data sets. Observe the units or formats in which data can be entered, and try changing them. Challenge the assumption that the programmers have thought to reject or massage inappropriate data. Once you’ve got a handle on your ideas about reasonable or mildly challenging data, you might choose to try…
Input Constraint Attack
Discover sources of input and attempt to violate constraints on that input. Try some samples of pathological data: use zeroes where large numbers are expected; use negative numbers where positive numbers are expected; use huge numbers where modestly-sized ones are expected; use letters in every place that’s supposed to handle only numbers, and vice versa. Use a geometrically expanding string in a field. Keep doubling its length until the product crashes. Use characters that are in some way distinct from your idea of “normal” or “expected”. Inject noise of any kind into a system and see what happens. Use Satisfice’s PerlClip utility to create strings of arbitrary length and content; use PerlClip’s counterstring feature to create a string that tells you its own length so that you can see where an application cuts off input.
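PerlClip is a real Windows utility from Satisfice; if you don’t have it handy, the counterstring idea is easy to sketch. The function below is a minimal re-implementation of the concept, not PerlClip’s own code:

```python
def counterstring(length, marker="*"):
    """Build a self-describing string: the digits before each marker
    give that marker's 1-based position, so wherever the string gets
    truncated, the last visible marker tells you how much survived."""
    out = ""
    pos = length
    while pos > 0:
        chunk = str(pos) + marker       # e.g. "2000*" ends at position 2000
        if len(chunk) > pos:
            chunk = marker * pos        # no room for the digits; pad with markers
        out = chunk + out
        pos -= len(chunk)
    return out

# counterstring(10) -> "*3*5*7*10*": each "*" sits at the position
# named by the digits just before it.
```

Paste the result into a field; if the field silently truncates input, the digits before the last visible marker tell you exactly where the cutoff happened.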
People tend to talk a lot about input constraint attacks. Perhaps that’s because input constraint attacks are used by hackers to compromise systems; perhaps it’s because input constraint attacks can be performed relatively straightforwardly; perhaps it’s because they can be described relatively easily; perhaps it’s because input constraint attacks can produce dramatic and highly unexpected results. Yet they’re by no means the only kind of quick test, and they’re certainly not the only way to test using an exploratory approach.
Documentation Tour
Look in the online help or user manual and find some instructions about how to perform some interesting activity. Do those actions explicitly. Then improvise from them and experiment. If your product has a tutorial, follow it. You may expose a problem in the product or in the documentation; either way, you’ve found an inconsistency that is potentially important. Even if you don’t expose a problem, you’ll still be learning about the product.
File Tour
Have a look at the folder where the program’s .EXE file is found. Check out the directory structure, including subdirectories. Look for READMEs, help files, log files, installation scripts, and .cfg, .ini, and .rc files.
Look at the names of .DLLs, and extrapolate on the functions that they might contain or the ways in which their absence might undermine the application. Use whatever supplemental material you’ve got to guide or focus your actions. Another way to gather information for this kind of test: use tools to monitor the installation, and take the output from the tool as a point of departure.
Complexity Tour
Tour a product looking for the most complex features, the most challenging data sets, and the biggest interdependencies. Look for hidden nooks and crannies, but also look for the program’s high-traffic areas, busy markets, big office buildings, and train stations—places where there are lots of interactions, and where bugs might be blending in with the crowd.
Menu, Window, and Dialog Tour
Tour a product looking for all the menus (main and context menus), menu items, windows, toolbars, icons, and other controls. Walk through them. Try them. Catalog them, or construct a mind map.
Keyboard and Mouse Tour
Tour a product looking for all the things you can do with a keyboard and mouse. Hit all of the keys on the keyboard. Hit all the F-keys; hit Enter, Tab, Escape, and Backspace; run through the alphabet in order, and combine each key with Shift, Ctrl, Alt, and the Windows key (or Cmd and Option on the Mac, and AltGr on European keyboards). Click (right, left, both, double, triple) on everything. Combine clicks with shifted keys.
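Walking through every chord by memory is error-prone, so it can help to generate a checklist first. A sketch, with illustrative key and modifier names (adjust for your platform):

```python
from itertools import combinations
import string

# Illustrative names; real platforms differ (Cmd, Option, AltGr, ...).
MODIFIERS = ["Shift", "Ctrl", "Alt", "Win"]
KEYS = (list(string.ascii_uppercase)
        + [f"F{n}" for n in range(1, 13)]
        + ["Enter", "Tab", "Escape", "Backspace"])

def chord_checklist(max_modifiers=2):
    """Enumerate modifier-plus-key chords to walk through on a tour."""
    chords = []
    for r in range(max_modifiers + 1):
        for mods in combinations(MODIFIERS, r):
            for key in KEYS:
                chords.append("+".join(mods + (key,)))
    return chords
```

Even a modest keyboard produces hundreds of chords; printing the list and ticking items off keeps the tour systematic without pre-scripting every step.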
Interruptions
Start activities and stop them in the middle. Stop them at awkward times. Perform stoppages using cancel buttons or O/S-level interrupts (Ctrl-Alt-Delete or the Task Manager). Arrange for other programs to interrupt (such as screensavers or virus checkers). Also, try suspending an activity and returning later. Put your laptop into sleep or hibernation mode.
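The pattern is easy to rehearse in miniature. This sketch uses a hypothetical Task class with a cooperative cancel point, standing in for any long-running activity you might stop part-way through:

```python
import threading
import time

class Task:
    """Toy long-running operation (hypothetical) with a cooperative
    cancel point, standing in for any activity you might interrupt."""
    def __init__(self):
        self.cancel = threading.Event()
        self.progress = 0
        self.finished_cleanly = False

    def run(self, steps=1000):
        for _ in range(steps):
            if self.cancel.is_set():
                self.finished_cleanly = True   # clean up and bail out
                return
            self.progress += 1
            time.sleep(0.001)
        self.finished_cleanly = True

task = Task()
worker = threading.Thread(target=task.run)
worker.start()
time.sleep(0.05)       # let the task get part-way through
task.cancel.set()      # interrupt at an awkward moment
worker.join()
# A well-behaved task stops cleanly, part-way done, with no mess left over.
```

The interesting observations come after the interruption: is state consistent, are files intact, can the activity be resumed or restarted?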
Undermining
Start using a function when the system is in an appropriate state, then change the state partway through. Delete a file while it is being edited; eject a disk; pull network cables or power cords to get the machine into an inappropriate state. This is similar to interruption, except that you are expecting the function to interrupt itself by detecting that it can no longer proceed safely.
Adjustments
Set some parameter to a certain value, then, at any later time, reset that value to something else without resetting or recreating the containing document or data structure. Programmers often expect settings or variables to be adjusted through the GUI. Hackers and tinkerers expect to find other ways.
Dog Piling
Whatever you’re doing, do more of it, and do other stuff as well while you’re at it. Get more processes going at once; try to establish more states existing concurrently. Invoke nested dialog boxes and non-modal dialogs. On multi-user systems, get more people using the system, or simulate that with tools. If your test seems to trigger odd behaviour, pile on in the same place until the odd becomes crazy.
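Simulating the pile-up with tools can be as simple as a thread pool hammering one operation. A sketch, where Counter is a toy stand-in for whatever state your product shares between concurrent sessions:

```python
from concurrent.futures import ThreadPoolExecutor
import threading

class Counter:
    """Toy shared state (hypothetical), standing in for anything your
    product shares between concurrent users or processes."""
    def __init__(self):
        self.value = 0
        self.lock = threading.Lock()

    def increment(self):
        with self.lock:     # remove the lock to watch the pile-up expose races
            self.value += 1

def dog_pile(target, workers=50, calls=2000):
    """Hammer one operation from many threads at once."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for _ in range(calls):
            pool.submit(target)

counter = Counter()
dog_pile(counter.increment)
# With the lock in place, every update survives; without it, lost
# updates show up as a final count below the number of calls.
```

The same shape works against a real system: point `target` at an API call or UI action and raise `workers` and `calls` until something interesting happens.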
Continuous Use
While testing, do not reset the system. Leave windows and files open; let disk and memory usage mount. You’re hoping to show that the system loses track of things or ties itself in knots over time.
Feature Interactions
Discover where individual functions interact or share data. Look for any interdependencies. Explore them, exploit them, and stress them out. Look for places where the program repeats itself or allows you to do the same thing in different places. Look for data that can be displayed in different ways and in different places, and seek inconsistencies. For example, load up all the fields in a form to their maximums and then traverse to the report generator.
Bring up the context-sensitive help feature during some operation or activity. Does the product’s help file explain things in a useful way, or does it offend the user’s intelligence by simply restating what’s already on the screen? Is help even available at all?
Click Frenzy
Ever notice how a cat or a kid can crash a system with ease? Testing is more than “banging on the keyboard”, but that phrase wasn’t coined for nothing. Try banging on the keyboard. Try clicking everywhere. Poke every square centimeter of every screen until you find a secret button.
Shoe Test
Use auto-repeat on the keyboard for a very cheap stress test. Look for dialog boxes constructed such that pressing a key leads to, say, another dialog box (perhaps an error message) that also has a button connected to the same key that returns to the first dialog box. Place a shoe on the keyboard and walk away. Let the test run for an hour. If there’s a resource or memory leak, this kind of test could expose it. Note that some lightweight automation can provide you with a virtual shoe.
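A virtual shoe can be very little code. In this sketch, LeakyDialog is a deliberately leaky toy (both names are invented for illustration), not a real widget:

```python
def virtual_shoe(handler, presses=10_000):
    """Send the same keypress over and over, like a shoe resting on
    the keyboard, then hand back the handler for inspection."""
    for _ in range(presses):
        handler.on_key("Enter")
    return handler

class LeakyDialog:
    """Toy dialog that deliberately leaks: every keypress appends to
    an undo log that is never trimmed. Real leaks are subtler."""
    def __init__(self):
        self.undo_log = []

    def on_key(self, key):
        self.undo_log.append(key)

dialog = virtual_shoe(LeakyDialog())
# Thousands of identical events later, a structure that only ever
# grows is exactly the kind of slow leak a shoe test can expose.
```

Against a real application, swap `handler.on_key` for a tool that sends real keystrokes, and watch memory or handle counts instead of a list.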
Blink Test
Find an aspect of the product that produces huge amounts of data or does some operation very quickly. Look through a long log file or browse database records, deliberately scrolling too quickly to see in detail. Notice trends in line lengths, or the look or shape of the data. Use Excel’s conditional formatting feature to highlight interesting distinctions between cells of data. Soften your focus. If you have a test lab with banks of monitors, scan them or stroll by them; patterns of misbehaviour can be surprisingly prominent and easy to spot.
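Tools can soften their focus too. One crude automated analogue of the blink test is to flag log lines whose length deviates sharply from the rest; the log content here is invented for illustration:

```python
import statistics

def blink_scan(lines):
    """Flag lines whose length is a statistical outlier -- a crude,
    automated stand-in for noticing the shape of scrolling data."""
    lengths = [len(line) for line in lines]
    mean = statistics.mean(lengths)
    sd = statistics.pstdev(lengths) or 1.0   # avoid dividing by zero spread
    return [i for i, n in enumerate(lengths) if abs(n - mean) > 3 * sd]

log = ["GET /index 200 12ms"] * 500
log[123] = "GET /index 500 Traceback (most recent call last): ..." * 5
suspects = blink_scan(log)   # the one anomalous line stands out
```

Line length is only one "shape" worth scanning; timestamps, token counts, or leading keywords work the same way.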
Error Message Hangover
Programmers are rewarded for implementing features. There’s a psychological problem with errors or exceptions: the label itself suggests that something has gone wrong. People often actively avoid thinking about problems or mistakes, and as a consequence, programmers sometimes handle errors poorly. Make error messages happen and test hard after they are dismissed. Watch for subtle changes in behaviour between error and normal paths. With automation, make the same error conditions appear thousands of times.
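Automating the repetition is straightforward. In this sketch, Editor is a toy stand-in (all names hypothetical); the point is to provoke the same error thousands of times and then check that state looks exactly as it did before:

```python
class Editor:
    """Toy component (hypothetical): opening a missing file raises.
    The question is whether state stays clean after each error."""
    def __init__(self):
        self.open_handles = []

    def open(self, path):
        if path.startswith("missing"):
            raise FileNotFoundError(path)
        self.open_handles.append(path)

editor = Editor()
errors = 0
for _ in range(10_000):            # provoke the same error, thousands of times
    try:
        editor.open("missing.txt")
    except FileNotFoundError:
        errors += 1

# After every dismissed error, nothing should have accumulated.
leftover = len(editor.open_handles)
```

A real version would also watch memory, file handles, or lock counts between iterations; slow growth there is the hangover.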
Resource Starvation
Progressively lower memory, disk space, display resolution, and other resources. Keep starving the product until it collapses or (we hope) degrades gracefully.
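The progressive-starvation loop looks like this sketch, run against a toy operation with an explicit budget (function and names invented for illustration). Graceful degradation here means refusing cleanly rather than corrupting output:

```python
def render_report(rows, memory_budget):
    """Toy operation (hypothetical) that knows its own cost and
    refuses cleanly, instead of corrupting output, when starved."""
    needed = len(rows) * 8             # pretend each row costs 8 bytes
    if needed > memory_budget:
        raise MemoryError(f"need {needed} bytes, budget {memory_budget}")
    return "\n".join(rows)

rows = ["row"] * 1000
budget = 16_384
failures = []
while budget > 0:                      # keep halving until it gives up
    try:
        render_report(rows, budget)
    except MemoryError:
        failures.append(budget)        # a clean refusal, not a crash
    budget //= 2
```

Against a real product you would starve the environment instead (fill the disk, cap the process's memory, shrink the display) and watch whether failure is clean at every level.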
Multiple Instances
Run many instances of the app at the same time. Open, use, update, and save the same files. Manipulate them from different windows. Watch for competition for resources and settings.
Crazy Configs
Modify the operating system’s configuration in non-standard or non-default ways, either before or after installing the product. Turn on “high contrast” accessibility mode, or change the localization defaults. Change the letter of the system hard drive. Put things in non-default directories. Use RegEdit (for registry entries) or a text editor (for initialization files) to corrupt your program’s settings in a way that should trigger an error message, recovery, or an appropriate default behavior.
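The corrupted-settings case is easy to rehearse with Python’s standard configparser; the file layout, section name, and defaults below are invented for illustration. The behaviour to hope for is a clean fall-back to defaults:

```python
import configparser

DEFAULTS = {"timeout": "30"}           # invented default settings

def load_settings(text):
    """Read settings, falling back to defaults when the file is
    corrupt -- rather than crashing or silently misbehaving."""
    parser = configparser.ConfigParser()
    try:
        parser.read_string(text)
        return dict(parser["app"])
    except (configparser.Error, KeyError):
        return dict(DEFAULTS)

good = load_settings("[app]\ntimeout = 5\n")
corrupt = load_settings("timeout 5 %%% not an ini file")
```

When testing a real product, corrupt the actual .ini or registry entries and look for exactly this choice: error message, recovery, or sensible default; silence is the worrying outcome.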
Again: quick tests tend to be highly exploratory, but they represent only one kind of exploratory testing. Don’t be fooled into believing that quick testing—or certain kinds of quick testing—is all there is to exploratory testing.