Blog Posts from July, 2014

The Sock Puppets of Formal Testing

Monday, July 21st, 2014

Formal testing is testing that must be done in a specific way, or to check specific facts. In the Rapid Software Testing methodology, we map the formality of testing on a continuum. Sometimes it’s important to do testing in a formal way, and sometimes it’s not so important.

[Figure: the formality continuum, from Rapid Software Testing. See http://www.satisfice.com/rst.pdf]

People sometimes tell me that they must test their software using a particular formal approach—for example, they say that they must create heavily documented and highly prescriptive test cases. When I ask them why, they claim it’s because of “standards” or “regulations”. For example, Americans often refer to Public Law 107-204, also known as the Sarbanes-Oxley Act of 2002, or SOX. I ask if they’ve read the Act. “Well… no.”

If you’re citing Sarbanes-Oxley compliance as a reason for producing test scripts, I worry that someone is beating you—or you’re beating yourself, perhaps—with a rumour. Sarbanes-Oxley says nothing about software testing. In fact, it says nothing about software. It does not say that you have to do software testing in any particular way. It certainly does not talk about test cases or test scripts. It does talk about testing, but only of “the auditor’s testing of the internal control structure and procedures of the issuer (of reports required by the SEC —MB)”. In this context, the word “testing” could easily be replaced by “scrutiny” or “evaluation” or “assessment”. Don’t believe me? Read it: https://www.sec.gov/about/laws/soa2002.pdf. Search for the words “test cases”, or “test scripts”, or even “test”. What do you find?

All testing comes with several kinds of cost, including development cost, maintenance cost, execution cost, transfer cost, and—perhaps most importantly—opportunity cost. (Opportunity cost means, basically, that we won’t be able to do that potentially valuable thing because we’re doing this potentially valuable thing.) Formality may provide a certain kind of value, but we must remain aware of its cost. Every moment that we spend on representing our test ideas as documentation, or on preparing to test in a specific way, or on encoding a testing activity, or on writing about something that we’ve done is a moment in which we’re not interacting directly with the product. More formality can be valuable if it’s helping us learn, or if it’s helping us structure our work, or if it helps us to tell the testing story—all such that the value of the activity justifies its cost. The key question to ask, it seems to me, is “is it helping?” If it’s not helping, is it necessary?—and what is the basis of the claim that it’s necessary? Asking such questions helps to underscore that many things we “have to” do are not obligatory, but choices.

Formal testing is often said to involve comparing the product to the requirements for it, based on the unspoken assumption that “requirements” means “documented requirements”. Certainly some requirements can be understood and documented before we begin testing the product. Other requirements are tacit, rather than explicit. (As a trivial example, I’ve never seen a requirements document that notes that the product must be installed on a working computer.) Still other requirements are not well-understood before development begins; the actual requirements are discovered, revealed, and refined as we develop the product and our understanding of the problems that the product is intended to solve. As my colleague James Bach has said, “Requirements are ideas at the intersection between what we want, what we can have and how we might decide that we got it. In complex systems that requires testing. Testing is how we learn difficult things.” That learning happens in tangled loops of development, experimentation, investigation, and decision-making. Our ideas may or may not be documented formally—or at all—at any specific point in time. Indeed, some things are so important that we never document them; we have to know them.

So it goes with testing. Some aspects of our testing will be clear from the outset of the project; other aspects will be discovered, invented, and refined as we develop and test the product. Excellent formal testing, as James says, must therefore be rooted in excellent informal testing. And who agrees with that? The Food and Drug Administration, for one, in the Agency’s “Design Considerations for Pivotal Clinical Investigations for Medical Devices”.

Few (in my experience, very few) of those people who insist on highly formal testing to comply with standards or regulations—or even with requirements documents—have read the relevant documents or examined them closely. In such cases, the justification for the formality is based on rumours and on going through the motions; the formality is premature, or excessively expensive, or unnecessary, or a distraction; and (in)consistency with the documentation and (non-)compliance with the regulations becomes accidental. At its worst, the testing falls into what James calls “pathetic compliance”, following “the rules” so we don’t “get in trouble”—a pre-school level of motivated behaviour.

As responsible testers, it is our obligation to understand what we’re testing, why we’re testing it, and why we’re testing it in a particular way. If we’re responsible for testing to comply with a particular standard, we must learn that standard, and that means studying it, understanding it, and relating it to our product and to our testing work. That may sound like a lot of work, and often it is. Yet our choice is to do that work, or to run the risk of letting our testing be structured and dictated to us by a sock puppet. Or a SOX puppet.

[Photo: a sock puppet. Photo credit: iStockPhoto]

How Models Change

Saturday, July 19th, 2014

Like software products, models change as we test them, gain experience with them, find bugs in them, and realize that features are missing. We see opportunities for improving them, and we revise them.

A product coverage outline, in Rapid Testing parlance, is an artifact (a map, or list, or table…) that identifies the dimensions or elements of a product. It’s a kind of inventory of aspects of the product that could be tested. Many years ago, my colleague and co-author James Bach wrote an article on product elements, identifying Structure, Function, Data, Platform, and Operations (SFDPO; think “San Francisco DePOt”, he suggested) as a set of heuristic guidewords for creating or structuring or reviewing the highest levels of a coverage outline.

A few years later, I was working as a tester. While I was on that assignment, I missed a few test ideas and almost missed a few bugs that I might have noticed earlier had I thought of “Time” as another guideword for modeling the product. After some discussion, I persuaded James that Time was a worthy addition to the Product Elements list. I wrote my own article on that: Time for New Test Ideas.

Over the years, it seemed that people were excited by the idea of using SFDPOT as the starting point for a general coverage outline. Many people reported getting a lot of value out of it, so in my classes, I’ve placed more and more emphasis on using and practicing the application of that part of the Heuristic Test Strategy Model. One of the exercises involves creating a mind map for a real software product. I typically offer that one way to get started on creating a coverage outline is to walk through the user interface and enumerate each element of the UI in the mind map.
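
If it helps to see that exercise outside of a mind-mapping tool, here’s a minimal sketch, in Python, of what such an enumeration might look like as a nested outline. Every window and widget name below is invented, purely for illustration; a real outline would, of course, reflect your own tour of your own product.

# A sketch of capturing a UI walk-through as a nested outline.
# All window and widget names here are invented for illustration.
ui_outline = {
    "Main window": {
        "Menu bar": ["File", "Edit", "View", "Help"],
        "Toolbar": ["New", "Open", "Save", "Print"],
        "Status bar": ["connection indicator", "progress display"],
    },
    "Preferences dialog": {
        "General tab": ["language selector", "startup options"],
        "Display tab": ["theme chooser", "font settings"],
    },
}

def elements(node, path=()):
    """Yield (path, element) pairs: an inventory of things we could test."""
    if isinstance(node, dict):
        for name, child in node.items():
            yield from elements(child, path + (name,))
    else:
        for element in node:
            yield (path, element)

for path, element in elements(ui_outline):
    print(" > ".join(path), "->", element)

Nothing about the walk-through depends on this particular representation; the point is the enumeration itself, and the noticing that happens while you do it.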

(Sometimes people ask, “Why bother? Don’t the specifications or the documentation or the Help file provide maps of the UI? What’s the point of making another one?” One answer is that the journey, rather than the map, is the point. We learn one set of things by reading about a product; we learn different things—and we typically learn more deeply—by touring the product, interacting with it, gaining experience with it, and organizing descriptions of what we’ve found. Moreover, at each moment, we may notice, infer, or wonder about things that the documentation doesn’t address. When we recognize something new, we can add it to our coverage model, our risk list, or our test ideas—plus we might recognize and note some bugs or issues along the way. Another answer is that we should treat anything that any documentation says about a product as a rumour until we’ve engaged with the product.)

One issue kept coming up in class: on the product coverage outline, where should the map of the user interface go? Under Functions (what the product does)? Or Operations (how people use the product)? Or Structure (the bits and pieces of the product)? My answer was that it doesn’t matter much where you put things on your coverage outline, as long as the model works for you and for the people with whom you might be sharing the map. The idea is to identify things that could be tested, and not to miss important stuff.

After one class, I was on the phone with James, and I happened to mention that day’s discussion. “I prefer to put the UI under Structure,” I noted.

“What? That’s crazy talk!” said James. “The UI goes under Functions!”

“What?” I replied. “That’s crazy talk. The UI isn’t Functions. Sure, it triggers functions. But it doesn’t perform those functions.”

“So what?” asked James. “If it’s how the user gets at functions, it fits under Functions just fine. What makes you think the UI goes under Structure?”

“Well, the UI has a structure. It’s… structural.”

“Everything has a structure,” said James. “The UI goes under Functions.”

And so we argued on. Then one of us—and I honestly don’t remember who—suggested that maybe the UI was important enough to be its own top-level product element. I do remember James pointing out that when we think of interfaces, plural, there might be several of them—not just the graphical user interface, but maybe a command-line interface, or an application programming interface.

“Hmmm…,” I said. This reminded me of the four-user model mentioned in How to Break Software (human user, API user, operating system user, file system user). “Interfaces,” I said. “Operating system interface, file system interface, network interface, printer interface, debugging interface, other devices…”

“Right,” said James. “Plus there are those other interface-y things—importing and exporting stuff, for instance.”

“Aren’t those covered under ‘Functions’?”

“Sure. Or they might be, depending on how you think about it. But the point of this kind of model isn’t to be a template, or a form you fill out. It’s to help us reduce the chances that we might miss something important. Our models are leaky abstractions; overlaps are okay,” said James. Which, of course, was exactly the same argument I had used on him several years earlier when we had added Time to the model. Then he paused. “Ah! But we don’t want to break the mnemonic, do we? San Francisco DePOT.”

“We can deal with that. Just misspell ‘depot’: San Francisco DIPOT. SFDIPOT.”

And so we updated the model.
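
For readers who want something concrete to start from, here’s a minimal sketch, in Python, of the revised top level as a simple data structure. The SFDIPOT guidewords are the model; every sub-item below is an illustrative placeholder of my own, not part of the model.

# The SFDIPOT guidewords as the top level of a product coverage outline.
# The sub-items are placeholders; your product will suggest its own.
coverage_outline = {
    "Structure":  ["code modules", "configuration files", "installed components"],
    "Function":   ["calculation", "error handling", "reporting"],
    "Data":       ["input domains", "output formats", "stored records"],
    "Interfaces": ["GUI", "command line", "API", "import/export"],
    "Platform":   ["operating system", "file system", "network"],
    "Operations": ["common workflows", "unusual usage patterns"],
    "Time":       ["timeouts", "concurrency", "date and time handling"],
}

def unaddressed(outline, covered):
    """Return outline elements with no test ideas against them yet."""
    return [(guideword, item)
            for guideword, items in outline.items()
            for item in items
            if item not in covered]

for guideword, item in unaddressed(coverage_outline, covered={"GUI", "calculation"}):
    print(guideword + ": " + item)

Note that overlaps between the categories are fine; as James says above, the model is meant to reduce the chances of missing something important, not to serve as a template or a form to fill out.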

I wonder what it will look like five years from now.

Very Short Blog Posts (20): More About Testability

Monday, July 14th, 2014

A few weeks ago, I posted a Very Short Blog Post on the bare-bones basics of testability. Today, I saw a very good post from Adam Knight talking about telling the testability story. Adam focused, as I did, on intrinsic testability—things in the product itself that make it more testable. But testability isn’t just a product attribute. In Heuristics of Testability (material we developed in a session of Rapid Software Testing Intensive Online), James Bach shows that testability is a set of relationships between product (“intrinsic testability”); project (“project-related testability”); tester (“subjective testability”); what we want from the product (“value-related testability”); and how we know what we know and what we need to know (“epistemic testability”).

Be sure of this: anything that makes testing harder or slower gives bugs more time or more opportunities to hide. In telling an expert and compelling story of our testing, it’s essential to identify and address things that make it harder to understand the product we’ve got—things that help to increase the risk that it won’t be the product our clients want.