Blog Posts for the ‘Measurement’ Category

Very Short Blog Posts (30): Checking and Measuring Quality

Monday, November 14th, 2016

This is an expansion of some recent tweets.

Do automated tests (in the RST namespace, checks) measure the quality of your product, as people sometimes suggest?

First, the check is automated; the test is not. You are performing a test, and you use a check—or many checks—inside the test. The machinery may press the buttons and return a bit, but that’s not the test. For it to be a test, you must prepare the check to cover some condition and alert you to a potential problem; and after the check, you must evaluate the outcome and learn something from it.
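To make the distinction concrete, here's a minimal sketch of a check (my own illustration, not from the tweets), assuming a hypothetical total_price function and a pytest-style assertion:

```python
# The check: machinery that operates part of the product and returns a bit.
# The test is the human work around it: deciding that pricing is worth
# examining, choosing these inputs, and interpreting what a red or green
# result might mean.

from decimal import Decimal

def total_price(unit_price, quantity, tax_rate):
    """Hypothetical function under test."""
    return Decimal(unit_price) * quantity * (1 + Decimal(tax_rate))

def test_total_price_applies_tax():
    # The assertion is the check; by itself it can only report pass or fail.
    assert total_price("10.00", 3, "0.13") == Decimal("33.90")
```

The green or red bit that comes back is the beginning of the testing work, not the end of it.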

The check doesn’t measure. In the same way, a ruler doesn’t measure anything. The ruler doesn’t know about measuring. You measure, and the ruler provides a scale by which you measure. The Mars rovers do not explore. The Mars rovers don’t even know they’re on Mars. Humans explore, and the Mars rovers are ingeniously crafted tools that extend our capabilities to explore.

So the checks measure neither the quality of the product nor your understanding of it. You measure those things—and the checks are like tiny rulers. They’re tools by which you operate the product and compare specific facts about it to your understanding of it.

Peter Houghton, whom I greatly admire, prompted me to think about this issue. Thanks to him for the inspiration. Read his blog.

Very Short Blog Posts (29): Defective Detection Effectiveness

Tuesday, July 14th, 2015

Managers are responsible for hiring testers, for training them, and for removing any obstacles that make testing harder or slower. Managers are also responsible for hiring developers and designers, and providing appropriate training when it’s needed. If there are problems in development, managers are responsible for helping the developers to address them.

Managers are also responsible for the scope of the product, the budget, the staffing, and the schedule. As such, they're responsible for maintaining awareness of the product, of product development, and of anything that threatens the value of either of these. Finally, managers are responsible for the release decision: is this product ready for deployment or release into the market?

Misbegotten metrics like “Defect Detection Percentage” (I won’t dignify references to them with a link) continue to plague the software development world, and are sometimes used to evaluate “testing effectiveness”. But since it’s management’s job to understand the product and to decide when the product ships, a too-low defect detection percentage suggests the possibility of development or testing problems, unaware management, or a rash shipping decision. Testers don’t decide whether or when to ship the product; that’s management’s responsibility. In other words: Defect Detection Percentage—to the degree that it has any validity at all—measures management effectiveness.

Facts and Figures in Software Engineering Research (Part 2)

Wednesday, October 22nd, 2014

On July 23, 2002, Capers Jones, Chief Scientist Emeritus of a company called Software Productivity Research, gave a presentation called “SOFTWARE QUALITY IN 2002: A SURVEY OF THE STATE OF THE ART”. In this presentation, he showed data on a slide titled “U.S. Averages for Software Quality”.

US Averages for Software Quality 2002

(Source: http://bit.ly/1rj19Ol, accessed September 5, 2014)

It is not clear what “defect potentials” means. A slide preceding this one says defect potentials are (or include) “requirements errors, design errors, code errors, document errors, bad fix errors, test plan errors, and test case errors.”

There is no description in the presentation of the link between these categories and the numbers in the “Defect Potential” column. Yes, the numbers are expressed in terms of “defects per function point”, but where did the numbers for these “potentials” come from?

In order to investigate this question, I spent just over a hundred dollars to purchase three books by Mr. Jones: Applied Software Measurement (Second Edition) (1997) [ASM2]; Applied Software Measurement: Assuring Productivity and Quality (Third Edition) (2008) [ASM3]; and The Economics of Software Quality (co-authored with Olivier Bonsignour) (2011). In [ASM2], he says

The “defect potential” of an application is the sum of all defects found during development and out into the field when the application is used by clients and customers. The kinds of defects that comprise the defect potential include five categories:

  • Requirements defects
  • Design defects
  • Source code defects
  • User documentation defects
  • “Bad fixes” or secondary defects found in repairs in prior defects

The information in this book is derived from observations of software projects that utilized formal design and code inspections plus full multistage testing activities. Obviously the companies also had formal and accurate defect tracking tools available.

Shortly afterwards, Mr. Jones says:

Note that this kind of data is clearly biased, since very few companies actually track life-cycle defect rates with the kind of precision needed to ensure really good data on this subject.

That’s not surprising, and it’s not the only problem. What are the biases? How might they affect the data? Which companies were included, and which were not? Did each company have the same classification scheme for assigning defects to categories? How can this information be generalized to other companies and projects?

More importantly, what is a defect? When does a coding defect become a defect (when the programmer types a variable name in error?), and when might it suddenly stop being a defect (when the programmer hits the backspace key three seconds later?)? Does the defect get counted as a defect in that case?

What is the model or theory that associates the number 1.25 in the slide above with the potential for defects in design? The text suggests that “defect potentials” refers to defects found—but that’s not a potential, that’s an outcome.

In Applied Software Measurement, Third Edition, things change a little:

The term “defect potential” refers to the probable number of defects found in five sources: requirements, design, source code, user documents, and bad fixes… The data on defect potentials comes from companies that actually have lifecycle quality measures. Only a few leading companies have this kind of data, and they are among the top-ranked companies in overall quality: IBM, Motorola, AT&T, and the like.

Note the change: there’s been a shift from the number of defects found to the probable number of defects found. But surely defects were either found or they weren’t; how can they be “probably found”? Perhaps this is a projection of defects to be found—but what is the projection based on? The text does not make this clear. And the question has still been begged: What is the model or theory that associates the number 1.25 in the slide above with the potential for defects in design?

These are questions of construct validity, about which I’ve written before. And there are many questions that one could ask about the way the data has been gathered, controlled, aggregated, normalized, and validated. But there’s something more troubling at work here.

Here’s a similar slide from a presentation in 2005:
US Averages for Software Quality 2005

(Source: http://twin-spin.cs.umn.edu/sites/twin-spin.cs.umn.edu/files/SQA05l.pdf, accessed September 5, 2014)

From a presentation in 2008:
US Averages for Software Quality 2008

(Source: http://www.jasst.jp/archives/jasst08e/pdf/A1.pdf, accessed September 5, 2014)

From a presentation in 2010:
US Averages for Software Quality 2010

(Source: http://www.sqgne.org/presentations/2010-11/Jones-Nov-2010.pdf, accessed September 5, 2014)

From a presentation in 2012:
US Averages for Software Quality 2012

(Source: http://sqgne.org/presentations/2012-13/Jones-Sep-2012.pdf, accessed September 5, 2014)

From a presentation in 2013:
US Averages for Software Quality 2013

(Source: http://namcookanalytics.com/wp-content/uploads/2013/10/SQA2013Long.pdf, accessed September 5, 2014)

And here’s one from all the way back in 2000:
US Averages for Software Quality 2000

(Source: http://www.ifpug.org/Conference%20Proceedings/IFPUG-2000/IFPUG2000-14-Jones-Function_Points_And_Software_Value.pdf, accessed October 22, 2014)

What explains the stubborn consistency, over 13 years, of every single data point in this table?

I thank Laurent Bossavit for his inspiration and assistance in exploring this data.

Facts and Figures in Software Engineering Research

Monday, October 20th, 2014

On July 23, 2002, Capers Jones, Chief Scientist Emeritus of a company called Software Productivity Research, gave a presentation called “SOFTWARE QUALITY IN 2002: A SURVEY OF THE STATE OF THE ART”. In this presentation, he provided the sources for his data on the second slide:

SPR clients from 1984 through 2002
• About 600 companies (150 clients in Fortune 500 set)
• About 30 government/military groups
• About 12,000 total projects
• New data = about 75 projects per month
• Data collected from 24 countries
• Observations during more than a dozen lawsuits

(Source: http://bit.ly/ZDFKaT, accessed September 5, 2014)

On May 2, 2005, Mr. Jones, this time billed as Chief Scientist and Founder of Software Quality Research, gave a presentation called “SOFTWARE QUALITY IN 2005: A SURVEY OF THE STATE OF THE ART”. In this presentation, he provided the source for his data, again on the second slide:

SPR clients from 1984 through 2005
• About 625 companies (150 clients in Fortune 500 set)
• About 35 government/military groups
• About 12,500 total projects
• New data = about 75 projects per month
• Data collected from 24 countries
• Observations during more than 15 lawsuits

(Source: http://bit.ly/1vEJVAc, accessed September 5, 2014)

Notice that 34 months have passed between the two presentations, and that the total number of projects has increased by 500. At 75 projects a month, we should expect about 2,550 projects to have been added to the original tally; yet only 500 have been added.

On January 30, 2008, Mr. Jones (Founder and Chief Scientist Emeritus of Software Quality Research), gave a presentation called “SOFTWARE QUALITY IN 2008: A SURVEY OF THE STATE OF THE ART”. This time the sources (once again on the second slide) looked like this:

SPR clients from 1984 through 2008
• About 650 companies (150 clients in Fortune 500 set)
• About 35 government/military groups
• About 12,500 total projects
• New data = about 75 projects per month
• Data collected from 24 countries
• Observations during more than 15 lawsuits

(Source: http://www.jasst.jp/archives/jasst08e/pdf/A1.pdf, accessed September 5, 2014)

This is odd. 32 months have passed since the May 2005 presentation. With new data being added at 75 projects per month, there should have been 2,400 new projects since the prior presentation. Yet there has been no increase at all in the number of total projects.

On November 2, 2010, Mr. Jones (now billed as Founder and Chief Scientist Emeritus and as President of Capers Jones & Associates LLC) gave a presentation called “SOFTWARE QUALITY IN 2010: A SURVEY OF THE STATE OF THE ART”. Here are the sources, once again from the second slide:

Data collected from 1984 through 2010
• About 675 companies (150 clients in Fortune 500 set)
• About 35 government/military groups
• About 13,500 total projects
• New data = about 50-75 projects per month
• Data collected from 24 countries
• Observations during more than 15 lawsuits

(Source: http://www.sqgne.org/presentations/2010-11/Jones-Nov-2010.pdf, accessed September 5, 2014)

Here three claims about the data have changed: 25 companies have been added to the data sources, the total set now comprises 13,500 projects, and “about 50-75 projects” have been added (or are being added; this isn't clear) per month. 33 full months have passed since the January 2008 presentation (which came at the end of that month). Even the lower bound of the claimed per-month increase would mean about 1,650 new projects since the last presentation; the claim of 75 per month would mean about 2,475. Yet the total has grown by only 1,000 projects. What does it mean to claim “new data = about 50-75 projects per month”, when the new data appears to be coming in at a rate well below the lowest rate claimed?

On May 1, 2012, Mr. Jones (CTO of Namcook Analytics LLC) gave a talk called “SOFTWARE QUALITY IN 2012: A SURVEY OF THE STATE OF THE ART”. Once again, the second slide provides the sources.

Data collected from 1984 through 2012
• About 675 companies (150 clients in Fortune 500 set)
• About 35 government/military groups
• About 13,500 total projects
• New data = about 50-75 projects per month
• Data collected from 24 countries
• Observations during more than 15 lawsuits

(Source: http://sqgne.org/presentations/2012-13/Jones-Sep-2012.pdf, accessed September 5, 2014)

Here there has been no change at all in any of the previous claims (except for the range of time over which the data has been collected). The claim that 50-75 projects per month are being added remains. At that rate, extrapolating from the claims in the November 2010 presentation, there should be between 14,400 and 14,850 projects in the data set. Yet the claim of 13,500 total projects also remains.

On August 18, 2013, Mr. Jones (now VP and CTO of Namcook Analytics LLC) gave a presentation called “SOFTWARE QUALITY IN 2013: A SURVEY OF THE STATE OF THE ART”. Here are the data sources (from page 2):

Data collected from 1984 through 2013
• About 675 companies (150 clients in Fortune 500 set)
• About 35 government/military groups
• About 13,500 total projects
• New data = about 50-75 projects per month
• Data collected from 24 countries
• Observations during more than 15 lawsuits

(Source: http://namcookanalytics.com/wp-content/uploads/2013/10/SQA2013Long.pdf, accessed September 5, 2014)

Once again, there is no change in the total number of projects, but the claim of 50-75 new projects per month remains. Based on the 2012 figures, the 15 months that have passed since then (more like 16, but we'll be generous here), and the growth claims in these presentations, there should be between 14,250 and 14,625 projects in the data set.

Based on the absolute claim of 75 new projects per month in the period 2002-2008, and 50 per month in the remainder, we’d expect 20,250 projects at a minimum by 2013. But let’s be conservative and generous, and base the claim of new projects per month at 50 for the entire period from 2002 to 2013. That would be 600 new projects per year over 11 years; 6,600 projects added to 2002’s 12,000 projects, for a total of 18,600 by 2013. Yet the total number of projects went up by only 1,500 over the 11-year period—less than one-quarter of what the “new data” claims would suggest.
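To make the comparison easy to repeat, here's a minimal sketch (my own, not from the presentations) that takes the dates and figures transcribed above and compares each reported total against the total implied by the rate claimed at the previous presentation:

```python
# Compare reported project totals against the totals implied by the claimed
# rate of new data (figures transcribed from the presentations quoted above).

presentations = [
    # (year, month, reported total projects, claimed new projects per month)
    (2002,  7, 12000, (75, 75)),
    (2005,  5, 12500, (75, 75)),
    (2008,  1, 12500, (75, 75)),
    (2010, 11, 13500, (50, 75)),
    (2012,  5, 13500, (50, 75)),
    (2013,  8, 13500, (50, 75)),
]

def months_between(y1, m1, y2, m2):
    # Whole-calendar-month arithmetic; close enough for this sanity check.
    return (y2 - y1) * 12 + (m2 - m1)

prev = presentations[0]
for cur in presentations[1:]:
    months = months_between(prev[0], prev[1], cur[0], cur[1])
    low, high = prev[3]  # the rate claimed at the *previous* presentation
    expected = (prev[2] + months * low, prev[2] + months * high)
    print(f"{cur[0]}: reported {cur[2]:,}; "
          f"expected {expected[0]:,}-{expected[1]:,} after {months} months")
    prev = cur
```

Every reported total falls well short of the range implied by the claimed collection rate, which is the inconsistency summarized below.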

In summary, we have two sets of figures in apparent conflict here. In each presentation,

1) the project data set is claimed to grow at a certain rate (50-75 per month, which amounts to 600-900 per year).
2) the reported number of projects grows at a completely different rate (on average, 136 per year).

What explains the inconsistency between the two sets of figures?

I thank Laurent Bossavit for his inspiration and help with this project.

Weighing the Evidence

Friday, September 12th, 2014

I’m going to tell you a true story.

Recently, in response to a few observations, I began to make a few changes in my diet and my habits. Perhaps you’ll be impressed.

  • I cut down radically on my consumption of sugar.
  • I cut down significantly on carbohydrates. (Very painful; I LOVE rice. I LOVE noodles.)
  • I started drinking less alcohol. (See above.)
  • I increased my intake of tea and water.
  • I’ve been reducing how much I eat during the day; some days I don’t eat at all until dinner. Other days I have breakfast, lunch, and dinner. And a snack.
  • I reflected on the idea of not eating during the day, thinking about Muslim friends who fast, and about Nassim Taleb’s ideas in Antifragile. I decided that some variation of this kind in a daily regimen is okay; even a good idea.
  • I started weighing myself regularly.

Impressed yet? Let me give you some data.

When I started, I reckon I was just under 169 lbs. (That's 76.6 kilograms, for non-Americans and younger Canadians. I still use pounds. I'm old. Plus it's easier to lose a pound than a kilogram, so I get a milestone-related ego boost more often.)

Actually, that 169 figure is a bit of a guess. When I became curious about my weight, the handiest tool for measuring it was my hotel room's bathroom scale. I kicked off my shoes, and then weighed myself. 173 lbs., less a correction for my clothes and the weight of all of the crap I habitually carry around in my pockets: Moleskine, iPhone, Swiss Army knife, wallet stuffed with receipts, pocket change (much of it from other countries). Sometimes a paperback.

Eventually I replaced the batteries on our home scale (when did bathroom scales suddenly start needing batteries? Are there electronics in there? Is there software? Has it been tested?—but I digress). The scale implicitly claims a certain level of precision by giving readings to the tenth of a pound. These readings are reliable, I believe; that is, they're consistent from one measurement to the next. I tested reliability by weighing myself several times over a five-minute period, and the results were consistent to the tenth of a pound. I repeated that test a day or two later. My weight was different, but I observed the same consistency.

I've been making the measurement of my actual weight a little more precise by, uh, leaving the clothes out of the measurement. I've been losing between one and two pounds a week pretty consistently. A few days ago, I weighed myself, and I got a figure of 159.9 lbs. Under 160! Then I popped up for a day or two. This morning, I weighed myself again. 159.4! Bring on the sugar!

That's my true story. Now, being a tester, I've been musing about aspects of the measurement protocol.

For example, being a bathroom scale, it's naturally in the bathroom. The number I read from the scale can vary depending on whether I weigh myself Before or After, if you catch my meaning. If I've just drunk a half litre of water, that's a whole pound to add to the variance. I've not been weighing myself at consistent times of the day, either. In fact, this afternoon I weighed myself again: 159.0! Aren't you impressed!

Despite my excitement, it would be kind of bogus for me to claim that I weigh 159.0 lbs, with the "point zero". I would guess my weight fluctuates by at least a pound through the day. More formally, there's natural variability in my weight, and to be perfectly honest, I haven't measured that variability. If I were trying to impress you with my weight-loss achievement, I'd be disposed to report the lowest number on any given day. You'd be justified in being skeptical about my credibility, which would make me obliged to earn it if I care about you. So what could I do to make my report more credible?

    • I could weigh myself several times per day (say, morning, afternoon, and night) at regular times, average the results, and report the average. If I wanted to be credible, I’d tell you about my procedure. If I wanted to be very credible, I’d tell you about the variances in the readings. If I wanted to be super credible, I’d let you see my raw data, too.

      All that would be pretty expensive and disruptive, since I would have to spend a few minutes going through a set procedure (no clothes, remember?) at very regular times, every day, whether I was at home or at a business lunch or travelling. Few hotel rooms provide scales, and even if they did, for consistency's sake, I'd have to bring my own scale with me. Plus I'd have to record and organize and report the data credibly too. So…

    • Maybe I could weigh myself once a day. To get a credible reading, I’d weigh myself under very similar and very controlled conditions; say, each morning, just before my shower. This would be convenient and efficient, since doffing clothes is part of the shower procedure anyway. (I apologize for my consistent violation of the “no disturbing mental images” rule in this post.) I’d still have to bring my own scale with me on business trips to be sure I’m using consistent instrumentation.
    • Speaking of instrumentation, it would be a good idea for me to establish the reliability and validity of my scale. I’ve described its reliability above; it produces a consistent reading from one measurement to the next. Is it a valid reading, though? If I desired credibility, I’d calibrate the scale regularly by comparing its readings to a reference scale or reference weight that itself was known to be reliable (consistent between observations) and valid (consistent with some consensus-based agreement on what “a pound” is). If I wanted to be super-credible, I’d report whatever inaccuracy or variability I observed in the reading from my scale, and potential inconsistencies in my reference instruments, hoping that both were within an acceptable range of tolerance. I might also invite other people to scrutinize and critique my procedure.
    • If I wanted to be ultra-scientific, I'd also have to be prepared to explain my metric—the measurement function by which I hang a number on an observation—and the manner in which I operationalized the metric. The metric here is bound into the bathroom scale: for each unit pound placed on the scale, the displayed figure should increase by 1.0. We could test that as I did above. Or, more whimsically, if I were to put 159 one-pound weights on one side of Sir Bedevere's largest scales, and me on the other, the scales would be in perfect balance (“and therefore… A WITCH!”), assuming no problems with the machinery.
    • If I missed any daily observations, that would be unfortunate and potentially misleading. Owning up to the omission and reporting it would probably be preferable to covering it up. Covering up and getting caught would torpedo my credibility.
    • Based on some early samples, and occasional resampling, I could determine the variability of my own weight. When reporting, I could give a precise figure along with the natural variation in the measurement: 159.4 lbs, +/- 1.2 lbs. (There's a small sketch of this, and of the charting idea below, just after this list.)
    • Unless I'm wasting away, you'd expect to see my weight stabilize after a while. Stabilize, but not freeze. Considering the natural variance in my weight, it would be weird and incredible if I were to report exactly the same weight week after week. In that case, you'd be justified in suspecting that something was wrong. It could be a case of quixotic reliability—Kirk and Miller's term for an observation that is consistent in a trivial and misleading way, as a broken thermometer might yield. Such observations, they say, frequently prove “only that the investigator has managed to observe or elicit ‘party line’ or rehearsed information. Americans, for example, reliably respond to the question ‘How are you?’ with the knee-jerk ‘Fine.’ The reliability of this answer does not make it useful data about how Americans are.” Another possibility, of course, is that I'm reporting faked data.
    • It might be more reasonable to drop the precision while retaining accuracy. “About 160 lbs” is an accurate statement, even if it’s not a precise one. “About 160, give or take a pound or so” is accurate, with a little patina of precision and a reasonable and declared tolerance for imprecision.
    • Plus, I don’t think anyone else cares about a daily report anyhow. Even I am only really interested in things in the longer term. Having gone this far watching things closely, I can probably relax. One weighing a week, on a reasonably consistent day, first thing in the morning before the shower (I promise; that was the last time I’ll present that image) is probably fine. So I can relax the time and cost of the procedure, too.
    • I'm looking for progress over time to see the effects of the changes I've made to my regimen. Saying “I weigh about 160. Six weeks ago, I weighed about 170” adds context to the report. I could provide the raw data:

      Plotting the data against time on a chart would illustrate the trend. I could display the data in a way that showed impressive progress:

      But basing the Y-axis at 154.0 (to which Excel defaulted, in this case) wouldn’t be very credible because it exaggerates the significance of the change. To be credible, I’d use a zero base:

      Using a zero-based Y-axis on the chart would show the significance of change in a more neutral way.

    • To support the quantitative data, I might add other observations, too: I've run out of holes on my belt and my pants are slipping down. My wife has told me that I look trimmer. Given that, I could add these observations to the long-term trend in the data, and could cautiously conclude that the regimen overall was having some effect.
    • All this is fine if I’m trying to find support for the hypothesis that my new regimen is having some effect. It’s not so good for two other things. First, it does not prove that my regimen change is having an effect. Maybe it’s having no effect at all, and I’ve been walking and biking more than before; or maybe I acquired some kind of wasting disease just as I began to cut down on the carbs. Second, it doesn’t identify specific factors that brought about weight loss and rule out other factors. To learn about those and to report on them credibly, I’d have to go back to a more refined approach. I would have to vary aspects of my diet while controlling others and make precise observations of what happened. I’d have to figure out what factors to vary, why they might be important, and what effects they might have. In other words, I’d be developing a hypothesis tied to a model and a body of theory. Then I’d set up experiments, systematically varying the inputs to see their effects, and searching for other factors that might influence the outcomes. I’d have to control for confounding factors outside of my diet. To make the experiment credible, I’d have to show that the numbers were focused on describing results, and not on attaining a goal. That’s the distinction between inquiry metrics and control metrics: an inquiry metric triggers questions; a control metric influences or drives decisions.
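Here's a minimal sketch of the two reporting ideas above (my own illustration, with invented readings rather than my real data): report a recent average along with its observed variation, and plot the trend against a zero-based Y-axis rather than one that exaggerates the change.

```python
# Toy illustration of the reporting ideas above, using invented readings.
import statistics
import matplotlib.pyplot as plt

weeks = [1, 2, 3, 4, 5, 6]
weights = [168.8, 166.9, 165.2, 163.0, 161.1, 159.4]  # hypothetical data, in lbs

# Report the recent average with its observed variation, not a single reading.
recent = weights[-3:]
avg = statistics.mean(recent)
half_range = (max(recent) - min(recent)) / 2
print(f"About {avg:.0f} lbs ({avg:.1f} +/- {half_range:.1f} lbs over the last three readings)")

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))

ax1.plot(weeks, weights, marker="o")
ax1.set_ylim(154, 170)   # the Excel-style default: looks dramatic
ax1.set_title("Y-axis based at 154")

ax2.plot(weeks, weights, marker="o")
ax2.set_ylim(0, 180)     # zero-based: shows the change more neutrally
ax2.set_title("Zero-based Y-axis")

for ax in (ax1, ax2):
    ax.set_xlabel("Week")
    ax.set_ylabel("Weight (lbs)")

plt.tight_layout()
plt.show()
```

The same readings look dramatic or modest depending on how the chart is framed, which is exactly the credibility point about the Y-axis above.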

When I focus on the number, I set up the possibility of some potentially harmful effects. To make the number look really good on any given day, I might cut my water intake. To make the number look fabulous over a prolonged period (say, as long as I was reporting my weight to you), I could simply starve myself until you stopped paying attention. Then it'd be back to lots of sugar in the coffee, and yes, I will have another beer, thank you. I know that if I were to start exercising, I'd build up muscle mass, and muscle weighs more than flab. It becomes very tempting to optimize my weight in pounds, not only to impress you, but also to make me feel proud of myself. Worst of all: I might rig the system not consciously, but unconsciously. Controlling the number is reciprocal; the number ends up controlling me.

Having gone through all of this, it might be a good idea to take a step back and line up the accuracy and precision of my measurement scheme with my goal—which I probably should have done in the first place. I don't really care how much I weigh in pounds; that's just a number. No one else should care how much I weigh every day. And come to think of it, even if they did care, it's none of their damn business. The quantitative value of my weight is only a stand-in—a proxy or an indirect measurement—for my real goal. My real goal is to look and feel more sleek and trim. It's not to weigh a certain number of pounds; it's to get to a state where my so-called "friends" stop patting my belly and asking me when the baby is due. (You guys know who you are.)

That goal doesn't warrant a strict scientific approach, a well-defined system of observation, and precise reporting, because it doesn't matter much except to me. Some data might illustrate or inform the story of my progress, but the evidence that matters is in the mirror; do I look and feel better than before?

In a different context, you may want to persuade people in a professional discipline of some belief or some course of action, while claiming that you're making solid arguments based on facts. If so, you have to marshal and present your facts in a way that stands up to scrutiny. So, over the next little while, I'll raise some issues and discuss things that might be important for credible reporting in a professional community.


This blog post was strongly influenced by several sources.

Cem Kaner and Walter P. Bond, “Software Engineering Metrics: What Do They Measure and How Do We Know”. In particular, I used the ten questions on measurement validity from that paper as a checklist for my elaborate and rigorous measurement procedures above. If you're a tester and you haven't read the paper, my advice is to read it. If you have read it, read it again.

Shadish, Cook, and Campbell, Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Snappy title, eh? As books go, it's quite expensive, too. But if you're going to get serious about looking at measurement validity, it's a worthwhile investment, extremely interesting and informative.

Jerome Kirk and Mark L. Miller, Reliability and Validity in Qualitative Research. This very slim book raises lots of issues in performing, analyzing, and reporting research if your aim is to be credible. (Ultimately, all research, whether focused on quantitative data or not, serves a qualitative purpose: understanding the nature of things at least a little better.)

Gerald M. (Jerry) Weinberg, Quality Software Management, Vol. 2: First Order Measurement (also available as two e-books, "How to Observe Software" and "Responding to Significant Software Events").

Edward Tufte's Presenting Data and Information (a mind-blowing one-day course) and his books The Visual Display of Quantitative Information; Envisioning Information; Visual Explanations; and Beautiful Evidence.

Prior Art Dept.: As I was writing this post, I dimly recalled Brian Marick posting something on losing weight several years ago. I deliberately did not look at that post until I was finished with this one. From what I can see, that material (http://www.exampler.com/old-blog/2005/04/02/#big-visible-belly) was not related to this. On the other hand, I hope Brian continues to look and feel his best. 🙂

I thank Laurent Bossavit and James Bach for their reviews of earlier drafts of this article.

Construct Validity

Tuesday, September 9th, 2014

A construct, in science, is (informally) a pattern or a means of categorizing something you’re talking about, especially when the thing you’re talking about is abstract.

Constructs are really important in both qualitative and quantitative research, because they allow us to differentiate between “one of these” and “not one of these”, which is one of the first steps in measurement and analysis. If you want to describe something or count it such that other people find you credible, you’ll need to describe the difference between “one” and “not-one” in a way that’s valid. (“Valid” here means that you’ve provided descriptions, explanations, or measurements for your categorization scheme while managing or ruling out alternatives, such that other people are prepared to accept your construct, and your definition can withstand challenges successfully.)

If you’re familiar with object-oriented programming, you might think of a construct as being like a class, in that objects have an “is a” relationship to a class. In an object-oriented program, things tend to be pretty tidy; an object is either a member of a certain class or it isn’t. For example, in Ruby, an object will respond to a query of the kind_of?() method with a binary true or false. In the world, not under the control of nice, neat models developed by programmers armed with digital computers, things are more messy.
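Here's the same point as a small sketch of my own in Python, where isinstance() plays the role that kind_of?() plays in Ruby: inside the program, category membership is crisp, because somebody has already settled the categorization scheme.

```python
class Vehicle:
    pass

class Bicycle(Vehicle):   # in this model, a bicycle *is* a vehicle...
    pass

class Skateboard:         # ...and a skateboard, arbitrarily, is not
    pass

print(isinstance(Bicycle(), Vehicle))     # True, crisply
print(isinstance(Skateboard(), Vehicle))  # False, just as crisply
# The program answers tidily only because the messy construct-validity work
# (deciding what counts as a vehicle) happened, or didn't, before coding.
```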

Suppose that someone asks you to identify vehicles and pedestrians passing by a little booth that he's set up. It seems pretty obvious that you'd count cars and trucks without asking him for clarification. However, what about bicycles? Tricycles? A motor scooter? An electric motor scooter? If a unicyclist goes by, do we count him? A skateboarder? A pickup truck towing a wagon with two ATVs in it? A recreational vehicle towing a car? An ATV? A tractor, pulling a wagon? A diesel truck pulling a trailer? How do you count a tow-truck, towing another vehicle, with the other vehicle's driver riding in the tow truck? As one vehicle or two? A bus? A car transporter—a truck with nine vehicles on it? Who cares, you ask?

Well, the booth is at the entrance to a ferry boat, and the fee is $60 per vehicle, $5 per passenger, and $10 for pedestrians. Lots of people (especially those self-righteous cyclists) (relax; I'm one of them too) will gripe if they're charged sixty bucks. Yet where I live, a bicycle is considered a vehicle under the Highway Traffic Act, which would suit the ferry owner who wants to maximize the haul of cash. He'd especially like to see $600 from the car transporter. So in regular life, categorization schemes count, and the method for determining what fits into what category counts too.


How many vehicles?

If the problem is tricky for physical things—widgets—it's super-tricky for abstractions in science that pertains to humans. You've decided to study the effect of a new medicine, and you want to try it out on healthy people to check for possible side effects. What is a healthy person? Health is an abstraction; a construct. If someone is in terrific shape but happens to have a cold today, does that person count as healthy? Over the last few summers, I've met a kid who's a friend of a friend. He's fit, strong, capable, active… and he does kidney dialysis every couple of days or so. Healthy? A transplant patient who is in great shape, but who needs a daily dose of anti-rejection drugs: healthy?

If your country gives extra points to potential immigrants who are bilingual (as mine does), what level of fluency constitutes competence in a language to the degree that you can decide, “bilingual or not”? Note that I’m not referring to a test of whether someone is bilingual or not; I’m talking about the criteria that we’re going to test for; our sorting rules. Economists talk about “the economy” growing; what constitutes “the economy”? People speak of “events”; when airplanes hit the World Trade Center, was that one event or two? Who cares? Property owners and insurance companies cared very deeply indeed.

Construct validity is important in the “hard” physical sciences. “Temperature” is a construct. “To discuss the validity of a thermometer reading, a physical theory is necessary. The theory must posit not only that mercury expands linearly with temperature, but that water in fact boils at 100°. With such a theory, a thermometer that reads 82° when the water breaks into a boil can be reckoned inaccurate. Yet if the theory asserts that water boils at different temperatures under different ambient pressures, the same measurement may be valid under different circumstances — say at one half an atmosphere.” (Kirk and Miller, Reliability and Validity in Qualitative Research) Atmospheric pressure varies from day to day, from hour to hour. So what is the temperature outside your window right now? The “correct” answer is surprisingly hard to decide.
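The quoted example can be turned into a tiny sketch (mine, with rough and approximate boiling points) that shows where the judgment of validity lives: in the theory, not in the instrument.

```python
# Whether an 82-degree reading at the moment of boiling counts as accurate
# depends on a theory that includes ambient pressure (approximate figures).
boiling_point_c = {
    1.0: 100.0,  # about one atmosphere
    0.5: 82.0,   # roughly half an atmosphere, as in the quotation above
}

def reading_seems_accurate(reading_c, ambient_atm, tolerance=1.0):
    """Judge a thermometer reading taken just as the water breaks into a boil."""
    return abs(reading_c - boiling_point_c[ambient_atm]) <= tolerance

print(reading_seems_accurate(82.0, 1.0))  # False: inaccurate at sea level...
print(reading_seems_accurate(82.0, 0.5))  # True: ...plausible at half an atmosphere
```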

In the “soft” social sciences and qualitative research, the measurement problem is even harder. Kirk and Miller go on, “In the case of qualitative observations, the issue of validity is not a matter of methodological hairsplitting about the fifth decimal point, but a question of whether the researcher sees what he or she thinks he or she sees.” (Kirk and Miller, Reliability and Validity in Qualitative Research)

When we come to the field of software development, there are certain constructs that people bandy about as though they were widgets, instead of idea-stuff: requirements; defects; test cases; tests; fixes; discoveries. What is a “programmer”? What is a “tester”? Is a programmer who spends a couple of days writing a test framework a programmer or a tester? Questions like these raise problems for anyone who wants a quantitative answer to the question, “How many testers per developer?” Kaner, Hendrickson, and Smith-Brock go into extensive detail on the subject. I’ve written about what counts before, too.

There's a terrible difficulty in our craft: those who seem most eager to measure things seem not to pay very much attention to the problem of construct validity, as Cem Kaner and Walter P. Bond point out in this landmark paper, “Software Engineering Metrics: What Do They Measure and How Do We Know”. (I'm usually loath to say “All testers should do X”, but I think anyone serious about measurement in software development should read this paper. It's not hard. Do it now. I'll wait.)

If you're doing research into software development, how do you define, describe, and justify your notion of “defects” such that you count all the things that are defects, and leave out all the things that aren't defects, and such that your readers agree? If you're getting reports and aggregating data from the field, how do you make sure that other people are counting the same way as you are? Does “defect” have the same meaning in a game development shop as it does for the makers of avionics software? If you're attempting to prove something in a quantitative, rigorous, and scientific way, how do you answer objections when you say something is a defect and someone else says it isn't? How do you respond when someone wants to say that “there's more to defects than coding errors”?

Those questions will become very important in the days to come. Stay tuned.

For extra reading: See Shadish, Cook, and Campbell, Experimental and Quasi-Experimental Designs for Generalized Causal Inference. This book is unusually expensive, but well worth it if you’re serious about measurement and validity.

Very Short Blog Posts (19): Testing By Percentages

Sunday, May 4th, 2014

Every now and then, in some forum or another, someone says something like “75% of the testing done on an Agile project is done by automation”.

Whatever else might be wrong with that statement, it’s a very strange way to describe a complex, cognitive process of learning about a product through experimentation, and seeking to find problems that threaten the value of the product, the project, or the business. Perhaps the percentage comes from quantifying testing by counting test cases, but that’s at least as feeble as quantifying programming by counting lines of code; more so, probably, as James Bach and Aaron Hodder point out in “Test Cases Are Not Testing: Toward a Culture of Test Performance”.

But let me put this in an even simpler way: If someone said “management in an Agile project is 40% manual and 60% automated” (because managers spend 60% of their time in front of their computers), most of us would consider that a reflection of a very peculiar model of what it means to manage a project. If someone said that programming in an Agile project is “30% manual and 70% automated” (because most of the work of programming, that business of translating human instructions into machine language, is done by the compiler), we'd shake our heads over that person's confusion about what it means to do programming.

Why don’t people have the same reaction when it comes to testing?

Counting the Wagons

Monday, December 30th, 2013

A member of LinkedIn asks if “a test case can have multiple scenarios”. The question and the comments (now unreachable via the original link) reinforce, for me, just how unhelpful the notion of the “test case” is.

Since I was a tiny kid, I've watched trains go by—waiting at level crossings, dashing to the window of my Grade Three classroom, or being dragged by my mother's grandchildren to the balcony of her apartment, perched above a major train line that goes right through the centre of Toronto. I've always counted the cars (or wagons, to save us some confusion later on). As a kid, it was fun to see how long the train was (were there more than a hundred wagons?!). As a parent, it was a way to get the kids to practice counting while waiting for the train to pass and the crossing gates to lift.

train

Often the wagons are flatbeds, loaded with shipping containers or the trailers from trucks. Others are enclosed, but when I look through the screening, they seem to be carrying other vehicles—automobiles or pickup trucks. Some of the wagons are traditional boxcars. Other wagons are designed to carry liquids or gases, or grain, or gravel. Sometimes I imagine that I could learn something about the economy or the transportation business if I knew what the trains were actually carrying. But in reality, after I’ve counted them, I don’t know anything significant about the contents or their value. I know a number, but I don’t know the story. That’s important when a single car could have explosive implications, as in another memory from my youth.

A test case is like a railway wagon. It’s a container for other things, some of which have important implications and some of which don’t, some of which may be valuable, and some of which may be other containers. Like railway wagons, the contents—the cargo, and not the containers—are the really interesting and important parts. And like railway wagons, you can’t tell much about the contents without more information. Indeed, most of the time, you can’t tell from the outside whether you’re looking at something full, empty, or in between; something valuable or nothing at all; something ordinary and mundane, or something complex, expensive, or explosive. You can surely count the wagons—a kid can do that—but what do you know about the train and what it’s carrying?

To me, a test case is “a question that someone would like to ask (and presumably answer) about a program”. There’s nothing wrong with using “test case” as shorthand for the expression in quotes. We risk trouble, though, when we start to forget some important things.

  • Apparently simple questions may contain or imply multiple, complex, context-dependent questions.
  • Questions may have more outcomes than binary, yes-or-no, pass-or-fail, green-or-red answers. Simple questions can lead to complex answers with complex implications—not just a bit, but a story.
  • Both questions and answers can have multiple interpretations.
  • Different people will value different questions and answers in different ways.
  • For any given question, there may be many different ways to obtain an answer.
  • Answers can have multiple nuances and explanations.
  • Given a set of possible answers, many people will choose to provide a pleasant answer over an unpleasant one, especially when someone is under pressure.
  • The number of questions (or answers) we have tells us nothing about their relevance or value.
  • Most importantly: excellent testing of a product means asking questions that prompt discovery, rather than answering questions that confirm what we believe or hope.

Testing is an investigation in which we learn about the product we’ve got, so that our clients can make decisions about whether it’s the product they want. Other investigative disciplines don’t model things in terms of “cases”. Newspaper reporters don’t frame their questions in terms of “story cases”. Historians don’t write “history cases”. Even the most reductionist scientists talk about experiments, not “experiment cases”.

Why the fascination with modeling testing in terms of test cases? I suspect it’s because people have a hard time describing testing work qualitatively, as the complex cognitive activity that it is. These are often people whose minds are blown when we try to establish a distinction between testing and checking. Treating testing in terms of test cases, piecework, units of production, simplifies things for those who are disinclined to confront the complexity, and who prefer to think of testing as checking at the end of an assembly line, rather than as an ongoing, adaptive investigation. Test cases are easy to count, which in turn makes it easy to express testing work in a quantitative way. But as with trains, fixating on the containers doesn’t tell you anything about what’s in them, or about anything else that might be going on.


As an alternative to thinking in terms of test cases, try thinking in terms of coverage. Here are links to some further reading:

  • Got You Covered: Excellent testing starts by questioning the mission. So, the first step when we are seeking to evaluate or enhance the quality of our test coverage is to determine for whom we’re determining coverage, and why.
  • Cover or Discover: Excellent testing isn’t just about covering the “map”—it’s also about exploring the territory, which is the process by which we discover things that the map doesn’t cover.
  • A Map By Any Other Name: A mapping illustrates a relationship between two things. In testing, a map might look like a road map, but it might also look like a list, a chart, a table, or a pile of stories. We can use any of these to help us think about test coverage.
  • “What Counts”, an article that I wrote for Better Software magazine, on problems with counting things.
  • “Braiding the Stories” and “Delivering the News”, two blog posts on describing testing qualitatively.
  • My colleague James Bach has a presentation on the case against test cases.
  • Apropos of the reference to “scenarios” in the original thread, Cem Kaner has at least two valuable discussions of scenario testing, as tutorial notes and as an article.

Where Does All That Time Go?

Tuesday, October 30th, 2012

It had been a long day, so a few of the fellows from the class agreed to meet at a restaurant downtown. The main courses had been cleared off the table, some beer had been delivered, and we were waiting for dessert. Pedro (not his real name) was complaining, again, about how much time he had to spend doing administrivial tasks—meetings, filling out forms, time sheets, requisitions, and the like. “Everything takes so long. I want a pad of paper to take notes, I have to fill out a form for it. God help me if I run out of forms!”

“How much time do you spend on this kind of stuff each week?” I asked.

Pedro replied, “An hour a day. Maybe two, some days. Meetings…let’s say an hour and a half, on average.”

Wow, I thought—that’s a pretty good chunk of the week. I had an idea.

“Let's visualize this,” I said. I took out my trusty Moleskine notebook. I prefer the version with the graph paper in it, for occasions just like this one. I outlined a grid, 20 squares across by two down.

Empty Week

“So you spend, on average, an hour and a half each day on compliance stuff. One-point-five times five, or 7.5 hours a week. Let’s make it eight. Put a C in eight squares.” He did that.

Compliance

“Okay,” I said. “You were griping today about how much time you spend wrestling with your test environments.”

Pedro’s eyes lit up. “Yes!” he said. “That’s the big one. See, it’s mobile stuff. We have a server component and a handset component to what we do, and the server stuff is a real bear.”

“Tell me more.”

“It’s a big deal. We’ve got one environment that models the production system. The software we’re developing has been so buggy that we can’t tell whether a given problem is general, or specific to the handset, so we have another one that we set up to do targeted testing every time we add support for a new handset. That’s the one I work with. Trouble is, setting it up takes ages and it’s really finicky. I have to do everything really carefully. I’ve asked for time to do scripting to automate some of it, but they won’t give that to me, because they’re always in such a rush. So, I do it by hand. It’s buggy, and I make the odd mistake. Either way, when I find out it doesn’t work, I have to troubleshoot it. That means I have to get on instant messaging or the phone to the developers, and figure out what’s wrong; then I have to figure out where to roll back to. And usually that’s right from the start. It wastes hours. And it’s every day.”

“Okay. Show me that on our little table, here. Use an S to represent each hour you spend each day.”

Whereupon Pedro proceeded to fill in squares. Ten of them. Ten more. And then, eight more.

Setup

“Really?!” I said. “28 hours a week divided by five days—that’s more than five hours a day. Seriously?”

“Totally,” said Pedro. “It’s most of the day, every day, honestly. Never mind the tedium. What’s really killing me is that I don’t feel like I’m getting any real testing work done.”

“No kidding. There’s no time for it. There are only four squares left in the week. Plus, something you said earlier today about tons of bugs that aren’t related to setting up?”

“Right. When it comes to the stuff that I’m actually being asked to test, there’s lots of bugs there too. So my ‘testing time’ isn’t really testing. It’s mostly taken up with trying to reproduce and document the bugs.”

“Yes. In session-based test management, that’s bug investigation and reporting—B-time. And it does interrupt test design and execution—T-time—which is what produces actual test coverage, learning about what’s actually going on in the product. So, how much B-time?” He filled in three of the squares with Bs.

Bug Investigation and Reporting

“And T-time?”

He had room left to put in one lonely little T in the lower right corner.

Testing Time

“Wow,” I laughed. “One-fortieth of your whole week is spent in getting actual test coverage. The rest is all overhead. Have you told them how it affects you?”

“I’ve mentioned it,” he said.

“So look at this,” I suggested. “It’s even more clear when we use colour for emphasis.”

With Colour

“Whoa. I never looked at it that way. And then,” he paused. “Then they ask me, ‘Why didn’t you find that bug?'”

“Well,” I said, “considering the illusion they’re probably working under, it’s not an unreasonable question.”

“What do you mean?” Pedro asked.

“What does it say on your business card?”

“‘Software Testing’.”

“And what does it say on the door of the test lab?”

“‘Test Lab’,” said Pedro.

“And they call you…?”

“Pedro.”

“No,” I laughed. “They say you’re a… what?”

“Oh. A tester.”

“So since you’re a tester, and since the door on the test lab says ‘Test Lab’, and your business card says ‘Testing’, they figure that’s all you do. The illusion is what Jerry Weinberg calls the Lumping Problem. All of those different activities—administrative compliance, setup, bug investigation and reporting, and test design and execution—are lumped into a single idea for them.” And I drew it for him.

Management's Dream

“That’s management’s illusion, there. Since, in their imagination, you’ve got forty hours of testing time in a week, it’s not unreasonable for them to wonder why you didn’t find that bug.”

“Hmmm. Right,” said Pedro.

“When in fact, what they’re getting from you is this.” And I drew it for him.

Testing Reality

“For testing—actual interaction with the product, looking for problems—you've got one-fortieth of the time they think you've got. One lonely little T. Is that part of your test report?”

“Oy,” he said. “Maybe I should show them something like this.”

“Maybe you should,” I said.

A couple of nights later, I showed that page of my notebook to James Bach over Skype. “Wow,” he said. “That guy could be forty times more productive!”

“Forty?”

“Well, no, not really, of course. But suppose the programmers checked their work a little more carefully, or suppose the testers practiced writing more concise bug reports and sharpened their investigating skill. One of those two things could cut the bug investigation time by a third. That would give more time for testing, when they’re not being interrupted by other stuff. What if they cut the setup time by a half, and that administrivia by half?”

“Four, fourteen…” I said. “That would give eighteen more hours for testing and bug investigation, for a total of 22 hours. And even if they’re still doing two hours of bug investigation for every one hour of testing time… well, that’s seven times more productive, at least.”

“Seven times the test coverage if they get some of those issues worked out, then,” said James.

“Maybe de-lumping is the kind of thing lots of testers would want to do in their test reports,” I said.
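For anyone who wants to play with the arithmetic, here's a minimal sketch (my addition, using the numbers from the story above) that tallies Pedro's current week and the hypothetical improved week James describes:

```python
# De-lumping a 40-hour testing week, using the figures from the story above.
# C = compliance/administrivia, S = setup, B = bug investigation & reporting,
# T = test design and execution (the only activity that yields test coverage).

def report(label, hours):
    total = sum(hours.values())
    grid = "".join(activity * count for activity, count in hours.items())
    print(f"{label} ({total}h): {grid}")
    print(f"  T-time: {hours['T']}h of {total}h")

current = {"C": 8, "S": 28, "B": 3, "T": 1}

# Hypothetical improvements: administrivia and setup cut in half; the 18 freed
# hours go to testing and bug investigation at roughly two hours of B per T.
freed = (current["C"] - 4) + (current["S"] - 14)   # 4 + 14 = 18 hours
b_and_t = current["B"] + current["T"] + freed      # 22 hours
improved = {"C": 4, "S": 14, "B": round(b_and_t * 2 / 3), "T": round(b_and_t / 3)}

report("Current week ", current)
report("Improved week", improved)
```

With that rounding, the improved week lands at roughly seven hours of T-time against today's one, which is where the figure of "seven times more productive" comes from.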

How about you?

Why Pass vs. Fail Rates Are Unethical (Test Reporting Part 1)

Thursday, February 23rd, 2012

Calculating a ratio of passing tests to failing tests is a relatively easy task. If it is used as a means of estimating the state of a development project, though, the ratio is invalid, irrelevant, and misleading. At best, if everyone ignores it entirely, it’s simply playing with numbers. Otherwise, producing a pass/fail ratio is irresponsible, unethical, and unprofessional.

A passing test is no guarantee that the product is working correctly or reliably. Instead, a passing test is an observation that the program appeared to work correctly, under some set of conditions that we were conscious of (and many that we weren’t), using a selection of specific inputs (and not using the rest of an essentially infinite set), at some time (to which we will never return), on some machine (that was in a particular state at that time; we observed and understood only a fraction of that state), based on a handful of things that we were looking at (and a boatload of things that we weren’t looking at, not that we’d have any idea where or how to look for everything). At best, a passing test is a rumour of success. Take any of the parameters above, change one bit, and we could have had a failing test instead.

Meanwhile, a failing test is no guarantee of a failure in the product we’re testing. Someone may have misunderstood a requirement, and turned that misunderstanding into an inappropriate test procedure. Someone may have understood the requirement comprehensively, and erred in establishing the test procedure; someone else may have erred in following it. The platform on which we’re testing may be misconfigured, or there may be something wrong with something on the system, such that our failing test points to that problem and is not an indicator of a problem in our product. If the test was being assisted by automation, perhaps there was a bug in the automation. Our test tools may be misconfigured such that they’re not doing what we think they’re doing. When generating data, we may have misclassified invalid data as valid, or vice versa, and not noticed it. We may have inadvertently entered the wrong data. The timing of the test may be off, such that system was not ready for the input we provided. There may be an as-yet-not-understood reason why the product is providing a result which seems incorrect to us, but which is in fact correct. A failing test is an allegation of failure.

When we do the math based on these assumptions, the unit of measurement in which pass/fail rates are expressed is rumours over allegations. Is this a credible unit of measurement?

Neither rumours nor allegations are things. Uncertainties are not units with a valid natural scale against which they can be measured. One entity that we call a “test case”, whether passing or failing, may consist of a single operation, observation and decision rule. Another entity called “test case” may consist of hundreds or thousands or millions of operations, all invisible, with thousands of opportunities for a tester to observe problems based not only on explicit knowledge, but also on tacit knowledge. Measuring while failing to account for clear differences between entities demolishes the construct validity of the measurement. Treating test cases—whether passing or failing—as though they were countable objects is a classic case of the reification fallacy. Aggregating scale-free, reified (non-)entities loses information about each instance, and loses information about any relationships between them. Some number of rumours doesn't tell us anything about the meaning, significance, or value of any given passing test, nor does the aggregate tell us anything about the coverage that the passing tests provide, nor does the number tell us about missing coverage. Some number of allegations of which we're aware doesn't tell us anything about the seriousness of those allegations, nor does it tell us about undiscovered allegations. Dividing one invalid number by another invalid number doesn't mean the invalidity cancels and produces a valid ratio.

When a student has got an answer wrong, and the student is misinformed, there's a problem. What does the number of questions that the teacher asked have to do with it? When a manager interviews a candidate for a job, and halfway through the interview he suddenly starts shouting obscenities at her, will the number of questions the manager asked have anything to do with her hiring decision? If the battery on the Tesla Roadster is ever completely drained, the car turns into a brick with a $40,000 bill attached to it. Does anyone, anywhere, care about the number of passing tests that were done on the car?

If we are asked to produce pass/fail ratios, I would argue that it's our professional responsibility to politely refuse to do it, and to explain why: we should not be offering our clients the service of self-deception and illusion, nor should our clients accept those services. The ratio of passing test cases to failing test cases is at best irrelevant, and more often a systemic means of self- and organizational deception. Reducing the product story to a number means reducing its relationship with people to a number. By extension, that means reducing people to numbers too. So to irresponsible, unethical, and unprofessional, we can add unscientific and inhumane.

So what’s the alternative? We’ll get to that tomorrow.