
Facts and Figures in Software Engineering Research

On July 23, 2002, Capers Jones, Chief Scientist Emeritus of a company called Software Productivity Research, gave a presentation called “SOFTWARE QUALITY IN 2002: A SURVEY OF THE STATE OF THE ART”. In this presentation, he provided the sources for his data on the second slide:

SPR clients from 1984 through 2002
• About 600 companies (150 clients in Fortune 500 set)
• About 30 government/military groups
• About 12,000 total projects
• New data = about 75 projects per month
• Data collected from 24 countries
• Observations during more than a dozen lawsuits

(Source: http://bit.ly/ZDFKaT, accessed September 5, 2014)

On May 2, 2005, Mr. Jones, this time billed as Chief Scientist and Founder of Software Productivity Research, gave a presentation called “SOFTWARE QUALITY IN 2005: A SURVEY OF THE STATE OF THE ART”. In this presentation, he provided the source for his data, again on the second slide:

SPR clients from 1984 through 2005
• About 625 companies (150 clients in Fortune 500 set)
• About 35 government/military groups
• About 12,500 total projects
• New data = about 75 projects per month
• Data collected from 24 countries
• Observations during more than 15 lawsuits

(Source: http://bit.ly/1vEJVAc, accessed September 5, 2014)

Notice that 34 months have passed between the two presentations, and that the total number of projects has increased by 500. At 75 projects a month, we should expect about 2,550 projects to have been added to the original tally; yet only 500 have been added.
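
For readers who want to verify the arithmetic, here is a minimal Python sketch (mine, not from the presentations; the figures are taken from the slides quoted above):

```python
# A quick arithmetic check: extrapolate the claimed intake rate
# across the gap between the 2002 and 2005 presentations.
def expected_total(start_total, months, rate_per_month):
    """Project the reported total forward at the claimed intake rate."""
    return start_total + months * rate_per_month

print(expected_total(12_000, months=34, rate_per_month=75))  # 14550 expected
print(12_000 + 500)                                          # 12500 reported
```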

On January 30, 2008, Mr. Jones (Founder and Chief Scientist Emeritus of Software Productivity Research) gave a presentation called “SOFTWARE QUALITY IN 2008: A SURVEY OF THE STATE OF THE ART”. This time the sources (once again on the second slide) looked like this:

SPR clients from 1984 through 2008
• About 650 companies (150 clients in Fortune 500 set)
• About 35 government/military groups
• About 12,500 total projects
• New data = about 75 projects per month
• Data collected from 24 countries
• Observations during more than 15 lawsuits

(Source: http://www.jasst.jp/archives/jasst08e/pdf/A1.pdf, accessed September 5, 2014)

This is odd. 32 months have passed since the May 2005 presentation. With new data being added at 75 projects per month, there should have been 2,400 new projects since the prior presentation. Yet there has been no increase at all in the total number of projects.
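
The same sort of check, applied to this gap:

```python
# May 2005 -> January 2008: 32 months at the claimed 75 projects/month
# should add 2,400 projects; the reported total does not move at all.
print(12_500 + 32 * 75)  # 14900 expected
print(12_500)            # reported total, unchanged
```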

On November 2, 2010, Mr. Jones (now billed as Founder and Chief Scientist Emeritus, and as President of Capers Jones & Associates LLC) gave a presentation called “SOFTWARE QUALITY IN 2010: A SURVEY OF THE STATE OF THE ART”. Here are the sources, once again from the second slide:

Data collected from 1984 through 2010
• About 675 companies (150 clients in Fortune 500 set)
• About 35 government/military groups
• About 13,500 total projects
• New data = about 50-75 projects per month
• Data collected from 24 countries
• Observations during more than 15 lawsuits

(Source: http://www.sqgne.org/presentations/2010-11/Jones-Nov-2010.pdf, accessed September 5, 2014)

Here three claims about the data have changed: 25 companies have been added to the data sources; the total set now comprises 13,500 projects; and “about 50-75 projects” have been added (or are being added; this isn’t clear) per month. About 33 full months have passed since the January 2008 presentation (which came at the end of the month). Even at the lower bound of the claimed rate, 50 projects per month, we should expect roughly 1,650 new projects since the last presentation; at 75 per month, roughly 2,475. The claimed increase of 1,000 projects falls short of even the lower bound. What does it mean to claim “new data = about 50-75 projects per month” when the new data appears to be coming in at a rate below the lowest rate claimed?
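
Here is a sketch of that check, using my month count of 33 (an assumption; the exact count barely changes the conclusion):

```python
# January 2008 -> November 2010: roughly 33 months at the claimed
# 50-75 projects/month, starting from the 12,500 reported in 2008.
print(12_500 + 33 * 50, 12_500 + 33 * 75)  # 14150 14975 expected range
print(13_500)                              # reported total
```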

On May 1, 2012, Mr. Jones (CTO of Namcook Analytics LLC) gave a talk called “SOFTWARE QUALITY IN 2012: A SURVEY OF THE STATE OF THE ART”. Once again, the second slide provides the sources.

Data collected from 1984 through 2012
• About 675 companies (150 clients in Fortune 500 set)
• About 35 government/military groups
• About 13,500 total projects
• New data = about 50-75 projects per month
• Data collected from 24 countries
• Observations during more than 15 lawsuits

(Source: http://sqgne.org/presentations/2012-13/Jones-Sep-2012.pdf, accessed September 5, 2014)

Here there has been no change at all in any of the previous claims (except for the range of time over which the data has been collected). The claim that 50-75 projects are being added per month remains. At that rate, extrapolating from the claims in the November 2010 presentation, there should be between 14,400 and 14,850 projects in the data set. Yet the claim of 13,500 total projects also remains.
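
The extrapolation, spelled out:

```python
# November 2010 -> May 2012: 18 months at the claimed 50-75 projects/month,
# starting from the 13,500 reported in November 2010.
print(13_500 + 18 * 50, 13_500 + 18 * 75)  # 14400 14850 expected range
print(13_500)                              # reported total, unchanged
```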

On August 18, 2013, Mr. Jones (now VP and CTO of Namcook Analytics LLC) gave a presentation called “SOFTWARE QUALITY IN 2013: A SURVEY OF THE STATE OF THE ART”. Here are the data sources (from page 2):

Data collected from 1984 through 2013
• About 675 companies (150 clients in Fortune 500 set)
• About 35 government/military groups
• About 13,500 total projects
• New data = about 50-75 projects per month
• Data collected from 24 countries
• Observations during more than 15 lawsuits

(Source: http://namcookanalytics.com/wp-content/uploads/2013/10/SQA2013Long.pdf, accessed September 5, 2014)

Once again, there is no change in the total number of projects, but the claim of 50-75 new projects per month remains. Based on the 2012 figures, the 15 months that have passed (more like 16, but we’ll be generous here), and the growth claims in these presentations, there should be between 14,250 and 14,625 projects in the data set.
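
Once more, as a sketch:

```python
# May 2012 -> August 2013: a generous 15 months at 50-75 projects/month.
print(13_500 + 15 * 50, 13_500 + 15 * 75)  # 14250 14625 expected range
print(13_500)                              # reported total, still unchanged
```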

Based on the claim of 75 new projects per month for the period from 2002 through 2008, and 50 per month for the remainder, we’d expect roughly 20,250 projects by 2013. But let’s be conservative and generous, and assume just 50 new projects per month for the entire period from 2002 to 2013. That would be 600 new projects per year over 11 years: 6,600 projects added to 2002’s 12,000, for a total of 18,600 by 2013. Yet the total number of projects went up by only 1,500 over the 11-year period, less than one-quarter of what the “new data” claims would suggest.
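
A sketch of that conservative bound (my arithmetic, under the assumptions just stated):

```python
# The whole period, under the most conservative reading of the claims:
# 50 projects/month (600/year) for the 11 years from mid-2002 to mid-2013.
minimum_expected = 12_000 + 11 * 12 * 50
print(minimum_expected)    # 18600: the least we should expect by 2013
print(13_500)              # the total actually reported in 2013
print(13_500 - 12_000)     # 1500: the actual reported growth
print(1_500 / 6_600)       # ~0.23: under a quarter of the claimed additions
```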

In summary, we have two sets of figures in apparent conflict here. In each presentation,

1) the project data set is claimed to grow at a certain rate (50-75 per month, which amounts to 600-900 per year).
2) the reported number of projects grows at a completely different rate (on average, 136 per year).
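
To put the two rates side by side, here is a small sketch (mine) computing the yearly intake the claims imply versus the growth the reported totals actually show:

```python
# Claimed intake versus the rate implied by the reported totals
# (1,500 new projects over the 11 years from 2002 to 2013).
claimed_per_year = (50 * 12, 75 * 12)
implied_per_year = (13_500 - 12_000) / 11
print(claimed_per_year)         # (600, 900)
print(round(implied_per_year))  # 136
```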

What explains the inconsistency between the two sets of figures?

I thank Laurent Bossavit for his inspiration and help with this project.

6 replies to “Facts and Figures in Software Engineering Research”

  1. Could he be trimming the dataset each year? A moving average? Could he sign up projects but never reach conclusion (abandonment rate)? Could he be hiding data that doesn’t agree with his conclusions?

    Michael replies: He could be doing any number of things. I’ve seen validity described as something like “the extent to which the researchers have ruled out alternative hypotheses” (going by my memory of Reliability and Validity in Qualitative Research). That’s why it’s so important for researchers to provide a description of the constructs and how they are operationalized (researcher-speak for, basically, how the measurements are made).

  2. Hi Michael,

    After tallying it all up in Excel and trying to look at the numbers in various ways, I realized that it might easily be explained by a “forgotten” line item of “lost data” or “lost projects”, meaning that projects end for some reason and are no longer used in the statistics. This information seems to be missing from the fact list.

    If you added something like this to each entry, would it make more sense? (assuming my math is correct).

    2005 lost 2050
    2008 lost 2400
    2010 lost 700
    2012 lost 900
    2013 lost 750

    Another way of looking at it would be to measure the data against the baseline of 2002: not comparing each presentation to the previous statistics, but always to the first figures, from 2002. This gives you slightly different results, but you still get an unexplained difference, which could be resolved by listing the lost projects.

    -presentation to presentation-          |23.07.2002 |02.05.2005 |30.01.2008 |02.11.2010 |01.05.2012 |18.08.2013
    total projects                          |12000      |12500      |12500      |13500      |13500      |13500
    rate of project addition per month      |75         |75         |75         |50         |50         |50
    months since previous presentation      |           |34         |32         |34         |18         |15
    expected projects based on growth rate  |           |14550      |14900      |14200      |14400      |14250
    difference (expected vs. reported)      |           |2050       |2400       |700        |900        |750

    -baseline 2002-                         |23.07.2002 |02.05.2005 |30.01.2008 |02.11.2010 |01.05.2012 |18.08.2013
    total projects                          |12000      |12500      |12500      |13500      |13500      |13500
    rate of project addition per month      |75         |75         |75         |50         |50         |50
    months since 23.07.2002                 |           |34         |66         |100        |118        |133
    expected projects based on growth rate  |           |14550      |16950      |17000      |17900      |18650
    difference (expected vs. reported)      |           |2050       |4450       |3500       |4400       |5150

    Note: the “months” rows use Excel’s date-difference calculation; the expected-projects row multiplies those months by the rate of project addition.
    http://office.microsoft.com/en-us/excel-help/calculate-the-difference-between-two-dates-HP003056111.aspx#BMcalculate_the_number_of_months_betwee

    Note: I made the assumption of only 50 projects added per month for the last three entries.

    Cheers,
    Keith

    Michael replies: There are lots of potential explanations for the apparent anomalies. There are lots of potential explanations for the plausible data too. Alas, there’s very little explanation of the data at all.

  3. “What explains the inconsistency between the two sets of figures?”
    I took this as a rhetorical question. Then I saw 2 responders attempt to answer it.

    Perhaps we’ll never know the true answer. But I wonder whether the answer has anything to do with “bad math” at all.

  4. In my first response, I focused only on the specific inconsistency and Michael’s question regarding the information on this one specific slide over time, and completely ignored all of the other potential issues with the research, including the question of whether some of these things can be counted or measured at all. Perhaps that means I ignored the context and fell into a trap. I did it in his class too!

    Michael replies: These things will happen. 🙂 On the other hand, based on the next bit of your reply here, you’re way ahead of practically everyone else; so take heart.

    I do think there are a lot more questions we should ask about the referenced slides, assuming the things reported are even measurable. Most of the items require being put into a particular category, which requires guessing. (Example: is this defect a design or documentation issue, or a coding or bad-fix issue?) It is not clear, based on the available information, how this was done and what is behind it. The statistics seem to be based on this fuzzy information. Many of the measured items are examples of things that often don’t make sense to measure anyway, since they don’t really provide any value (due to the guessing) and involve too many unknowns. As an exercise, it seemed fun to try to figure out the one slide; on looking further into the detail, it probably doesn’t even make sense to bother. In the end I think the information is completely useless. Maybe I am missing something. Does anyone see any value in it? Does it tell us anything? Why should I care? I found some of the slides quite amusing, at least, so that was fun.

    Some people take the numbers, as presented, very seriously, while only a few are willing to begin the critical task of analyzing the claims, as you’ve done here. This makes me worried for my craft.

  5. Stats are like stars: they are there for all to see, but without any understanding of their distance, luminosity, chemical makeup, size, and many other factors, they are just dots in the sky.

    As such, any stats that are posted need to have all the pertinent information attached, so that informed, inquisitive, and scientific people can analyse the claims and verify them by repeating the experiments.

    Michael replies: That’s all true. But now I wonder: when I showed that not a single figure in the table mentioned on this page had changed over 13 years, and waited until the end of the post to question how that could be, was that too subtle? Let me be more explicit: since 2000, we’ve seen all of these things: 9/11, the release of Windows XP, the Agile Manifesto, the arrival of context-driven testing, the iPhone, the arrival and departure of Windows Vista, Stack Exchange, 64-bit laptop computers, the financial crisis, the explosion of mobile devices. And not even one figure in that table changes? An explanation is warranted.

