
Very Short Blog Posts (17): Regression Obsession

Regression testing is focused on the risk that something that used to work in some way no longer works that way. A lot of organizations (Agile ones in particular) seem fascinated by regression testing (or checking) above all other testing activities. It’s a good idea to check for the risk of regression, but it’s also a good idea to test for it. Moreover, it’s a good idea to make sure that, in your testing strategy, a focus on regression problems doesn’t overwhelm a search for problems generally—problems rooted in the innumerable risks that may beset products and projects—that may remain undetected by the current suite of regression checks.

One thing is for sure: if your regression checks are detecting a large number of regression problems, there’s likely a significant risk of other problems that those checks aren’t detecting. In that case, a tester’s first responsibility may not be to report any particular problem, but to report a much bigger one: regression-friendly environments ratchet up not only product risk, but also project risk, by giving bugs more time and more opportunity to hide. Lots of regression problems suggest that a project is not currently maintaining a sustainable pace.

And after all, if a bug clobbers your customer’s data, is the customer’s first question “Is that a regression bug, or is that a new bug?” And if the answer is “That wasn’t a regression; that was a new bug,” do you expect the customer to feel any better?

Related material:

Regression Testing (a presentation from STAR East 2013)
Questions from Listeners (2a): Handling Regression Testing
Testing Problems Are Test Results
You’ve Got Issues

4 replies to “Very Short Blog Posts (17): Regression Obsession”

  1. I’ve worked on two different projects that were beset with a plague of regression bugs, with very different approaches and results.

    On the first project, just about every stakeholder who knew what a regression bug was pestered the testers to focus a lot of effort on regression testing. The reason for this was that the project had repeatedly demonstrated that regression bugs were common, which had a lot to do with pace, but also with the fragility of the architecture. Unfortunately, rather than spending time fixing the architecture, the organization offloaded the problem onto the test team, which was ultimately a waste of time for testers AND programmers, since the programmers had to fix all sorts of regression bugs that likely wouldn’t have occurred in the first place had the architecture been repaired.

    The second project had a period where there were regression issues, so the programmers and I sat down and talked about what was going on. We realized that the problem had to do with some code we had inherited that was heavily duplicated (i.e., instead of creating a function and then calling it repeatedly, the original authors had simply copied and pasted code over and over as needed; there’s a sketch of this pattern after the comments below). So the programmers shifted focus and tackled the problem. Sure enough, while we have run occasional regression tests against that area, the issues haven’t reappeared in almost a year.

    tl;dr: Testers and programmers should both be involved in architecture discussions, because it can save everyone time in the long run.

    Michael replies: Instructive stories. Thank you.

  2. While regression tests have demonstrable value, I always feel like regression testing is a paranoia-driven activity rather than a curiosity-driven one.

    At my current company, we spend a large amount of resources on manually executing exhaustive regression tests, with a small team working on an automation framework (finally!). The rub is that these regression suites report a failure maybe once or twice a year (always a low-risk, trivial bug), even though they’re being run against every iteration of the software (I get new code in our QA environment anywhere from 3 times per month to 3 times per week). Is there really any value in a regression suite that runs test cases that rarely fail? Or are we automating the machine to say “fine” when we ask how it’s feeling?

    Michael replies: There is a good deal of value, I think, in regression checks that rarely fail, especially when those checks are close to the code level, acting as “change detectors” as Cem Kaner has put it, providing rapid feedback to the programmers (there’s a sketch of such a check after the comments below). At the testing end, though, I’d be happy if we got out of the scripted procedural test case business altogether. Although some things might need to be formalized sometimes—done in a specific way, or to check specific facts—it seems to me that writing programs for humans to perform holds a risk of overly focusing the testing work. Risk-based investigation of test ideas, yes; chartered scenario testing, yes; automation-assisted exploration (anywhere from the unit level to the system level), yes; automated checks, yes. Let automated checks address one set of possible regression problems; let human testers, investigating interactively, address a different set. And don’t overstructure the human work, unless you want to miss lots of problems.

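A minimal sketch of the copy-and-paste duplication described in the first comment above, alongside the extract-a-function fix. The domain, names, and numbers here are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Item:
    base_price: float
    quantity: int

# Before: the same discount rule pasted into two call sites. A change to the
# rule in one place quietly regresses the other.
def retail_total(item: Item) -> float:
    total = item.base_price * item.quantity
    if item.quantity >= 10:  # pasted copy #1
        total *= 0.90
    return total

def invoice_total(item: Item) -> float:
    total = item.base_price * item.quantity
    if item.quantity >= 10:  # pasted copy #2; easy to miss when the rule changes
        total *= 0.90
    return total

# After: the rule lives in one function that every call site uses, so a change
# lands in exactly one place.
def with_volume_discount(total: float, quantity: int) -> float:
    return total * 0.90 if quantity >= 10 else total

def retail_total_refactored(item: Item) -> float:
    return with_volume_discount(item.base_price * item.quantity, item.quantity)
```

Once the rule lives in one place, an edit to it can no longer silently break a forgotten copy, which is the mechanism behind the regression bugs the commenter describes.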
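And a minimal sketch of the kind of code-level “change detector” Michael mentions in his reply: a small, fast check (written here in pytest style) that pins down current behaviour and fails the moment a change alters it. The function and values are invented for illustration:

```python
def parse_quantity(text: str) -> int:
    """Hypothetical production function: parse a quantity field, defaulting to 1."""
    stripped = text.strip()
    return int(stripped) if stripped else 1

# A unit-level check that pins down current behaviour. It runs in milliseconds,
# so it can run on every build.
def test_parse_quantity_pins_current_behaviour():
    assert parse_quantity("12") == 12
    assert parse_quantity("  7 ") == 7
    assert parse_quantity("") == 1
```

Run on every build, a check like this rarely fails; when it does fail, the programmers learn about the behaviour change within minutes rather than weeks.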
