
Testing, Checking, and Convincing the Boss to Explore

How is it useful to make the distinction between testing and checking? One colleague (let’s call him Andrew) recently found it very useful indeed. I’ve been asked not to reveal his real name or his company, but he has very generously permitted me to tell this story.

He works for a large, globally distributed company, which produces goods and services in a sector not always known for its nimbleness. He’s been a test manager with the company for about 10 years. He’s had a number of senior managers who have allowed him and his team to take an exploratory approach, almost a skunkworks inside the larger organization. Rather than depending on process manuals and paperwork, he manages by direct interaction and conversation. He hires bright people, trains them, and grants them a fairly high degree of autonomy, balanced by frequent check-ins.

Recently, on a Thursday, the relatively new CEO came to town and held an all-hands meeting for Andrew’s division. Andrew was impressed; the CEO seemed genuinely interested in cutting bureaucracy and making the organization more flexible, adaptable, and responsive to change. After the CEO’s remarks, there was a question-and-answer period. Andrew asked if the company would be doing anything to make testing more effective and more efficient. The CEO seemed curious about that, and jotted down a note on a piece of paper. Andrew was given the mandate of following up with the VP responsible for that area.

Late that afternoon, Andrew called me. We chatted for a while on the phone. He hadn’t read my series on testing vs. checking, but he seemed intrigued. I suggested that he read it, and that we get together and talk about it.

As luck would have it, there was occasion to bring a few more people into the picture. That weekend, we had a timely conversation with Fiona Charles, who reminded us to focus on the issue of risk. Rob Sabourin happened to be visiting on Saturday evening, so he, Andrew, and I sat down to compose a letter to the VP. Aside from changing the names that would identify the parties involved, this is an unedited version of what we came up with:

[Our Letter]

Dear [Madam VP]…

[Mr. CEO] asked me to send you this email as a follow-up to a question that I posed during his recent trip to the [OurTown] office on [SomeDate] about the current state of testing at [OurCompany] and how our testing effectiveness could be improved.

The [OurTown]-based [OurDivision] test team has been very successful in finding serious issues with our products with a fairly small test team using exploratory test approaches. As an example, a couple of weeks ago one of my testers found a critical error in an emergency fix within two days of exploratory testing, in a load that had passed four person-weeks of regression testing (scripted checking) by another team.

Last week a Project Lead called me and asked if my team could perform a regression sweep on a third party delivery. I replied that we could provide the requested coverage with two person-days of effort without disrupting our other commitments. He seemed surprised and delighted. He had come to us because [OurCompany]’s typical approach yielded a four-to-six person-week effort which would have caused a delay in the project’s subsequent release.

Our experience using exploratory testing in [OurDivision] has demonstrated improved flexibility and adaptability to respond to rapid changes in priorities.

Testing is not checking. Checking is a process of confirmation, validation, and verification in which we are comparing an output to a predicted and expected result. Testing is something more than that. Testing is a process of exploration, discovery, investigation, learning, and analysis with the goal of gathering information about risks, vulnerabilities, and threats to the value of the product. The current effectiveness of many groups’ automated scripts is excellent, yet without supplementing these checks with “brain-engaged” human testing we run the risk of serious problems in the field impacting our customers, and the consequent bad press that follows such critical events.

At [OurCompany] much of our “testing” is focused on checking. This has served us fairly well but there are many important reasons for broadening the focus of our current approach. While checking is very important, it is vulnerable to the “pesticide paradox”. As bacteria develop a resistance to antibiotics, software bugs are capable of avoiding detection by existing tests (checks), whether executed once or repeated over and over. In order to reduce our vulnerability to field issues and critical customer incidents, we must supplement our existing emphasis on scripted tests (both manual and automated) with an active search for new problems and new risks.

There are several strong reasons for integrating exploratory approaches into our current development and testing practices:

  • Scripted tests are perceived to be important for compliance with [regulatory] requirements. They are focused on being repeatable and defensible. Mere compliance is insufficient—we need our products to work.
  • Scripted checks take time and effort to design and prepare, whether they are run by machines or by humans. We should focus on reducing preparation cost wherever possible and reallocating that effort to more valuable pursuits.
  • Scripted checks take far more time and effort to execute when performed by a human than when performed by a machine. For scripted checks, machine execution is recommended over human execution, allowing more time for both human interaction with the product, and consequent observation and evaluation.
  • Exploratory tests take advantage of the human capacity for recognizing new risks and problems.
  • Exploratory testing is highly credible and accountable when done well by trained testers. The findings of exploratory tests are rich, risk-focused, and value-centered, revealing far more knowledge about the system than simple pass/fail results.

The quality of exploratory testing is based upon the skill set and the mindset of the individual tester. Therefore, I recommend that testers and managers across the organization be trained in the structures and disciplines of excellent exploratory testing. As teams become trained, we should systematically introduce exploratory sessions into the existing testing processes, observing and evaluating the results obtained from each approach.

I have been actively involved in improving testing, in general, outside of [OurCompany]. I am on the board of a testing association and I have been attending, organizing and facilitating meetings of testers for many years.

During this time, I have been exposed to many of the latest developments in software testing, and I have led the implementation of Session-Based Exploratory Testing within my department. In addition, over the past four years, I have been providing instruction in software testing both to the testers within my business unit and to companies outside of [OurCompany].

I look forward to the opportunity to talk with you about this further.

[/Our Letter]

Now, I thought that was pretty strong. But the response was far more gratifying than I expected. Andrew sent the message on Sunday afternoon. The VP responded by 8:45am on Monday morning. Her reply was in my mailbox before 10:00am. The reply read:

[The VP’s Reply]

Dear Andrew

Thanks very much for the email. I find this very intriguing! I believe the distinction you make between testing and checking is quite insightful and I would like to connect with you to see how we can build these concepts and techniques into our quality management services as well as my central team verification tests. I will get a call together with [Mr. Bigwig] and [Mr. OtherBigwig] so that we can figure out the best way to incorporate your ideas. Again, many thanks!!

[/The VP’s Reply]

A couple of key points:

  • The letter was much stronger thanks to collaboration. Any one of the four of us could have written a good letter; the result was better than any of us could have done on our own.
  • The letter is sticky, in the sense that Chip and Dan Heath talk about in their book Made to Stick: Why Some Ideas Survive and Others Die. It’s not a profound book, but it contains some useful points to ponder. The letter starts with two stories that are simple, unexpected, concrete, credible, and emotional (remember, the product manager was surprised and delighted). Those initials can be rearranged to SUCCES, which is the mnemonic that the Heaths use for successful communication.
  • The testing vs. checking distinction is simpler and more memorable than “exploratory approaches” vs. “confirmatory scripted approaches”. The explanation is available (and in most cases necessary), but “testing” and “checking” roll off the tongue quickly after the explanation has been absorbed.
  • We managed to hit some of the most important aspects of good testing: cost vs. value, risk focus, diversification of approaches, flexibility and adaptability, and rapid service to the larger organization.
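
The “checking” half of the distinction can be made concrete in code. The sketch below is a hypothetical illustration (the `shipping_cost` function and its values are invented for this post, not drawn from the letter); it shows what a check is: a machine-decidable comparison of an observed output against a single predicted, expected result.

```python
# A "check", in the sense the letter uses: comparing an output to a
# predicted and expected result, yielding a binary verdict.
# shipping_cost and its expected values are hypothetical illustrations.

def shipping_cost(weight_kg: float) -> float:
    """Toy function under test: a flat rate plus a per-kilogram charge."""
    return 5.00 + 2.50 * weight_kg

def check_shipping_cost() -> bool:
    """The output either matches the prediction or it doesn't."""
    expected = 10.00
    actual = shipping_cost(2.0)
    return abs(actual - expected) < 1e-9  # no human judgment involved

if __name__ == "__main__":
    print("PASS" if check_shipping_cost() else "FAIL")
```

Everything beyond that binary comparison (wondering, say, whether a negative weight should be rejected at all) is testing in the letter’s sense, and no script performs it unprompted.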

After reviewing this post, Andrew said, “I like the post a lot. Let’s hope we end up helping a lot of people with it.” Amen. You are, of course, welcome to use this letter as a point of departure for your own letter to the bigwigs. If you’d like help, please feel free to drop me a line.

See more on testing vs. checking, but especially this.

11 replies to “Testing, Checking, and Convincing the Boss to Explore”

  1. I can't help but think that Andrew works at a place I used to…

    The letter is excellent in that it explains the difference between testing and checking in non-testers' terms, describes the positive impact ET has already had on a small scale, and shows how the organization can improve by increasing the use of ET.

    I'm planning to forward this post to a few people I know. Nice work!

  2. Brilliant letter, it has made a couple of loose connections in my brain finally fuse together into a larger picture (sorry for the rather poor mixing of metaphors there).

    I will now read up on checking versus testing, very useful indeed.

    Thanks

    Richard Hunter [insert your own joke on my name here; a small prize to the person who comes up with a new one 🙂]

  3. "As bacteria develop a resistance to antibiotics, software bugs are capable of avoiding detection by existing tests (checks), whether executed once or repeated over and over."
    Frankly, this analogy seems rather strained.
    Why wouldn't you just say that automated functional testing fits only the coverage of stable functionality that was previously explored manually, while automated unit testing is non-functional, and thus not sufficient by definition?

    So, automated/automatic testing will always play a secondary role, because all new functionality must be tested manually, and exploratory testing methodology fits best here.

    Thank you,
    Albert Gareev

  4. Michael,

    Excellent post!!!
    A few days ago I researched your posts about testing vs. checking; they are very useful, and the comments on those posts are also great.
    Now, this post is a summary of that idea.
    I really appreciate this kind of information, thanks!!!
    Best Regards,
    José Pablo Sarco

  5. This is one of the most useful posts I've read for quite some time.

    I've been quite successful incorporating agile techniques into a strongly structured environment and am working on convincing executive management to take that "final step". Our regression test set is becoming a cumbersome, ginormous albatross that has served its purpose and now needs to be moved out of the way (reduction, automation, etc.) to allow focus on new testing, which finds the majority of our errors.

    I have always had difficulties "selling" exploratory testing here in the conservative Midwest, although everyone will tune into "agile". I believe this email explains the very issues I've been dealing with, both succinctly and intelligently, and since the concepts are explained in yet a different way, the additional ammo will be invaluable to me as we plan our strategies for 2010. The email was masterfully written.

    I appreciate the sharing of a real life situation, particularly one which involves negotiating change with executive management. It's hard to find help in that arena.

    Thanks!
    – Linda

  6. I *just* (<15 mins ago) engaged in this same 'testing vs checking' argument with a project manager here: justifying manual testing in addition to running automated scripts.

    I believe I'll be making use of 'Andrew's' letter to at least some degree.

    Great post.

  7. Hi Michael,

    I'm working with a business mentor who has really latched onto the concept of exploratory testing as a selling point. He feels that when most people talk about testing they associate it with this concept of 'checking'.
    He got excited about this idea of providing 'investigative testing'. I'm really glad this post has come about. I will take some of this on board, and it will be in my business plan.

    many thanks

    Anne-Marie

  8. Hi Michael!

    I have been thinking a lot about the topic of this blog post since I attended your Rapid Software Testing course at the 2010 STP pre-conference days. This blog entry helped me to direct and shape my thoughts. Checking is a necessary thing that I have to do at my workplace. We are able to automate most of our test cases. Our previous checking process was as follows:
    1. The test team writes the test plan based on quite straightforward requirements (tests for the HL7 protocol).
    2. The design team writes code based on the same requirements.
    3. The test team writes test scripts.
    4. The test suite is run against the product built and deployed by the test team.
    Up to this point we were always on schedule.
    From this point on, the schedule always started to slip.

    Based on this experience we changed the checking process:
    1. The test team writes the test plan. The test plan is immediately presented to the design team.
    2. The test team creates a subset of test scripts to cover only part of the checking plan.
    3. The design team creates code that does not reveal any checking errors.
    4. The test team has more time (e.g. to start exploratory testing, or real testing).

    The point of this reply is to explain that the necessary checking of a product can be done efficiently through better communication between the test and design teams.
    It is important to say that the test team arrived at this modified checking process by asking the design team a simple question:
    “What will help you become more efficient in creating the product code?”
    The answer was simple: “Could you show us your test plan, please?”

