
How To Get What You Want From Testing (for Managers): The Q & A

On November 21, 2017, I delivered a webinar as part of the regular Technobility Webinar Series presented by my friend and colleague Peter de Jager. The webinar was called “How To Get What You Want from Testing (for Managers)”, and you can find it here.

Alas, we had only a one-hour slot, and there were plenty of questions afterwards. Each of the questions I received is potentially worthy of a blog post on its own. Here, though, I’ll try to summarize. I’ll give brief replies, suggesting links to where I might have already provided some answers, and offering an IOU for future blog posts if there are further questions or comments.

If the CEO doesn’t appreciate testing and QA enough to support developing a high-quality application, how can the QA Manager fight that battle?

First, in my view, testers should think seriously about whether they’re in the quality assurance business at all. We don’t run the business, we don’t manage the product, and we can’t assure quality. It’s our job to shine light on the product we’ve got, so that people who do run the business can decide whether it’s the product they want.

Second, the CEO is also the CQAO—the Chief Quality Assurance Officer. If there’s a disconnect between what you and the CEO believe to be important, at least one of two things is probably true: the CEO knows or values something you don’t, or you know or value something the CEO doesn’t. The knowledge problem may be resolved through communication—primarily conversation. If that doesn’t solve the disconnect, it’s more likely a question of a different set of values. If you don’t share those, your options are to adopt the CEO’s values, or to accept them, or to find a CEO whose values you do share.

How can we get managers to focus on the whole (e.g. lead time) instead of on testing metrics (when vendor X is coding and vendor Y is testing)?

As a tester, it’s not my job to change a manager, as such. I’m not the manager’s manager, and I don’t change people. I might try to steer the manager’s focus to some degree, which I can do by pointing to problems and to risks.

Is there a risk associated with focusing on testing metrics instead of on the whole? My answer, based on experience, is Yes; many managers have trouble observing software and software development. If you also believe that there’s risk associated with metrics fixation, have you made those risks clear to the manager? If your answer to that question is “I’ve tried, but I haven’t been successful,” get in touch with me and perhaps I can be of help.

One way I might be able to help right away is to recommend some reading: “Software Engineering Metrics: What Do They Measure and How Do We Know?” by Kaner and Bond; Measuring and Managing Performance in Organizations, by Robert Austin; and Jerry Weinberg’s Quality Software Management series, especially Quality Software Management, Vol. 2: First-Order Measurement (also available as two e-books, “How to Observe Software Systems” and “Responding to Significant Software Events”). Those works outline the problems and suggest some solutions.

An important suggestion related to this is to offer, to agree upon, and to live up to a set of commitments, along with a shared understanding of what each role involves.

How do you ensure testers put under the microscope the important bugs and ignore the trivial stuff?

My answers here include: training, continuous feedback, and continuous learning; continuous development, review, and refinement of risk models; providing testers with knowledge of and access to stakeholders that matter. Good models of quality criteria, product elements, and test techniques (the Heuristic Test Strategy Model is an example) help a lot, too.

Suggestions for the scalability of regression testing, specifically when developers say “this touches everything.”

When the developer says “this touches everything”, one interpretation is “I’m not sure what this touches” (since “this touches everything” is, at best, rarely true). “I’m not sure what this touches,” in turn, really means “I don’t actually understand what I’m modifying”. That interpretation (like others revealed in the “Testing without Machinery” chapter in Perfect Software and Other Illusions About Testing) points to a Severity 0 product and project risk.

So this is not simply a question about the scalability of regression testing. This is a question about the scalability of architecture, design, development, and programming. These are issues for the whole team, including managers, to address. The management suggestion is to steer things toward making the developer’s model of the code clearer, refactoring the code and the design until they’re comprehensible.

I’ve given some presentations on regression testing before, and written some blog posts here and here (although, ten years later, we no longer talk about “manual tests” and “automated tests”; we do talk about testing and checking).
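To make the scoping idea concrete, here is a minimal sketch, in Python, of selecting regression checks based on what a change actually touches. Everything in it is hypothetical: the module names, the coverage map, and the check names are illustrations, not any real tool’s API.

```python
# A minimal sketch of change-based regression scoping, assuming a
# hypothetical mapping from product modules to the checks that cover
# them. COVERAGE_MAP and the check names are illustrative only.

COVERAGE_MAP = {
    "billing/invoice.py": {"test_invoice_totals", "test_tax_rounding"},
    "billing/discounts.py": {"test_discount_rules"},
    "auth/session.py": {"test_login", "test_session_expiry"},
}

def checks_to_run(changed_modules):
    """Select only the checks covering the changed modules.

    If a developer can only say "this touches everything", this map
    degenerates to running the whole suite -- which is the point:
    scoping regression checks depends on understanding the change.
    """
    selected = set()
    for module in changed_modules:
        selected |= COVERAGE_MAP.get(module, set())
    return selected

print(checks_to_run(["billing/invoice.py"]))
# e.g. {'test_invoice_totals', 'test_tax_rounding'} (set order varies)
```

The sketch is only as good as the map; keeping that map honest is exactly the kind of shared architectural understanding the answer above is pointing at.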

How can we avoid a lot of back and forth, e.g. submitting issues piecemeal vs. submitting them in a larger set?

One way would be to practice telling a three-part testing story and delivering the news. That said, I’m not sure I have enough context to help you out. Please feel free to leave a comment or otherwise get in touch.

How much of what you teach can expand to testing in other contexts?

Plenty of it, I’d say. In Rapid Software Testing, we take an expansive view of testing: evaluating a product by learning about it through exploration and experimentation, which includes, to some degree, questioning, study, modeling, observation, inference, and plenty of other stuff. That’s applicable to lots of contexts. We also teach applied critical thinking: thinking about thinking, with the intention of avoiding being fooled. That’s extensible to lots of domains.

I’m curious about the other contexts you might have in mind, and why you ask. If I can be of assistance, please let me know.

I agree 100% with everything you say, and I feel daily meetings are not working, because everybody says what they did and what they will do and there is no talk about the state of the product.

That’s a significant test result: another example of the sort of thing that Jerry Weinberg refers to in the “Testing without Machinery” chapter of Perfect Software and Other Illusions About Testing. I’m not sure what your role is. As a manager, I would mandate reports on the status of the product as part of the daily conversation. As a tester, I would provide that information to the rest of the team, since that’s my job. What I found out about the product, what I did, and what happened when I did it are all part of that three-part testing story.

Perhaps test cases are used because they are quantifiable and help organise the testing of parts of the product.

Sure. Email messages are also quantifiable and they also help to organise the testing of parts of the product. Yet we don’t evaluate a business by the number of email messages it produces. Let’s agree to think all the way through this.
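To illustrate why quantifiability alone tells us so little, consider a toy sketch: one check split into five, multiplying the test-case count without adding any information. The function and the checks here are hypothetical.

```python
# A toy illustration of why "quantifiable" isn't "meaningful":
# one check can be split into many without adding information.

def add(a, b):
    return a + b

# One check...
def test_add():
    assert add(2, 2) == 4

# ...or five checks, for a count five times higher, conveying
# nothing that the single check above didn't already tell us:
def test_add_returns_int():
    assert isinstance(add(2, 2), int)

def test_add_result_positive():
    assert add(2, 2) > 0

def test_add_result_even():
    assert add(2, 2) % 2 == 0

def test_add_result_is_four():
    assert add(2, 2) == 4
```

Any metric that can be inflated this cheaply will be, once people are managed by it.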

“Writing” is not “factory work” either, but as you say, many, many tools help check spelling, grammar, awkward phrases, etc.

Of course. As I said in a recent blog post, “A good editor uses the spelling checker, while carefully monitoring and systematically distrusting it.” We do not dispute the power and usefulness of tools. We do remain skeptical about what will happen when tools are put in the hands of those who don’t have the skill or wisdom to apply them appropriately.
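As a toy illustration of using a tool while systematically distrusting it, a sketch might route the tool’s findings to a human rather than applying them automatically. The word list and the flag_suspects function are hypothetical; no real spelling-checker API is implied.

```python
# A minimal sketch of "use the tool, distrust the tool": flag
# suspect words, but hand them to a human for review instead of
# auto-correcting. KNOWN_WORDS is a stand-in for a real dictionary.

KNOWN_WORDS = {"the", "tester", "broke", "product", "wonderful"}

def flag_suspects(text):
    """Return words the tool can't vouch for: candidates for
    human judgment, not automatic 'fixes'."""
    return [w for w in text.lower().split() if w not in KNOWN_WORDS]

print(flag_suspects("The tester borke the wonderfull product"))
# ['borke', 'wonderfull']
```

The design choice is the point: the tool extends the editor’s attention; it doesn’t replace the editor’s decision.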

I appreciate the “testing/checking” distinction, but wish you had applied the labels the other way around, given the rise of “the checklist” as a crucial tool in both piloting and surgery; application of the checklist is NOT algorithmic when done as recommended.

Indeed. I’m confident, though, that smart people can keep track of the differences between “checking” or “a check” and a “checklist”, just as they can keep track of the differences between “testing” or “a test” and “testimony”.
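For readers who want that distinction anchored in code, here is a minimal sketch of a check in this sense: an algorithmic observation plus a decision rule, something a machine can apply. The product function is hypothetical.

```python
# A minimal sketch of a "check" in the testing/checking sense:
# observe, apply a decision rule, emit pass/fail. The product
# function (total_price) is a hypothetical stand-in.

def total_price(unit_price, quantity):
    return unit_price * quantity

def check_total_price():
    # The check itself: entirely algorithmic and machine-decidable.
    return total_price(3.00, 4) == 12.00

# Testing is the larger, human activity surrounding this check:
# deciding that this risk matters, designing the check, and
# interpreting what a pass or a fail means for the product.
print("pass" if check_total_price() else "fail")
```

A checklist, by contrast, prompts human judgment at each item; that’s why applying one well is testing-like, even though each line looks check-like.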

I would include “breaking” the product (to see if it fails gracefully and to find its limits).

I prefer to say “stressing” the product, and I agree absolutely with the goal to discover its limits, whether it fails gracefully, and what happens when it doesn’t.

“Breaking” is a common metaphor, to be sure. It worries me to some degree because of the public relations issue: “The software was fine until the testers broke it.” “We could ship our wonderful product on time if only the testers would stop breaking it.” “Normal customers wouldn’t have problems with our wonderful product; it’s just that the testers break it.” “There are no systemic management or development problems that have been leading to problems in the product. Nuh-uh. No way. The testers broke it.”
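To make “stressing” concrete, here is a minimal sketch of probing for a limit: grow the input until something visibly fails, then report where and how. The function under test (parse_record) is a hypothetical stand-in; real product code would be the interesting subject.

```python
# A minimal sketch of stressing a product to discover its limits.
# parse_record is a hypothetical stand-in for real product code.

def parse_record(data: str) -> list:
    return data.split(",")

def find_limit(max_exponent=20):
    """Double the input size until something breaks; note whether
    the failure is graceful (a visible exception) or not. This toy
    stand-in may well survive the whole range -- with real product
    code, where and how it fails is the interesting part."""
    for exponent in range(max_exponent):
        size = 2 ** exponent
        payload = "x," * size
        try:
            parse_record(payload)
        except Exception as exc:
            return size, exc
    return None, None  # no limit found in this range

size, failure = find_limit()
print(f"limit near {size}: {failure}" if failure else "no limit found")
```

Note that nothing here “breaks” the product; the probe reveals limits that were already in it.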

What about problems with the customer/user? Uninformed use or, even more problematical, unexpected use (especially if many users do the unexpected: using Lotus 123 as a word processor, for instance). I would argue you need to “model the human” (including his or her context of machine, attempted use, level of understanding, et al.).

And I would not argue with you. I’d agree, especially with respect to the tendency of users to do surprising things.

I think stories about the product should also include items about what the product did right (or less badly than expected), especially items that were NOT the focus of the design (i.e., how well does Lotus 123 work as a word processor?).

They could certainly include that to some degree. After all, the first items in our list of what to report in the product story are what it is and what it does, presumably successfully. Those things are important, and it’s worth reporting good news when there’s some available. My observation and experience suggest that reporting the good news is less urgent than noting what the product doesn’t do, or doesn’t do well, relative to what people want it to do, or hope that it does. It’s important to me that the good news doesn’t displace the important bad news. We’re not in the quality reassurance business.

So far, you appear to be missing much of the user side: what will they know, what context will they be in, what will they be doing with the product? (A friend was a hero for getting over-sized keyboards for football players, to deal with all the typing problems that appeared in the software but were the consequence of huge fingers trying to hit normal-sized keytops.)

That’s a great story.

I am really missing any mention of users, especially in an age of sensitivity to disabilities like lack of sight, hearing, manual dexterity, et al. It wasn’t until 11:45 or so that I heard about “use” and “abuse.” Again, all good stuff, but missing a huge part of what I think is the source of risk.

I was mostly talking to project managers and their relationships with testers in this talk (and at that, I went over time). In other talks (and especially when talking to testers), I would underscore the importance of modeling the end user’s desires and actions.

Please, in the blog, talk about risk from unexpected use (abuse and unusual uses).

The short blog post that I mentioned above talks about that a bit. Here is a more detailed one. Lately, I’ve been enjoying Gojko Adzic’s book Humans vs. Computers, and can recommend it.
