Blog Posts for the ‘Quality’ Category

“Why Didn’t We Catch This in QA?”

Thursday, August 13th, 2020

My good friend Keith Klain recently posted this on LinkedIn:

“Why didn’t we catch this in QA” might possibly be the most psychologically terrorizing and dysfunctional software testing culture an organization can have. I’ve seen it literally destroy good people and careers. It flies in the face of systems thinking, complexity of failure, risk management, and just about everything we know about the psychology involved in testing, but the bully and blame culture in IT refuses to let it die…”

There’s a lot to unpack here. Let’s start with this: what is “QA”?

If “QA” is quality assurance, then it’s important to figure out who, or what, assures quality—value to some person(s) who matter(s).

Confusion abounds when “QA” is used as a misnomer for testing. Testing is not quality assurance, though it can inform quality assurance. Testing does not assure quality, any more than diagnosis assures good health.

In terms of health, there’s no question that we want good diagnoses so that we can become aware of particular pathologies or diseases. If we’re in poor health, and we’re not aware of it, and diagnosis doesn’t catch it, it’s reasonable to ask why not, so that we can improve the quality of diagnosis. The unreasonableness starts when someone foolishly believes that diagnosis is infallible, or that it assures good health, or that it prevents disease—like believing that lab technicians and epidemiologists are responsible for COVID-19, or for its spread.

Once again, it is high time that we dropped the idea that testing is quality assurance. Who perpetuates this? Everyone, so it seems, and it’s not a new problem. At very least, it would be a great idea if testers stopped using the label to describe themselves. As long as testers persist in calling themselves “QA”, the pandemic of ignorance and blame will continue.

What, or who, does assure quality, then?

In one sense, everyone who performs work has agency or authority over it, which includes an implicit responsibility to assure its quality, just as everyone is responsible for maintaining the health of his or her mind and body. Assuring the quality of our work is a matter of craft; self-awareness; diligence; discipline; professionalism; and duty of care towards ourselves, our clients and our social groups. If we’re adults, no one else is responsible for washing our hands.

In everyday life, we make choices about lifestyle, diet, and hygiene that influence our health and safety. As adults, those choices, whether wise or reckless, are our responsibility. At work, our agency affords freedom and responsibility to push back or ask for help when we’re pressed to do work in a way that might compromise our own sense of quality. And our agency enables us to leave any situation in which we are required to behave in ways that we consider unprofessional or unethical.

Part of maintaining personal health is maintaining awareness of it. That means asking ourselves how we feel, and soliciting the help of others who can sometimes help us become aware of things that we don’t see, like personal trainers, doctors, or counsellors. Similarly, assuring quality in our work involves evaluating it—often with the help of other people—to become aware of its state, and in particular, its limitations and problems.

Other people might help us, but as authors of our own work, we are responsible for making those evaluations, and we are responsible for what we do based on those evaluations. Choices that bear on our health, or on the quality of our work, are ours to make.

So, in this sense, “why didn’t we catch this in QA?” would mean “why did we not assure the quality of our own work?” And at the centre of that “we” is “I”.

In another sense, responsibility for the quality of work and workplace resides in the management role. While we’re responsible for washing our hands, management is responsible for providing an environment where handwashing is possible—and for ensuring that people aren’t pushed into conditions where they’re endangering themselves, each other, or the business.

Insofar as management engages people to do work and make products, management is responsible for determining what constitutes quality work, and deciding whether the product has met its goals. Management decides whether the product it’s got is the product it wants—and the product it wants to ship. Management can ask testers to learn about the product on management’s behalf, but management is ultimately responsible for assuming the risk of unknown problems in the product.

Management is responsible for setting the course; for co-ordinating people; for marshalling resources; for setting policy; for providing help when it’s needed; for listening and responding and acting appropriately when people are pushing back. While testers help management to become aware of the status of the product, management is responsible for evaluating the quality of the work and the workplace, and for deciding (based on information from everyone, not only testers) whether the work is ready for the outside world.

Management assures quality by creating the conditions that make it possible for people to assure the quality of their own work. And management fails to assure quality when it sets up conditions that make quality assurance impossible, or that undermine it. In that case, “why didn’t we catch this in QA?” would mean “why didn’t management assure the quality of the work for which it is responsible?”

When people get sick, it’s reasonable to ask how people got sick. It’s reasonable to ask what they might need and what they might do to take better care of themselves. It’s also reasonable to ask if government is providing sufficient support for individual health, public health, and public health workers. It’s even reasonable to ask how better epidemiology and diagnosis could help to sound the alarm when people and populations aren’t healthy. It’s not reasonable to put responsibility for personal or public health on the epidemiologists and diagnosticians and lab techs.

So “Why didn’t we catch this in QA?” is a fine question to ask when it means “Why did we not assure the quality of our own work?” or “Why didn’t management assure the quality of the work for which it is responsible?” But don’t mistake testing for quality assurance, and don’t mistake the question for “Why didn’t testers assure the quality of the product?” And if you’re a tester, and being asked the latter question, reframe it to refer to the previous two.

Want to learn how to observe, analyze, and investigate software? Want to learn how to talk more clearly about testing with your clients and colleagues? Rapid Software Testing Explored, presented by me and set up for the daytime in North America and evenings in Europe and the UK, November 9-12. James Bach will be teaching Rapid Software Testing Managed November 17-20, and a flight of Rapid Software Testing Explored from December 8-11. There are also classes of Rapid Software Testing Applied coming up. See the full schedule, with links to register here.

I Represent the User! And We All Do

Saturday, December 15th, 2018

As a tester, I try to represent the interests of users. Saying “the user”, in the singular, feels like a trap to me. There are usually lots of users, and they tend to have diverse and sometimes competing interests. I’d like to represent and highlight the interests of users that might have been forgotten or overlooked.

There’s another trap, though. As Cem Kaner has pointed out, it’s worth remembering that practically everyone else on the team represents the interests of end users in some sense. “End users want this product in a timely way at a reasonable price, so let’s get things happening on schedule and on budget,” say the project managers. “End users like lots of features,” say the marketers. “End users want this specific feature right away,” say the sales people. “End users want this feature optimized like I’m making it now,” say the programmers. I’d be careful about claiming that I represent the end user—and especially insinuating that I’m the only one who does—when lots of other people can credibly make that claim.

Meanwhile, I aspire to test and find problems that threaten the value of the product for anyone who matters. That includes anyone who might have an interest in the success of the product, like managers and developers, of course. It also includes anyone whose interests might have been forgotten or neglected. Technical support people, customer service representatives, and documenters spring to mind as examples. There are others. Can you think of them? People who live in other countries or speak other languages, whether they’re end users or business partners or colleagues in remote offices, are often overlooked or ignored.

All of the people in our organization play a role in assuring quality. I can assure the quality of my own work, but not of the product overall. For that reason, it seems inappropriate to dub myself and my testing colleagues as “quality assurance”. The “quality assurance” moniker causes no end of confusion and angst. Alas, not much has changed over the last 35 years or so: no one, including most of the testing community, seems willing to call testers what they are: testers.

That’s a title I believe we should wear proudly and humbly. Proudly, because we cheerfully and diligently investigate the product, learning deeply about it where most others merely prod it a little. Humbly, because we don’t create the product, design it, code it, or fix it if it has problems. Let’s honour those who do that, and not make the over-reaching claim that we assure the quality of their work.

Very Short Blog Posts (30): Checking and Measuring Quality

Monday, November 14th, 2016

This is an expansion of some recent tweets.

Do automated tests (in the RST namespace, checks) measure the quality of your product, as people sometimes suggest?

First, the check is automated; the test is not. You are performing a test, and you use a check—or many checks—inside the test. The machinery may press the buttons and return a bit, but that’s not the test. For it to be a test, you must prepare the check to cover some condition and alert you to a potential problem; and after the check, you must evaluate the outcome and learn something from it.

The check doesn’t measure. In the same way, a ruler doesn’t measure anything. The ruler doesn’t know about measuring. You measure, and the ruler provides a scale by which you measure. The Mars rovers do not explore. The Mars rovers don’t even know they’re on Mars. Humans explore, and the Mars rovers are ingeniously crafted tools that extend our capabilities to explore.

So the checks measure neither the quality of the product nor your understanding of it. You measure those things—and the checks are like tiny rulers. They’re tools by which you operate the product and compare specific facts about it to your understanding of it.
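The check-versus-test distinction above can be sketched in code. In this minimal, hedged sketch (all of the names are hypothetical, invented for illustration), the check is the automatable part that “presses the buttons and returns a bit”; everything around it—deciding what condition is worth covering, and interpreting the outcome—is the human test:

```python
def total_price(items):
    """A hypothetical function under test."""
    return sum(items)

def check_total_price():
    """The check: machinery runs the product and returns a bit (True/False)."""
    return total_price([10, 20, 30]) == 60

# The test is the human work wrapped around the check:
# - before running it, someone judged that this condition mattered
#   and might alert us to a potential problem;
# - after running it, someone must evaluate the outcome and learn from it.
if __name__ == "__main__":
    outcome = check_total_price()
    print("check passed" if outcome else "check failed: investigate")
```

The bit that comes back is like a reading from a tiny ruler: it tells us nothing by itself until a person compares it to an understanding of the product.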

Peter Houghton, whom I greatly admire, prompted me to think about this issue. Thanks to him for the inspiration. Read his blog.

Taking Severity Seriously

Wednesday, January 14th, 2015

There’s a flaw in the way most organizations classify the severity of a bug. Here’s an example from the Elementool Web site (as of 14 January, 2015); I’m sure you’ve seen something like it:

Critical: The bug causes a failure of the complete software system, subsystem or a program within the system.
High: The bug does not cause a failure, but causes the system to produce incorrect, incomplete, inconsistent results or impairs the system usability.
Medium: The bug does not cause a failure, does not impair usability, and does not interfere in the fluent work of the system and programs.
Low: The bug is an aesthetic (sic —MB), is an enhancement (ditto) or is a result of non-conformance to a standard.

These are serious problems, to be sure—and there are problems with the categorizations, too. (For example, non-conformance to a medical device standard can get you publicly reprimanded by the FDA; how is that low severity?) But there’s a more serious problem with models of severity like this: they’re all about the system as though no person used that system. There’s no empathy or emotion here; there’s no impact on people. The descriptions don’t mention the victims of the problem, and they certainly don’t identify consequences for the business. What would happen if we thought of those categories a little differently?

Critical: The bug will cause so much harm or loss that customers will sue us, regulators will launch a probe of our management, newspapers will run a front-page story about us, and comedians will talk about us on late night talk shows. Our company will spend buckets of money on lawyers, public relations, and technical support to try to keep the company afloat. Many capable people will leave voluntarily without even looking for a new job. Lots of people will get laid off. Or, the bug blocks testing such that we could miss problems of this magnitude; go back to the beginning of this paragraph.

High: The bug will cause loss, harm, or deep annoyance and inconvenience to our customers, prompting them to flood the technical support phones, overwhelm the online chat team, return the product demanding their money back, and buy the competitor’s product. And they’ll complain loudly on Twitter. The newspaper story will make it to the front page of the business section, and our product will be used for a gag in Dilbert. Sales will take a hit and revenue will fall. The Technical Support department will hold a grudge against Development and Product Management for years. And our best workers won’t leave right away, but they’ll be sufficiently demoralized to start shopping their résumés around.

Medium: The bug will cause our customers to be frustrated or impatient, and to lose faith in our product such that they won’t necessarily call or write, but they won’t be back for the next version. Most won’t initiate a tweet about us, but they’ll eagerly retweet someone else’s. Or, the bug will annoy the CEO’s daughter, whereupon the CEO will pay an uncomfortable visit to the development group. People won’t leave the company, but they’ll be demotivated and call in sick more often. Tech support will handle an increased number of calls. Meanwhile, the testers will have—with the best of intentions—taken time to investigate and report the bug, such that other, more serious bugs will be missed (see “High” and “Critical” above). And a few months later, some middle manager will ask, uncomprehendingly, “Why didn’t you find that bug?”

Low: The bug is visible; it makes our customers laugh at us because it makes our managers, programmers, and testers look incompetent and sloppy—and it causes our customers to suspect deeper problems. Even people inside the company will tease others about the problem via graffiti in the stalls in the washroom (written with a non-washable Sharpie). Again, the testers will have spent some time on investigation and reporting, and again test coverage will suffer.

Of course, one really great way to avoid many of these kinds of problems is to focus on diligent craftsmanship supported by scrupulous testing. But when it comes to that discussion in that triage meeting, let’s consider the impact on real customers, on the real people in our company, and on our own reputations.

Why Would a User Do THAT?

Monday, March 4th, 2013

If you’ve been in testing for long enough, you’ll eventually report or demonstrate a problem, and you’ll hear this:

“No user would ever do that.”

Translated into English, that means “No user that I’ve thought of, and that I like, would do that on purpose, or in a way that I’ve imagined.” So here are a few ideas that might help to spur imagination.

  • The user made a simple mistake, based on his erroneous understanding of how the program was supposed to work.
  • The user had a simple slip of the fingers or the mind—inadvertently pasting a letter from his mother into the “Withdrawal Amount” field.
  • The user was distracted by something, and happened to omit an important step from a normal process.
  • The user was curious, and was trying to learn about the system.
  • The user was a hacker, and wanted to find specific vulnerabilities in the system.
  • The user is confused by the poor affordances in the product, and at that point was willing to try anything to get his task accomplished.
  • The user was poorly trained in how to use the product.
  • The user didn’t do that. The product did that, such that the user appeared to do that.
  • Users actually do that all the time, but the designer didn’t realize it, so the product’s design is inconsistent with the way the user actually works.
  • The product used to do it that way, but to the user’s surprise now does it this way.
  • The user was looking specifically for vulnerabilities in the product as a part of an evaluation of competing products.
  • The product did something that the user perceived as unusual, and the user is now exploring to get to the bottom of it.
  • The user did that because some other vulnerability—say, a botched installation of the product—led him there.
  • The user was in another country, where they use commas instead of periods, dashes instead of slashes, kilometres instead of miles… Or where dates aren’t rendered the way we render them here.
  • The user was testing the product.
  • The user didn’t realize this product doesn’t work the way that product does, even though the products have important and relevant similarities.
  • The user did that, prompted by an error in the documentation (which in turn was prompted by an error in a designer’s description of her intentions).
  • To the designer’s surprise, the user didn’t enter the data via the keyboard, but used the clipboard or a programming interface to enter a ton of data all at once.
  • The user was working for another company, and was trying to find problems in an active attempt to embarrass the programmer.
  • The user observed that this sequence of actions works in some other part of the product, and figured that the same sequence of actions would be appropriate here too.
  • The product took a long time to respond, the user got impatient, and started doing other stuff before the product responded to his earlier request.

And I’m not even really getting started. I’m sure you can supply lots more examples.

Do you see? The space of things that people can do intentionally or unintentionally, innocently or malevolently, capably or erroneously, is huge. This is why it’s important to test products not only for repeatability (which, for computer software, is relatively easy to demonstrate) but also for adaptability. In order to do this, we must do much more than show that a program can produce an expected, predicted result. We must also expose the product to reasonably foreseeable misuse, to stress, to the unexpected, and to the unpredicted.
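The bullet above about commas, periods, and date formats is one of the easiest of these surprises to make concrete. As a hedged sketch (the parser and its names are hypothetical, not taken from any real product), consider a routine whose designer assumed North American number formatting:

```python
def parse_amount(text):
    """Naive parser: strips thousands separators (commas), assumes '.' is
    the decimal point—a reasonable-looking choice that bakes in a locale."""
    return float(text.replace(",", ""))

# What the designer imagined the user would type:
parse_amount("1,234.56")   # 1234.56

# What a user in much of Europe might type for the same amount:
parse_amount("1.234,56")   # 1.23456 — no error is raised; the result is
                           # silently wrong by a factor of a thousand
```

No user “did anything wrong” here, and no exception fires for a check to catch; the product simply adapted badly to input that was perfectly normal for that user. That’s the kind of problem that testing for adaptability, rather than mere repeatability, is meant to expose.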

Exegesis Saves (Part 3) Beyond the Bromides

Sunday, January 23rd, 2011

Over the last few blog posts, some colleagues and I have been analyzing this sentence:

“In successful agile development teams, every team member takes responsibility for quality.”

Now, in one sense, it’s unfair for me to pick on this sentence, because I’ve taken it out of context. It’s not unique, though; a quick search on Google reveals lots of similar sentences:

“Agile teams work in a more collaborative and open manner which reduces the need for documentation-heavy, bureaucratic approaches. The good news is that they have a greater focus on quality-oriented, disciplined, and value-adding techniques than traditionalists do. However, the challenge is that the separation of work isn’t there any more—everyone on the team is responsible for quality, not just ‘quality practitioners’. This requires you to be willing to work closely with other IT professionals, to transfer your skills and knowledge to them and to pick up new skills and knowledge from them.”

“In Agile software development, the whole team is responsible for quality, but there are many barriers to accomplishing that goal.”

“So what does testing now need to know and do to work effectively within a team to deliver a system using an agile method? The concept of ‘the team being responsible for quality’ i.e. ‘the whole team concept’ and not just the testing team, is a key value of agile methods. Agile methods need the development team writing Unit tests and/or following Test First Design (TDD) practices (don’t confuse TDD as a test activity as in fact it is a mechanism to help with designing the code). The goal here is to get as much feedback on code and build quality as early as possible.” (site no longer findable on DNS —MB 2017-12-05)

“As we have seen quality is the responsibility of every team member in an agile team, not just the developers. Every team member has a role to play in building quality in.”

“The responsibility for quality was shifted to the whole team. Each of the different roles is responsible for doing some form of testing. Programmers, architects, analysts, and even managers are all intimately involved with testing activities and work closely together to achieve quality goals.”

“In traditional systems, the responsibility for quality is mainly delegated to testing teams that must make sure the code is of high quality. Agile thinking makes quality a collective responsibility of the customer, the developers and the testers all the time from the first, to the last minute of a project. The customer is involved in quality by defining acceptance tests. The developers are involved in quality by helping the customers write the tests, by writing unit tests for all the production code they write and the testers are involved by helping the developers automate acceptance (customer) tests and by extending the suite of automated tests.”

“I’m an advocate of any methodology that empowers engineers to work toward higher standards of software integrity. Agile and test-driven methodologies have found a way to do that by distributing ownership and responsibility for quality. And with so many ways to make static analysis boost the type of automated testing this requires, agile is a better formula for better code that can keep up with shorter scrum cycles and produce frequently “potentially shippable” products.”

“This then leads back to my original question. Who is responsible for quality? The answer is everyone.”

“The corollary to this rule is that testers cannot be responsible for quality; developers must be. The Agile methods put the responsibility for quality precisely where it belongs, with the developers.”

“In a successful team, every member feels responsible for the quality of the product. Responsibility for quality cannot be delegated from one team member to another team member or function. Similarly, every team member must be a customer advocate, considering the eventual usability of the product throughout its development cycle. Quality has to be built into the plans and schedules. Use bug allotments, iterations devoted to fixing bugs, to bring down bug debt. This will reduce velocity which may provide enough slack in future iterations to reduce bug rates.”

So the sentence that I quoted is by no means unique. Here’s the source for it. It’s an article by Lisa Crispin and Janet Gregory. You can read the rest of the article, and make your own decisions about whether the rest of it is helpful. You may decide (and I’d agree with you) that there are several worthy points in the article. You may decide that I’ve been unfair (and I’d agree with you) in excerpting the sentence without the rest of the article, especially considering that until now I’ve ignored the sentence that follows immediately, “This means that although testers bring special expertise and a unique viewpoint to the team, they are not the only ones responsible for testing or for the quality of the product.”

The latter sentence certainly clarifies the former. I would argue (and this is why I brought the whole matter up) that, in context, the second sentence renders the first unnecessary and even a distraction. Please note: my intention was not to critique the article, nor to launch a personal attack on Lisa and Janet. As I’ve said, I have no doubt that Lisa and Janet mean well, and their article contains several worthy points. My goal instead was to test this often-uttered tenet of agile lore. That’s why I didn’t reveal the source of the sentence initially; the article itself is not at issue here.

Yet the sentence, found in so many similar forms, is a bromide. My Oxford Dictionary of English says that a bromide is “a trite statement intended to soothe or placate”. As philosophers might say, the sentence doesn’t do any real explanatory work; it doesn’t make any useful distinctions. Even in context, it’s often unclear as to whether the sentence is intended to be descriptive (“this is the way things are”) or normative (“this is the way things should be”).

To highlight my greatest misgivings about the slogan, let’s restate it, replacing the word “quality” with Jerry Weinberg’s definition of it:

“In successful agile development teams, every team member takes responsibility for value to some person(s).”

Every time I see the whole-team-responsible-for-quality trope, I find myself wondering responsibility to whom, quality according to whom, and what it means to take responsibility. Figuring out what we intend to achieve, and how we are to achieve it, are among the harder problems of software development. (One can see this being played out in the Craftsmanship Skirmishes that are going on as I write.)

Here’s what would make the conversation more meaningful to me. I suggest that we who develop and write about software…

  • Consider quality not as something simple, objective, and abstract, but as something messy, subjective and very human. Quality isn’t a thing, but rather a complex set of relationships between products, people, and systems. As such, quality shouldn’t be swept under the rug of a vague slogan.
  • Remember that there are as many ways to think about quality as there are people to think about them, and that in a software development project, people’s interests will clash from time to time. Those conflicts will require some people to concede certain values in favour of other values. As Jerry Weinberg has pointed out, decisions about quality are political and emotional. Sorting out those conflicts will require us to identify who has the power to make the decision, and to contribute to empowering that person when appropriate. That in turn will require us to develop skill, courage, and confidence, tempered with patience and humility.
  • Apropos of the last point, we must keep in mind that teams are collections—or systems—of individuals. It’s a nice idea to think that the whole team makes decisions collectively, but in reality some person—the product owner—must be granted the ultimate authority to make a decision. If the team is to be successful, there’s an implicit contract: the product owner must enable, trust and empower the members of the team to design and perform their work collaboratively, in the best ways that they know how; and the members of the team must inform, trust and empower the product owner, such that she can make the best decisions possible.
  • Since quality means “value to some person(s)”, be specific about what particular dimension of value we mean, and to whom. For example, in a particular context, if we want “quality” to mean “bug prevention,” let’s say precisely that. Then let’s recognize the ways in which certain approaches towards preventing bugs might represent a threat to someone’s current interests or to their personal safety rules. If, in a particular context, we want quality to mean “problems solved for customers”, let’s say precisely that. Then let’s recognize that there are many approaches to solving problems, and that some problems might be solved by writing less software, not more. If we want “quality” to mean “many features in a product”, let’s say precisely that. Then let’s recognize how “many features” can satisfy some people—while adding complexity, development time, and expense to a product, thereby confusing and annoying other people. In other words, let’s use the word “quality” in a more careful way, which starts with deciding whether we want to use the word at all.
  • Since quality is a complex notion, we must learn not only to declare quality in terms of things that we expect and prescribe. Those things are important, to be sure. Yet we must also prepare to seek the unexpected, to make discoveries, and to respond to change.
  • Rather than striving for influence (which to me has echoes of the quality assurance or quality control mindset), testers in particular must strive to develop understanding and awareness of the system that we’re being asked to test. To me, that’s our principal product: learning about the system and reporting what we’ve learned to help the project community to gain greater understanding of the system, the problems, and the risks.
  • Consider responsibility not in terms of something people take, but as something that people accept, and also as something that people grant, share, and extend to one another. To me, that means contributing to an environment in which everyone is empowered to create and defend the value of each other’s work. That includes offering, asking for and accepting help—and dealing with unpleasant news appropriately, whether we’re delivering it or receiving it.

There have been several pleasures in working through this exercise—most notably the transpection with my colleagues on Twitter and in Skype. But there’s been another: rediscovering and re-reading the Declaration of Interdependence and George Orwell’s Politics and the English Language.

Note that I’m saying that these ideas would make the conversation more meaningful to me. You may have other thoughts, and I’d like to hear them, so I hope you’re willing to share them.

Exegesis Saves! (Part 2) Transpection with James Bach

Friday, January 21st, 2011

Last evening, after a long session of collecting and organizing a large number of contributed responses to yesterday’s testing challenge, I was going over my own perspectives on the sentence

“In successful agile development teams, every team member takes responsibility for quality.”

James Bach appeared on Skype, and we began an impromptu transpection session. It went more or less like this:

James: I saw your original challenge and a couple of replies. Are you familiar with the “tragedy of the commons”? That’s what the sentence reminds me of. However, I bet there is something real behind that statement; something good. I have been on management teams where I felt we were all responsible for quality, but not for all quality equally. We each had our jobs to do.

Anyway, I think “tedious” is a good descriptor. The sentence is a bland truism, like “honesty is good”. You can re-render it like this: “On a successful agile project, you won’t find anyone irresponsible about quality.” Oh THANKS for that insight.

Michael: I’ve got a bunch of problems with it. Is it a descriptive definition or normative definition? What is being distinguished here? The successful vs. the unsuccessful? The agile vs. the non-agile? Every team member vs. most team members vs. no team member? The responsible vs. the irresponsible? Does quality here mean critical thinking, or writing acceptance tests, or the absence of bugs, or making the customers happy? What’s the scope for quality here? Could you ever get anyone to admit seriously that they’re not responsible for the quality of their own work? It seems to me that, with those things all in play, the sentence doesn’t do any useful work. “On a successful agile team, every team member is responsible for apple pie.”

James: You could contrast it with waterfall: On a successful waterfall project, only some people need to be responsible for quality. That makes waterfall seem better than agile, actually; less risk.

Michael: For somebody!

James: Or you could contrast it with the default: Under normal circumstances, people don’t take responsibility for quality (really?), unlike successful agile projects, where everyone does. Or you could take it to its logical conclusion: if two people on an agile project disagree about quality, they must work through the disagreement or else one of them must leave the project (really?). Which means: Successful agile projects consist only of people who are in complete agreement about quality.

What does “responsibility for quality” actually mean in that context? Does it mean “making the product better?” Or does it mean “accepting how the product is?” Maybe the intention is: developers have to fix any bug that anyone on the project wants them to fix.

Michael: Yes. All kinds of questions are being begged, it seems to me. Now, to be fair, I haven’t yet provided a link to the article in which this exact sentence appeared (I’ll talk about that in my own reply), but when I look up keywords from the sentence on Google, I find a ton of articles that beg the quality and responsibility questions. It seems to me that the intention is usually this: “On an Agile team, everyone is responsible for preventing bugs.” That’s a nice idea, but even that statement would have serious problems. Or maybe the intention is “On an Agile team, testers get to go to the meetings too.” To some people, it means “on an Agile team, the programmers actually test their own code.” In fact, I remember in the early years of the Agile movement, a few programmers would tell me that they’d never worked with testers who had delivered any real value to the project. When I considered some of the places I had visited and testers I had met, I was dismayed to find that claim credible. In such contexts, really conscientious programmers would tend to be extra diligent about testing their own work, and would tend to assert that they and they alone were responsible for its quality. That’s my understanding of part of the genesis of XP—programmers asserting that with respect to coding bugs at least, the buck stopped with them. That seems like it could be a positive subtext.

James: I think I see this subtext, too: “On an agile project, there is no social structure or order. We are a perfect Bose-Einstein condensate of spontaneous harmony. Everyone does everything. Communication is instant. Agreement is assured. All skills are held in common. All experiences are equivalent.”

Which in practice means: The Strong Rule The Weak.

Michael: Ouch.

James: The sub-sub text is then this: “Don’t ask about social order. We don’t like to talk about that.”

Michael: “…but we do have this sentence that you’re not really supposed to question. It’s a form of Word Magic, so please don’t mess with it.”

James: How about this, then: “On an agile project, one thing that is beyond question is that everyone is responsible for quality. Also, unicorns are pretty.”

Still more to come. Stay tuned.

Doing Development Work vs. Doing Quality Assurance

Saturday, June 5th, 2010

Here’s a case where a comment and question were worthy of a post of their own.  In reference to my recent post, Testers:  Get Out of the Quality Assurance Business, Selim Mia writes:

Hi Michael,

I started following your blog just a few days ago, and I would like to thank you for all of your thoughtful posts, which reflect your craftsmanship.

Thank you for reading, and thank you for thanking me.

I wholly agree with all of your points, advice, and discussion in this post. I have had much confusion about the terms QA and QC since the start of my testing career, and I still do; I think other testers have the same confusion. I have been working in a department called “QA” in my organization, but doing mostly testing tasks, as in other companies in Bangladesh. Along with testing, though, we have also been doing some QA tasks (I think), some of which I have listed below:

  • Check-in review: we check that each developer checks their source code into the SVN repository (our source code management system) at least once a day, with a comment describing what changed in that particular check-in, along with the name of the reviewer who pair-reviewed the code before check-in.
  • Code review: we check that code is reviewed at regular intervals by an expert in the technology in which the project is being developed (at least for new developers’ code, code for complex functionality, etc.), and we ensure that action has been taken on all of the review comments.
  • Audit process framework: we check that all of the development processes are being followed by all project members, except where there is sufficient justification and approval not to follow a particular process.
  • Audit bug repository: we ensure that all reported bugs have been acted upon (not a bug, assigned, WIP, fixed, won’t fix).
  • Audit document management system: we ensure that the latest versions of all documents for a particular project are stored in the DMS.

Are not all of the above activities part (of course, not all) of QA? Your kind words would be very helpful to me.

– Selim

What a great question! Thank you for asking.

The overarching mission for a tester, in my view, is to be of service to the project. Now, that’s not only the case for testers; I think it’s the overarching mission of anyone, everyone, on the project. We’re all in service to our paramount clients—the product owners, the business owners, the gold owners and the goal donors (as some Agile wags have said)—but we’re also in service to each other. When we’re thinking that way, the testers help the programmers by testing the product using a different skill set and mind set from the programmers; the programmers help the testers by providing a more testable product (log files, scriptable interfaces, and so on). Testers may help programmers to pinpoint the circumstances in which a bug happens; programmers help testers by providing explanations, test programs, hints on what to test. Testers learn to program; programmers learn to test. We support each other and learn from each other.

The Agile people for years have been advocating the idea of the self-organizing team. I believe in that too. That means that, in principle, anyone on the team is empowered to do whatever work needs to be done. So if a programmer takes on the tasks of setting up and configuring test environments, or if the tester is recruited to review code or models or bugs—activities that help to assure quality as a part of collaborative process, I’d say that’s cool.

The audit stuff gives me pause. Auditing, in my view, is a kind of testing role: gathering information with the intention of informing a decision. Auditors don’t set policy or enforce rules; they provide information to management. In many process-model-obsessed organizations (here in the West, at least) the role has taken on a different slant: auditors are a kind of process police. In such organizations, people rearrange and reprioritize their work not to optimize its value, but to keep the auditors happy. This is a form of goal displacement. To me, the priority should be on providing service and value to our clients, including each other.

In my view, if auditors discover some deviation from a set policy or a process model, I’d argue that the first step is to question the reasons for the deviation. Maybe someone is being sloppy; maybe someone is cutting corners; maybe someone is adding risk. But maybe someone has discovered a faster, less expensive, more efficient, more informative, more productive way of handling a task. Models always leave out something. Process models often leave out means by which we can encourage beneficial variation and change. I’ve never heard of an auditor reporting on some fabulous new problem-solving approach that someone has discovered internally. Most often, in my experience, process models leave out adaptability and people, as this remarkable TED talk describes.

It’s neither a tester’s job nor an auditor’s job, in my view, to set or enforce policy, and I think it’s politically dangerous for us to be perceived that way. As soon as we are perceived to be responsible for enforcement, we run the risk of being seen as tattletales, busybodies, quality police. In that kind of environment, information will soon start to be hidden, which undermines the task of investigating the product and identifying problems with it that threaten its value.

So, to the extent that you’re doing development work that helps to assure quality; to the extent that your teammates themselves are asking you to assist them; to the extent that you’re providing a service to them; to the extent that they appreciate what you’re doing as a service to them; and to the extent that they thank you for it, I’d say “rock on”, and congratulations.

In another forum, a correspondent suggested, “Maybe it’s all down to the ‘overall’ thing – be part of the process, not a megalomaniac who thinks he owns it.” I absolutely agree with that. To the extent that you’re doing “quality assurance”; to the extent that your managers are requiring you to impose on your teammates (or even worse, to the extent that you’re imposing without being asked by anyone); to the extent that you’re slowing down the project or inflicting help; to the extent that the programmers see your work as enforcing the contents of a process model or policy document; to the extent that you are barely tolerated or outright resented—well, as always, that’s up to you and your organization. But it’s not the kind of work that I would condone or accept myself.

Again, thanks for writing.

When Testers Are Asked For A Ship/No-Ship Opinion

Wednesday, May 5th, 2010

In response to my post, Testers:  Get Out of the Quality Assurance Business, my colleague Adam White writes,

I want to ask about your experience when you’ve applied your first 3 points for managers:

  • Provide them with the information they need to make informed decisions, and then let them make the decisions.
  • Remain fully aware that they’re making business decisions, not just technical ones.
  • Know that the product doesn’t necessarily have to hew to your standard of quality.

In my experience I took this to an extreme. I got to the point where I wouldn’t give my opinion on whether or not we should ship the product because I clearly didn’t have all the information to make a business decision like this.

I don’t think that’s taking things to an extreme. I think that’s professional and responsible behaviour for a tester.

Back in the 90s, I was a program manager for several important mass-market software products. As I said in the original post, I believe that I was pretty good at it. Several times, I tried to quit, and the company kept asking me to take on the role again. As a program manager, I would not have asked you for your opinion on a shipping decision. If you had merely told me, “You have to ship this product!” or “You can’t ship this product!” I would have thanked you for your opinion, and probed into why you thought that way. I would also have counselled you to frame your concerns in terms of threats to the value of the product, rather than telling me what I should or should not do. I likely would have explained the business reasons for my decisions; I liked to keep people on the team informed. But had you continued telling me what decision I “should” be making, and had you done it stridently enough, I would have stopped thanking you, and I would have insisted that you (and, if you persisted, your manager) stop telling me how to do my job.

On the other hand, I would have asked you constantly for solid technical information about the product.  I would have expected it as your primary deliverable, I would have paid very close attention to it, and would have weighed it very seriously in my decisions.

This issue is related to what Cem Kaner calls “bug advocacy“. “Bug advocacy” is an easy label to misinterpret. The idea is not that you’re advocating in favour of bugs, of course, but I don’t believe that you’re advocating that management do something specific with respect to a particular bug, either. Instead, you’re “selling” bugs. You’re an advocate in the sense that you’re trying to show each bug as the most important bug it can be, presenting it in terms of its clearest manifestation, its richest story, its greatest possible impact, its maximum threat to the value of the product. Like a good salesman, you’re trying to sell the customer on all of the dimensions and features of the bug, and trying to overcome all of the possible objections to the sale. A sale, in this case, results in a decision by management to address the bug. But (and this is a key point) like a good salesman, you’re not there to tell the customer that he must buy what you’re selling; you’re there to provide your customers with information about the product that allows them to make the best choice for them. And (corresponding key point) a responsible customer doesn’t let a salesman make his decisions for him.

I started to get pushback from my boss and others in development that it’s my job to give this opinion.

“Please, Tester! I’m scared to commit! Will you please do my job for me? At least part of it?”

What do you think? Should testers give their opinion on whether or not to ship the product or should they take the approach of presenting “just the facts ma’am”?

I remember being in something like your position when I was at Quarterdeck in the late 1990s. The decision to ship a certain product lay with the product manager, a role that at the time was a marketing function. She was young and ambitious, but (justifiably) nervous about whether to ship, so she asked the rest of us what she should do. I insisted that it was her decision; that, since she was the owner of the product, it was up to her to make the decision. The testers, also being asked to provide their opinion, also declined. Then she wanted to put it to a vote of the development team. We declined to vote; businesses aren’t democracies. Anticipating the way James Bach would answer the question (a couple of years before I met him), I answered more or less with “I’m in the technical domain. Making this kind of business decision is not a service that I offer.”

But it’s not that we were obstacles to the product, and it’s not that we were unhelpful. Here are the services that we did offer:

  • We told her about what we considered the most serious problems in the product.
  • We made back-of-the-envelope calculations of technical support burdens for those problems (and were specific about the uncertainty of those calculations).
  • We contextualized those problems in terms of the corresponding benefits of the product.
  • We answered any questions for which we had reliable information.
  • We provided information about known features and limitations of competitive products.
  • We offered rapid help in answering questions about the known unknowns.
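A back-of-the-envelope support-burden estimate of the kind mentioned above might look like the following sketch. All of the figures here are hypothetical, chosen only to illustrate the arithmetic and the importance of being explicit about uncertainty:

```python
# Rough support-burden estimate for a known bug (all figures hypothetical).

units_shipped = 100_000  # projected units in the field
hit_rate = 0.02          # fraction of users expected to encounter the bug
call_rate = 0.25         # fraction of affected users who call support
cost_per_call = 12.00    # fully loaded cost of one support call, in dollars

expected_calls = units_shipped * hit_rate * call_rate
expected_cost = expected_calls * cost_per_call

# The inputs are guesses, so bracket the result rather than pretending precision.
low_cost = expected_cost * 0.5
high_cost = expected_cost * 2.0

print(f"Expected support calls: {expected_calls:.0f}")
print(f"Estimated cost: ${expected_cost:,.0f} (likely ${low_cost:,.0f} to ${high_cost:,.0f})")
```

The point of such a sketch is not the numbers themselves but the conversation they start: each input is visibly a guess that the product manager can challenge or refine.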

Yet we insisted, respectfully, that the decision was hers to make.  And in the end, she made it.

There are other potentially appropriate answers to the question, “Do you think we should ship this product?”

  • “I don’t know,” is a very good one, since it’s inarguable; you don’t know, and they can’t tell you that you do.
  • “Would anything I said change your decision?” is another, since if they’re sure of their decision, their answer would be the appropriate one: No. If the answer is Yes, then return to “I don’t know.”
  • “What if I said Yes?” immediately followed by “What if I said No?” might kick-start some thinking about business-based alternatives.
  • “Would I get a product owner’s position and salary if I gave you an answer?”, said with a smile and a wink, might work in some rare, low-pressure cases, although, typically, the product owner will have lost his sense of humour by the time you’re being asked for your ship/no-ship opinion.

As I said in the previous post, managers (and in particular, the product owners) are the brains of the project. Testers are sense organs, eyes for the project team. In dieting, car purchases, or romance, we know what happens when we let our eyes, instead of our brains, make decisions for us.

Testers: Get Out of the Quality Assurance Business

Monday, May 3rd, 2010

The other day on Twitter, Cory Foy tweeted a challenge: “Having a QA department is a sign of incompetency in your Development department. Discuss.”

Here’s what I think: I’m a tester, and it’s time for our craft to grow up. Whatever the organizational structure of our development shops, it’s time for us testers to get out of the Quality Assurance business.

In the fall of 2008, I was at the Amplifying Your Effectiveness conference (AYE), assisting Fiona Charles and Jerry Weinberg with a session called “Testing Lies”. Jerry was sitting at the front of the room, and as people were milling in and getting settled, I heard him in the middle of a chat with a couple of people sitting close to him. “You’re in quality assurance?” he asked. Yes, came the answer. “So, are you allowed to change the source code for the programs you test?” No, definitely not. “That’s interesting. Then how can you assure quality?”

A good question, and one that immediately reminded me of a conversation more than ten years earlier. In the fall of 1996, Cem Kaner presented his Black Box Software Testing course at Quarterdeck. I was a program manager at the time, but the head of Quality Assurance (that is, testing) had invited me to attend the class. As part of it, Cem led a discussion as to whether the testing group should really be called “Quality Assurance”.

Cem’s stance was that individuals—programmers and testers alike—could certainly assure the quality of their own work, but that testers could not assure the quality of the work of others, and shouldn’t try it. The quality assurance role in the company, Cem said, lay with the management and the CEO (the principal quality officer in the company), since it was they—and certainly not the testers—who had the authority to make decisions about quality.

Over the years he has continued to develop this idea, principally in versions of his presentations and papers on The Ongoing Revolution in Software Testing, but the concept suffuses all of his work. The role for us is not quality assurance; we don’t have control over the schedule, the budget, programmer staffing, product scope, the development model, customer relationships, and so forth. And that’s okay; that’s management work, and that’s management’s job.

When we’re doing our best testing work, we’re providing valuable, timely information about the actual state of the product and the project. We don’t own quality; we’re helping the people who are responsible for quality and the things that influence it. “Quality assistance; that’s what we do.”

More recently, in an interview with Roy Osherove, James Bach also notes that we testers are not gatekeepers of quality. We don’t have responsibility for quality any more than anyone else; everyone on the development team has that responsibility.

When he first became a test manager at Apple Computer way back when, James was energized by the idea that he was the quality gatekeeper. “I came to realize later that this was terribly insulting to everyone else on the team, because the subtle message to everyone else on the team is ‘You guys don’t really care, do you? You don’t care like I do. I’m a tester, that means I care.’ Except the developers are the people who create quality; they make the quality happen. Without the developers, nothing would be there; you’d have zero quality. So it’s quite insulting, and when you insult them like that, they don’t want to work with you.”

Last week, I attended the STAR East conference in Orlando. Many times, I was approached by testers and test managers who asked for my help. They wanted me to advise them on how they could get programmers to adopt “best practices”. They wanted to know how they could influence the managers to do a better job. They wanted to know how to impose one kind of process model or another on the development team. In one session, testers bemoaned the quality of the requirements that they were receiving from the business people (moreover, repeating a common mistake, the testers said “requirements” when they meant requirement documents). In response, one fellow declared that when he got back to work, the testers were going to take over the job of writing the requirements.

In answer to the request for advice, the most helpful reply I can give, by far, is this: these are not the businesses that skilled testers are in.

We are not the brains of the project. That is to say, we don’t control it. Our role is to provide ESP—not extra-sensory perception, but extra sensory perception. We’re extra eyes, ears, fingertips, noses, and taste buds for the programmers and the managers. We’re extensions of their senses. At our best, we’re like extremely sensitive and well-calibrated instruments—microscopes, telescopes, super-sensitive microphones, vernier calipers, mass spectrometers. Bomb-sniffing detectors. (The idea that we are the test instruments comes to me from Cem Kaner.) We help the programmers and the managers to see and hear and otherwise sense things that, in the limited time available to them, and in the mindset that they need to do their work, they might not be able to sense on their own.

It’s not our job to take over anything, like writing the requirements documents. It’s absolutely fine for us to do any kind of work that we’re asked to do. When we take on such work, though, it’s important to be aware and to remind others that our testing work is not getting done.

Listen: if you really want to improve the quality of the code and think that you can, become a programmer. I’ve done that. I can assure you that if you do it too, you’ll quickly find out how challenging and humbling a job it is to be a truly excellent programmer—because like all tools, the computer extends your incompetence as quickly and powerfully as it extends your competence.

If you want to manage the project, become a project manager. I’ve done that too, and I was pretty good at it. But try it, and you’ll soon discover that quality is value to some person or persons who matter, and that your own standards of quality are far less important than those who actually use the product and pay the bills. Become a project manager, and you’ll almost immediately realize that the decision to release a product is informed by technical issues but is ultimately a business decision, balancing the value of the product and bugs—threats to the value of the product—against the costs of not releasing it.

In neither of those two roles would I have found it helpful for someone who was neither a programmer nor a project manager to whine to me that I didn’t have respect for quality; worse still would have been instruction on how to do my job from people who had never done that kind of work. In both of those roles, what I wanted from testers was information.

As a programmer, I wanted to know about problems the testers had found in my code, how they had found them. I wanted their help in identifying risk, and the ways in which the product might not work. I appreciated hearing about the clues that they noticed, and the steps I could take to point me towards finding problems myself.

As a program manager, I wanted to know what information the testers had gathered about the product. I wanted to know how they had configured, operated, observed, and evaluated the product to get that information. If our product were suddenly working differently from the ways in which it had always worked, I wanted to know about that. I wanted to know about inconsistencies within the product. The testers were able to tell me how our product worked in comparison to other products on the market; how it was worse, how it was better. I wanted to know how the product stacked up against claims that we were making for it.

So: you want to have an influence on quality, and on the team. You do that by helping the people who build the product. Want to know how to help the programmers in a positive way?

  • Tell the programmers, as James suggests in the interview, that your principal goal is to help them look good—and then start believing it. Your job is not to shame, or to blame, or to be evil. I don’t think we should even joke about it, because it isn’t funny.
  • You’re often the bearer of bad news. Recognize that, and deliver it with compassion and humility.
  • You might be wrong too.  Be skeptical about your own conclusions.
  • Focus on exploring, discovering, investigating, and learning about the product, rather than on confirming things that we already know about it.
  • Report what you’ve learned about the product in a way that identifies its value and threats to that value.
  • Try to understand how the product works on all of the levels you can comprehend, from the highest to the lowest.  Appreciate that it’s complex, so that when you have some harebrained idea about how simple it is to fix a problem, or to find all the bugs in the code, you can take the opportunity to pause and reflect.
  • Express sincere interest in what programmers do, and learn to code if that suits you. At least, learn a little something about how code works and what it does.
  • Don’t ever tell programmers how they should be doing their work. If you actually believe that that’s your role, try a reality check: How do you like it when they do that to you?

Want to know how to help the managers?

  • Provide them with the information they need to make informed decisions, and then let them make the decisions.
  • Remain fully aware that they’re making business decisions, not just technical ones.
  • Know that the product doesn’t necessarily have to hew to your standard of quality.
  • It’s not the development manager’s job, nor anyone else’s, to make you happy. It might be part of their job to help you to be more productive.  Help them understand how to do that.  Particularly draw attention to the fact that…
  • Issues that slow down testing are terribly important, because they give bugs the opportunity to hide longer and more deeply. So report not only bugs in the product, but also issues that slow down testing.
  • If you want to provide information to improve the development process, report on how you’re actually spending your time.
  • Note, as is so commonly the case, why testing is taking so long—how little time you’re spending on actually testing the product, and how much time you’re spending on bug investigation and reporting, setup, meetings, administrivia, and other interruptions to obtaining test coverage.
  • Focus on making these reports accurate (say to the nearest five or ten per cent) rather than precise, because most problems that we have in software development can be seen and solved with first-order measures rather than six-decimal analyses derived from models that are invalid anyway.
  • Show the managers that the majority of problems that you find aren’t exposed by constant, rote repetition of test cases, but by actions and observations that the test cases don’t cover—that is, by your sapient investigation of the product.
  • Help managers and programmers alike to recognize that tools can be used for more, far more, than programming a machine to pound keys.
  • Help everyone to understand that tools extend some kinds of testing and greatly limit others.
  • Help to keep people aware of the difference between testing and checking.  Help them to recognize the value of each, and that excellent checking requires a great deal of testing skill.
  • When you’re talking about your work, emphasise that your job is skilled exploration of the product, and help managers to realize that that’s how we really find problems.
  • Help the team to avoid the trap of thinking of software development as a linear process rather than an organic one.
  • Help managers and programmers to avoid confusing the “testing phase” with what it really is: the fixing phase.
  • When asked to “sign off” on the product, politely offer to report on the testing you’ve done, but leave approval to those whose approval really matters: the product owners.
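The kind of first-order time-accounting report suggested in the list above can be as simple as a tally, accurate to the nearest five or ten per cent. The categories and hours below are hypothetical, just to show the shape of such a report:

```python
# First-order accounting of a tester's week (hours; all figures hypothetical).
time_spent = {
    "test design and execution": 12,   # time actually obtaining test coverage
    "bug investigation and reporting": 10,
    "setup and environment trouble": 8,
    "meetings and administrivia": 10,
}

total = sum(time_spent.values())
for activity, hours in time_spent.items():
    # Round to the nearest 5% -- first-order measures, not six-decimal precision.
    pct = round(hours / total * 100 / 5) * 5
    print(f"{activity}: ~{pct}%")
```

A report like this makes it immediately visible to a manager that only a fraction of "testing time" goes into obtaining test coverage, without inviting an argument about decimal places.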

Want to earn the respect of your team?

  • Be a service to the project, not an obstacle. You’re a provider of information, not a process enforcer.
  • Stop trying to “own” quality, and take as your default assumption that everyone else on the project is at least as concerned about quality as you are.
  • Recognize that your role is to think critically—to help to prevent programmers and managers from being fooled—and that that starts with not being fooled yourself.
  • Diversify your skills, your team, and your tactics. As Karl Weick says, “If you want to understand something complicated, you have to complicate yourself.”
  • As James Bach says, invent testing for yourself. That is, don’t stop at accepting what other people say (including me); field-test it against your own experience, knowledge, and skills. Check in with your community, and see what they’re doing.
  • If there’s something that you can learn that will help to sharpen your knowledge or your senses or your capacity to test, learn it.
  • Study the skills of testing, particularly your critical thinking skills, but also work on systems thinking, scientific thinking, the social life of information, human-computer interaction, data representation, programming, math, and measurement.
  • Practice the skills that you need and that you’ve read about. Sharpen your skills by joining the Weekend Testing movement.
  • Get a skilled mentor to help you. If you can’t find one locally, the Internet will provide. Ask for help!
  • Don’t bend to pressure to become a commodity. Certification means that you’re spending money to become indistinguishable from a hundred thousand other people. Make your own mark.
  • Be aware that process models are just that—models—and that all process models—particularly those involving human activities like software development—leave out volumes of detail on what’s really going on.
  • Focus on developing your individual skill set and mindset, so that you can be adaptable to any process model that comes along.
  • Share your copy of Lessons Learned in Software Testing and Perfect Software and Other Illusions About Testing.

Ultimately, if you’re a tester, get out of the quality assurance business. Be the best tester you can possibly be: a skilled investigator, rather than a programmer or project manager wannabe.

Note: every link above points to something that I’ve found to be valuable in developing the thinking and practice of testing. Some I wrote; most I didn’t.  Feast your mind!