
I’ve Had It With Defects

The longer I stay in the testing business and reflect on the matter, the more I believe the concept of “defects” to be unclear and unhelpful.

A program may have a coding error that is clearly inconsistent with the program’s specification, whereupon I might claim that I’ve found a defect. The other day, an automatic product update failed in the middle of the process, rendering the product unusable. Apparently a defect. Yet let’s look at some other scenarios.

  • I perform a bunch of testing without seeing anything that looks like a bug, but upon reviewing the code, I see that it’s so confusing and unmaintainable in its current state that future changes will be risky. Have I found a defect? And how many have I found?
  • I observe that a program seems to be perfectly coded, but to a terrible specification. Is the product defective?
  • A program may be perfectly coded to a wonderfully written specification—even though the writer of the specification may have done a great job at specifying implementation for a set of poorly conceived requirements. Should I call the product defective?
  • Our development project is nearing release, but I discover a competitive product with this totally compelling feature that makes our product look like an also-ran. Is our product defective?
  • Half the users I interview say that our product should behave this way, saying that it’s ugly and should be easier to learn; the other half say it should behave that way, pointing out that looks don’t matter, and once you’ve used the product for a while, you can use it quickly and efficiently. Have I identified a defect?
  • The product doesn’t produce a log file. If there were a log file, my testing might be faster, easier, or more reliable. If the product is less testable than it could be, is it defective?
  • I notice that the Web service that supports our chain of pizza stores slows down noticeably at dinner time, when more people are logging in to order. I see a risk that if business gets much better, the site may bog down sufficiently that we may lose some customers. But at the moment, everything is working within the parameters. Is this a defect? If it’s not a defect now, will it magically change to a defect later?

On top of all this, the construct “defect” is at the centre of a bunch of unhelpful ideas about how to measure the quality of software or of testing: “defect count”; “defect detection rate”; “defect removal efficiency”. But what is a defect? If you visit LinkedIn, you can often read some school-marmish clucking about defects. People who talk about defects seem to refer to things that are absolutely and indisputably wrong with the product. Yet in my experience, matters are rarely so clear. If it’s not clear what is and is not a defect, then counting them makes no sense.

That’s why, as a tester, I find it much more helpful to think in terms of problems. A problem is “a difference between what is perceived and what is desired” or “an undesirable situation that is significant to and maybe solvable by some agent, though probably with some difficulty”. (I’ve written more about that here.) A problem is not something that exists in the software as such; a problem is relative, a relationship between the software and some person(s). A problem may take the form of a bug—something that threatens the value of the product—or an issue—something that threatens the value of the testing, or of the project, or of the business.

As a tester, I do not break the software. As a reminder of my actual role, I often use a joke that I heard attributed to Alan Jorgenson, but which may well have originated with my colleague James Bach: “I didn’t break the software; it was broken when I got it.” That is, rather than breaking the software, I find out how and where it’s broken. But even that doesn’t feel quite right. I often find that I can’t describe the product as “broken” per se; yet the relationship between the product and some person might be broken. I identify and illuminate problematic relationships by using and describing oracles, the means by which we recognize problems as we’re testing.

Oracles are not perfect and testers are not judges, so it would seem presumptuous of me to label something a defect. As James points out, “If I tell my wife that she has a defect, that is not likely to go over well. But I might safely say that she is doing something that bugs me.” Or as Cem Kaner has suggested, shipping a product with known defects means shipping “defective software”, which could have contractual or other legal implications (see here and here, for examples).

On the one hand, I find that “searching for defects” seems too narrow, too absolute, too presumptuous, and politically risky for me. On the other, if you look at the list above, all those things that were questionable as defects could be described more easily and less controversially as problems that potentially threaten the value of the product. So “looking for problems” provides me with wider scope, recognizes ambiguity, encourages epistemic humility, and acknowledges subjectivity. That in turn means that I have to up my game, using many different ways to model the product, considering lots of different quality criteria, and looking not only for functional problems but anything that might cause loss, harm, or annoyance to people who matter.

Moreover, rejecting the concept of defects ought to help discourage us from counting them. Given the open-ended and uncertain nature of “problem”, the idea of counting problems would sound silly to most people—but we can talk about problems. That would be a good first step towards solving them—addressing some part of the difference between what is perceived and what is desired by some person or persons who matter.

That’s why I prefer looking for problems—and those are my problems with “defects”.

21 replies to “I’ve Had It With Defects”

  1. Nice piece. In my company I refer to ‘Issues’, i.e. someone has an issue with the solution. It’s not until we have discussed and triaged the ‘issue’ that we decide whether it is indeed:
    – a defect,
    – more than one defect,
    – a training gap (e.g. the documentation or classroom training either didn’t cover it or ‘misled’ the user),
    – a wrong expectation (e.g. we thought it would do that, but we never told anyone that’s what we thought; we assumed), which can then become invalid, or a change request,
    – misuse (e.g. not as per instructions; probably a usability issue, but may still be in spec), which can be a defect or a change request,
    – a change request (e.g. the one where one person thinks it should be done this way, or a rival product does something else/better).

    Issues have a short life before they are reclassified. Defects don’t count in my world. Issues do, and that is probably still not right. But management want counts. I’ve still got to re-educate them on that.

  2. As a hobbyist wordsmith, I agree: “Defect” is a terrible word to describe the “problems” we find.

    Actually, I’ll change that a bit: “Defect” and “problem” are misleading words that do not accurately describe the “information” we provide.

    As testers, we provide information. How that information is taken is not up to us. Some people at some times may interpret the information as a “problem”. Other people at other times may not.

    I can imagine 2 people reading my test report. The first cries, “The information you provided points out so many problems!”, while the second exclaims, “The information you provided contains no problems, at all!”

    Also, I think it is important to note that testers don’t just report “bad news”. We should do more than simply report “differences between what is perceived and what is desired” and “undesirable situations…”. Sometimes, even if we exhaust all our oracles, etc., we may not “identify and illuminate [any] problematic relationships” at all. We should report that, too!

  3. Hi Michael. You raise some good points here; this is why I prefer the term “bug” to “defect”. I explain to people that a bug is something that would “bug” someone (who matters); pretty sure I’ve paraphrased this from RST?

    Michael replies: That’s what we say in the class.

    One of the problems I have experienced with people counting defects is that they can’t seem to ignore them once they have been raised. Sometimes the tester raises something they believe to be a problem, i.e. a relationship between their beliefs and the application, but the same thing isn’t a problem for the people who matter. I believe that this is in part due to the counting aspect of defects: once something has been counted, they feel they have to justify it no longer being counted.

    I’m also worried about the opposite effect: prior restraint. That is, because the formal process puts a big admission charge on having to justify recording or reporting a bug, or rescinding a report, people refrain from reporting at all.

    On the flip side, I’ve experienced a different phenomenon where someone “managing” a project is being judged (or perceives they are being judged) on the number of defects being raised, and this results either in defects being rejected because they don’t feel they are important, or in multiple defects being logged for the same fundamental problem because they “need” to justify their existence.

    Or, as I said, others not reporting. Classic cases of measurement inducing distortion and dysfunction. At that point, engineering becomes a competitive game, and management is reduced to scorekeeping.

    So I’m up for not counting defects too!

    You could count them like this.

    Thanks for writing.

  4. I’ve always hated “defect”, partly for what I perceive to be its nasty, disapproving aura, but also for its poor fit in developing software.

    It comes, I think, from (building) construction, where a defect is a clear violation of the contractual specifications. You built the staircase required in the specs, but built it wrong. In contrast, if you failed to build the staircase at all, that’s a “deficiency”. Like many concepts and terms adopted from other disciplines, “defect” is an uncomfortable fit for software, and we cause more problems than we solve by squeezing it on.

    I have used “problems” for many years as a collective term for “things that don’t seem right and that we ought to investigate”. Often, after investigation, a problem is redefined as one or more bugs (that we might or might not fix) — but not always. It could be an opportunity for us all to learn more about what was actually wanted/needed in a feature.

    OTOH, I work on projects at other people’s regular workplaces. It can be very difficult to change the cultural mindsets that local terminology represents. I plug away at it, but the struggle doesn’t always add value to the project or organization. Sometimes the practical tester/test manager has to talk the local dialect to get the job she was hired to do done honorably and move on.

    If the culture really wants to enforce particular language I’m willing to go along. There are usually lots of options. A lot of the time, even though there is a vocabulary in place, the language isn’t strictly enforced anyway. If there’s room to move and I think it will help, I might try testing for objections by speaking my own way. If there are disagreements, maybe a conversation will ensue, which would be fine. In the end, though, another McLuhan saying: “I don’t want them to agree with me; I just want them to think.” Nor must I use my own language when I’m a guest at the table, and not the host.

    I do sometimes use bug counts as a planning heuristic. If the project has determined that we have 200 must-fix bugs, and we currently have 2 programmers to fix them, a history of fixing 7 bugs/programmer/week, and a projected release date of 2 weeks from now, there’s an obvious indicator that something doesn’t add up in our planning. (Apart from the possibility that we will find more must-fix bugs.) No, all bugs are not created equal and don’t take equal effort to fix, but sometimes showing the numbers is what it takes to get senior management to consider the issues that seem obvious without the numbers to those of us closer to the action.
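
    A minimal sketch of that back-of-the-envelope check, in Python (the figures are the illustrative ones from the paragraph above; the variable names are placeholders, not from any real project):

        # Rough planning check: do the must-fix bugs fit in the time remaining?
        must_fix_bugs = 200     # bugs the project has deemed must-fix
        programmers = 2         # programmers available to fix them
        fix_rate = 7            # bugs fixed per programmer per week (historical)
        weeks_available = 2     # projected time until release

        weeks_needed = must_fix_bugs / (programmers * fix_rate)  # about 14.3
        print(f"Weeks needed: {weeks_needed:.1f}; weeks available: {weeks_available}")
        if weeks_needed > weeks_available:
            print("Something doesn't add up in the plan.")

    As the comment notes, bugs aren’t created equal; the point of such a calculation is only to flag an obvious mismatch, not to predict the schedule.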

    I like the approach to numbering systems expressed here.

    Thank you for adding to the discussion.

  5. Obsessively counting our problems could be a pathological rumination if it weren’t for the purpose of prioritizing our work in the service of searching for solutions. Its use as a planning heuristic relies on our full understanding and acceptance of the subjective nature of our problems, their possible solutions, and any number of wild cards that might be loosely generalized as marketing and legal.

    What is important here is transparency. It is perfectly fine to start assigning numbers to scale and scope and severity and others in order to start to make sense of what we plan on doing. We get into trouble if we forget where these numbers come from.

    I’d offer that trouble starts as soon as people assign numbers without considering the models, theories, functions, and operations by which we apply them. That’s why I believe the paper “Software Engineering Metrics: What Do They Measure and How Do We Know?” is so important. I’d like to believe that people are thoughtful, explicit, and clear when they describe qualitative attributes by quantitative means. I fear that doesn’t happen too often, and too often, that doesn’t happen.

    We need to retain the freedom to reassess our priorities based on new information we didn’t have before. We also need the courage to exercise this freedom.

    Yes—although as Laurent notes in his reply either above or below: who’s “we”?

    Thanks for the comment.

  6. What Fiona says re “cultural mindsets”.

    Every so often I point people to an article I wrote back in 2006 which I feel, with a measure of pride, is still relevant and true today: http://www.ayeconference.com/entomology/

    The main difference the intervening years have made is that I have more stories of that kind to tell now.

    In the article I said “I find the word vague and almost entirely without merit in problem-solving.” I said this of the word “bug”, and I think it applies just as well to “defect” in the sense you, Michael, attack it here.

    I’ve varied on this over the years. I used to prefer “defect” to “bug”, in my younger and more vulnerable years, when I fell in with a kind of quality-police school of thinking; “defect” sounded more formal and official. Or officious, as I later came to believe. “Bug”, at least, for me, has a useful connection to the idea of something that bugs someone.

    To me the trap is not so much the particular word we are currently fetishizing, so much as the act of fetishizing, by which I mean acting as if the use of a word, in and of itself, constituted professional, competent behavior.

    The truth is that professional, competent behavior *must* include a healthy amount of critical reflection on the meaning of words we use every day; of asking: “You keep using this word; does it really mean what you think it means? Does it mean the same thing to you as it does to me? What should we both take it to mean, if we want to achieve results we both care about? For that matter, *what* results do we both care about?”

    “Problem” can lead into exactly the same confusion, I’m afraid, when people use it uncritically.

    I believe that the risk with “problem” is lower, but I agree that any word—indeed, any medium—holds that possibility. As I’m fond of pointing out, McLuhan noted that media extend whatever it is that we are.

    As a case in point, a team I’m currently working with has an “official” count of defects, which it reports explicitly at its all-hands meeting each iteration.

    This number, it turns out, is obtained by counting the number of items listed as “open defects” in Quality Center, the tool the team uses. This is the local definition of “how many defects we have”, so much so that for any other problem to be counted as a “defect”, you must convince a tester with access to the tool to open a new item.

    Is the inverse true too? That is, after a tester (a tester?! Since when did testers get the power to decide what constitutes a defect?) has counted something as a defect, is the tester able to reverse his or her decision and declare something not a defect? (a tester?! Since when did testers get the power to decide what doesn’t constitute a defect?)

    I have often struggled to do so: for instance, the same group has a separate, developer-owned tool which tracks *every* occurrence of a program error that results in a user experiencing an error message (and usually terminating a business flow, which has tangible bottom line effects). Yet these errors are only rarely entered into the official defect list – they literally “don’t count” unless you can convince a tester that they should. Thus is the notion of “defect” socially constructed out of relationships of power and influence within the group, rather than an objectively measurable property of the software.

    When it comes to bugs, defects, issues, concerns, trouble reports, and—of course—problems: are any of these things objectively measurable as such, without social construction? Objectivity is only ever achieved by deciding that certain objections and certain objectors don’t matter.

    Is there a problem there? I think there is. But it stems not from the word “defect” itself, so much as from the fetishizing of a single definition, and from the appropriation of that definition by a single subgroup.

    I don’t blame a certain arrangement of letters for problems. Yet I do agree with Orwell: “A man may take to drink because he feels himself to be a failure, and then fail all the more completely because he drinks. It is rather the same thing that is happening to the English language. It becomes ugly and inaccurate because our thoughts are foolish, but the slovenliness of our language makes it easier for us to have foolish thoughts. The point is that the process is reversible.”

    In any case, words do have colours and shades of meaning, and “defect” represents an unpleasant kind of purple to me. I’m going for something more organic, more earth-toned.

    In Steve’s account above, what I find interesting is the phrase “until we have discussed and triaged…”, and in particular the word “we”. I don’t really care whether this group uses the word “issue” or “defect” or “bug” or “problem”; what I’d be curious to know is which “we” gets to define the words, and who among that “we” looks out for the user, who tries to make the entire team look good, who tries to make their own group or themselves look good, and so on.

    Everyone, I’d hope. I’d be interested too.

    Thanks for the comment.

  7. When I worked as a system test engineer for a networking company in Denmark, we used a “homemade” bug-tracking system. In this system, every problem was registered as a TO (Test Observation).
    I liked this definition, as it was my job to collect test observations and not to be the judge. We discussed these test observations with marketing, developers, management, etc. in order to decide whether to solve the problem or to rewrite the manual.

    Michael replies: I like this story…

    – and yes, management loved to count test observations

    …even though the ending is bittersweet.

    Thanks for writing.

  8. I have steered my language towards “problem” over the last few years, but I still like to use “bug” for clear issues.

    But “problem” also sounds a bit strong sometimes, so I quite often use “potential problem” for cases where I can’t really say whether something is a problem or not (a conversation is needed).
    “Potential problem” is a bit long to say, so I actually experimented with the abbreviation “pproblem”. You might think the stuttering would sound ridiculous, and yes, it is ridiculous and can’t be used…

    So my current solution is to use many different words:
    * “bug” for clear-cut product drawbacks
    * “problem” for bigger issues
    * “potential problem” for things I believe might be problems or bugs
    * “observation” for noteworthy things I want to ask someone about

    And even better is to describe your actual observations, e.g. “when I tested XYZ, I saw signs of performance degradation that I’d like to show you”.

    Michael replies: This reminds me that I still have to write a blog post on Safety Language.

    My experience with managers who like to count “the thingies” is that when it boils down to important decisions, it is the discussion about specific bugs/problems/defects that really matters.

    Mine too. And I like to get to that discussion as quickly as possible.

  9. It doesn’t matter what you call it until everyone on the team shares your understanding. If the team doesn’t understand what you mean, calling a defect a “problem” wouldn’t change anything.

    All in all, it doesn’t matter what you call it.

  10. Incident.

    It’s an Incident, something unexpected, until such time as it has been through triage and analysis.

    I blame HP for retaining the terminology when they bought Test Director.

  11. In my early years as a tester, we used ‘variance’ instead of ‘bug’. Then I moved to another company as QA, and ‘defect’ was widely used. It never really sat well with me. To me, ‘variance’ is a much more accurate way to describe an issue to be evaluated. If it’s a clear bug or defect, I’d assign it to the dev triage; if it’s an annoyance, or if it looks like a bad requirement, I’d assign it to the BA triage.

    I always tried to get ‘variance’ adopted but I always had the same reaction: “Variance is too vague. You can’t count them properly. It’s like counting apples with oranges”. My point exactly!!! But clients like ‘defect’…

    And for the counting people, to me it’s a bit like hockey. If your goalie has too many goals against, well, you have a goalie problem; but if the shots on goal are through the roof, you don’t have a goalie problem – well, you might, but you certainly need to work on your defence…

    My point is that you have to keep track of the ratio of open ‘variances’ over a specific test set before you start getting worried about the quality of your software…

    Michael replies: You had me right until the end, there.

    What does the ratio have to do with it? If someone has problems with your product, will they feel better just because some proportion of tests passed?

  12. “A problem may take the form of a bug — something that threatens the value of the product — or an issue — something that threatens the value of the testing, or of the project, or of the business”

    Does this observation mean that you do/could distinguish the way you think of these things during testing from the way you report them? If so, can’t you still encounter the counting trap for the latter?

    Michael replies: I don’t understand the first question. I have an answer for the second one, though: people may do all kinds of harebrained things no matter how you try to steer away from them. One motivation for using different terminology is to encourage the “huh?” moment. People do seem to have a fuzzy quantitative concept of “bugs”. I can’t prove it, but it seems to me that “we have 16 problems” or “we have 23 issues” might cause a momentary hiccup, just long enough to trigger a discussion about what “problems” or “issues” are. If I want to find out, I’ll have to experiment.

    I do like “problem” or “potential problem” for motivating the former; the latter might depend on who I’m talking to in what context etc.

    I was wondering about this kind of terminology recently and reviewed a bunch of terms for bugs, errors, issues, defects, faults, failures and so on in http://qahiccupps.blogspot.co.uk/2013/09/errors-by-any-other-name.html

    Thanks for the comment, and the link.

  13. Defects can sometimes be interpreted solely according to what your agreement is with the teams (dev, test, product owners, business stakeholders, etc.)…

    In some projects I have worked on so far, we sometimes also raised an observation as an issue in QC rather than as a defect, which works well for the devs too!

    Michael replies: I like the idea of testers raising observations of possible bugs as “issues” (while trying our best to show how they could represent a serious problem) and then letting the responsible parties decide how they want to frame it.

    Remember: anyone is entitled to use whatever terms they like. I’m speaking for me, here.

  14. Gday Michael, how you keeping? Do you think it really matters what we call them as long as we all know what we’re referring to? Everywhere I go calls them something different whether it be defect, issue, incident, problem, fault, failure yet we all know what they are and what we have to do about them. Wadyathink?

    Michael replies: In this post (as in all my posts), I’m talking about the way I think and feel about things. Language is something that develops within a culture. I’m not in every culture, nor should I expect every situation to follow my cultural norms.

    Words are tools. People can use their tools in whatever way they like, for their own purposes. As McLuhan said, “We shape our tools, and thereafter they shape us.” As long as I can remember, I’ve been interested in the effects that our choices of labels have on how people think about the things being labelled.

    For example, when you say “we all know what they are and what we have to do about them”, I’m interested in who you’re talking about when you say “we”, “all”, “know”, “have to”. Does “we” include the testers? The development team? The managers? The clients? The end users? “All”? Every member of every group I’ve mentioned above? “Know”? Is that based on a deep understanding or a shallow one? How do we know we have shared understanding of what we know? Do people coming from different development, social, linguistic, or national cultures have different interpretations of those things? If people differ in their knowledge, what are the implications?

    We talk about all this stuff (and we have lawyers) because we don’t all know what things mean. It’s clearly safe for people to have fuzzy shared models of things most of the time. When might it be dangerous for those models to be imprecise?

  15. Agreed, up to a point. In my experience, the culture of a project or programme dictates how we handle these things more than procedure does. It’s fine to have procedures based on terminologies unless they cause a negative impact on productivity; however, I can count the number of times developers have spent hours and hours investigating a “variance” or “issue” only to find that the root cause is not a bona fide “defect”, yet only a few moments to find and fix a real one. We still have to spend the time to resolve and, where needed, remedy them, no matter what we call them. And if a project doesn’t have a culture, then create one; that’s what leaders do.

  16. I have a straightforward approach to the three terms:

    1. Issue – someone says he has an issue with something
    2. Defect – apparently the issue is really a defect in something
    3. Bug – something in the code is broken

    I don’t see these as straightforward. First, I see a problem with (1): it’s not very helpful to define a word in terms of itself. (2) suffers from the same problem, and adds another: by what means do you determine that something “really is” a defect? (3) presents another problem: you have not shown how it is materially different from (2), or even (1). When something in the code is broken, is that not “really a defect in something”?

    This all becomes important when you realize that “bug”, “issue”, and even “defect” are subject to The Relative Rule: for any abstract X, X is X to some person, at some time.

    All the examples mentioned above, could be defects of different things, there’s a multi-level multi-origin hierarchy in product development, and things can break left and right.

    I’m not sure I understand this sentence.

    “Upon reviewing the code, I see that it’s so confusing and unmaintainable” – It’s a defect if you claim to produce maintainable code. If not, it’s just an issue you have but no one else.

    No one?

    I see two things in play here. One is for whom the confusion and unmaintainability is a problem; another is whether that confusion or unmaintainability of the code is a threat to the project or the business. I might be confused by some code that I’m looking at, and that’s certainly a problem for me, one that potentially threatens the quality of my testing work. In that case I’ll have to talk it over with my client. That problem can be resolved in a number of ways—with an explanation; by refactoring the code; by adding some clarifying comments; by removing the code in question; by someone thanking me for the observation, but deciding that the quality of the code isn’t my concern. The decision of whether it’s an issue for the business is not really mine to make, nor is it my decision as to what to do about it. I don’t agree that “defect” is a helpful way to think about things in this context.

    “I observe that a program seems to be perfectly coded, but to a terrible specification. Is the product defective?” – Yes, it’s a defect in specification in the product domain.

    “specifying implementation for a set of poorly conceived requirements.” – It’s also a defect in the product domain, but at an earlier stage – not in specification but in gathering requirements.

    “Our development project is nearing release, but I discover a competitive product with this totally compelling feature that makes our product look like an also-ran.” – This issue is only a defect if your process is designed to keep you one step ahead of your competitors. It’s a high-level defect in the product domain.

    “Half the users I interview say that our product should behave this way, saying that it’s ugly and should be easier to learn; the other half say it should behave that way” – This is either a defect of specifying the target group (do you want to target group A or B or both?) or a defect of user experience design. If your target group is that heterogeneous, you have to find a solution that fits both groups.

    “If the product is less testable than it could be, is it defective?” – Is the product supposed to be testable? Then yes, it’s a defect. If not, it’s your personal issue.

    “the site may bog down sufficiently that we may lose some customers. But at the moment, everything is working within the parameters.” – Look up your product values for the term ‘scalability’ or ‘performance’. If it’s in there, it’s a defect; otherwise, no one cares but you 🙂

    In each one of these cases, you seem to be applying some unstated decision mechanism to determine whether something is a defect or not: a binary decision. I think situations are more nuanced than that, and that deciding on the nature of a problem is a complex social judgement, as Harry Collins would say. There are usually several ways to address a problem, and those ways are optional, not controlled by whether we label a given problem a “defect” or not. I would say that judging something to be a “defect” isn’t doing any useful work—unless you are, for example, keeping score.

    The real question this article asks is: what is the scope of testing? The more steps you take back, the more things you will find. Are you supposed to find them? Ask your boss. I think you should: look at as much as you can, as far as your expertise permits, and keep on reporting the issues you have as defects.

    Why not report them as issues, and leave out the middleman?

    The other question is, do you need to file them in some defect tracking tool. I think for most defects outside of the low level development domain (graphics, wordings, code, data, etc.) you can find better platforms to raise them. Like the next time you talk to your colleague, or that meeting you have tomorrow, or lunch with your boss.

    Sure. You can do that with all problems, whatever you call them.

    People might not always like this broad perspective on testing, but if people expect high-quality products, they have to learn that all of this is important and necessary, much more important than finding edge-case bugs. And yeah, counting is ridiculous, but some people are number-driven, so just throw random numbers at them.

    That sounds ill-advised and unethical to me. There’s no reason for a competent professional to bullshit people. As soon as they discover you’ve been throwing random numbers at them, they have every reason to believe that you would throw random facts at them.

    But a defect is a defect, it won’t go away by counting or ignoring – just like a full trashcan doesn’t magically get empty. Regarding any defect you find: Dig in, try to solve it, raise awareness, discuss, think about it, get more info of its context, …when people see that you care about something, they’ll start to care about it as well. And some defects might be fixed for everybody’s gain 🙂

    Raising awareness is a fine thing to do, as is citing oracles—why something could reasonably be described as an undesirable inconsistency. But when you say “a defect is a defect”, you’re talking as though there were some agency that has final, godlike authority over what is a defect and what isn’t. That’s not the case. For every instance in which you’ve said something is a defect above, I can easily disagree. Who says it’s a defect? Me, or you? A whole host of “defects” can get dismissed by a product owner saying that none of them is a problem, and that you’re ill-informed about the business need for the product. Moreover, testers can’t solve defects in the way that you’re describing them. And when you say, “when people see that you care about something, they’ll start to care about it as well”… or they might dismiss you as a crank. If you were to say, “if you can demonstrate the business risk of a problem or the business value of some approach to solving it”, you’d be on firmer ground.

  17. Hello Michael. I just got to read this blog post. A couple of points where I need your thoughts, please:

    1. Are defects then not a subset of problems? Among your list there are other subsets, such as risks, scope for future backlog, etc. Am I right in saying that?

    Michael replies: You can say whatever you like. I don’t say “defect”, myself, for reasons outlined in the post.

    In our way of looking at things, a problem is a relationship between some person and some thing or situation, such that the thing or situation is undesirable to that person; a difference between what is perceived and what is desired. Risk refers to a problem that hasn’t manifested yet. If you have bad feelings about that risk, the risk itself is a problem too. If you believe that scope for future backlog is undesirable to you, that suggests a problem. http://www.developsense.com/blog/2012/04/problems-with-problems/

    2. On considering defect counts for test metrics: on one of the projects, I found it useful to compare the defect count found in production with what we found during our testing phase, to show the team that we were not finding defects where it mattered. An analysis of these threw light on multiple inherent problems the team had, including not having a correct process for test data management, test environments, and release management. The defect count helped us show the product owner the business value in spending effort and money on fixing these problems before moving ahead.

    Your thoughts please.

    I’m willing to bet that the count was at best trivially useful. “A bug” is not a thing; it’s a relationship between the product and some person. Unless you know what the bugs are—and a whole bunch of other stuff besides—the count is not the point. Is “we found two bugs” better or worse than “we found 200 bugs”? On the face of it, the latter looks better for the testers and worse for the product. But maybe those 200 bugs were unimportant, and the two were business destroyers. Maybe the two bugs were found while testers were learning the product, and the 200 were found after the testers had developed deep knowledge about the product. Maybe… http://www.developsense.com/articles/2007-11-WhatCounts.pdf https://www.satisfice.com/blog/archives/483

    What mattered, I’ll bet, was not the count, but the list of problems, and the analysis that you did on it, and the things you did to address them.

