Blog Posts from January, 2011

You’ve Got Issues

Monday, January 31st, 2011

What’s our job as testers? Reporting bugs, right?

When I first started reading about Session-Based Test Management, I was intrigued by the session sheet. At the top, there’s a bunch of metadata about the session—the charter, the coverage areas, who did the testing, when they started, how long it took, and how much time was spent on testing versus interruptions to testing. Then there’s the meat of the session sheet, a more-or-less free-form section of test notes, which include activities, observations, questions, musings, ideas for new coverage, newly recognized risks, and so forth. Following that, there’s a list of bugs. The very last section, at the bottom of the sheet, sets out issues.
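
To make that structure concrete, here is a minimal sketch of a session sheet modelled as data, in Python. The field names and the testing-versus-interruptions split are my own illustration of the layout described above, not an official SBTM schema.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class SessionSheet:
        # Metadata at the top of the sheet
        charter: str                      # the mission for the session
        coverage_areas: List[str]
        tester: str
        start_time: str                   # e.g. "2011-01-31 10:00"
        duration_minutes: int
        minutes_on_testing: int           # time spent on testing...
        minutes_on_interruptions: int     # ...versus interruptions to testing
        # The meat: more-or-less free-form test notes
        test_notes: List[str] = field(default_factory=list)
        # Bugs: anything that threatens the value of the product
        bugs: List[str] = field(default_factory=list)
        # Issues: anything that threatens the value of the testing or the project
        issues: List[str] = field(default_factory=list)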

“What’s an issue?” I asked James. “That’s all the stuff that’s worth reporting that isn’t a bug,” he replied. Hmmm. “For example,” he went on, “if you’re not sure that something is a bug, and you don’t want to commit it to the bug-tracking system as a bug, you can report it as an issue. Say you need more information to understand something better; that’s an issue. Or you realize that while you’ve been testing on one operating system, there might be other supported operating systems that you should be testing on. That’s an issue, too.”

That was good enough as far as it went, but I still didn’t quite get the idea in a comprehensive way. The information in the “test notes” section of the session sheet is worth reporting too. What distinguishes issues from all that other stuff, such that “Issues” has its own section in parallel with “Bugs”?

Parallelism saves the day. In the Rapid Software Testing class, we teach that a bug is anything that threatens the value of the product. (Less formally, we also say that a bug is something that bugs somebody… who matters.) At one point, a definition came to me: if a bug is anything that threatens the value of the product, an issue is anything that threatens the value of our testing. In our usual way, I transpected on this with James, and we now say that an issue is anything that threatens the value of the project, and in particular the test effort. Less formally and more focused on testing, an issue is anything that slows testing down or makes it harder. If testing is about making invisible problems visible, then an issue is anything that gives problems more time or more opportunities to hide.

When we believe that we see a bug, it’s because there’s an oracle at work. Oracles—those principles or mechanisms by which we recognize a problem—are heuristic. A heuristic helps us to solve a problem or make a decision quickly and inexpensively, but heuristics aren’t guaranteed to work. As such, oracles are fallible. Sometimes it’s pretty clear to us that we’re seeing a bug in a product: the program seems to crash in the middle of doing something. Yet even that could be wrong; maybe something else running on the same machine crashed, and took our program down with it. A little investigation shows that the product crashes in the same place twice more. At that point, we should have no compunction about reporting what we’ve seen as a bug in the product.

An issue may be clear and specific, or it may be something more general. A few examples of issues:

  • As you’re testing, you see behaviour in the new version of the product that’s inconsistent with the old version. The Consistency with History oracle tells you that you might be seeing a problem here, yet one could make the case that either behaviour is reasonable. The specification that you’re working from is silent or ambiguous on the subject. So maybe you’ve got a bug, but for sure you have an issue.
  • While reviewing the architecture for the system, you realize that there’s a load balancer in the production environment, but not in the test environment. You’ve never heard anyone talk about that, and you’re not aware of any plans to set one up. Maybe it’s time to identify that as an issue.
  • You sit with a programmer for a few minutes while she sketches out the structure of a particular module to help identify test ideas. You copy the diagram, and take notes. At the end of the meeting, you ask her to look the diagram over, and she agrees that that’s exactly what she meant. You reflect on it for a while, and add some more test ideas. You take the diagram to another programmer, one who works for her, and he points at part of the diagram and says, “Wait a second—that’s not a persistent link; that’s stateless.” You’ve found disagreement between two people making a claim. Since the code hasn’t been built for that feature yet, you can’t log it as a bug in the product, but you can identify it as an issue.
  • As you’re testing the application, a message dialog appears. There’s no text in the dialog; just a red X. You dismiss the dialog, and everything seems fine. It seems not to happen again that day. The next day, it happens once more, in a different place. Try as you might, you can’t replicate it. Maybe you can report it as an intermittent bug, but you can definitely record it as an issue.
  • A steady pattern of broken builds means that you wait from 10:00am until the problem is fixed—typically at least an hour, and often three or four hours. Before you’re asked, “Why is testing taking so long?” or “Why didn’t you find that bug?” report an issue.
  • You’ve been testing a new feature, and there are lots of bugs. 80% of your session time is being spent on investigating the bugs and logging them. This has a big impact on your test coverage; you only got through a small subset of the test ideas that were suggested by the session’s charter. The bugs that you’ve logged are important and you can expect to be thanked for that, but you’re concerned that managers might not recognize the impact they’ve had on test coverage. Raise an issue.
  • You’re a tester in an outsourced test lab in India. Your manager, under a good deal of pressure himself, instructs you to run through the list of 200 test cases that has been provided to him by the clueless North American telecom company, and to get everything done within three days. With practically every test you perform, you see risk. All the tests pass, if you follow them to the letter, but the least little experimentation shows that the application shows frightening instability if you deviate from the test steps. Still your boss insists that your mission is to finish the test cases. He’s made it clear that, for the next three days, he doesn’t want to hear anything from you except the number of tests that you’ve run per day. Do your best to finish them on schedule, but sneak a moment here and there to identify risks (consider a Moleskine notebook or an ASCII text file). When you’re done, hand him your list of bugs—and in email, send him your list of issues.
  • You’re a tester in a small development shop that provides customizable software for big banks. You have concerns about security, and you quickly read up on the subject. What you read is enough to convince you that you’re not going to get up to speed soon enough to test effectively for security problems. That’s an issue.
  • As a new tester in a company, you’ve noticed that the team is organized such that small groups of people tend to work in their own little silos. You can point to a list of a dozen high-severity bugs that appear to have been the result of this lack of communication. You can see the cost of these twelve bugs and the risk that there are more lurking. You recognize that you’re not responsible for managing the project, yet it might be a good idea to raise the issue to those who do.

Those are just a few examples. I’m sure you can come up with many, many more without breaking a sweat.

Teams might handle issues in different ways. You might like to collect an issues list, and put it on a Big Visible Chart somewhere. Someone might become responsible for collecting and managing the issues submitted on index cards. Some issues might end up as a separate category in the bug tracking system (but watch out for that; out of sight, out of mind). Still others might get put onto the project’s risk list.

Some issues might get handled by management action. Some issues might get addressed by a straightforward conversation just after tomorrow morning’s daily standup. Someone might take personal responsibility for sorting out the issue; other issues might require input and effort from several people. And, alas, some issues might linger and fester.

When issues linger, it’s important not to let them linger unnoticed. After all, an issue may have a terrible bug hiding behind it, or it may slow you down just enough to prevent you from finding a problem as soon as you can. Issues don’t merely present risk; they have a nasty habit of amplifying risks that are already there.

So, as testers, it’s our responsibility to report bugs. Even more importantly, it’s our responsibility to raise awareness of risk, by reporting those things that delay or interfere with our capacity to find bugs as quickly as possible: issues.

Exegesis Saves! (Part 3) Beyond the Bromides

Sunday, January 23rd, 2011

Over the last few blog posts, some colleagues and I have been analyzing this sentence:

“In successful agile development teams, every team member takes responsibility for quality.”

Now, in one sense, it’s unfair for me to pick on this sentence, because I’ve taken it out of context. It’s not unique, though; a quick search on Google reveals lots of similar sentences:

“Agile teams work in a more collaborative and open manner which reduces the need for documentation-heavy, bureaucratic approaches. The good news is that they have a greater focus on quality-oriented, disciplined, and value-adding techniques than traditionalists do. However, the challenge is that the separation of work isn’t there any more—everyone on the team is responsible for quality, not just ‘quality practitioners’. This requires you to be willing to work closely with other IT professionals, to transfer your skills and knowledge to them and to pick up new skills and knowledge from them.”
http://www.ambysoft.com/essays/agileTesting.html

“In Agile software development, the whole team is responsible for quality, but there are many barriers to accomplishing that goal.”
http://www.rallydev.com/downloads/document/191-the-best-kept-secret-of-agile-software-quality.html

“So what does testing now need to know and do to work effectively within a team to deliver a system using an agile method? The concept of ‘the team being responsible for quality’ i.e. ‘the whole team concept’ and not just the testing team, is a key value of agile methods. Agile methods need the development team writing Unit tests and/or following Test First Design (TDD) practices (don’t confuse TDD as a test activity as in fact it is a mechanism to help with designing the code). The goal here is to get as much feedback on code and build quality as early as possible.”
http://agiletesting.com.au/agile-methodology/agile-methods-and-software-testing/

“As we have seen quality is the responsibility of every team member in an agile team, not just the developers. Every team member has a role to play in building quality in.”
http://www.catosplace.net/blogs/personal/?p=580

“The responsibility for quality was shifted to the whole team. Each of the different roles is responsible for doing some form of testing. Programmers, architects, analysts, and even managers are all intimately involved with testing activities and work closely together to achieve quality goals.”
http://www.ciol.com/resources/UserFiles/developer/Effective-utilization-of-Agile-Methods-in-QA.doc

“In traditional systems, the responsibility for quality is mainly delegated to testing teams that must make sure the code is of high quality. Agile thinking makes quality a collective responsibility of the customer, the developers and the testers all the time from the first, to the last minute of a project. The customer is involved in quality by defining acceptance tests. The developers are involved in quality by helping the customers write the tests, by writing unit tests for all the production code they write and the testers are involved by helping the developers automate acceptance (customer) tests and by extending the suite of automated tests.”
http://danbunea.blogspot.com/2008/05/chapter-6-quality-and-testing.html

“I’m an advocate of any methodology that empowers engineers to work toward higher standards of software integrity. Agile and test-driven methodologies have found a way to do that by distributing ownership and responsibility for quality. And with so many ways to make static analysis boost the type of automated testing this requires, agile is a better formula for better code that can keep up with shorter scrum cycles and produce frequently “potentially shippable” products.”
http://blog.coverity.com/development/why-go-agile/

“This then leads back to my original question. Who is responsible for quality? The answer is everyone.”
http://www.basilv.com/psd/blog/2010/who-is-responsible-for-quality

“The corollary to this rule is that testers cannot be responsible for quality; developers must be. The Agile methods put the responsibility for quality precisely where it belongs, with the developers.”
http://www.cmcrossroads.com/cm-journal-articles/9688-build-quality-in-the-agile-methods-are-right

“In a successful team, every member feels responsible for the quality of the product. Responsibility for quality cannot be delegated from one team member to another team member or function. Similarly, every team member must be a customer advocate, considering the eventual usability of the product throughout its development cycle. Quality has to be built into the plans and schedules. Use bug allotments, iterations devoted to fixing bugs, to bring down bug debt. This will reduce velocity which may provide enough slack in future iterations to reduce bug rates.”
http://www.yoursharepointexperts.com/microsoft/Documents/MSF%20for%20Agile%20Overview.pdf

So the sentence that I quoted is by no means unique. Here’s the source for it: http://www.agilejournal.com/articles/columns/column-articles/2722-whats-a-tester-without-a-qa-team. It’s an article by Lisa Crispin and Janet Gregory. You can read the rest of the article, and make your own decisions about whether the rest of it is helpful. You may decide (and I’d agree with you) that there are several worthy points in the article. You may decide that I’ve been unfair (and I’d agree with you) in excerpting the sentence without the rest of the article, especially considering that until now I’ve ignored the sentence that follows immediately, “This means that although testers bring special expertise and a unique viewpoint to the team, they are not the only ones responsible for testing or for the quality of the product.”

The latter sentence certainly clarifies the former. I would argue (and this is why I brought the whole matter up) that, in context, the second sentence renders the first unnecessary and even a distraction. Please note: my intention was not to critique the article, nor to launch a personal attack on Lisa and Janet. As I’ve said, I have no doubt that Lisa and Janet mean well, and their article contains several worthy points. My goal instead was to test this often-uttered tenet of agile lore. That’s why I didn’t reveal the source of the sentence initially; the article itself is not at issue here.

Yet the sentence, found in so many similar forms, is a bromide. My Oxford Dictionary of English says that a bromide is “a trite statement intended to soothe or placate”. As philosophers might say, the sentence doesn’t do any real explanatory work; it doesn’t make any useful distinctions. Even in context, it’s often unclear whether the sentence is intended to be descriptive (“this is the way things are”) or normative (“this is the way things should be”).

To highlight my greatest misgivings about the slogan, let’s restate it, replacing the word “quality” with Jerry Weinberg’s definition of it:

“In successful agile development teams, every team member takes responsibility for value to some person(s).”

Every time I see the whole-team-responsible-for-quality trope, I find myself wondering responsibility to whom, quality according to whom, and what it means to take responsibility. Figuring out what we intend to achieve, and how we are to achieve it, are among the harder problems of software development. (One can see this being played out in the Craftsmanship Skirmishes that are going on as I write.)

Here’s what would make the conversation more meaningful to me. I suggest that we who develop and write about software…

  • Consider quality not as something simple, objective, and abstract, but as something messy, subjective and very human. Quality isn’t a thing, but rather a complex set of relationships between products, people, and systems. As such, quality shouldn’t be swept under the rug of a vague slogan.
  • Remember that there are as many ways to think about quality as there are people to think about them, and that in a software development project, people’s interests will clash from time to time. Those conflicts will require some people to concede certain values in favour of other values. As Jerry Weinberg has pointed out, decisions about quality are political and emotional. Sorting out those conflicts will require us to identify who has the power to make the decision, and to contribute to empowering that person when appropriate. That in turn will require us to develop skill, courage, and confidence, tempered with patience and humility.
  • Apropos of the last point, we must keep in mind that teams are collections—or systems—of individuals. It’s a nice idea to think that the whole team makes decisions collectively, but in reality some person—the product owner—must be granted the ultimate authority to make a decision. If the team is to be successful, there’s an implicit contract: the product owner must enable, trust and empower the members of the team to design and perform their work collaboratively, in the best ways that they know how; and the members of the team must inform, trust and empower the product owner, such that she can make the best decisions possible.
  • Since quality means “value to some person(s)”, be specific about what particular dimension of value we mean, and to whom. For example, in a particular context, if we want “quality” to mean “bug prevention,” let’s say precisely that. Then let’s recognize the ways in which certain approaches towards preventing bugs might represent a threat to someone’s current interests or to their personal safety rules. If, in a particular context, we want quality to mean “problems solved for customers”, let’s say precisely that. Then let’s recognize that there are many approaches to solving problems, and that some problems might be solved by writing less software, not more. If we want “quality” to mean “many features in a product”, let’s say precisely that. Then let’s recognize how “many features” can satisfy some people—while adding complexity, development time, and expense to a product, thereby confusing and annoying other people. In other words, let’s use the word “quality” in a more careful way, which starts with deciding whether we want to use the word at all.
  • Since quality is a complex notion, we must learn not only to declare quality in terms of things that we expect and prescribe. Those things are important, to be sure. Yet we must also prepare to seek the unexpected, to make discoveries, and to respond to change.
  • Rather than striving for influence (which to me has echoes of the quality assurance or quality control mindset), testers in particular must strive to develop understanding and awareness of the system that we’re being asked to test. To me, that’s our principal product: learning about the system and reporting what we’ve learned to help the project community to gain greater understanding of the system, the problems, and the risks.
  • Consider responsibility not in terms of something people take, but as something that people accept, and also as something that people grant, share, and extend to one another. To me, that means contributing to an environment in which everyone is empowered to create and defend the value of each other’s work. That includes offering, asking for and accepting help—and dealing with unpleasant news appropriately, whether we’re delivering it or receiving it.

There have been several pleasures in working through this exercise—most notably the transpection with my colleagues on Twitter and in Skype. But there’s been another: rediscovering and re-reading the Declaration of Interdependence and George Orwell’s Politics and the English Language.

Note that I’m saying that these ideas would make the conversation more meaningful to me. You may have other thoughts, and I’d like to hear them, so I hope you’re willing to share them.

Exegesis Saves! (Part 2) Transpection with James Bach

Friday, January 21st, 2011

Last evening, after a long session of collecting and organizing a large number of contributed responses to yesterday’s testing challenge, I was going over my own perspectives on the sentence

“In successful agile development teams, every team member takes responsibility for quality.”

James Bach appeared on Skype, and we began an impromptu transpection session. It went more or less like this:

James: I saw your original challenge and a couple of replies. Are you familiar with the “tragedy of the commons”? That’s what the sentence reminds me of. However, I bet there is something real behind that statement; something good. I have been on management teams where I felt we were all responsible for quality, but not for all quality equally. We each had our jobs to do.

Anyway, I think “tedious” is a good descriptor. The sentence is a bland truism, like “honesty is good”. You can re-render it like this: “On a successful agile project, you won’t find anyone irresponsible about quality.” Oh THANKS for that insight.

Michael: I’ve got a bunch of problems with it. Is it a descriptive definition or normative definition? What is being distinguished here? The successful vs. the unsuccessful? The agile vs. the non-agile? Every team member vs. most team members vs. no team member? The responsible vs. the irresponsible? Does quality here mean critical thinking, or writing acceptance tests, or the absence of bugs, or making the customers happy? What’s the scope for quality here? Could you ever get anyone to admit seriously that they’re not responsible for the quality of their own work? It seems to me that, with those things all in play, the sentence doesn’t do any useful work. “On a successful agile team, every team member is responsible for apple pie.”

James: You could contrast it with waterfall: On a successful waterfall project, only some people need to be responsible for quality. That makes waterfall seem better than agile, actually; less risk.

Michael: For somebody!

James: Or you could contrast it with the default: Under normal circumstances, people don’t take responsibility for quality (really?), unlike successful agile projects, where everyone does. Or you could take it to its logical conclusion: if two people on an agile project disagree about quality, they must work through the disagreement or else one of them must leave the project (really?). Which means: Successful agile projects consist only of people who are in complete agreement about quality.

What does “responsibility for quality” actually mean in that context? Does it mean “making the product better?” Or does it mean “accepting how the product is?” Maybe the intention is: developers have to fix any bug that anyone on the project wants them to fix.

Michael: Yes. All kinds of questions are being begged, it seems to me. Now, to be fair, I haven’t yet provided a link to the article in which this exact sentence appeared (I’ll talk about that in my own reply), but when I look up keywords from the sentence on Google, I find a ton of articles that beg the quality and responsibility questions. It seems to me that the intention is usually this: “On an Agile team, everyone is responsible for preventing bugs.” That’s a nice idea, but even that statement would have serious problems. Or maybe the intention is “On an Agile team, testers get to go to the meetings too.” To some people, it means “on an Agile team, the programmers actually test their own code.” In fact, I remember in the early years of the Agile movement, a few programmers would tell me that they’d never worked with testers who had delivered any real value to the project. When I considered some of the places I had visited and testers I had met, I was dismayed to find that claim credible. In such contexts, really conscientious programmers would tend to be extra diligent about testing their own work, and would tend to assert that they and they alone were responsible for its quality. That’s my understanding of part of the genesis of XP—programmers asserting that with respect to coding bugs at least, the buck stopped with them. That seems like it could be a positive subtext.

James: I think I see this subtext, too: “On an agile project, there is no social structure or order. We are a perfect Bose-Einstein condensate of spontaneous harmony. Everyone does everything. Communication is instant. Agreement is assured. All skills are held in common. All experiences are equivalent.”

Which in practice means: The Strong Rule The Weak.

Michael: Ouch.

James: The sub-sub text is then this: “Don’t ask about social order. We don’t like to talk about that.”

Michael: “…but we do have this sentence that you’re not really supposed to question. It’s a form of Word Magic, so please don’t mess with it.”

James: How about this, then: “On an agile project, one thing that is beyond question is that everyone is responsible for quality. Also, unicorns are pretty.”

Still more to come. Stay tuned.

Exegesis Saves! (Part 1)

Friday, January 21st, 2011

This morning, I read a sentence that bugged me.

“In successful agile development teams, every team member takes responsibility for quality.”

I’ve seen sentences of that general form plenty of times before. Whether I’ve reacted or not, they’ve always bugged me, and today I decided to probe into why.

Rather than doing so on my own, I thought it would be more fun and more interesting to involve my community, so I posted a challenge on Twitter. If I want to get any other work done, I’m going to have to learn to stop doing that. I posted:

“Thinking Thursday. Test this sentence: ‘In successful agile development teams, every team member takes responsibility for quality.’”

Although I have great respect for my colleagues, I hadn’t anticipated so many interesting replies. So, today, my summary of the responses on Twitter. Soon, a brief conversation between James Bach and me, and shortly thereafter, my own assessment.

Questioning the Mission

Adam Goucher was the first to reply. He responded to my tweet by noting, “I could also check it by putting into a wordprocessor and seeing what is underlined.” I read this, laughed, and replied, “You could. But I ask for testing. :)” Adam promptly replied that he had thus clarified the mission. Yes; absolutely. One tactic for refining your mission is to make an offer. Acceptance, rejection, or other reactions to the offer may be revealing.

Before testing the sentence, Shrini Kulkarni immediately (and wisely) questioned the testing assignment. “First question: Why are you asking this question, and how will you use my answer? Question #2: What is your motivation for asking this question and how will you evaluate my response?”
Those were splendid questions. I emphasize, especially for all those new to testing: if you feel uncertain or confused, it’s vitally important to question the mission to reduce the risk that your assumptions don’t align with your client’s assumptions.

I answered. “I’m exploring what bugs me (and others) about the sentence. I’m not sure how (or even if) I’m going to use the information just yet; that’s part of my exploration.” That’s the why, my motivation. In answer to the evaluation question, I listed intellectual or emotional stimulation; insight, epiphanies, and reminders of old epiphanies as factors. I didn’t add then (but I will now) that I didn’t think of it as a competition. (To that end, I’ve presented as many of the replies as I’ve been able, and although I wasn’t intending to evaluate them in a competitive way, I can’t help but be impressed.)

Shrini had a couple more questions. “What is your interpretation of ‘testing’ of a sentence like this?” The goal, he said, was to know how my notion of testing the sentence might be different from his. At the time, I was away from my computer, so I didn’t respond to Shrini right away. So Shrini did something else that an expert tester does; if you can’t get an answer, state your assumptions. “One way I can think of ‘testing’ a sentence is to check if it is true. So my question would be “how do you [missing word] it is true?” (I believe the missing word was a typo.) “Another interpretation of testing a sentence—a linguistic construct—is to check if it is formed as per the rules of grammar.” So here Shrini, in the face of an ambiguous assignment, gave two possible interpretations and got to work, focusing on the one that he figured was the deeper problem.

Martin Jansson also asked for clarification: “When asking me to test this, are you the only stakeholder?” I was away when that message came in too. In reply, I’ll say now that I was the only client, but in a way, lots of people might be stakeholders. I leave it as an exercise to figure out who those stakeholders might be.

Questioning the Context of the Sentence

In addition to questioning the mission, questioning the context is crucial. Martin Jansson had a bunch of context questions. “Since Twitter limits only to 140 chars, can I see the full length of what was originally intended? Is this sentence localized and is originally from another language? Can I see the original one? What other sentences are connected to the one we are testing? What will the system and its environment look like?” Griffin Jones asked, “What is the history of this sentence in this context? Is this sentence the image we want to project as a company or group? Do our peers/competition make a similar claim? Do we publicly claim to follow that sentence?”

Identifying the Problems

The very first reply that I received was from Florin Duriou, who responded simply and directly, “too generic”. As Lynn McKee said, “Many words in that sentence are subjective.” I agree with that, so let’s get specific. How?

Shrini offered two approaches. “I do little of Jerry Weinberg’s “Mary had a little lamb” exercise to see possible meanings of every word and their combinations.” That exercise, from Exploring Requirements, involves going through the simple sentence “Mary had a little lamb,” and emphasizing each word in turn to see what other interpretations might be lurking.

  • “MARY had a little lamb” (but Joe didn’t, so he was jealous)
  • “Mary HAD a little lamb” (but she doesn’t any more)
  • “Mary had A little lamb” (then she had two—and now she has a flock)
  • “Mary had a LITTLE lamb” (but too much lamb would have been bad for her diet)
  • “Mary had a little LAMB” (why did you think I said “ham”?)
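
If you want to try the emphasis heuristic on the sentence we’ve been testing, here’s a tiny Python sketch (my own, purely illustrative) that stresses each word in turn as a prompt for alternative interpretations:

    # Emphasize one word at a time, as a prompt for asking what work that
    # word is doing (the "Mary had a little lamb" heuristic from
    # Exploring Requirements).
    def emphasize_each_word(sentence: str) -> None:
        words = sentence.split()
        for i, word in enumerate(words):
            variant = words[:i] + [word.upper()] + words[i + 1:]
            print(" ".join(variant))

    emphasize_each_word(
        "In successful agile development teams, "
        "every team member takes responsibility for quality."
    )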

Shrini’s second approach: “Testing a sentence, I look for ‘words’ in it—like successful, agile, development, teams, team, member, quality, responsibility.” By putting “words” in quotes, I think Shrini was emphasizing that all of these ideas are concepts, constructs, thought-stuff. So let’s look at some of those words and constructs in more detail.

“Successful”

Success is context-dependent. Griffin, Lynn, Peter Walen, and Stephen Hill all questioned the meaning of “success”. Simon Morley wanted to know whether we might deem a team successful without knowing what the team was doing or producing, and whether its efforts were desired. Both Pete Houghton and Martin asked whether criteria for success included shipping on time. Martin also asked if quality was the only factor that makes an Agile team “successful”. “What if they hate each other and learn nothing from what they do?”

“Responsibility”, “Agile”, “Team”

What does “responsibility” mean? Áine McGovern held that programmers should take responsibility for testing at low levels before testers get their hands on the product; that everyone is responsible for a product that works well. Yet responsibility might mean “blame”, as both Peter Houghton and Peter Walen observed. What does “agile” mean? As Martin joked, “Did we mean Agile or perhaps AGILE? ‘Cause being AGILE is so much better than just lowercase agile.” Does the Agile part even matter? Anand or Komal Ramdeo (I don’t know which; they have a joint Twitter account) noted that “agile” could be removed from the sentence without loss of meaning, if we believe that teams are successful when every team member takes responsibility. Pete Houghton thought to question what we mean by “team”. Griffin even questioned the role of “in” in the sentence.

Words in Combination

Joining all these words into a sentence leads to a kind of combinatorial explosion of possible interpretations. Lynn noted that success might not equal quality, depending on the project or organizational goals. I agree—and the project and organizational goals might come into conflict from time to time too, as the organization might have many projects on the go, and the projects might compete for resources.

Simon asked what “taking responsibility for quality” might involve or exclude. Peter Walen asked if we could infer that in non-agile teams, no team members are responsible for quality—or if successful waterfall teams didn’t need to worry about quality. Tim Western had another variation: unsuccessful agile development teams don’t take responsibility for quality? Michel Kraaij asked pointedly, “So if the quality sucks, no one takes responsibility?” Adam Brown had yet another variation: wouldn’t every team member take responsibility for quality even if the project was unsuccessful?

Our sentence might be subject to the graveyard fallacy, a central theme of The Black Swan. The successful often attribute their success to some factor or practice or approach; the unsuccessful who used the same factor, practice, or approach but who were less lucky don’t survive to tell of their experience with the factor—and people seem disinclined to listen to the unsuccessful. What can you learn from losers, after all? Nassim Nicholas Taleb, in The Black Swan, retells a story from Marcus Tullius Cicero, a tale of skeptical inquiry. “One Diagoras, a nonbeliever in the gods, was shown painted tablets bearing the portraits of some worshipers who prayed to survive the subsequent shipwreck. The implication was that praying protects you from drowning. Diagoras asked, ‘Where were the pictures of those who prayed, then drowned?’ The drowned worshippers, being dead, would have a lot of trouble advertising their experiences from the bottom of the sea. This can fool the casual observer into believing in miracles.” (You can keep reading here.) That’s what I thought of when I read Peter Houghton’s reply: “What about the unsuccessful teams where everyone also takes responsibility?”

“Quality”

Griffin asked for the definition of quality in this context. Markus Deibel asked if the notion of quality was based on each team member’s definition of quality, or on a common set of quality rules. Good questions; as Anand (or Komal) Ramdeo put it, “‘quality’ might have different meaning to different people—so everyone is responsible for quality but the product can still suck.” Quite right, I would argue—with the additional observation that the product might suck to some people and not to others.

Adam Goucher suggested replacing “quality” in the troublesome sentence with “the product”, as customers likely care more about that than the “quality”. I think Adam’s suggestion is to focus on something more concrete (the product), and less abstract. A good idea, but I think it shifts the problem rather than confronting it. Whatever people have to say about the product, their evaluation is an expression of a quality judgement. Martin suggested that in a successful agile development team, everyone would know that quality is abstract—and that its meaning is therefore not to be taken for granted. I agree (yet everyone? would?), and I’d be specific about this: quality isn’t an intrinsic and objective attribute of something. Instead, it’s a relationship between something and some person.

More Probing Questions

Griffin asked “compared to what?” A little while ago I jokingly suggested to Markus Gärtner that he write a macro that would at a keystroke issue the question “compared to what?” Zeger Van Hese reported that I had triggered his macro successfully. “Compared to what? Successful to whom? Quality to whom? Every time? Really?”

Zeger and Martin both raised interesting questions about equality and responsibility. Does every member of the agile team define quality equally? Can all team members ever be equally responsible for quality? Is it even possible to take responsibility for quality in the team? (For my part, I’ll speak to this issue later.) Zeger even raised what I will now call The Animal Farm Take on Responsibility: “I can’t help but thinking ‘…but some take more responsibility than others,’” he wrote.

The explicit talk of equality and implicit reference to power reminded me of Jerry Weinberg’s observation that decisions about quality are always political, made by people who have the authority—the political power—to make them.

Deciding What and How to Observe

Griffin emphasized a question that we must ask in addition to “compared to what?” He wanted to know “according to whom”. That’s important because observations, comparisons, and measurements are never value-neutral; they always depend upon at least one person, and what and how each person observes. “What would I see/hear/feel when a ‘team member’ ‘takes responsibility’?” Simon adds, “What would a team member /not/ ‘taking responsibility for quality’ look like?” Anand (or Komal) asks, “Is there any scale to measure responsibility? How do you compare more or less responsible?”

No Problem

Some people found no problem at all with the sentence. Alan Cooper retweeted it, adding “True”. Bill Clark responded to that, saying, “If the team looks good, everyone on it looks good. A bunch of lone coyotes running here and there not so much.” Ron Jeffries said, “The sentence is perfect. By the way, what did you want it to do?” An important question, but I might have placed it before my evaluation of the sentence’s perfection.

All of these responses were interesting and valuable. The most gratifying one, though, came from Zeger Van Hese, who raced to his blog before I could get to mine. You can read his response here.

More to come!

Jerry Weinberg Interview (from 2008)

Wednesday, January 19th, 2011

In the spring of 2008, I was privileged to chat with Jerry Weinberg on why he was favouring CAST with his only conference appearance of that year, other than the Amplifying Your Effectiveness conference, of which he’s a co-founder and host. CAST that year saw the launch of Jerry’s book Perfect Software and Other Illusions About Testing. It’s now available as an e-book, too.

Jerry will not, so far as I know, be at CAST 2011. Nonetheless, his advice about going to conferences where smart people hang out remains sound.

Michael: You’ve been involved with computers for 50 years, and with giving people advice for almost that long. What do you suggest my first question should be, and how would you answer it?

Jerry: Ask me why I chose this conference as my one of the year. And other things.

Michael: Sounds good. So: why did you choose this conference as your one of the year?

Jerry: Errors have been the principal issue in computing right from the beginning, as John von Neumann pointed out even before I got into the field (and that’s really a long time ago). I wrote about testing as the opening topic in my first book, “Computer Programming Fundamentals” way back in 1960—and way back then, I already took flack from some reviewers who didn’t think errors was a suitable topic for politically correct people. You’d think I had written about human excrement.

And you’d also think that as our field matured, we would have outgrown that prudish attitude about error—but we haven’t. Back then, we had no professional testers. Testing was every developer’s job (though they weren’t called “developers” back then, or even “programmers”). We fought hard to have testing recognized as a profession of its own, and though we have people called “testers” today, we still have the prudes. In many organizations, testers are, sadly, considered lower-class citizens.

Testing holds a special place in my vision of the future of the computing profession as a whole. Why? Because testing is the first place where we generally get an independent and realistic view of what we are doing right and what we are doing wrong when we build new systems. We do get this view from Support (another area that’s considered low-class), but by the time information arrives from Support, the people who put the errors in a product are often long gone and immune to learning from their mistakes.

Quite simply, if we don’t learn to learn from our mistakes, we won’t improve as a profession. And if we don’t improve, we limit whatever good this amazing new (still) technology offers to humanity.

That’s why I’ve made the status of testing and testers my first priority for some years, and why I’m debuting my book on testing fallacies and myths (Perfect Software, and Other Illusions About Testing) at CAST, the one conference that I feel is a creation of testers, by testers, and for testers.

Michael: Recently you launched a new Web site, and your banner is “Helping smart people to be happy.” Why did you choose that?

Jerry: Most of the people in the computing professions are pretty smart, at least as measured by tests and the kind of technical work they accomplish. But so many of them haven’t learned how to use their smarts on themselves. They can create wonderful systems, but when they use their brains to think about themselves, they often think themselves into depression.

I was like that, for a long time, until I began to figure out what I was doing to myself. I set myself the task of learning how to be happy, and as I began to succeed, I realized that one of the things that makes me happy is working with other happy people. So, selfishly, I decided I would devote myself to helping my colleagues and students learn to share my happiness. Like most things I do, it’s completely selfish—but has side effects that others may enjoy.

Michael: Why not “Helping happy people be smart?”

Jerry: If you’re happy, you don’t need to be smart. Smart isn’t the only road to happiness. It’s not that I mind helping people be smart, or smarter, but it’s just not my primary goal. Nevertheless, I guess there are thousands of people out there who would say I’ve helped them grow smarter in some way. I think that’s true of you, Michael, at least from what you tell me. I hope I’ve helped you be happier, too.

Michael: Happier for sure, and smarter I hope. I’ve learned about both from conversations that I’ve had with you and other smart people. I remember once that Joshua Kerievsky asked you about why and how you tested in the old days—and I remember you telling Josh that you were compelled to test because the equipment was so unreliable. Computers don’t break down as they used to, so what’s the motivation for unit testing and test-first programming today?

Jerry: We didn’t call those things by those names back then, but if you look at my first book (Computer Programming Fundamentals, Leeds & Weinberg, first edition 1961 —MB) and many others since, you’ll see that was always the way we thought was the only logical way to do things. I learned it from Bernie Dimsdale, who learned it from von Neumann.

When I started in computing, I had nobody to teach me programming, so I read the manuals and taught myself. I thought I was pretty good, then I ran into Bernie (in 1957), who showed me how the really smart people did things. My ego was a bit shocked at first, but then I figured out that if von Neumann did things this way, I should.

John von Neumann was a lot smarter than I’ll ever be, or than most people will ever be, but all that means is that we should learn from him. And that’s why I go to a select number of conferences, like CAST and AYE, because there are lots of smart people there to learn from. I recommend my tactic to any smart person who wants to be happy.

Jerry’s Web Site is at http://www.geraldmweinberg.com. Want to help to make at least one smart person happy? I’d recommend buying—and reading—one of his fiction books, and letting him know that you did.

Public Rapid Software Testing Classes, Calgary and Berlin

Friday, January 14th, 2011

I’m delighted to announce that I’ll be doing a one-hour lunchtime presentation on test framing for the Software Quality Discussion Group in Calgary, February 10, 2011. The session is free! However, space is limited to the first 50 people who register, so don’t delay.

I’ll be in Calgary to present a public offering of the Rapid Software Testing class, February 7-9, 2011. It’s being organized by Nancy Kelln of Unimagined Testing and Lynn McKee of Quality Perspectives. There are still a few days left to register; information is available here. Here’s what one of the participants said about the last Calgary class:

“The Rapid Software Testing course that I attended in June definitely surpassed my expectations. As an experienced tester I was becoming rather disillusioned of the so called ‘Tried and True’ methods of software testing and was really looking for something that resembled, well, reality. So now, instead of spending my time writing test cases, writing reports that no one reads and chasing down Product Specs and Feature Specs I test. What an amazing concept—have the tester do testing! It has freed up an enormous amount of time and, therefore, the testing is far more productive. Potential ‘show stoppers’ are being found earlier on in the test schedule and test coverage has also improved greatly. More time is spent on important matters and less time trying to hit a number target (i.e. 90% testing complete, 95% pass rate, etc.). The focus has returned to the task at hand (testing) and there is less focus on reporting (counting). That also allows for the tester to move into a more co-development role to assist with the implementation of new features and functionality. I’m happy, the devs are happy, the customer is happy and the boss is happy—Thanks Michael!”

Blair Burke, B.Sc., Software Tester, Print Audit, Inc.
September, 2010

Over on the other side of the Atlantic, Testing Experience presents another public offering of Rapid Software Testing in Berlin, March 16-18, 2011. Register for that class here.

Drop me a line if you’re in either town around those times, and let’s try to find time to meet!

When A Bug Isn’t Really Fixed

Tuesday, January 11th, 2011

On Monday, January 10, Ajay Balamurugadas tweeted, “When programmer has fixed a problem, he marks the prob as fixed. Programmer is often wrong. – #Testing computer software book Me: why?”

I intended to challenge Ajay, but I made a mistake, and sent the message out to a general audience:

“Challenge for you: think of at least ten reasons why the programmer might be wrong in marking a problem fixed. I’ll play too. #testing”

True to my word, I immediately started writing this list:

1. The problem is fixed on a narrow set of platforms, but not on all platforms. (Incomplete fix.)

2. In fixing the problem, the programmer introduced a new problem. (In this case, one could argue that the original problem has been fixed, but others could argue that there’s a higher-order problem here that encompasses both the first problem and the second.) (Unwanted side effects)

3. The tester might have provided an initial problem report that was insufficient to inform a good fix. Retesting reveals that the programmer, through no fault of his own, fixed the special case, rather than the general case. (Poor understanding, tester-inflicted in this case)

4. The problem is intermittent, and the programmer’s tests don’t reveal the problem every time. (Intermittent problem)

5. The programmer might have “fixed” the problem without testing the fix at all. Yes, this isn’t supposed to happen. Stuff happens sometimes. (Programmer oversight)

6. The programmer might be indifferent or malicious. (Avoidance, Indifference, or Pathological Problem)

7. The programmer might have marked the wrong bug fixed in the tracking system. (Tracking system problem)

8. One programmer might have fixed the problem, but another programmer (or even the same one) might have broken or backed out the fix since the problem was marked “fixed”. Version control goes a long way towards lessening this problem.  Everyone makes mistakes, and even people of good will can thwart version control sometimes.  (Version control problem)

9. The problem might be fixed on the programmer’s system, but not in any other environments. Inconsistent or missing .DLLs can be at the root of this problem. (Programmer environment not matched to test or production)

10. The programmer hasn’t seen the problem for a long time, and guesses hopefully that he might have fixed it. (Avoidance or indifference)

I had promised myself that I wouldn’t look at Twitter as I prepared the list. By the time I finished, though, the iPhone on my hip was buzzing with Twitter notifications to the point where I was getting a deep-tissue massage.

When I got up this morning, there were still more replies. Peter Haworth-Langford wanted to know, “Do we need to focus on what’s wrong? Solutions? Are we missing something else by focusing on what’s wrong?” The last response on the thread, so far as I know, was Darren McMillan asking, rather like Peter, “I’d like to know which role you are playing when you say I’ll play too? Customer/PM….. Is there more to the challenge?” Nope. And thank goodness for that; I was swamped with replies. I decided to gather them and try grouping them to see if there were patterns that emerged.

Very few problems indeed have any single, unique, and exclusive cause. Moreover, this list is subject to Joel’s Law of Leaky Abstractions: “All non-trivial abstractions are to some degree leaky.” Some of the problems listed below may fit into more than one category, and, for you, may fit better into a different category from the one to which I’ve arbitrarily assigned them. Cool; we think differently. Let me know how, and why.

Note also that we’re looking at this without prejudice. The object of this game is not to question anyone’s skill or integrity, but rather to try to imagine what could happen if all the stars are misaligned. We’re not saying that these things always happen, or even that they frequently happen. Indeed, for a couple of items, they might never have happened. In a brainstorm like this, even wildly improbable ideas are welcome, because they might trigger us to think of a more plausible risk.

Erroneous Fix

Just as people might fail to implement something the first time, they might fail when they try to fix the error, even though they’re acting in good faith with all of the skill they’ve got.

Lynn McKee: He is a human being and simply made an error in fixing it.
Darren McMillan: The developer didn’t have the skills to apply fix correctly, resulting code caused later regression issues.

That kind of problem gets compounded when we add one or more of the other problems listed below.

Incomplete Fix

Sometimes fixes are incomplete. Sometimes the special case has been fixed, but not the general case. Sometimes that’s because of a poor understanding of the scope of the problem; problems are usually multi-dimensional. (We’ll get to the root of the misunderstanding later.)

Ben Kelly: ‘Fix’ hid erroneous behavior but did not resolve the underlying problem
Ben Simo: The problem existed in multiple places and requires additional fixes.
Lanette Creamer: Fix isn’t accesible
Lanette Creamer: Fix is not localized.
Lanette Creamer: Fix isn’t triggered in some paths.
Lanette Creamer: Fix doesn’t integrate w other code
Stephen Hill: The programmer might have fixed that symptom of the bug but not dealt with the root cause.
Stephen Hill: Has the fix been applied only to new installs or can it retrospectively fix pre-existing installs too?

Sometimes people will fix the letter of the problem without doing all of the related work.

Ben Kelly: Bug fix did not have accompanying automation checks added (in a culture where this is the norm)

It’s possible for people to comply maliciously with the letter of the spec, fixing a coding problem while ignoring an underlying design problem.

Erkan Yilmaz: dev knows since decision abt design it’s bad but against his belief fixs also bug(s). He cant look honestly in mirror anymore

Unwanted Side Effects

Sometimes a good-faith attempt to fix the problem introduces a new problem, or helps to expose an old problem. Much of the time such problems could easily intersect with “Incomplete Fix” above or “Poor Understanding” below.

Ben Simo: The fix created a new problem worse than the solution.
Lanette Creamer: fix breaks some browsers/platforms
Lanette Creamer: fix has memory leaks
Lanette Creamer: fix breaks laws/reqirements
Lanette Creamer: fix slows performance to a crawl.
Michel Kraaij: The bug was fixed, but “spaghetti code” increased.
Michel Kraaij: The dev did fix the bug, but introduced a dozen other bugs. (this issue fixed or not?)
Michel Kraaij: The fix increases the user’s manual process to an unacceptable level.
Nancy Kelln: Was never actually a problem. By applying a ‘fix’ they now broke it.
Pete Walen: Mis-read problem description, “fixed” something that was previously working.

Intermittent Problem

In my early days as a telephone support person at Quarterdeck, customers used to ask me why we hadn’t fixed a particular problem. I observed that problems that happened in every case, on every platform, tended to get a lot of attention. Sometimes a fix will appear to solve a problem whose symptom is intermittent. The fix might apply to the first iteration through a loop, but not to subsequent iterations; or to all instances except the intended maximum. Problems may reproduce easily with certain data, and not so often or not at all with other data. Timing, network traffic, and available resources can conspire to make a problem intermittent.

Michel Kraaij: The dev happened to use test data which didn’t make the bug occur.
Michel Kraaij: The dev followed “EVERY described step to reproduce the bug” and now the bug didn’t occur anymore.
Pete Walen: Fixed sql query with commands that don’t work on that DB… not that that ever happened to anything I tested…
Pradeep Soundararajan: might have thought the bug to be fixed by tryin to repeat test mentioned in the report although there are other ways to repro

Environment Issues

We’ve all heard (or said) “Well, it works on my machine.” The programmer’s environment may not match the test environments.

Adam Yuret: The fix only works on the Dev’s non-production analogous workstation/environment.
Michel Kraaij: The fix is based on module, which has became obsolete earlier, but wasn’t removed from the dev’s env.
Pradeep Soundararajan: Works on his machine
Stephen Hill: Might be fixed in dev’s environment where he has all the DLLs already in place but not on a clean m/c.

“Works on my machine” is a special case of a more general problem: the programmer’s environment might not be representative of the test environment, but the test environment might not be representative of the production environment, either. There might not be a test environment. Patches, different browser versions, different OS versions, different libraries, different mobile platforms… all those differences can make it appear that a problem has been fixed.

Darren McMillan: Fix on production code blocking customer upgrades
Lynn McKee: No two tests are ever exactly the same, so even tho code change was made something is diff in testers “environment”.
Michel Kraaij: The fix demands a very expensive hardware upgrade for the production environment.
Pete Walen: Fixed code for 1 DB, not for the other 3, and not the one that was used in testing.

Poor Understanding or Explanation

Arguably all problems include an element of this one. Sometimes there’s poor communication between the programmer and the tester, originating with either or both. The tester may not have described or explained the problem well, and the programmer provided a perfect fix to the problem as (poorly) described. Sometimes the programmer doesn’t understand the problem or the implications of the fix, and provides an imperfect fix to a well-described problem. Sometimes a report might seem to refer to the same problem as another, when it really refers to a different problem. These problems can be eased or exacerbated by the medium of communication: the bug tracking system, linguistic or cultural differences, written rather than face-to-face communication (or the converse).

Ben Kelly: Programmer can’t reproduce the problem – tester didn’t provide sufficient info.

Ben Simo: The problem wasn’t understood well enough to be satisfactorily fixed.
Michel Kraaij: The dev asked whether “the problem was solved this way?” He got back a “yes”. He just happened to ask the wrong stakeholder.
Nancy Kelln: Wasn’t clear what the problem was and they fixed something else.
Pradeep Soundararajan: Assumes it to be a duplicate of a bug he recently fixed.
Zeger Van Hese: Developer solved the wrong problem (talking to the tester would have helped).
Ben Simo: The tester was wrong and gave the programmer information leading to breaking, not fixing, the software.
Michel Kraaij: The tester classifies the bug as “incorrectly fixed”. However, it’s the tester who’s wrong. Bug IS fixed.

Zeger Van Hese: The dev doesn’t see a problem and marks it fixed (as in: functions as designed)

Another variation on poor understanding is that the “problem” might not be a problem at all.

Darren McMillan: It wasn’t a problem after all. Customer actually considered the problem a feature. On hearing about the fix customer cried.
Darren McMillan: Problem came from a tester with a tendency to create his own problems. Wasn’t actually a problem worth fixing.

Finally in this category, a “problem” can be defined as “a difference between things as perceived and things as desired” (that’s from Exploring Requirements, by Weinberg and Gause). To that, I would add the refinement suggested by the Relative Rule: “…to some person, at some time.” A bug is not a thing in the program; it’s a relationship between the product and some person. One way to address a problem is to solve it technically, of course. But there are other ways to address a problem: change the perception (without changing the fact of the matter); change the desire; decide to ignore the person with the problem; or wait, such that the problem, the perception, the desire, or the person may no longer be relevant.

Darren McMillan: Fix was a customer support call. Fix satisfied customer, didn’t fit product needs.
Pradeep Soundararajan: Because the bug is the perception not the code

Insufficient Programmer Testing

Partial problem fixes, intermittent problems, and poor understanding tend not to thrive in the face of programmers who think and act critically. Inadequate programmer testing is never the sole source of a problem, but it can contribute to a problem being marked “fixed” when it really isn’t.

Ben Simo: The fix wasn’t sufficiently tested.
Darren McMillan: Obvious: wasn’t properly tested, didn’t consult required parties for fix,
Lynn McKee: …and therefore must not have done any or /any/ effective testing on his end first.

Version Control Problems

Version control software was relatively new to most people working on the PC platform in the mid-1990s. These days it’s far more commonly used, which is almost entirely to the good. Yet no tool can guarantee perfect execution for either individuals or teams, and accidents still happen. Alas, version control can’t guarantee that the customer has the same configuration that you’re developing or testing on, or the configuration he had yesterday, or the one he’ll have tomorrow.

George Dinwiddie: Forgot to check in some of the new code.
Nancy Kelln: Bad code merge overwrote the changes. Bug fix got lost.
Pete Walen: Fixed code, did not check it in, fixed it again, check in BOTH (contradictory fixes)
Pete Walen: Fixed code, forgot to check it in. Twice. #hypotheticallyofcourse
Pradeep Soundararajan: Has a fix and marks fix before he commits the code and checks in the wrong one.
Stephen Hill: Programmer might not be using the same code build as the customer so does not get the bug.

Tracking Tool Problems

Sometimes our tools introduce problems of their own.

Ben Kelly: Programmer fat-fingers the form response to a bug fix.
Nancy Kelln: Marked the wrong bug as fixed in the tracking tool.
Pradeep Soundararajan: The bug tracking system could have a bug that occasionally registers Fixed for any other option.
Pradeep Soundararajan: He might have overlooked a bug id and marked it fixed while it was the other
Zeger Van Hese: The dev wanted to re-assign, but marked it fixed instead (context: he hates that defect tracking system they made him use)

Process or Responsibility Issues

Perhaps there’s a protocol that should be followed, and perhaps it hasn’t been followed correctly—or at all.

Ben Simo: Perhaps it isn’t the responsibility of one programmer OR one tester to mark a problem fixed on their own.
Darren McMillan: Obvious: didn’t follow company procedure for fixes (fix notes, check lists, communications)
Dave Nicolette: Maybe programmer shouldn’t be the one to say the problem is fixed, but only that it’s ready for another review.
Michel Kraaij: The dev misunderstood the bug fixing process and declared it “fixed” instead of “resolved”.

Avoidance, Indifference, or Pathological Behaviour

We don’t like to think about certain kinds of behaviour, yet they can happen. As Herbert Leeds and Jerry Weinberg put it in one of the first books on computer programming, “When we have been working on a program for a long time, and if someone is pressing us for completion, we put aside our good intentions and let our judgment be swayed.”

Ben Kelly: Programmer is under duress from management to ‘fix all the bugs immediately’
Michel Kraaij: The dev is just lazy
Michel Kraaij: The dev is out of “fixing time”. To make the fake deadline, he declares it fixed.
Pradeep Soundararajan: Is forced by a manager or a stakeholder to do so
Pradeep Soundararajan: That way he is buying more time because it goes through a big cycle before it comes back.

Maybe the pressure comes from outside of work.

Erkan Yilmaz: Dev has import. date w. fiancé’s parents, but boss wants dev 2 work overtime suddenly, family crisis happens

There may be questions of competence and trust between the parties. A report from an unskilled tester who cries “Wolf!” too often might not be taken seriously. A bad reputation may influence a programmer to reject a bug with minimal (and insufficient) investigation.

Ben Kelly: Bug was dismissed as the tester reporting had a track record of reporting false positives.

Perhaps someone else was up to some mischief.

Erkan Yilmaz: dev went 2 other person’s pc who was away + used that pc 4 fix/marking – during that he found info that invaded privacy

Sometimes we can persuade ourselves that there isn’t a problem any more, even though we haven’t checked.

Michel Kraaij: The dev has a huge ego and declares EVERY bug he touches as “fixed”.
Pradeep Soundararajan: He did some code change that made the bug unreproducible and hence considers it to be fixed.
Pradeep Soundararajan: Considers it fixed thinking it was logged on latest version & prior versions have it altho code base in production is old
Pradeep Soundararajan: Thinks his colleague already fixed that.
Stephen Hill: Has the person for whom the problem was ‘a problem’ re-tested under the same circumstances as previously?

Sometimes management or measurement exacerbates undesirable behaviours.

Ben Kelly: Programmer’s incentive scheme counts the number of bugs fixed (& today is the deadline)
Ben Simo: Perhaps programmer is evaluated by number of things mark fixed; not producing working software.
Lynn McKee: He is being measured on how many bugs he fixes and hopes no one will notice no actual coding was done.
Pradeep Soundararajan: Yielding to SLA demands to fix a bug within a specific time.

It is remarkable how a handful of experienced people can come up with a list of this length and scope. Thank you all for that.

Exploratory Testing or Scripted Testing: Which Comes First?

Tuesday, January 4th, 2011

The PDF file linked here is a transcript of a conversation held over Skype on New Year’s Eve (December 31), 2010.

The conversation was prompted by a Twitter exchange on exploratory testing (ET) started by Andy Glover, who observed that “When developing scripts you need to explore. But this tends to be exploring with out the s/w so I would say it’s not ET.” I disagree; developing scripts is test design, and test design is certainly part of testing. Since the process of developing test scripts is an exploratory (unscripted) process, I would contend that script development is both exploratory and testing, and therefore exploratory testing. To get around Twitter’s limitations, I proposed an impromptu online chat. Anna Baik, Ajay Balamurugudas, Tony Bruce, Anne-Marie Charrett, Albert Gareev, Mohinder Kholsa, Michel Kraaij, and Erkan Yilmaz joined the conversation. Alas, Andy had other commitments and couldn’t be with us.

Enjoy!