Very Short Blog Posts (25): Testers Don’t Break the Software

Plenty of testers claim that they break the software. They don’t really do that, of course. Software doesn’t break; it simply does what it has been designed and coded to do, for better or for worse. Testers investigate systems, looking at what the system does; discovering and reporting on where and how the software is broken; identifying when the system will fail under load or stress.

It might be a good idea to consider the psychological and public relations problems associated with claiming that you break the software. Programmers and managers might subconsciously harbour the idea that the software was fine until the testers broke it. The product would have shipped on time, except the testers broke it. Normal customers wouldn’t have problems with the software; it’s just that the testers broke it. There are no systemic problems in the project that lead to problems in the product; nuh-uh, the testers broke it.

As an alternative, you could simply say that you investigate the software and report on what it actually does—instead of what people hope or wish that it does. Or as my colleague James Bach puts it, “We don’t break the software. We break illusions about the software.”

17 replies to “Very Short Blog Posts (25): Testers Don’t Break the Software”

  1. It’s better to help programmers and managers make software at the quality level they had hoped. That’s what I want to do.

    Michael replies: Better than what, exactly? Helping programmers and managers make software at the quality level they hoped is exactly what I want to do too. My way of helping is to work right next to them, shining light on the product and finding problems that might threaten the value of the software. My intention is to alert them to those problems so that they can address them if they choose to do so. The builder’s mindset is essentially optimistic and hopeful—and necessarily so—but it’s psychologically challenging and risky to test a product when you’re at close critical distance. I take a different, more skeptical, and more critical focus in order to help prevent the builders from being fooled.

  2. This part of the industry always makes me feel like a super villain.

    “Fools! I have merely revealed how you have already failed!”

    Michael replies: I’m racking my brain, trying to remember if I mentioned something about public relations. 🙂

  3. This is a great post. I’ve heard this description before, that testers don’t “break” software, but this nails it.

    One thing I can think of is that this notion of testers breaking software as a point of pride may have come about from good testing experiences with confident developers. I’ve heard developers say that the best testers break their code in ways the developers had never considered. This “breaking” helps inform developers of possible problems or areas for improvement.

    It might’ve been a positive comment in some contexts previously, but the convention has grown and evolved, and as you point out, created some negative side effects for most testers.

  4. I’d love to hear what you think about this (potential) counterpoint:

    Sometimes I get software to test that I’d call “fragile”. One wrong move and it crashes or stops communicating or has an otherwise catastrophic failure.

    In one sense, sure, it was “broken” before I got it. In another, it “breaks” when I trigger the failure conditions (especially if it seemed to be working before).

    Thoughts?

    (For the record, I try to stay away from telling people I break software. I tell them I find software bugs so others don’t have to. Not perfect, but it gets the message across to most.)

  5. When I quit one company I got a T-shirt saying “I break software for a living”. I told the guys that I was disappointed, because of course it should have said what James pointed out: I break the illusion of working software. My copy of “Perfect Software and Other Illusions” is still making the rounds in that company…

    To reply to Richard,
    I would say that there is no “wrong move” when testing software, and we should be careful with that phrase. I understand where you’re coming from, but it’s a loaded phrase that puts the blame on yourself and weakens future discussions. I’d suggest something along the lines of “using alternative routes through the application rather than the mainstream”.

    In the past, when I got a build like that, I’d have a quiet word with the development manager to discuss the build quality, pointing out that with a bit of extra care on the development side, we could save ourselves the time required for building, installing, and the whole bug cycle.
    If the developers feel, hand on heart, that this is the best they can deliver right now, then that is the time it should go to the testers, not before. If it’s a case of “throw it over the wall” (coding finished, deadline hit), there are problems that should be discussed with development and maybe the project manager before testers and devs fight it out.

  6. “We break illusions about the software” sounds almost like “I shatter all their hopes and dreams.” All things being equal, I’d rather sound like I attack an inert mechanical construct (“I break software”) than claim I attack individuals (“I break the coder’s illusions”). Of course we do none of those things. If we borrow terms from structural engineering, we apply pressure to stress points to determine the point of failure. That’s a bit of a mouthful, however, and the shorthand “I broke the software” is a quicker way to get the point across.

    Michael replies: Quicker, yes—and inaccurate. It’s a little as though you’ve ignored all the examples of “the testers broke it” that I gave above. So why not try something just as quick and more accurate?

    • I investigated the software.
    • I found where the software is broken.
    • I found problems in the product.
    • I found bugs.
    • I tested.

    When you associate attacking someone’s illusion with attacking the person, I’m a little startled. An illusion is not something intrinsic to the person. It’s an idea. While I do agree with the notion of sparing people’s feelings, I think most people would agree that preserving an illusory hope or dream is not something we want to do in an engineering context.

    The idea of breaking illusions is useful simply as a contrast to the idea of breaking the software, but we don’t really need either one of them.

  7. Michael – thank you, very succinct!

    Can’t help but think of another example: “Yesterday QA did regression of the build and logged N defects”.
    I get pretty much terrified every time I hear that.

    -Albert

  8. “It might be a good idea to consider the psychological and public relations problems associated with claiming that you break the software.”

    Well said. This is a very important point. I used the phrase “break software” on Twitter, and James said, “It was already broken.” It’s good to see that now there is a blog post to refer to.

    Thanks for the post.

  9. “We don’t break the software. We break illusions about the software.” Indeed a powerful message.

    Michael replies: Thank you; I agree. Credit Where Credit is Due Dept.: James Bach is the originator of this, as far as I know.

  10. […] Testers have an innate drive to imbue quality into everything we touch. This can lead to some boat rocking. Most software has some darker corners that could use a little care and attention. Don’t think testers limit themselves to testing software; we find bugs in processes, documentation, even just how people think about software. […]

  11. Testers do not really “break software”. In fact, software is delivered to us already broken, and part of our job is to find the areas where it can fail.

    To some extent I agree that this is a useful assertion, for several reasons:

    1. To say that testers break software somehow implies that they have perhaps abused the software in an unrealistic way, and that their activity is therefore not valid.

    2. Testing serves many purposes in many different contexts, only one of which is finding failures (or causing the software to “break” if you will).

    3. If the testers do not or cannot “break the software” (whatever that might mean), does that mean the product is fit to ship? (Of course not, or at least not necessarily.)

    In engineering in general, to what extent do testers (or other manifestations of QA and QC) really break the product? To answer this, it is useful to consider an analogy with motor car manufacturing:

    If upon inspection by QC the wheel nuts of a car are found not to be done up to the specified torque, did QC “break the wheel” or did engineering deliver a product that was already broken? (The answer should hopefully be obvious).

    Admittedly, if we extend the analogy, the argument for or against the use of the word “breaking” can descend into pure sophistry.
    For instance, let’s say that the motor car QCs do not check the torque of the wheel nuts (software testing analogy: testers do not check the latency of a database call, because it is a “white box” technique for which they lack the tools, training, skills, or time).

    But what they do is drive the car for 50 miles (software testing analogy: testers perform some basic user acceptance testing), and everything seems fine. The testers have not broken the car (or the software), so it must be fit to ship, right?

    Wrong: during a more extensive performance test, in which the car is driven for 100 miles at an average speed of 40 mph, the wheels fall off! (Software testing analogy: performance testing in a like-live environment, where multiple customers making simultaneous calls via the same database stored procedure cause the system to crash. A minimal sketch of that kind of concurrency check appears below.)
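
    To make the software side of that analogy concrete, here is a minimal sketch of such a concurrency check in Python. The call_stored_procedure wrapper is hypothetical, standing in for whatever actually invokes the shared stored procedure under test, and the worker and call counts are arbitrary:

        from concurrent.futures import ThreadPoolExecutor, as_completed

        def call_stored_procedure(customer_id):
            # Hypothetical stand-in for the real database call under test.
            return "ok:%d" % customer_id

        def hammer(workers=50, calls=500):
            # Fire many simultaneous calls and collect any failures. A single
            # sequential caller (the "50-mile drive") never exercises this path.
            failures = []
            with ThreadPoolExecutor(max_workers=workers) as pool:
                futures = [pool.submit(call_stored_procedure, i) for i in range(calls)]
                for future in as_completed(futures):
                    try:
                        future.result()
                    except Exception as exc:  # the "wheels falling off"
                        failures.append(exc)
            return failures

        print("%d failures under concurrent load" % len(hammer()))

    Any crash or error surfaced here is evidence that the product was already broken; the harness merely arranges the conditions under which the failure shows itself.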

    So in the first instance, the motor car QC team and the software testers did not “break” the product, so it was apparently fit to ship. In the second instance, they pushed the limits of the product until it failed, which I concede many people would say is the same thing as breaking it.

    However, on the whole it is a dangerous oversimplification to say that testers “break software”, because, as we have seen, it confers unreasonable expectations on the testing process (not least of which is the risky assumption that if testers cannot break the software, it must be fit to ship).

    It also discourages consideration of all the areas where software development can fail, and instead locates failure solely within the realm of testing.

    This again is a gross oversimplification; to illustrate why, it is useful to return to our analogy of the motor car QC process:

    Why were the wheel nuts not done up to the specified torque?

    Were the specifications wrong?

    If so why were the specifications wrong?

    Could this shortcoming have been identified and prevented much earlier in the process? (Almost certainly).

    Why did the testers not have the skills, tools, or time to carry out performance testing or to inspect the database stored procedure more closely?

    The only conclusion I can arrive at is that the assertion that testers “break software” is simply convenient shorthand for those who would prefer to ignore that everyone is responsible for quality: “Oh, it doesn’t matter if the requirements are badly worded or researched, or if the developers do not unit test their code; the testers will break it if there is a problem.” It is also possible that the term arose by comparison with product QA, where “drop testing” is one of the most dramatic and visible activities, i.e. where the QAs are seen to be actually “breaking” a product.

    Because software QA is such a misunderstood subject, it is easy to see how tempting it is to reduce the activity to something as easily understood as “breaking software”.

    Nevertheless, for the reasons discussed above, “breaking software” is a term to be discouraged, as it is a gross oversimplification of what software testing sets out to achieve. The only concession I will make to its use is to acknowledge that when a product is pushed to its limits so that it fails, this is virtually the same thing as breaking it. However, for it to fail it must already be in some way substandard or “bad”, which is why I am willing to entertain the notion that we “break bad software”.

    In which case maybe testers are “Breaking Bad Software Engineers”.

    Michael replies: I appreciate you thinking and writing your way through this.

  12. I would like to add that “break” or “broken” is a concept we should try to stay away from, since it implies that the software was already working as intended. If I snap a broomstick in two, I “broke” it. If you step on your sunglasses, you will break them. These items were whole (and, for argument’s sake, let’s say without flaw) before they were intentionally broken. If a professor proofreads a student’s paper and finds grammatical errors, is the paper broken? The student made an effort to write a paper, and someone proofread it and found errors. “Errors” is the word we should use more, since it suits the purpose better than the concept of “breaking” software.

    “Breaking” implies that testers can change the nature of the code. Testers can have an effect on data, which may cause code to exhibit unintended behaviour, but the code is still working as it was designed, so it cannot be “broken”. The psychological damage lies in the ideology that a tester can actually break something. We need to change the nature of the dialogue and focus on better communication. I would also like to point out that “break” implies that devs, testers, and the client are on the same page as to the intended behaviour of the software. More emphasis needs to be placed on fulfilling the needs of the clients.

