Blog: Very Short Blog Posts (3): The Software Is Already Broken

Some testers have got into the habit of saying that “we break the software”. That leads to psychological and political problems: “The product was fine until the testers broke it.” The software is what it is, either broken or not, when we get it. So, try saying “We look for problems that could threaten the value of the software.” As James Bach says, the only things we break are illusions.

Want to know more? Learn about upcoming Rapid Software Testing classes here.

7 responses to “Very Short Blog Posts (3): The Software Is Already Broken”

  1. Natalie Bennett says:

    This aspect of testing makes me feel like a super villain. “Fools! I have not caused you to fail! I have only revealed how you have already failed!”

    Michael replies: To take a page from Dr. Phil, “How’s that workin’ for ya?”

  2. Damian Synadinos says:

    “the only things we break are illusions”

    I disagree. Sometimes, I break dance.

    Michael replies: Or people’s concentration. 🙂

  3. Andy Glover says:

    I often say something similar to the developers: “it wasn’t me! It was already broken!”
    Yet after speaking with other testers, I think there can be value in approaching a testing session with the mentality that you are going to break it. Perhaps by thinking you’re going to break the software, you get extra motivation and focus that wouldn’t be there without this mindset.

    Michael replies: Maybe. Myself, I get just as much mileage from treating the task as a puzzle—Where is it broken? How is it broken? Under what conditions does it manifest that it’s broken?—while avoiding the idea that it’s broken “because I broke it”. Besides, a product can have serious problems that aren’t related to “brokenness”. “Works as designed” can mean exactly the same thing as “doesn’t work—as designed”.

  4. DanAshby04 says:

    Is it not the case that testers COULD break the software though? Here’s an example I have in my head…

    Say the system I’m testing is running on a standalone server which is SOLELY for the software to run on – with strict instructions to everyone not to run other tasks on it.
    And I decide to run a valueless test where I run CPU-exhaustive tasks on the server which eat up memory, thereby having an effect on the system, causing it to break (or at least have performance problems/issues)….

    Can this not be construed as it being the valueless test that has caused the problems on the system?

    Just a thought! Please correct me if I’m wrong! 🙂

    Michael replies: I don’t know if you’re wrong, but I do disagree. First, if I read your description right, you’re breaking the system, not the software per se. Second, in this scenario, you disclaim any intention to do valuable testing work. So you’re not behaving as a tester, but as a vandal. In the same way, I could say “investigative reporters don’t commit the crimes”, and you could give me an example of an investigative reporter who robbed a bank. Well, okay… but in that context, we’d describe him as a bank robber, rather than an investigative reporter. Finally, your description of what you’re doing in your scenario is descriptive; my assertion that testers don’t break the software is intended to be normative.

  5. DanAshby04 says:

    Great points Michael!
    I guess what I’m trying to say is that a lot of this seems to come down to perspective…

    A developer could say that technically the software wasn’t broken until the test broke it and that the test might have been an unreasonable test by the tester.

    So in this scenario, there are multiple perspectives on the cause:
    – The software itself being the cause (with the bug not being visible until the test shows it).
    – The test being the cause (as it was the test that caused the bug to be displayed).
    – The developer being the cause (for coding the software that the bug exists in).
    – The tester being the cause (for running the test that caused the bug to be displayed).

    One thing is for sure! Being aware of multiple perspectives can help dispel any blame culture!

    Michael replies: A premise behind each one of your examples is that the bug is present before the test, and that performing the test brings the bug to our awareness. That was my original point.

  6. Arslan Ali says:

    Confession!

    I have used the term, but only while teaching about “Product Elements” (SFDiPOT). I would suggest breaking the product into elements to provide more coverage in the tests! But “Breaking the Product”, as in “Crashing” or “Failing” it, is not what we have endorsed!

    🙂

    Regards
    Arslan

    Michael replies: Well, aren’t you the clever one? Good for you (and I agree). 🙂

  7. Adam Knight says:

    Michael,

    Richard Bradshaw (@friendlytester) echoed this same statement last week on Twitter. In a brief Twitter conversation (http://www.exquisitetweets.com/tweets?eids=D0i1uR74ZV.D0j6EI7qzQ.D0khfiHA1R.D0ktcytR9M.D0m35v7JU4.D0nmhvcE0q.D0nt1S1xdY.D0pOypp92a) I responded with a different view. Whilst I can understand why we’d want to suggest that the software was already broken, to avoid the idea that the bugs didn’t exist until our tests exposed them, I suggested that I felt that ‘breaking software’ is exactly what testers do. I followed up today with a blog post explaining my reasoning (http://www.a-sisyphean-task.com/2013/12/potential-and-kinetic-brokenness.html). To summarise my position: I think that any working system has the potential to break in various ways, given its existing behaviour and the appropriate external factors, which include inputs, environmental conditions, or user expectations. Rather than considering the software to be already broken, I prefer to take the view that the potential for breaking is already present, and testers explore to find where it can be broken and to establish how likely the required factors are to occur during live use. In order to do this we usually have to realise this potential by forcing the system into a ‘broken’ state with our tests. In this way, breaking software is what we do: to demonstrate the potential for failure to the business, and to allow them to decide whether that potential constitutes sufficient risk to need addressing.

    Cheers,

    Adam

    Michael replies: The binary “broken/not broken” is a simplification, and probably an oversimplification. The software is what it is, and does what it does, based on what people do with it. In my view, there’s a public relations problem with suggesting that testers create problems (“breaking the software”) rather than reveal problems (“showing that the software is broken”). For me, the image that the former calls up is that of someone hitting a vase with a hammer; a malicious and destructive action. Software is intangible and ephemeral, and bugs are not states in the software, but relationships between the software and some person(s). I’d like to change the image to one of investigating the software—as you suggest, identifying the circumstance in which some person might perceive a bug. “Broken” isn’t a very good image for that, but to the extent that it is, I’d like to make it explicit that the testers don’t break the relationship or the perception, either; they reveal it. As James Bach might put it, we break illusions about the perception of goodness.
