Blog Posts for the ‘Very Short Blog Posts’ Category

Very Short Blog Posts (31): Ambiguous or Incomplete Requirements

Monday, December 19th, 2016

This question came up the other day in a public forum, as it does every now and again: “Should you test against ambiguous/incomplete requirements?”

My answer is Yes, you should. In fact, you must, because all requirements documents are to some degree ambiguous or incomplete. And in fact, all requirements are to some degree ambiguous and incomplete.

And that is an important reason why we test: to help discover how the product is inconsistent with people’s current requirements, even though it might be consistent with requirements that they may have specified—ambiguously or incompletely—at some point in the past.

In other words: we test not only to compare the product to documented requirements, but to discover and help refine requirements that may otherwise be ambiguous, unclear, inconsistent, out of date, unrecognized, or emergent.

Very Short Blog Posts (30): Checking and Measuring Quality

Monday, November 14th, 2016

This is an expansion of some recent tweets.

Do automated tests (in the RST namespace, checks) measure the quality of your product, as people sometimes suggest?

First, the check is automated; the test is not. You are performing a test, and you use a check—or many checks—inside the test. The machinery may press the buttons and return a bit, but that’s not the test. For it to be a test, you must prepare the check to cover some condition and alert you to a potential problem; and after the check, you must evaluate the outcome and learn something from it.

The check doesn’t measure. In the same way, a ruler doesn’t measure anything. The ruler doesn’t know about measuring. You measure, and the ruler provides a scale by which you measure. The Mars rovers do not explore. The Mars rovers don’t even know they’re on Mars. Humans explore, and the Mars rovers are ingeniously crafted tools that extend our capabilities to explore.

So the checks measure neither the quality of the product nor your understanding of it. You measure those things—and the checks are like tiny rulers. They’re tools by which you operate the product and compare specific facts about it to your understanding of it.
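The test/check distinction above can be made concrete in code. Here is a minimal sketch, in which the product function, the inputs, and the expected value are all invented for the example; the point is that the check returns only a bit, and a person must still prepare it, aim it at a condition that matters, and interpret the result.

```python
# A check is machinery that compares one specific fact about the product
# to a coded expectation, and returns a bit. It is the "tiny ruler":
# it provides a scale, but it doesn't measure -- people do.
# product_total() here is a hypothetical stand-in for a product feature.

def product_total(prices):
    """Stand-in for some behaviour of the product under test."""
    return round(sum(prices), 2)

def check_total(prices, expected):
    """The check: returns True or False, and nothing more.
    It doesn't know whether a mismatch matters; a human evaluates that."""
    return product_total(prices) == expected

result = check_total([19.99, 5.01], 25.00)
print(result)
```

The testing is everything around that function: deciding that totals are worth checking, choosing the inputs, and deciding what a `True` or `False` actually tells you about the product.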

Peter Houghton, whom I greatly admire, prompted me to think about this issue. Thanks to him for the inspiration. Read his blog.

Very Short Blog Posts (29): Defective Detection Effectiveness

Tuesday, July 14th, 2015

Managers are responsible for hiring testers, for training them, and for removing any obstacles that make testing harder or slower. Managers are also responsible for hiring developers and designers, and providing appropriate training when it’s needed. If there are problems in development, managers are responsible for helping the developers to address them.

Managers are also responsible for the scope of the product, the budget, the staffing, and the schedule. As such, they’re responsible for maintaining awareness of the product, of product development, and anything that threatens the value of either of these. Finally, managers are responsible for the release decision: is this product ready for deployment or release into the market?

Misbegotten metrics like “Defect Detection Percentage” (I won’t dignify references to them with a link) continue to plague the software development world, and are sometimes used to evaluate “testing effectiveness”. But since it’s management’s job to understand the product and to decide when the product ships, a too-low defect detection percentage suggests the possibility of development or testing problems, unaware management, or a rash shipping decision. Testers don’t decide whether or when to ship the product; that’s management’s responsibility. In other words: Defect Detection Percentage—to the degree that it has any validity at all—measures management effectiveness.

Very Short Blog Posts (28): Users vs. Use Cases

Thursday, May 7th, 2015

As a tester, you’ve probably seen use cases, and they’ve probably informed some of the choices you make about how to test your product or service. (Maybe you’ve based test cases on use cases. I don’t find test cases a very helpful way of framing testing work, but that’s a topic for another post—or for further reading; see page 31. But I digress.)

Have you ever noticed that people represented in use cases are kind of… unusual?

They’re very careful, methodical, and well trained, so they don’t seem to make mistakes, get confused, change their minds, or backtrack in the middle of a task. They never seem to be under pressure from the boss, so they don’t rush through a task. They’re never working on two things at once, and they’re never interrupted. Their phones don’t ring, they don’t check Twitter, and they don’t answer instant messages, so they don’t get distracted, forget important details, or do things out of order. They don’t run into problems in the middle of a task, so they don’t take novel or surprising approaches to get around the problems. They don’t get impatient or frustrated. In other words: they don’t behave like real people in real situations.

So, in addition to use cases, you might also want to imagine and consider misuse cases, abuse cases, obtuse cases, abstruse cases, diffuse cases, confuse cases, and loose cases; and then act on them, as real people would. You can do that—and helpful and powerful as they might be, your automated checks won’t.

Very Short Blog Posts (27): Saving Time

Wednesday, April 29th, 2015

Instead of studying and learning from every bug, you can save a lot of time by counting and aggregating bug reports.

That’s a good thing in its way, because if you don’t study and learn from every bug, you’ll need all the time you can get to deal with problems that seem to keep happening over and over again.

Very Short Blog Posts (26): You Don’t Need Acceptance Criteria to Test

Tuesday, February 24th, 2015

You do not need acceptance criteria to test.

Reporters do not need acceptance criteria to investigate and report stories; scientists do not need acceptance criteria to study and learn about things; and you do not need acceptance criteria to explore something, to experiment with it, to learn about it, or to provide a description of it.

You could use explicit acceptance criteria as a focusing heuristic, to help direct your attention toward specific things that matter to your clients; that’s fine. You might choose to use explicit acceptance criteria as claims, oracles that help you to recognize a problem that happens as you test; that’s fine too. But there are many other ways to identify problems; quality criteria may be tacit, not explicit; and you may discover many problems that explicit acceptance criteria don’t cover.

You don’t need acceptance criteria to decide whether something is acceptable or unacceptable. As a tester you don’t have decision-making authority over acceptability anyway. You might use acceptance criteria to inform your testing, and to identify threats to the value of the product. But you don’t need acceptance criteria to test.

Very Short Blog Posts (25): Testers Don’t Break the Software

Tuesday, February 17th, 2015

Plenty of testers claim that they break the software. They don’t really do that, of course. Software doesn’t break; it simply does what it has been designed and coded to do, for better or for worse. Testers investigate systems: looking at what the system does, discovering and reporting on where and how the software is broken, and identifying when the system will fail under load or stress.

It might be a good idea to consider the psychological and public relations problems associated with claiming that you break the software. Programmers and managers might subconsciously harbour the idea that the software was fine until the testers broke it. The product would have shipped on time, except the testers broke it. Normal customers wouldn’t have problems with the software; it’s just that the testers broke it. There are no systemic problems in the project that lead to problems in the product; nuh-uh, the testers broke it.

As an alternative, you could simply say that you investigate the software and report on what it actually does—instead of what people hope or wish that it does. Or as my colleague James Bach puts it, “We don’t break the software. We break illusions about the software.”

Very Short Blog Posts (24): You Are Not a Bureaucrat

Saturday, February 7th, 2015

Here’s a pattern I see fairly often at the end of bug reports:

Expected: “Total” field should update and display correct result.
Actual: “Total” field updates and displays incorrect result.

Come on. When you write a report like that, can you blame people for thinking you’re a little slow? Or that you’re a bureaucrat, and that testing work is mindless paperwork and form-filling? Or perhaps that you’re being condescending?

It is absolutely important that you describe a problem in your bug report, and how to observe that problem. In the end, a bug is an inconsistency between a desired state and an observed state; between what we want and what we’ve got. It’s very important to identify the nature of that inconsistency; oracles are our means of recognizing and describing problems. But in the relationship between your observation and the desired state, the expectation is the middleman. Your expectation is grounded in a principle based on some desirable consistency. If you need to make that principle explicit, leave out the expectation, and go directly for a good oracle instead.

Very Short Blog Posts (23) – No Certification? No Problem!

Wednesday, January 28th, 2015

Another testing meetup, and another remark from a tester that hiring managers and recruiters won’t call her for an interview unless she has an ISEB or ISTQB certification. “They filter résumés based on whether you have the certification!” Actually, people probably go to even less effort than that; they more likely get a machine to search for a string of characters. So if you’re looking for a testing job, you don’t have a certification, and you have no interest in paying an employment tax to the rent-seekers, here’s one way to get around the filter. At the bottom of your CV, add this sentence:

I do not have an ISEB or ISTQB certification, and I would be pleased to explain why.

An automated filter will put your résumé on the “read it” pile. Then the sentence should attract the attention of anyone who bothers to read it and who is genuinely interested in hiring a tester, at which point they’ll start looking at your real qualifications. And if they don’t contact you, you probably don’t want to work with them anyway.
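The kind of filter described above can be as crude as a substring search, with no grasp of the sentence the keyword sits in; that is exactly why the suggested line slips through. A hypothetical sketch:

```python
# Hypothetical sketch of a naive resume filter: a plain substring match,
# blind to the context in which the keyword appears.

def passes_filter(resume_text, keyword="ISTQB"):
    return keyword.lower() in resume_text.lower()

cv = ("Experienced tester. I do not have an ISEB or ISTQB certification, "
      "and I would be pleased to explain why.")

print(passes_filter(cv))  # the keyword matches; the "not" is ignored
```

The machine finds its string and files the CV on the “read it” pile; only the human reader notices what the sentence actually says.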

Very Short Blog Posts (22): “That wouldn’t be practical”

Saturday, January 24th, 2015

I have this conversation rather often. A test manager asks, “We’ve got a development project coming up that is expected to take six months. How do I provide an estimate for how long it will take to test it?” My answer would be “Six months.”

Testing begins as soon as someone has an idea for a new product, service, or feature, and testing ends, for the most part, when someone decides to release or deploy it. Testing happens at the same time as development does. Testing starts when development starts, and when development is done, testing ends.

In reply to that, the test manager sometimes says, “That wouldn’t be practical.”

That answer used to confuse me—it seems pretty impractical to develop something without exploring it, experimenting with it, and checking it pretty much continuously. But I now believe that “that” refers not to testing, but to the test manager’s fear of having a conversation with a development manager—one with a factory-oriented model of software development and testing—who is asking for the estimate.

So, some time ago, I wrote this post, to help people to work through the problem and to offer some solutions for it. The core message is that thinking of a project in terms of “a coding phase and then a testing phase” is like thinking of programming in terms of a “typing phase and then a thinking phase”. If reframing a misbegotten model of development is impractical, to me it seems vastly more impractical to live with the consequences of that model.