Evaluating Test Cases, Checks, and Tools

April 11th, 2021

For testers who are being asked to focus on test cases and testing tools, remember this: a test case never finds a bug. The tester finds a bug, and the test case may play a role in finding the bug. (Credit to Pradeep Soundararajan for putting this so succinctly, all those years ago.)

Similarly, an automated check never finds a bug. The tester finds a bug, and the check may play a role in finding the bug.

A testing tool never finds a bug. The tester finds a bug, and the tool may play a role in finding the bug.

If you suspect that managers are putting too much emphasis on test cases, or automated checks, or testing tools—artifacts—try this:

Start a list.

Whenever you find a bug, make a quick note about the bug and how you found it. Next to that, put a score on the value of the artifact. Write another quick note to describe and explain why you gave the artifact a particular score.

Score 3 when you notice that an artifact was essential in finding the bug; there’s no way you could have found the bug without the artifact.

Score 2 if the artifact was significant in finding the bug; you could have found the bug, but the artifact was reasonably helpful.

Score 1 if the artifact helped, but not very much.

Score 0 if the artifact played no role either way.

Score -1 whenever you notice the artifact costing you some small amount of time, or distracting you somewhat.

Score -2 whenever you notice the artifact costing you significant time, or significantly disrupting the task of finding problems that matter.

Score -3 whenever you notice that the artifact is actively preventing you from finding problems—when your attention has been completely diverted from the product, learning about it, and discovering possible problems in it, and has been directed towards the care and feeding of the artifact.

Notice that you don’t need to find a bug to offer a score. Pause your work periodically to evaluate your status and take a note. If you haven’t found a bug in the last little while, note that. In any case, every now and then, identify how long you’ve been on a particular thread of investigation using a test case, or a set of checks, or a tool. Evaluate your interaction with the artifact.
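Here’s a minimal sketch of what such a log might look like, with invented notes, artifacts, and scores (nothing here comes from a real project); it’s just a list of quick notes with a score attached to each one:

```python
# A minimal sketch of an artifact-scoring log. The entries are invented
# for illustration; scores follow the -3 to +3 scale described above.
log = [
    {"artifact": "test case TC-207", "score": 1,
     "note": "Bug found while wandering off the test case's happy path"},
    {"artifact": "nightly check suite", "score": 3,
     "note": "A check flagged a rounding error I'd never have spotted by eye"},
    {"artifact": "GUI check suite", "score": -2,
     "note": "Spent 40 minutes repairing brittle locators; no product learning"},
]

total = sum(entry["score"] for entry in log)  # a rough, throwaway signal
print(f"{len(log)} notes; current total score: {total:+d}")
for entry in log:
    print(f"  [{entry['score']:+d}] {entry['artifact']}: {entry['note']}")
```

Any format will do; a notebook page or a spreadsheet works just as well as a few lines of code.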

Periodically review the list with your manager and your team. The current total score might be interesting; if it’s high, that might suggest that your tools or test cases or other artifacts are helping you. If it’s low or negative, that might suggest that the tools or test cases or other artifacts are getting in your way.

Don’t take too long on the aggregate score; practically no time at all. It’s far more important to go through the list in detail. The more extreme numbers might be the most interesting, so you might want to pay the earliest attention to the items that score lowest and highest. Or maybe not; you might prefer to go through the list in order.

In any case, as soon as you begin your review of a particular item, throw away the score, because the score doesn’t really mean anything. It’s arbitrary. You could call it data, but it’s probably not valid data, and it’s almost certainly not reliable data. If people start using the data to control the decisions, eventually the data will be used to control you. Throw the score away.

What matters is your experience, and what you and the rest of the team can learn from it. Turn your attention to your notes and your experience. Then start having a real conversation with your manager and team about the bug, about the artifact or tool, and about your testing. If the artifact was helpful, identify how it helped, how it might help next time, and how it could fool you if you became over-reliant on it. If the artifact wasn’t helpful, consider how it interfered with your testing, how you might improve or adjust it, or whether you should put it to bed for a while or throw it away.

Learn from every discovery. Learn from every bug.

Related reading:

Assess Quality, Don’t Measure It

Flaky Testing

February 22nd, 2021

The expression “flaky tests” is evidence of flaky testing. No scientist refers to “flaky experimental results”. Scientists who observe inconsistency don’t dismiss it. They pay close attention to it, and probe it. They redesign their experiments or put better controls on them.

When someone refers to an automated check (or a suite of them) as a “flaky test”, the suggestion is that it represents an unreliable experiment. That assumption is misplaced. In fact, the experiment reliably shows that someone’s models of the product, check code, test environment, outcomes, theory, and the relationships between them are misaligned.

That’s not a “flaky experiment”. It’s an excellent experiment. The experiment is telling you something crucial: there’s something you don’t know. In science, a surprising, perplexing, or inconsistent result prompts scientists to begin an investigation. By contrast, in software, an inconsistent result prompts some people to shrug and ignore what the experiment is trying to tell them. Then they do weird stuff like calculating a “flakiness score”.

Of course, it’s very tempting psychologically to dismiss results that you can’t explain as “noise”, annoying pieces of red junk on your otherwise lovely all-green lawn. But a green lawn is not the goal. Understanding what the junk is, where it is, and how it gets there is the goal. It might be litter—or it might be a leaking container of toxic waste.

It’s not a great idea to perform a test that you don’t understand, unless your goal is to understand it and its relationship to the product. But it’s an even worse idea to dismiss carelessly a test outcome that you don’t understand. For a tester, that’s the epitome of “flaky”.

Now, on top of all that, there’s something even worse. Suppose you and your team have a suite of 100,000 automated checks that you proudly run on every build. Suppose that, of these, 100 run red. So you troubleshoot. It turns out that your product has problems indicated by 90 of the checks, but ten of the red results represent errors in the check code. No problem. You can fix those, now that you’re aware of the problems in them.

Thanks to the scrutiny that red checks receive, you have become aware that 10% of the outcomes you’re examining are falsely signalling failure when they are in reality successes. That’s only 10 “flaky” checks out of 100,000. Hurrah! But remember: there are 99,900 checks that you haven’t scrutinized. And you probably haven’t looked at them for a while.

Suppose you’re on a team of 10 people, responsible for 100,000 checks. To review those annually requires each person working solo to review 10,000 checks a year. That’s 50 per person (or 100 per pair) every working day of the year. Does your working day include that?

Here’s a question worth asking, then: if 10% of 100 red checks are misleadingly signalling a problem, what percentage of 99,900 green checks are misleadingly signalling “no problem”? They’re running green, so no one looks at them. They’re probably okay. But even if your unreviewed green checks are ten times more reliable than the red checks that got your attention (because they’re red), that’s 1%. That’s 999 misleadingly green checks.
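To make that arithmetic concrete, here’s a quick back-of-the-envelope sketch using the numbers from the example above (the 200 working days for the review workload is my assumption):

```python
# Back-of-the-envelope arithmetic for the example above.
total_checks = 100_000
red = 100                      # checks that ran red on this build
false_red = 10                 # red results caused by errors in the check code
green = total_checks - red     # 99,900 green checks that nobody is scrutinizing

false_red_rate = false_red / red          # 10% of the red checks mislead us
assumed_green_rate = false_red_rate / 10  # suppose greens are ten times more reliable
misleading_greens = green * assumed_green_rate

print(f"Misleading reds:   {false_red} of {red} ({false_red_rate:.0%})")
print(f"Misleading greens: {misleading_greens:.0f} of {green}")  # about 999

# The review workload mentioned a couple of paragraphs above,
# assuming 200 working days a year.
team, working_days = 10, 200
print(f"Checks to review per person per day: {total_checks / team / working_days:.0f}")
```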

Real testing requires intention and attention. It’s okay for a suite of checks to run unattended most of the time. But to be worth anything, they require periodic attention and review—or else they’re like smoke detectors, scattered throughout enormous buildings, whose batteries and states of repair are uncertain. And as Jerry Weinberg said, “most of the time, a nonfunctioning smoke alarm is behaviorally indistinguishable from one that works. Sadly, the most common reminder to replace the batteries is a fire.”

And after all this, it’s important to remember that most checks, as typically conceived, are about confirming the programmers’ intentions. In general, they represent an attempt to detect coding problems, and thereby to reduce the risk of programmers committing (pun intended) easily avoidable errors. This is a fine and good thing—mostly when the effort is targeted towards lower-level, machine-friendly interfaces.

Typical GUI checks, instrumented with machinery, are touted as “simulating the user”. They don’t really do any such thing. They simulate behaviours, physical keypresses and mouse clicks, which are only the visible aspects of using the product—and of testing. GUI checks do not represent users’ actions, which in the parlance of Harry Collins and Martin Kusch are behaviours plus intentions. Significantly, no one reduces programming or management to scripted and unmotivated keystrokes, yet people call automated GUI checks “simulating the user” or “automated testing”.

Such automated checks tell us almost nothing about how people will experience the product directly. They won’t tell us how the product supports the user’s goals and tasks—or where people might have problems getting what they want from the product. Automated checks will not tell us about people’s confusion or frustration or irritation with the product. And automated checks will not question themselves to raise concern about deeper, hidden risk.

More worrisome still: people who are sufficiently overfocused, fixated, on writing and troubleshooting and maintaining automated checks won’t raise those concerns either. That’s because programming automated GUI checks is hard, like all programming is hard. But programming a machine to simulate human behaviours via complex, ever-changing interfaces designed for humans instead of machines is especially hard. The effort easily displaces risk analysis, studying the business domain, learning about users’ problems, and critical thinking about all of that.

Testers: how much time and effort are you spending on care and feeding of scripts that represents distraction from interacting with the product and searching for problems that matter? How much more valuable would your coding be if it helped you examine, explore, and experiment with the product and its data? If you’re a manager, how much “testing” time is actually coding and fixing time, in which your testers are being asked to fuss with making the checks run green, and adapting them to ongoing changes in the product?

So the issue is not flaky tests, but flaky testing talk, and flaky test strategy. It’s amplified by referring to “flaky understanding” and “flaky explanation” and “flaky investigation” as “flaky tests”.

Some will object. “But that’s what people say! We can’t just change the language!” I agree. But if we don’t change the way we speak—and the way we think along with it—we won’t address the real flakiness, which is the flakiness in our systems, and the flakiness in our understanding and explanations of those systems. With determination and skill and perseverance, we can change this. We can help our clients to understand the systems they’ve got, so that they can decide whether those are the systems they want.

Learn how to focus on fast, inexpensive, powerful testing strategies to find problems that matter. Register for classes here.

Necessary Confusion and the Bootstrap Heuristic

February 11th, 2021

I’m testing a test tool at the moment. I’m investigating it for a talk. The producers of the tool claim to have hundreds of thousands of users. A few positive remarks appear in a scrolling widget on the product’s web site from people purported to be users.

Me, I can’t make head or tail of the product. It doesn’t seem to do what it’s supposed to do. It looks like a chaotic mess. It’s baffling; it’s exasperating. I don’t know where to start in analysing it and preparing a report. I’m confused. But I’m okay with that.

Any worthwhile testing starts with some degree of necessary confusion.

Why? Because worthwhile testing is primarily about learning something about a product and learning about how to test it in a complex and uncertain space. That’s by nature confusing, and that’s normal.

If the test space is neither complex nor uncertain, and if there’s little risk, you may not need to test at all, and a simple demonstration might do the trick. Knowing that the product can work might be enough, for the moment. That’s why, for developers, performing checks and automating them at the unit level can make a lot of sense. Those checks tend to address specific, atomic conditions; they’re simple to develop and perform and encode; and they provide quick feedback without slowing down development.
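As a tiny illustration of that kind of unit-level check (the function and the values here are hypothetical, not taken from any particular product), the whole thing can be written, run, and understood in a few seconds:

```python
# A minimal, hypothetical unit-level check: one specific, atomic condition,
# cheap to write and quick to run alongside the code it exercises.
def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    assert apply_discount(100.00, 15) == 85.00   # ordinary case
    assert apply_discount(19.99, 0) == 19.99     # no discount leaves the price alone

if __name__ == "__main__":
    test_apply_discount()
    print("unit-level checks passed")
```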

A product gets built from small, discrete components. Through small, gradual changes, it turns into something much bigger and more complex, with interacting components and emergent behaviours that are non-trivial.

An encounter with anything non-trivial that you’re not familiar with tends to be messy and confusing at first. At the same time, as a working tester, you’re probably under pressure to “get things right the first time” or “get everything sorted from the beginning”. But having everything sorted really means that we’re at the end of something that was unsorted, and we’re at the beginning of the next unsorted thing!

In Rapid Software Testing, we refer to the Bootstrap Conjecture:

Any process we care about that is done both well and efficiently began by being done poorly and inefficiently.

Therefore, having “done something right the first time” probably means that it wasn’t really right, or it wasn’t really the first time, or that it was trivial, or that you got lucky.

In learning about something complex and in learning how to test it, there are frequent periods of confusion. In fact, if we’re dealing with something complex and we feel we’re sure about how to test it, that should prompt us to pause and reflect: why are we so sure?

Necessary confusion is confusion for which we do not have an algorithmic resolution. To resolve necessary confusion, we must explore a complex solution space using heuristics (that is, means of solving problems that could work but that might fail) and bounded rationality (that is, reasoning in a space where there are limits on what we know and what we can know). To overcome confusion, we have to play, puzzle, make conjectures, perform experiments, miss stuff, ask questions, make mistakes, and be patient. Necessary confusion always occurs during deep learning and innovation.

We’re often trained in our cultures, in our social groups, and in our schooling to deny that we’re confused. That gets ramped up as soon as we get into the software business: appearing not to know something is socially awkward; almost seen as a sin in some circles of knowledge work. Confusion can make us uncomfortable.

As a tester, you could just write (or worse, run) a bunch of automated scripts that check a new product or feature for specific, anticipated errors. If you do that without exploring the product and preparing your mind, your testing will be blind to important bugs that could be there.

No set of instructions can teach you everything you need to learn about a product, and about the ways in which diverse people will try to use it. No formal procedure can anticipate how you or other people will experience the product. No testing framework will handle surprising behaviour without you learning how to deal with that framework. No tool, no “AI”, can determine whether the product is operating correctly, or whether a product manager will regard a red bar as something that amounts to an important bug. Complete and correct knowledge about those things isn’t available in advance.

You can learn how to test in advance. That will avoid some unnecessary confusion during testing. You can learn about the technology and domain of your product in advance, and that will avoid more unnecessary confusion during testing. You can learn to use particular tools in advance, and that might spare you some unnecessary confusion during testing too.

But you can’t deeply learn a new product or feature before encountering and interacting with it. The confusion you experience when learning a product is necessary, temporary, and healthy.

The key is to accept the confusion; to recognize that it’s okay to be confused. As we interact with the product and the people around it; as we gain experience; as we practice new skills and apply new tools, some of the confusion lifts.

Start with a survey of the product. Take a tour of the interfaces—the GUI, the command line, the API. Play with it. List out its key features. Create an outline of what is there to be tested. Consider who might use it, and for what. Build on your ideas of how they might value it, and how their value might be threatened. Think about data that gets taken in, processed, stored, retrieved, and deleted. How could that get messed up?

And then iterate. Go through the same process with each function and feature, getting progressively deeper as you go. Maybe write little snippets of code to generate some data, or to analyze the output. (Have you been working with a product for a long time? This cycle is fractal; it applies to new functions or features, or to repairs in a product you know well.)
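One of those little snippets might do nothing more than generate some deliberately varied records and flag values that look like trouble. Here’s a sketch of the kind of throwaway helper I have in mind; the field names and limits are invented for illustration:

```python
# A throwaway helper of the kind described above: generate some varied data,
# then scan the records for values that look suspicious. Field names and
# limits are invented for illustration.
import csv
import random
import string

def random_name(length: int) -> str:
    return "".join(random.choice(string.ascii_letters + " -'") for _ in range(length))

# Generate a small CSV of deliberately varied records to feed to the product.
with open("generated_input.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["name", "quantity"])
    for _ in range(50):
        writer.writerow([random_name(random.randint(0, 40)), random.randint(-5, 10_000)])

# Scan records (here, the ones we just generated) for anything worth a closer look.
def suspicious(row: dict) -> bool:
    return not row["name"].strip() or int(row["quantity"]) < 0

with open("generated_input.csv") as f:
    for row in csv.DictReader(f):
        if suspicious(row):
            print("worth a look:", row)
```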

As we learn about the product domain; as we go about the business of sensemaking; as we develop our mental models; as we talk about the product and the problems we observe… more of the confusion dissipates. This can all happen remarkably quickly if we allow ourselves just a little time for experiencing, exploring, and experimenting with the product. Ironically, we must deliberately require and allow ourselves room for spontaneity. We need to be brave and open enough to help our managers understand how necessary that kind of work is — and how powerful it can be.

When we embrace the confusion and lean in, things begin to get clearer, our code and maps and lists get tidier, our notions of risk get sharper, and we’re better prepared to search for problems. And then we’re more likely to find the deep, dangerous problems that matter—the ones that everyone has missed so far. At the beginning, though, that process starts as we pull ourselves up by our own bootstraps.

The Bootstrap Heuristic is: begin in confusion; end in precision.

Oh… and that test tool that I’m testing? There’s a reason that I’m confused: I’ve got a confusing product in front of me. The product is inconsistent with claims that its producers make about it. The product’s behaviour is inconsistent with its purpose. It seems incapable of keeping track of its state. It provides misleading results. For outsiders, it seems designed to provide the impression that testing is happening, without any real testing going on. From the inside perspective of a tester, it’s baffling, and that’s largely because it doesn’t work.

So there’s another heuristic: persistent confusion about a product is often a pointer to serious problems in it. If you, as a tester, can’t make sense of a product, how will the product’s customers make sense of it?

After working with this product for a little more than an hour, much of the confusion I referred to above has evaporated, and I can prepare a report with confidence.

I’m only left with one thing that I find confusing:

How can anybody be fooled by a tool like this?

First Aid for the Mission Statement

January 23rd, 2021

A while back, a tester brought a patient in for treatment. It wasn’t a human patient; it was a sentence about building and testing in an organization. The tester asked me for help.

“Could you provide me with a first aid kit for this statement that came from my management?”

“We have to move on to DevOps to be able to release code more often but we also have to increase testautomation in any way we can and minimize manual time consuming testing.”

This is the sort of statement that needs more than first aid; it needs emergency room treatment. We’ll start with handling some critical problems right away to get the patient stabilized. Then we need to prepare the way for longer-term recovery, so that the patient can be restored to good health and become an asset to society.

I’ll suggest a number of quick treatments. As I do that, I’ll identify why I believe the patient needs them.

  • Replace “have to” with “choose to” in each case.

Unless someone is about to run afoul of laws of nature, of government, or of ethics, no one has to do anything. People and organizations choose to do things. It’s important to preserve your agency. When you have to do things, you don’t have control over them. When you choose to do things, you remain in charge.

  • Replace “move on to DevOps” with “apply DevOps principles and practices”.

DevOps is not something to do; it’s a set of ideas and approaches for getting important things done. The central principle of DevOps—that development and operations people work together to support the needs of the business—is what matters most. If that principle is absent in the organization, there are practically infinite ways in which things will go wrong.

There will be more to say about DevOps-related practices and principles as we proceed.

  • Replace “to be able to” with “to be better able to”.

Not being a thing in and of itself, DevOps is not necessary to be able to do anything in particular. There were plenty of organizations building and releasing software successfully before DevOps, and there will be plenty of successful organizations long after DevOps is forgotten. Nonetheless, many of the ideas currently associated with DevOps could be very helpful.

DevOps doesn’t guarantee success, but applying its core principle might improve the odds.

  • Replace “release code” with “release valuable products”.

The product is not the code, and the code is not the product. The product is the sum of code, software platforms, machinery, data, documentation, and customer support. The product is all of that together, delivering whatever experience, value, and problems that the customer encounters, good and bad. The code is part of the product, and the code enables the product. The code controls the mechanical parts of the product. That’s not to diminish the importance of the code. The code makes the product possible. If it’s a software product and there’s no code, there’s no product. If there is code, and it’s bad code, it’s less likely that we have a good product.

It’s a good thing to release valuable products. Therefore it’s a good thing to understand the code that we’re releasing—but not just the code, and not just its behaviour. That’s not trivial, but it’s the easy part of testing. The harder part of testing is understanding the relationships between the code, its behaviour and the people who will be using the product or otherwise interacting with it—including developers, operations people, and testers.

  • Replace “more often” with “at an efficient and sustainable pace”.

Delivering working software frequently is one of the principles behind the Manifesto for Agile Software Development. DevOps principles and practices are intended to support agility. Building efficiently and frequently—certainly a DevOps practice, but not a practice exclusive to DevOps—affords the opportunity to discover whether there are problems that threaten value before we inflict them on customers. “More often” isn’t the point, though, because “more often” might be good, bad, or irrelevant.

Consider the extremes.

When we build a product very rarely, problems get buried under layers of increasing complexity in the product and our inexperience with it. Testers become overwhelmed with the volume of learning and investigation (and, very probably, bug reporting) to be done.

When we build a product frenetically, we also get infrequent experience with it, because each new build comes along before we can gain experience with the last one. In this case, testers get overwhelmed by the pace of the builds, and shallow testing, at best, is all that’s possible.

There is probably business value in being able to deploy software promptly, but that’s not the same as deploying software constantly. There might be contexts in which frequent deployment might provide real business value. In other contexts, a firehose of deployment can disrupt your customers, such that they don’t care about the deployment, but they do care about the disruption. The key is to recognize what context you’re in, and to minimize costly disruption.

One reasonable compromise in most cases is to set up systems to build the product easily and therefore frequently. In the build and in the processes leading up to it, include a smattering of automated checks to provide a quick alert to problems close to the surface. From the stream of builds, choose one periodically, and spend some time testing each one deeply to find rare, hidden, or subtle problems that can escape even a disciplined development process.

  • Replace “but we also” with “Thus”.

The first clause in the statement sets up the second clause. The first clause doesn’t undermine the second clause; the first clause puts legs under the second.

In our emergency treatment, let’s use “thus” to reattach the legs to the body. Also, we’ll add a sentence break, giving the patient a little more room to breathe.

  • Replace “we have to increase” with “we choose to apply”.

We’ve already covered the “we have to” part of this replacement.

Increasing something is not necessarily a good thing. In engineering work, everything is a tradeoff between desirable factors. Every activity that we might consider valuable comes with some degree of opportunity cost, reducing our capacity to do other things that we might also consider valuable. When we choose to apply something, we can choose to apply it more, or less, or just as we’re currently doing, to obtain the greatest overall benefit.

  • Replace “testautomation” with “powerful tools”.

I’m sincerely hoping that testautomation is a typo, rather than a new term. We’ll put a space in there.

There are many wonderful ways to apply tools in testing. Automated checking is one of them; only one of them.

Tools can help us to build the product, to prepare for testing, and to reconfigure our systems efficiently for better coverage. Tools can help us to probe the internals of the system to see things that would otherwise be invisible. Tools can help us to collect and represent data for analysis; to see patterns in output. Tools can help us to record and review our work.

Replacing “test automation” with “powerful tools” could help reduce the risk that “test automation” will be interpreted only as “output checking”.

  • Replace “in any way we can” with “in ways that help us”.

There are some things that we can do that we probably don’t want to do. We can serve steak with turpentine sauce, but no one should eat it and it’s a waste of good turpentine.

Tools can definitely help us with building the product quickly, and with identifying specific functional problems that might threaten its value. Tools can also be overapplied, reducing our engagement and human interaction with the product.

Fixation on tooling to exercise functions in the product can be a real problem if we forget that people use software. Even if our product is a software service, its API is used by people directly—programmers putting the API to work—and indirectly—end users who interact with the product through an interface designed for non-programmers. Functional correctness is important; so are parafunctional elements of the product: usability, performance, supportability, testability…

Tools can help us to focus our attention on important observations. Tools can also dazzle or distract us, diverting our focus from other important observations. We choose to apply tools not in any way we can, but in ways that help—and our choices can include changing or dropping tools when they’re not helping.

Consequently, it might be a good idea to remember what we’re using the tools for. So, in addition to the replacement, let’s add “to build the product, to understand it, and to identify problems efficiently”.

  • Replace “and minimize manual time consuming testing” with… something else.

This particular wound has become infected, and there’s a lot of debris in it. It requires a fair amount of cleanup, emergency surgery, and some stitches.

One principle of DevOps is the idea that teams use “practices to automate processes that historically have been manual and slow”. That’s a good idea for tasks that can be mechanized, and that benefit from being mechanized. It’s not such a great idea to forget that many tasks—and parts of tasks—are non-routine, rely on expertise and tacit knowledge, and can’t be made explicit or mechanized.

Programming involves strategizing, interpreting, designing, speculating, reflecting, analysing… Testing involves all of those things too, and more. Organizations reasonably want programmers to work quickly, but no one suggests “minimizing manual time-consuming programming”. This is because no one considers programming a manual process. Programming is an intellectual process; a cognitive process; a social process. So is testing.

Programmers type; that’s the manual part of programming. Just as for programming, the central work of testing is not the typing. No one refers to the typing part as “manual programming”; and when a programmer sets a build process in motion or takes advantage of plugins in an integrated development environment, no one refers to “automated programming”.

Just as there is no manual programming, and no automated programming, there is no manual testing, and there is no automated testing. There is testing.

“Manual” and “Automated” Testing

The End of Manual Testing

Anything worth doing requires some time and effort, and we usually want to apply our limited time and effort to things that are worth doing. It does make sense to minimize or eliminate the amount of time that we spend on unimportant things. It also makes sense to apply appropriate effort to things that are worth doing; to maintain their value; and to increase that value where possible.

Just as some tasks in programming can be carried out by time-saving tools like compilers, some of the tasks in testing can be carried out by time-saving tools like automated checks.

Compiling, though, is not the central task of programming. The central task of programming is modeling and expressing things in the human world in a way that machinery can deal with them. We can then use that programmed machinery to extend, enhance, accelerate, intensify, or enable human capabilities. All that requires a significant degree of preparation, technical savvy and social judgement—and time for cycles of design, experimentation, learning, and refinement. No one should complain about this taking the time it needs.

Checking is not the central task of testing. The central task of testing is the search for problems that matter—ways in which the software fails to meet the needs or desires of its users, or introduces new problems of its own. That requires not only checks for problems that we can anticipate, but a search for problems that we didn’t anticipate. Like development work, testing work also requires a significant degree of preparation, technical savvy, and social judgement—and time for experiencing, exploring, discovery, and investigation. No one should complain about this taking the time it needs.

At least one class of testing tasks is even faster than running automated checks: the tasks that we choose not to do, because cost, value, and risk don’t warrant them. It’s worth the investment to pause every now and then to assess the relative value of unattended automated checks; instrumented tool-supported testing; and direct, unmediated experience with the product.

The trick here is to set ourselves up to do the fastest, least expensive testing that fulfills the mission of finding problems that matter before it’s too late. One way to get there is to apply fast, easy, non-interruptive checks that don’t slow down development. Then, periodically, do deep testing to find rare, subtle, hidden, intermittent, emergent bugs that might elude even highly capable and well-disciplined programmers.

So, replace “and minimize manual time consuming testing” with “We want to minimize distracting, unhelpful, or unnecessary work. We want to maximize our ability to evaluate and learn about the product both efficiently AND sufficiently deeply.”

With all those replacements, the text might be longer, but it’s more accurate and more precise; bigger and stronger.

So:

We choose to apply DevOps principles and practices to be better able to release valuable products at an efficient and sustainable pace. Thus we choose to apply powerful tools in ways that help us to build the product, to understand it, and to identify problems efficiently. We want to minimize distracting, unhelpful, or unnecessary work. We want to maximize our ability to evaluate and learn about the product both efficiently AND sufficiently deeply.

Every recovering patient can use motivation and support, so we’ll set our patient up with a motivating and supporting statement, and send them on their way together.

As developers, testers, and operations people working together, our goal is to enable and support the business by delivering, testing, building, and deploying valuable, problem-free products. As testers, our special focus is to help people to become aware of any important problems that would threaten the value of the product to people that matter; to help our clients determine whether the product that they’ve got is the product they want.

Rapid Software Testing Explored Online for North American time zones runs March 1-4, 2021. Learn more about the class, and then register!

Bug of the Day: AI Sees Bits, Not Things

January 4th, 2021

An article that I was reading this morning was accompanied by a stock photo with an intriguing building in the background.

Students throwing their graduation caps in the air

I wanted to know where the building was, and what it was. I thought that maybe Chrome’s “Search Google for image” feature could help to locate an instance of the photo where the building was identified. That didn’t happen, but I got something else instead.

An assortment of images of migrating geese

Google Images provided me with a reminder that “machine learning” doesn’t see things and make sense of them; it matches patterns of bits to other patterns of bits. A bunch of blobby things in a variegated field? Birds in the sky, then—and the fact that there are students in their graduation gowns just below doesn’t influence that interpretation.

That reminded me of this talk by Martin Krafft:

“The MIT network’s concept of a tree (called a symbol) does not extend beyond its visual features. This network has never climbed a tree or heard a branch break. It has never seen a tree sway in the wind. It doesn’t know that a tree has roots, nor that it converts carbon dioxide into oxygen. It doesn’t know that trees can’t move, and that when the leaves have fallen off in winter, it won’t recognize the tree as the same one because it cannot conclude that the tree is still in the same position and therefore must be the same tree.”

Martin Krafft, The Robots Won’t Take Away Our Jobs: Let’s Reframe the Debate on Artificial Intelligence, 14:30

Then I had another idea: what if I fed a URL to the image above to Google Images? This is what I got:

Results from a Google Image search, given a link to an image

Software and machinery assist us in many ways as we’re organizing and sifting and sorting and processing data. That’s cool. When it comes to making sense of the world, drawing inferences, and making decisions that matter to people, we must continue to regard the machinery as cognitively and socially oblivious. Whether we’re processing loan applications, driving cars, or testing software, machinery can help us, but responsible, socially aware humans must remain in charge.

(A couple of friendly correspondents on Twitter have noted that the building is the Marina Bay Sands resort in Singapore.)

Bug of the Day: What Time Are the Class Sessions?

December 17th, 2020

One problem that we face in software development and testing is that data and information aren’t the same. Here’s an example, prompted by email from a correspondent.

There’s a Rapid Software Testing Explored class running January 11-14, 2021. It’s set to run at times that work for people in Europe and the UK, mostly. The service I use for managing registrations, Eventbrite, offers the opportunity to list the starting and ending times for the class. So far, so good.

The class starts at 12h00 Central European Time on January 11. The class lasts for four days. Each day, there are three webinars of 90 minutes, with a half-hour break between each one. Thus the class ends at 17h30 Central European Time on January 14. How should this be displayed on the landing page for the event?

Eventbrite offered a form for me to fill in the starting and ending date and time for my event. I filled it in. Eventbrite also provides options to display the start time and the ending time of the class on the landing page for the event. When I accept both options, the page duly presents the class as starting at the start time (2021/01/11, 06h00 EST) and ending at the ending time (2021/01/14, 11h30 EST). Those times are entirely, factually correct as data. That correctness is pretty easy to check, too.
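Checking it takes only a few lines of code. Here’s a sketch using Python’s standard zoneinfo module, with zone names I’ve chosen to stand in for Central European and North American Eastern time:

```python
# Checking the displayed times as data: convert the class's start and end
# from Central European Time to North American Eastern time.
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

cet = ZoneInfo("Europe/Paris")          # a Central European Time zone
eastern = ZoneInfo("America/New_York")  # the zone Eventbrite showed me

start = datetime(2021, 1, 11, 12, 0, tzinfo=cet)
end = datetime(2021, 1, 14, 17, 30, tzinfo=cet)

print(start.astimezone(eastern))  # 2021-01-11 06:00:00-05:00
print(end.astimezone(eastern))    # 2021-01-14 11:30:00-05:00
```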

A person in Europe who wanted to register for the class wrote to ask if he should assume that the class ran from 12h00 to 17h30 on the first day and from 8h30 to 17h30 on the second, third, and fourth days. If you’re like me, and you already know the timetable for the class (you do; I just told you), the writer’s assumption might seem strange—but that’s from the perspective of people with insider knowledge, like you and me. From an outsider’s perspective, there’s no particularly good reason to label that assumption as strange.

The issue here is that, in its template for displaying an upcoming class, Eventbrite allows me to check a box to show the start date and time, and another box to show the end date and time. There isn’t an option to display the dates alone, without the time, nor is there an option to display the date range with starting and ending times for each day of the class.

Is that a bug? Hard to say. From the perspective of someone writing code to gather the data and display the page, it’s almost certainly not a bug—not a coding bug. If the requirement is to “display the starting and ending date and time of the event”, the code gathers that data from me and displays it correctly to my customers. But correctly doesn’t mean informatively.

Is it a bug in that the expressed requirement is wrong, then? Also hard to say. First, I haven’t seen the requirements document. I suspect that Eventbrite’s business is mostly single-day events, so the issue probably doesn’t come up that often, relative to the majority of cases. But it does come up for some people, and for some events. It did for me, and for my customer, this time.

Should Eventbrite be able to display the start and end times for each day of a multi-day event? Maybe. But that would be more complicated to code and harder to test. Maybe it’s not worth the trouble and the risk of trying it.

Should the start and end times be displayed with a time zone beside them? They are. Should those time zones be chosen relative to where the event is happening, or relative to the time zone for the person who is looking at the site? Eventbrite seems to provide the latter, but maybe it doesn’t; maybe it shows Eastern Time worldwide.

It doesn’t take long to enter the rabbit hole of possibilities: if the start and end times are displayed relative to the viewer’s time zone, what if that viewer is connecting to the page via a VPN in a time zone different from hers? I tried this, and it seems that either Eventbrite figures out the time zone on my local system OR it displays its times in Eastern time worldwide. How can I be sure what gets displayed in Europe? Will European users be confused if they see the start and end times rendered as North American Eastern time?

What if the user will be traveling, and wants to know the time of the event where it’s being held? (This sure isn’t a problem in December 2020, but what happens when we’re travelling again?)

Should Eventbrite offer an option to display the date alone, and let those running the event identify the daily schedule some other way? Probably, but who’s to say?

And imagine that you’re working at Eventbrite: what should a tester’s role be in all of this?

Here’s what we say in RST: it’s the role of designers, programmers, and managers to develop requirements, designs, and programs that transform the complex, messy, social world of people and their needs into the simpler, cleaner, world of machines and their very stilted languages. It is the tester’s role to look for and to find problems in those transformations, so that the designers, programmers, and managers can recognize those problems and make decisions on how to deal with them.

To fulfill our role, we must experience, explore, and experiment with the product and its requirements. We must develop an understanding of how people might use the product, and how they might be perplexed or surprised or annoyed by it.

When the product is being put in front of people who haven’t seen it, we must struggle to maintain the perspective of the first-timer. When the product is placed in a domain in which it will be used by experts, we must develop expertise in that domain, as quickly and as deeply as we can.

The tester can participate in the development of requirements, design, and code, and can make suggestions about them. But anyone else can do that too—documentation people, customer support people, customers,…

What makes testers special in all this is the testers’ focus on problems. It’s our abiding faith that there are problems, and that those problems might matter to people who might be forgotten by the builders. It’s the tester’s special job to consider how the insider’s perspective might be different from the outsider’s perspective. Some people on the team might consider those things occasionally, but no one else on the team is focused on them.

It’s the tester’s job to raise questions about the product, its requirements, and its design, and ask “Is there a problem here? Might there be a problem here? Is everyone okay with the product we’re developing? Is everyone willing to live with the problems that we’re aware of?” This is often socially awkward, because people who are focused on solving problems (like developers and designers and managers) often find it distracting and to some degree irritating to hear about new ones. Don’t you?

And, in this case, here’s the rub: the data and the display can be correct, but still fail to solve a problem for someone who wants to know “What are the danged class times for each day?” Some people guess (and guess correctly); others are willing to wait for an answer (those people find out on a page that gets displayed after they register); and some people write to ask me. That underscores another point: a bug is not a property of a product; it’s a relationship between the product and some person.

It turns out that daily start and end times are hard to express in machine-friendly data structures, but easy to express in the free-form text description of the class that Eventbrite also affords on the class’ landing page. So upon recognizing the problem for one of my customers, and that the problem mattered to him, that’s how I addressed it.

If you’re interested in all this, you might be interested in the Rapid Software Testing Explored class, where we examine the nature of problems and how to look for them and report them skilfully to your clients.

Again: the class runs on four consecutive days, starting at noon CET. Each day, there are three webinars of 90 minutes, with a half-hour break between each one. Just so you know.

Bug of The Day: Bad Data Means Search for Book Title Fails

December 14th, 2020

This is your periodic reminder that data has problems, just like code does.

A correspondent on LinkedIn pointed me towards a book by George Lakoff, an author I admire. For some reason, I had not been aware of the book. So I looked it up. I wanted to go straight to it, so I put the title in quotes:

Where Mathematics Comes From

Hmmm. That’s a little strange. Nothing? Let’s try without the quotes.

Where Mathematics Come From

Do you see the problem? Do you see why the quoted search string didn’t work? It looks to me like there’s a bad entry in a database somewhere.

Data is messy. Data is often wrong. Data can trip up functions that might otherwise appear to be working fine.

Data needs to be checked and examined critically, just like program code does; and so do the interactions of good and bad data with program code. Otherwise, you might lose a sale, mess up a payment, or open the door to a security breach without noticing. That’s why, in Rapid Software Testing, we use a variety of ideas for covering the product and the things around it with testing.

Sure, you might have automated checks set up for certain functions and workflows through your product. That’s fine, and a good thing. Are you using the power of automation to help find problems with your data?
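Here’s one small sketch of what that might look like: a check that compares catalogue titles against a trusted reference list and flags near-misses like the one above. The lists here are invented for illustration; a real check would draw on your product’s actual data sources.

```python
# A sketch of a simple data check: flag catalogue titles that are close to,
# but not exactly, a title on a trusted reference list. The data is invented.
import difflib

reference_titles = [
    "Where Mathematics Comes From",
    "Metaphors We Live By",
]

catalogue_titles = [
    "Where Mathematics Come From",  # the kind of near-miss that breaks a quoted search
    "Metaphors We Live By",
]

for title in catalogue_titles:
    if title in reference_titles:
        continue
    close = difflib.get_close_matches(title, reference_titles, n=1, cutoff=0.8)
    if close:
        print(f"Suspect entry: {title!r} -- did you mean {close[0]!r}?")
    else:
        print(f"Unrecognized title: {title!r}")
```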

A Naïve Request from Management

October 21st, 2020

A tester recently asked “If you’re asked to write a ‘test plan’ for a new feature before development starts, what type of thing do you produce?”

I answered that I would produce a reply: “I’d be happy to do that. What would you like to see in this test plan?”

The manager’s reply was, apparently, “test cases covering all edge cases we’ll need to test”.

That’s a pretty naïve request. Here’s my answer:

“Making sure the product handles edge cases properly is definitely an important task. If I were to take your request literally—test cases covering all edge cases we’d need to test—it could take a lot of time for me to prepare, and a long time for you to review and figure out all the things I might have left out.

“And there’s another issue: I don’t know in advance what all the edge cases are, or even what they might be—and neither do the developers, and neither do you. No one does. But that’s okay! We can start right now by learning about possible edge cases through testing. We can’t perform testing on a running product yet, obviously, but we can perform some thought experiments and test people’s ideas about the product.

“So how about I give you a short summary—a list or a mind map—of some of the broad risk areas we can start considering right away? We can share the list with the developers to help them anticipate problems, defend against them, and check their work. That will greatly reduce the need to test edge cases later, when the product has been built and the problems are harder to find.

“We can add to that risk list as we develop the product—and we can take things off it as we address those risks. That will help focus the testing work. When we start working with builds of the product, I’ll explore it with an eye to finding edge cases that we didn’t anticipate. And I’ll keep the quick summaries coming whenever you like. You can review those and give me feedback, so that we’re both on top of things all the way along.”

The software business, alas, still runs on folklore and mythodology about testing. Too few managers understand testing. Many managers—and alas, many testers—don’t realize that testing isn’t about test cases, but are nonetheless addicted to test cases. When we provide responsible answers to naïve questions, we can help to address that problem.

I’m presenting Rapid Software Testing Explored Online November 9-12, timed for North American days and European/UK evenings. You can find more information on the class, and you can register for it.

James Bach teaches in European daytimes December 8-11. Rapid Software Testing Managed is coming too. Find scheduling information for all of our classes.

Regression Testing and Discipline

October 9th, 2020

Another tester on an “Agile” team complains of being overwhelmed by the volume of regression testing he says he must do at the end of each sprint.

Why are some development organizations fixated on regression testing? Not why do they do it (that can be quite reasonable), but why are they fixated on it? I have a theory.

It goes without saying that every change to the product or system holds the risk of problems that could cause quality to backslide in some sense. That’s regression, slipping backwards to some presumably less advanced state. Regress is the opposite of progress.

With change, there’s a risk of regression, so it seems sensible to focus some testing on that risk. But is testing a sure-fire, reliable way to deal with the risk of regression?

Sure-fire? No. Testing can certainly help to find bugs, so that bugs can be recognized and dealt with. But no matter how thorough testing is, or how early it starts, testing can miss bugs too. So let’s remember that the easiest bug to deal with is the one that is never hatched in the first place; the next easiest is the one that gets squashed before it can bury itself in a mass of code.

No matter how skillful or powerful the testing, to some degree, finding a bug remains a matter of luck. In the face of regression risk, we’d prefer not to leave things at that; better to start with fewer bugs to reduce our dependence on luck. Thus, it would seem like a good idea for the people making the changes to avoid bugs by working in a careful and disciplined way.

Discipline, says Chambers, is “1. training designed to engender self-control and an ordered way of life; 2. The state of self-control achieved by such training.” The idea of self-control suggests the idea of agency, which is essential to exploratory work, which is in turn essential to engineering work.

Depending on the product, the project, and the preferences of the individual programmer and the programming team, what might we see and hear as they do disciplined work? Try pausing for a moment to remember a scene in which you noticed people doing work you considered “disciplined”.

How’s your list? Here are a few things I’ve seen and heard from time to time in work I’d call “disciplined”:

  • When a change or a new feature was on the table, groups of people reviewed and discussed ideas to understand the change and the motivation for it. Talk was focused on making the system better, and on the problems that the changes were intended to solve. But that focus softened and sharpened, zoomed in and zoomed out, and moved around to help people see everything they could see—including problems. People often disagreed, but they were willing to try little experiments to sort out the disagreements.
  • I’ve seen people consulting with colleagues and with users to get a variety of ideas about design, implementation, and risk. Conversations happen at desks and in conference rooms, but also outside the office, in restaurants, eating, drinking, joking, walking, playing games, shopping… Discipline gets relaxed sometimes. Social life can foster trust and responsibility that helps people aspire to discipline.
  • I’ve seen people using talk, text, tables, sketches, diagrams, stories, mind maps, toys, and props to help describe things in lots of different ways for analysis and for memory. Disciplined work often seems associated with careful note-taking, too.
  • In disciplined shops, order doesn’t necessarily come right away; sometimes it has to be bootstrapped. Stuff tends to start messy and get more tidy if it needs to; when things get too formal too soon, ideas get lost. Development work is one way of life, and a self-controlled, ordered way of life often starts with being uncontrolled and disordered when we’re starting to build something new. Order emerges.
  • Some disciplined places were quiet and focused, but in others I heard lots of regular background chatter, too. Highlights were stories about how people solved problems—and created new ones on the way. Storytelling of this kind helped people to think about risk in a vivid way, which prompted thinking about discipline.
  • I’ve heard open and honest disagreement when there were things worth disagreeing about. I’ve seen people getting upset… and taking responsibility for working things out. Discipline isn’t always smooth.
  • I saw builders paying attention to testability—which includes simplicity, cleanliness of code, modularity, visibility, and controllability—to make it easy to do less expensive deeper testing later on.
  • In the disciplined shops, the developers were resolved not to take on too much change all at once. They would make patient, careful, reflective, unhurried changes, and try them out themselves. When they felt the work was ready for other people, they’d make it easily accessible, asking for and getting feedback right away.
  • While designing, building and trying things, developers would try to anticipate potential exceptions and error conditions, and they’d generally be quite successful. Then they would give the product to someone else to test, whereupon they would learn something about what they had missed.
  • Developers who were really good at debugging carefully tried out specific little changes as they worked on solving a problem.
  • The disciplined builders would tend to have a sober preference for reliable, widely-used, field-tested components over a mad rush to implement new stuff developed from scratch. As a consequence, there tended to be fewer surprising bugs.
  • I’ve seen programmers whose style was test-first or test-driven development—and who were given the time to apply it. And I’ve worked with disciplined programmers who don’t bother with TDD, exercising discipline in other ways.
  • I’ve seen code that contained inline assertions in debug builds. I’ve seen exception handling built into the product and logs to report on its status. (Every now and again, I see well-thought-out, helpful error messages.)
  • I’ve seen developers checking their own work with configuration checks, unwanted-change detectors, and unit testing, including programmed output checks.
  • I’ve watched people spending hours and days in each other’s offices or cubicles, doing pair programming for immediate, real-time review.
  • I’ve seen formalized review sessions throughout—wherein new developers learned from more senior developers and, interestingly, vice-versa.
  • I’ve seen developers using lots of appropriate tools to see hidden things, or to see unhidden things in different ways (e.g. IDE syntax checking while writing code; attention to compiler warnings; database schema diagramming; dependency checking; profiling for performance; etc.).
  • I’ve seen consistent refactoring for readability, maintainability, and portability; paying down technical debt, as they say.
  • I’ve listened in on discussions about the development of shared coding styles, which also helped with readability.
  • I’ve observed developers keeping careful notes about setup procedures and configuration settings.
  • I’ve watched the entire team working collaboratively throughout so that there are lots of eyes and minds to notice things that could go wrong.
  • I’ve seen teams cultivate good relations with technical support.
  • I’ve noticed disciplined people who went home consistently on time. Also, disciplined people who stayed late from time to time.
  • In disciplined shops, I’ve seen shared skepticism about the completeness, accuracy, or relevance, of requirement statements, acceptance criteria, or a “definition of done”. Amidst optimism, I’ve noticed a suspended certainty about whether things were really done.
  • Disciplined shops often do frequent bursts of shallow, non-invasive interactive testing near the coal face, to help confirm that what the programmers were doing is reasonably close to what they intended to do.
  • I’ve seen project managers provide support staff, including people to set up test systems, to help keep track of the backlog, and a group administrator to help the manager in acquiring resources.
  • I’ve seen frequent building, to make builds for deep testing and bug fixes available at the drop of a hat. But I’ve also seen relatively infrequent yet still reliable building, too.

These are ideas and practices I’ve seen people applying to help them keep on track while building products. Most or all of these things would be done by the developers in collaboration with people working reasonably close to them (some of those people might be testers, and others might not be).

Each item on the list lends a kind of discipline to a development process. Each one represents something people might mean when they murmur something vague about “building quality in”. They’re heuristics, not rules. No one did all of them. I’ll bet you’ve got a ton of stuff on your list that’s missing from this list. Notice, too, how each item above could represent disciplined action in one context and a lapse of discipline in some other context.

Discipline doesn’t have to be burdensome, bureaucratic, or otherwise slow. Informal actions can support discipline, and help people find out where they might need to apply discipline. Remember, according to Chambers, discipline means “self-control to obtain an ordered way of life”; the self-control part suggests that discipline comes from within, rather than being imposed from outside.

Some forms of discipline might feel slow to some, at first, but prudent driving feels slow to people who are used to driving recklessly. When we’re driving, we almost always drive more slowly than we could possibly drive. Driving faster than that increases the risk that we’ll arrive late—or not at all.

Some of the discipline-related activities above represent some form of testing; others don’t. However, the processes of building a product are very different from the processes of experiencing a product. Bugs, especially the kind that show up only when someone experiences the built product, can elude even a disciplined development process. Accordingly, it makes sense for there to be different kinds of testing: testing for examining a product as it’s being built, and testing for obtaining experience with the built product.

So when builds are available, it’s probably wise to do some periodic deeper testing, some of it focused on potential, reasonably foreseeable, undesirable effects and side effects of a change—the risk of regression. That regression testing can be far better targeted when the product has been carefully built and already tested to some degree.

Deep testing doesn’t have to happen on every build; indeed, it probably shouldn’t. In lots of places, it can’t. Testing for hidden, rare, subtle, intermittent, emergent bugs tends to take time—the kind of time that can interrupt or slow down development. It can take a while to set up data and tools for deep testing. When systems have complex interactions, problems emerge at the interfaces between things that worked fine on their own. Working out those interactions and studying them in a search for problems can take time. That time might be worthwhile when safety or health or money are on the line. If there’s discipline in the building, the rewards of testing a build deeply tend to dominate the risk of skipping a few well-controlled builds.

Critical distance helps here: it often makes sense for deep testing to be done by people at some critical and even social distance from the people who are changing the product. Risk is a big deciding factor on that score—including the risk of regression.

And there’s the rub. In many organizations, people don’t mandate, or foster, or do well-disciplined work; or they exercise discipline in a very shallow way, cherry-picking one or two items from the list above, and ignoring the others. In such organizations, it seems as though the object is for the developers to write code, rather than to write code that works.

But perhaps, triggered by subconscious recognition of the risk of regression, managers (and, often, testers) feel compelled to do an overwhelming amount of expensive work: sitting at the keyboard and repeating every scripted test procedure that has been performed before, as quickly as possible. When you ask them why, they often reply, “because the developers have no idea of what might be affected by this change.” Then some of them proceed to convert those scripted procedures into automated scripted procedures, whereupon they gain a second undisciplined development project and a new maintenance nightmare. And they feel even more overwhelmed.

If someone feels overwhelmed, that’s a sign that there’s probably something overwhelming going on.

If the developers really do have no idea about what might be affected by change, then that’s a problem—one that the organization should definitely address. It’s like the principle that you shouldn’t try to automate a process that you don’t understand; when you’re working with something important, you shouldn’t rush to change it unless and until you’ve got a reasonably good idea of the extent, effects, and risks of the change, and how to manage them.

Now: there’s a problem here for testers. Testers don’t design, write, or fix the code. Many testers don’t have significant programming experience, and of those who do, few have experience with writing production code. Testers don’t manage the project, and very few testers indeed have been project managers. Testers don’t manage the developers. In light of that, it’s inappropriate, in my view, for testers to tell programmers and managers how to do their jobs. Testers cannot and should not try to force, or enforce, discipline.

It’s quite reasonable, though, for testers to report on problems with the product. It’s reasonable for testers to identify patterns of problems related to particular coverage areas or quality criteria. It’s reasonable for testers to report on patterns of regression-related problems.

It’s also reasonable for testers to report on where testing time is going. If investigating and reporting shallow bugs is dominating testing work, testers will obtain less thorough coverage of the product. Developers and managers need to be aware of that. If troubleshooting and maintenance of automated checks is swamping the testers’ ability to gain critical experience with the product, that’s noteworthy; that work will displace the testers’ opportunities to learn about the product deeply, and perform new experiments on it. Things that slow down testing and make it harder allow deeper and possibly more dangerous bugs to hide and survive.

That’s why it’s important for testers to learn the skills of analyzing and describing the state of the product, the state of the testing, and the quality of the testing—including problems that threaten any of these things. It seems that managers and developers are often unaware of problems of lapsed discipline. Testers shouldn’t be trying to manage the project, but they can shine light on the problems.

Obsession with regression testing is a hint that something else might be amiss in the process that leads to it. Sure, it’s a good idea to do some testing after a change. But it’s a lot less expensive to test after a change when people have been testing during the change.

Discipline is a heuristic for reducing the risk of regression and the need for regression testing. When people apply discipline, the effects of change tend to be better known, the code tends to be cleaner, the feedback loops get faster, and the risks tend to be lower—and deep testing can become targeted on the risk, faster, cheaper, and deeper—helping to find hidden problems that matter.

====================

I’m presenting Rapid Software Testing Explored Online November 9-12, timed for North American days and European/UK evenings. You can find more information on the class, and you can register for it.

James Bach teaches in European daytimes December 8-11. Rapid Software Testing Managed is coming too. Find scheduling information for all of our classes.

To Avoid Trouble Successfully, We Must Look For It

September 28th, 2020

Software testing can be socially difficult because of people’s natural desire to avoid trouble. This prompts them to avoid thinking about trouble, which means that they don’t look for it. But if you don’t try to find the trouble that’s in your product, that trouble will eventually find you.

Some might say we do think about trouble, and we try to avoid it by getting clear on our intentions in design work, and by checking our work as we go. Those are fine things to do, but they come with their own problems. In design and planning, we are often unaware of problems that may emerge as we combine elements in a system. Developers are rationally and justifiably resistant to slowing down the pace of their work. Even when we do our best, some problems will elude us.

So when value is at risk, when risk is significant, and when that risk can manifest as real problems that hurt people, deep testing done efficiently is a responsible thing to do—and not doing it means we did not do our best.

A correspondent on LinkedIn, Aaron Emery, asks:

How do you suggest dealing with management that want to ‘shoot the messenger’ in instances like these?

It depends on the management, the message, and the messenger.

Some social awkwardness can come from the message itself and the way it’s framed. “This feature sucks” is probably not as easily digestible as “this behaviour in the product is inconsistent with this requirement noted in the spec” or “…inconsistent with this other part of the product” or “…with what we’ve seen in previous versions of this product” or “…with reasonable desires of this until-now-forgotten user”. Point out the inconsistency dispassionately, and let the receiver of the message come to his or her own feelings about it. In other words: know your oracles.

Another approach is to point out that the message, although momentarily bad news, is offered in order to help make everyone look their best. “Yes; fixing this might take some work, but at least we won’t be inflicting it on customers” — or even “Yes, even though we’re not going to fix this, at least tech support will be prepared for it and can offer a workaround.”

It’s critical for testers to know that the product doesn’t have to look or behave the way we want it to. We don’t design the product, we don’t code it, we don’t sell it, and we don’t run the business. We’re trying to help our testing clients understand the product they’ve got, so that they can decide whether it’s the product they want. So if the client hears us and understands the nature of the product but doesn’t want to fix it, that’s fine—and that’s not shooting the messenger, either. That’s business.

If management says “why are you only telling us about this NOW?”, the reply is “because I only found out about it now. It’s a pity our planning and our coding discipline didn’t prevent this problem, but at least now we can fix it while there’s still time, or learn from this experience.”

If management is truly reckless and wants to suppress awareness of problems, driving the school bus blindfolded, then they probably don’t want your services as a tester. That’s okay too; testing is always optional — and so is your choice of testing clients. You might want to avoid that company’s products in the future, though.

====================

I’m presenting Rapid Software Testing Explored Online November 9-12, timed for North American days and European/UK evenings. You can find more information on the class, and you can register for it.

James Bach teaches in European daytimes December 8-11. Rapid Software Testing Managed is coming too. Find scheduling information for all of our classes.