Blog Posts for the ‘Learning’ Category

Common Languages Ain’t So Common

Tuesday, June 28th, 2011

A friend told me about a payment system he worked on once. In the system models (and in the source code), the person sending notification of a pending payment was the payer. The person who got that notice was called the payee. That person could designate someone else—the recipient—to pick up the money. The transfer agent would credit the account of the recipient, and debit the account of the person who sent notification—the payer, who at that point in the model suddenly became known as the sender. So, to make that clear: the payer sends email to the payee, who receives it. The sender pays money to the recipient (who accepts the payment). Got that clear? It turns out there was a logical, historical reason for all this. Everything seemed okay at the beginning of the project; there was one entity named “payer” and another named “payee”. Payer A and Payee B exchanged both email and money, until someone realized that B might give someone else, C, the right to pick up the money. Needing another word for C, the development group settled on “recipient”, and then added “sender” to the model for symmetry, even though there was no real way for A to split into two roles as B had. Uh, so far.

There’s a pro-certification argument that keeps coming back to the discussion like raccoons to a garage: the claim that, whatever its flaws, “at least certification training provides us with a common language for testing.” It’s bizarre enough that some people tout this rationalization; it’s even weirder that people accept it without argument. Fortunately, there’s an appropriate and accurate response: No, it doesn’t. The “common language” argument is riddled with problems, several of which on their own would be showstoppers.

  • Which certification training, specifically, gives us a common language for testing? Aren’t there several different certification tribes? Do they all speak the same language? Do they agree or disagree on the “common language”? What if we believe certification tribes present (at best) a shallow understanding and a shallow description of the ideas that they’re describing?
  • Who is the “us” referred to in the claim? Some might argue that “us” refers to the testing “industry”, but there isn’t one. Testing is practiced in dozens of industries, each with its own contexts, problems, and jargon.
  • Maybe “us” refers to our organization, or our development shop. Yet within our own organization, which testers have attended the training? Of those, has everyone bought into the common language? Have people learned the material for practical purposes, or have they learned it simply to pass the certification exam? Who remembers it after the exam? For how long? Even if they remember it, do they always and ever after use the language that has been taught in the class?
  • While we’re at it, have the programmers attended the classes? The managers? The product owners? Have they bought in too?
  • With that last question still hanging, who within the organization decides how we’ll label things? How does the idea of a universal language for testing fit with the notion of the self-organizing team? Shouldn’t choices about domain-specific terms in domain-specific teams be up to those teams, and specific to those domains?
  • What’s the difference between naming something and knowing something? It’s easy enough to remember a label, but what’s the underlying idea? Terms of art are labels for constructs—categories, concepts, ideas, thought-stuff. What’s in and what’s out with respect to a given category or label? Does a “common language” give us a deep understanding of such things? Please, please have a look at Richard Feynman’s take on differences between naming and knowing.
  • The certification scheme has representatives from over 25 different countries, and its material must be translated into a roughly equivalent number of languages. Who translates? How good are the translations?
  • What happens when our understanding evolves? Exploratory testing, in some literature, is equated with “ad hoc” testing, or (worse) “error guessing”. In the 1990s, James Bach and Cem Kaner described exploratory testing as “simultaneous test design, test execution, and learning”. In 2006, participants in the Workshop on Heuristic and Exploratory Techniques discussed and elaborated their ideas on exploratory testing. Each contributed a piece to a definition synthesized by Cem Kaner: “Exploratory software testing is a style of software testing that emphasizes the personal freedom and responsibility of the individual tester to continually optimize the value of her work by treating test-related learning, test design, test execution, and test result interpretation as mutually supportive activities that run in parallel throughout the project.” That doesn’t roll off the tongue quite so quickly, but it’s a much more thorough treatment of the idea, identifying exploratory testing as an approach, a way that you do something, rather than something that you do. Exploratory work is going on all the time in any kind of complex cognitive activity, and our understanding of the work and of exploration itself evolves (as we’ve pointed out here, and here, and here, and here, and here). Just as everyday, general-purpose languages adopt new words and ideas, so do the languages that we use in our crafts, in our communities, and with our clients.

In software development, we’re always solving new problems. Those new problems may involve people working with entirely new technological or business domains, or bridging existing domains with new interactions and new relationships. What happens when people don’t have a common language for testing, or for anything else in that kind of development process? Answer: they work it out. As Peter Galison notes in his work on trading zones, “Cultures in interaction frequently establish contact languages, systems of discourse that can vary from the most function-specific jargons, through semispecific pidgins, to full-fledged creoles rich enough to support activities as complex as poetry and metalinguistic reflection.” Each person in a development group brings elements of his or her culture along for the ride; each project community develops its own culture and its own language.

Yes, we do need common languages for testing (note the plural), but that commonality should be local, not global. Anthropology shows us that meaningful language develops organically when people gather for a common purpose in a particular context. Just as we need testing that is specific to a given context, we need terms that are that way too. So instead of focusing training on memorizing glossary entries, let’s teach testers more about the relationships between words and ideas. Let’s challenge each other to speak and to listen precisely, and to ask better questions about the language we’re using, and to be wary of how words might be fooling us. And let us, like responsible professionals and thinking human beings, develop and refine language as it suits our purposes as we interact with our colleagues—which means rejecting the idea of having canned, context-free, and deeply problematic language imposed upon us.

Follow-up, 2014-08-24: This post has been slightly edited to respond to a troubling fact: the certificationists have transmogrified into the standardisers. Here’s a petition where you can add your voice to stop this egregious rent-seeking.

Exploratory Testing is All Around You

Monday, May 16th, 2011

I regularly converse with people who say they want to introduce exploratory testing in their organization. They say that up until now, they’ve only used a scripted approach.

I reply that exploratory testing is already going on all the time in their organization. It’s just that no one notices, perhaps because they call it

  • “review”, or
  • “designing scripts”, or
  • “getting ready to test”, or
  • “investigating a bug”, or
  • “working around a problem in the script”, or
  • “retesting around the bug fix”, or
  • “going off the script, just for a moment”, or
  • “realizing the significance of what a programmer said in the hallway, and trying it out on the system”, or
  • “pausing for a second to look something up”, or
  • “test-driven development”, or
  • “Hey, watch this!”, or
  • “I’m learning how to use the product”, or
  • “I’m shaking it out a bit”, or
  • “Wait, let’s do this test first instead of that test”, or
  • “Hey, I wonder what would happen if…”, or
  • “Is that really the right phone number?”, or
  • “Bag it, let’s just play around for a while”, or
  • “How come what the script says and what the programmer says and what the spec says are all different from each other?”, or
  • “Geez, this feature is too broken to make further testing worthwhile; I’m going to go talk to the programmer”, or
  • “I’m training that new tester in how to use this product”, or
  • “You know, we could automate that; let’s try to write a quickie Perl script right now”, or
  • “Sure, I can test that…just gimme a sec”, or
  • “Wow… that looks like it could be a problem; I think I’ll write a quick note about that to remind me to talk to my test lead”, or
  • “Jimmy, I’m confused… could you help me interpret what’s going on on this screen?”, or
  • “Why are we always using ‘tester’ as the login account? Let’s try ‘tester2’ today”, or
  • “Hey, I could cancel this dialog and bring it up again and cancel it again and bring it up again”, or
  • “Cool! The return value for each call in this library is the round-trip transaction time—and look at these four transactions that took thirty times longer than average!”, or
  • “Holy frijoles! It blew up! I wonder if I can make it blow up even worse!”, or
  • “Let’s install this and see how it works”, or
  • “Weird… that’s not what the Help file says”, or
  • “That could be a cool tool; I’m going to try it when I get home”, or
  • “I’m sitting with a new tester, helping her to learn the product”, or (and this is the big one)
  • “I’m preparing a test script.”
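One of those bullets, spotting the four transactions that took thirty times longer than average, is the kind of observation a tester might back up with a few lines of throwaway code written on the spot. Here's a sketch of that idea; the timings and the thirty-times threshold below are invented for illustration:

```python
# A quick-and-dirty outlier check, the sort of throwaway script a tester
# might knock together mid-session. The timings (in ms) are made up.
import statistics

# Round-trip times returned by each library call in the anecdote.
times = [12, 11, 13, 380, 12, 11, 365, 12, 13, 410, 11, 395, 12]

# The median makes a sturdier baseline than the mean, since the slow
# calls themselves would drag the average upward.
baseline = statistics.median(times)

outliers = [(i, t) for i, t in enumerate(times) if t > 30 * baseline]

print(f"baseline: {baseline} ms")
for i, t in outliers:
    print(f"call {i}: {t} ms -- worth a closer look")
```

Nothing fancy; the point is the habit of noticing something odd and probing it right away, not the script itself.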

Now it’s possible that none of that stuff ever happens in your organization. Or maybe people aren’t paying attention or don’t know how to observe testing. Or both.

Then, just before I posted this blog entry, James Bach offered me two more sure-fire clues that people are doing exploratory testing: they say, “I am in no way doing exploratory testing”, or “we’re doing only highly rigorous formal testing”. In both cases, the emphatic nature of the claim guarantees that the claimant is not sufficiently observant about testing to realize that exploratory testing is happening all around them.

Update, October 12, 2015: In fact, in the Rapid Software Testing namespace, we now maintain it’s redundant to say “exploratory testing”, in the same way it’s redundant to say “carbon-based human” or “vegetarian potato”. It is formal scripting—not exploration—that is the interloper on testing’s territory. We explain that here.

Why Do Some Testers Find The Critical Problems?

Saturday, February 5th, 2011

Today, someone on Twitter pointed to an interesting blog post by Alan Page of Microsoft. He says:

“How do testers determine if a bug is a bug anyone would care about vs. a bug that directly impacts quality (or the customers perception of quality)? (or something in between?) Of course, testers should report anything that may annoy a user, but learning to differentiate between an ‘it could be better’ bug and a ‘oh-my-gosh-fix-this’ bug is a skill that some testers seem to learn slowly. … “So what is it that makes some testers zero in on critical issues, while others get lost in the weeds?”

I believe I have some answers to this. My answers are based on roughly 20 years of observation and experience in consulting, training, and working with other testers. The forms of interaction have included in-class training; online coaching via video, voice, and text; face-to-face conversation in workplaces, conferences, and workshops; direct collaboration with other working testers in mass-market commercial software, financial services, retail services, specialized mathematical applications, and several other domains.

My first answer is that testing, for a long time and in many places, has been myopically focused on functional correctness, rather than on value to people. Cem Kaner discusses this issue in his talk Software Testing as a Social Science, and later variations on it. This problem in testing is a subset of a larger problem in computer science and software engineering. Introductory texts often observe that a computer program is “a set of instructions for a computer”. Kaner’s definition of a computer program as “a communication among several humans and computers, distributed over distance and time, that contains instructions that can be executed by a computer” goes some distance towards addressing the problem; his explication that “the point of the program is to provide value to the stakeholders” goes further still. When the definition of programming is reduced to producing “a set of instructions for a computer”, it misses the point—value to people—and when testing is reduced to the checking of those instructions, the “testing” will miss the same point. I’ve suggested in recent talks that testing is “the investigation of systems composed of people, computer programs, related products and services.” Successful testers avoid a fascination with functional correctness, and focus on ways in which people might obtain value from a program—or have that value unfulfilled or threatened.

This first answer gives rise to my second: that when testing is focused on functional correctness, it becomes a confirmatory, verification-oriented task, rather than an exploratory, discovery-oriented set of processes. This is not a new problem. It’s old enough that Glenford Myers tried (more or less unsuccessfully, it seems) to argue against it in The Art of Software Testing in 1979. Myers’ point was that testing should be premised on trying to expose the program’s failures, rather than on trying to confirm that it works. Psychological research before and since Myers’ book (in particular Klayman and Ha’s paper on confirmation bias) shows that the positive test heuristic biases people towards choosing tests that demonstrate fit with a working hypothesis (showing THAT it works), rather than tests that drive towards final rule discovery (showing how it works, and more important, how it might fail). Worse yet, I’ve heard numerous reports of development and test managers urging testers to “make sure the tests pass”. The trouble with passing tests is that they don’t expose threats to value. Every function in the program code might be checked and found correct, but the product might be unusable. As in Alan’s example, the phone might make calls perfectly, but unless we model the way people actually use the product—talking for more than three minutes at a time, say—we will miss important problems. Every function might work perfectly, but we might fail to observe missing functionality. Every function might work perfectly, but we might miss terrible compatibility problems. Functional correctness is a very important thing in computer software, but it’s not the only thing. (See the “Quality Criteria” section of the Heuristic Test Strategy Model for suggestions.) Testers who zero in on critical issues avoid the confirmation trap.
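To make the confirmation trap concrete, here's a tiny invented example (not from Myers, nor from Alan's post) in which the confirmatory checks pass happily while a boundary-probing test exposes a bug:

```python
# A sketch of the confirmation trap; the feature and its bug are invented.
# Suppose the spec says: orders of $100 or more ship free.
def shipping_cost(order_total):
    if order_total > 100:  # bug: should be >= 100
        return 0
    return 8

# Confirmatory checks choose inputs that fit the working hypothesis
# ("make sure the tests pass"), and pass they do:
assert shipping_cost(150) == 0  # well above the threshold: free, as expected
assert shipping_cost(20) == 8   # well below: charged, as expected

# A falsifying test probes where the rule might break: at the boundary.
# It exposes the bug: a $100 order is charged for shipping.
print(shipping_cost(100))  # prints 8; the spec says it should be 0
```

Every confirmatory check passes, yet the product fails its own spec; only the test designed to expose failure reveals the problem.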

My third answer (related to the first two) is that when testing is focused on confirming functional correctness, a lot of other information gets left lying on the table. Testing becomes a search for errors, rather than for issues. That is, testers become oriented towards reporting bugs, and less oriented towards the discovery of issues—things that aren’t bugs, necessarily, but that threaten the value of testing and of the project generally. I’ve written recently about issues here. Successful testers recognize issues that represent obstacles to their missions and strategies, and work around them or seek help.

My fourth answer is that many (in my unscientific sample, most) testers are poorly versed in the skills of test framing. This is understandable, at least in part because test framing itself wasn’t known by that name as recently as a year ago as I write. Test framing is the set of logical connections that structure and inform a test. It involves the capacity to follow and express a line of perhaps informal yet reasonably structured logic that directly links the testing mission to the tests and their results. In my experience, most testers are unable to trace this logical line quickly and expertly. There are many roots for this problem. The earlier answers above provide part of the explanation; the mission of value to the customer is overwhelmed by the mission of proving functional correctness. In situations where the process of test design is separated from test execution (as in environments that take a highly scripted approach to testing), the steps to perform the test and observe the results are typically listed explicitly, but the motivation for performing the test is often left out. In situations where test execution, observation of outcomes, and reporting of test results are heavily delegated to automation, motivation is even further disconnected from the mission. In such environments, focus is directed towards getting the automation to follow a script, rather than using automation to assist in probing for problems. In such environments, focus is often on the quantity of tests or the quantity of bug reports, rather than on the quality, the value, of the information revealed by testing. Testers who find problems successfully can link tests, test activities, and test results to the mission. They’re far more concerned about the quality of the information they provide than the quantity.

My fifth answer is that in many organizations there is insufficient diversity of tester skills, mindsets, and approaches for finding the great diversity of problems that might lurk in the product. This problem starts in various ways. In some organizations, testers are drawn exclusively from the business. In others, testers are required to have programming skills before they can be considered for the job. And then things get left out. Testers who need training or experience in the business domain don’t get it, and are kept separated from the business people (that’s a classic example of an issue). Testers aren’t given training in software design, programming, or related skills. They’re not given training in testing, problem reporting and bug advocacy, or the design of experiments. They’re not given training or education in anthropology, critical thinking, systems thinking, or philosophy and other disciplines that inform excellent testing. Successful testers tend to take on diversified skills, knowledge, and tactics, and when those skills are lacking, they collaborate with people who have them.

Note that I’m not suggesting here that anyone become a Donald Knuth-level programmer, a Pierre Bourdieu-league anthropologist, a Ross Ashby-class systems thinker, a Wittgenstein-grade philosopher. I am suggesting that testers be given sufficient training and opportunity to learn to program to the level of Brian Marick’s Everyday Scripting with Ruby, and that they be given classes, experience, and challenges in observation, the business domain, systems thinking and critical thinking. I am suggesting that people who are testing computer software do need some exposure to core ideas about logic (if we see this, can we justifiably infer that?), about ontology (what are our systems of knowledge about the way things work—especially related to computer programs and to testing), and about epistemology (how do we know what we know?).

I’ve been told by people involved in the design of testing standards that “you can’t expect regular testers to learn epistemology, for goodness’ sake”. Well, I’m saying that we can and that we must at least provide opportunities for learning, to the degree that testers can frame their mission, their ideas about risk, their testing, and their evaluation of the product in the ways that their clients value. Moreover, I’ve worked with testing organizations that have done that, and the results have been impressive. Sometimes I hear people saying “what if we train our testers and they leave?” As one wag on Twitter replied (I wish I knew who), “What if you don’t train them and they stay?”

In our classes, James Bach and I have the experience of inspiring testers to become interested in and excited by these topics. We find that it’s not hard to do that. We remain concerned about the capacity of some organizations to sustain that enthusiasm, often because some middle managers’ misconceptions about the practice and value of testing can squash both enthusiasm and value in a hurry. Testers, to be successful, must be given the freedom and responsibility to explore and to contribute what they’ve learned back to their team and to the rest of the organization.

So, what would we advise?

Read this set of ideas as a system, rather than as a linear list:

  • The purpose of testing is to identify threats to the value of the program. Functional errors are only one kind of threat to the value of the program.
  • Take on expansive ideas about what might constitute—or threaten—the quality of the product.
  • Dynamically manage your focus to exercise the product and test those ideas about value.
  • In hiring, staffing, and training, focus on the mindset and the skill set of the individual tester as a member of a highly diversified team.
  • As an individual tester, develop and diversify your skills and your strategies.
  • Immediately identify and report issues that threaten the value of the testing effort and of the project generally. Solve the ones you can; raise team and management awareness of the costs and risks of issues, in order to get attention and help.
  • Learn to frame your testing and to compose, edit, narrate and justify a compelling testing story.
  • Don’t try to control or restrain testers; grant them the freedom—along with the responsibility—to discover what they will. Given that… they will.

Context-Free Questions for Testing

Wednesday, November 24th, 2010

In Jerry Weinberg and Don Gause’s Exploring Requirements, there’s a set of context-free questions to ask about a product or service. The authors call them context-free questions, but to me, many of them are more like context-revealing questions.

In the Rapid Software Testing class, the participants and the instructors make discoveries courtesy of our exercises and conversations. Here’s a list of questions that come up fairly consistently, or that we try to encourage people to ask. Whether you’re working with something new or re-evaluating your status, you might find these questions helpful to you as you probe the context of the test project, your givens, and your mission.

I leave it as an exercise for the reader to link these questions to specific points in the Heuristic Test Strategy Model and the Satisfice Context Model.

  • Is it okay if I ask you questions?
  • Who is my client?
  • Are you my only client?
  • Who is the customer of the product?
  • Who are the other stakeholders?
  • What is my mission?
  • What else might be part of my mission?
  • What problems are you aware of that would threaten the value of this product or service?
  • Do you want a quick, practical, or deep answer to the mission or question you have in mind?
  • How much time do I have?
  • How long before the next release or deployment?
  • How long before the end of this testing or development cycle?
  • When do you want reports or answers?
  • How do you want me to provide them? How often?
  • When were you thinking of shipping or deploying this product or service?
  • What else do you want me to deliver?
  • How do you want me to deliver it?
  • This thing I’m testing… could I have it myself, please?
  • Is there another one like it?
  • Are there more than that?
  • Is that all there are?
  • How is this one expected to be the same or different from the other ones?
  • Here’s what I believe I see in front of me. What else could it be?
  • Here’s what I’m thinking right now. What else might be true? What if the opposite were true?
  • Could you describe how it works?
  • Could you draw me a diagram of how it works?
  • How would I recognize a problem?
  • I think I’m seeing a problem. Why do I think it’s a problem? For whom might it be a problem?
  • What does this thing depend upon?
  • What tools or materials were used to construct it?
  • Who built this thing?
  • Can I talk to them?
  • Are they easy to talk to? Helpful?
  • Have they ever built anything like this before?
  • Is there anyone that I should actively avoid?
  • Who else knows something about this?
  • Who’s the best person to ask about this?
  • Who are the local experts in this field?
  • Who are the acknowledged experts, even if they don’t work here?
  • Has anyone else tested this?
  • Can I see their results, please?
  • Who else is on my test team?
  • What skills and competencies are expected of me?
  • What other skills and competencies can be found on the test team? Elsewhere?
  • What skills and competencies might we be lacking?
  • What information is available to me?
  • Is there more information available?
  • Where could I find more information? Is that the last source you can think of?
  • In what other forms could I find information?
  • Is that all the information there is? Is there more? Are there more rules? Requirements? Specifications?
  • If information is in some way wanting, what can I do to help you discover or develop the information you need?
  • What equipment and tools are available to help with my testing?
  • What tools would you like me to build? Expect me to build?
  • Is there some data that is being processed by this thing?
  • Can I have some of that data?
  • Can I have a description of the data’s structures?
  • What are your feelings about this thing?
  • Who might feel differently?
  • How might they feel?
  • What do customers say about it?
  • Can I talk to the technical support people?
  • (How do I feel about this thing?)
  • Who can we trust? Is there anyone that we should distrust?
  • Is there anything that you would like to prohibit me explicitly from doing?
  • Are there any other questions I should be asking you?

Statistician or Journalist?

Friday, August 27th, 2010

Eric Jacobson has a problem, which he thoughtfully relates on his thoughtful blog in a post called “How Can I Tell Users What Testers Did?”. In this post, I’ll try to answer his question, so you might want to read his original post for context.

I see something interesting here: Eric tells a clear story to relate to his readers some problem that he’s having with explaining his work to others who, by his account, don’t seem to understand it well. In that story, he mentions some numbers in passing. Yet the numbers that he presents are incidental to the story, not central to it. On the contrary, in fact: when he uses numbers, he’s using them as examples of how poorly numbers tell the kind of story he wants to tell. Yet he tells a fine story, don’t you think?

In the Rapid Software Testing course, we present this idea (Note to Eric: we’ve added this since you took the class): To test is to compose, edit, narrate, and justify two parallel stories. You must tell a story about the product: how it works, how it fails, and how it might not work in ways that matter to your client (and in the context of a retrospective, you might like to talk about how the product was failing and is now working). But in order to give that story its warrant, you must tell another story: you must tell a story about your testing. In a case like Eric’s, that story would take the form of a summary report focused on two things: what you want to convey to your clients, and what they want to know from you (and, ideally, those two things should be in sync with each other).

To do that, you might like to consider various structures to frame your story. Let’s start with the elements of what we (somewhat whimsically) call The Universal Test Procedure (you can find it in the course notes for the class). From a retrospective view, that would include

  • your model of the test space (that is, what was inside and outside the scope of your testing, and in particular the risks that you were trying to address)
  • the oracles that you used
  • the coverage that you obtained
  • the test techniques you applied
  • the ways in which you configured the product
  • the ways in which you operated the product
  • the ways in which you observed the product
  • the ways in which you evaluated the product
  • the heuristics by which you decided to stop testing; and
  • what you discovered and reported, and how you reported it

You might also consider the structures of exploratory testing. Even if your testing isn’t highly exploratory, a lot of the structures have parallels in scripted testing.

Jon Bach says (and I agree) that testing is journalism, so look at the way journalists structure a story: they often start with the classic inverted pyramid lead. They might also start with a compelling anecdote as recounted in What’s Your Story, by Craig Wortmann, or Made to Stick, by Chip and Dan Heath. If you’re in the room with your clients, you can use a whiteboard talk with diagrams, as in Dan Roam’s The Back of the Napkin. At the centre of your story, you could talk about risks that you addressed with your testing; problems that you found and that got addressed; problems that you found and that didn’t get addressed; things that slowed you down as you were testing; effort that you spent in each area; coverage that you obtained. You could provide testimonials from the programmers about the most important problems you found; the assistance that you provided to them to help prevent problems; your contributions to design meetings or bug triage sessions; obstacles that you surmounted; a set of charters that you performed, and the feature areas that they covered. Again, focus on what you want to convey to your clients, and what they want to know from you.

Incidentally, the more often and the more coherently you tell your story, the less explaining you’ll have to do about the general stuff. That means keeping as close to your clients as you can, so that they can observe the story unfolding as it happens. But when you ask “What metric or easily understood information can my test team provide users, to show our contribution to the software we release?”, ask yourself this: “Am I a statistician or a journalist?”

Other resources for telling testing stories:

Thread-Based Test Management: Introducing Thread-Based Test Management, by James Bach; A New Thread, by Jon Bach (as of this writing, this is brand new stuff); and a video.

Constructing the Quality Story (from Better Software, November 2009): Knowledge doesn’t just exist; we build it. Sometimes we disagree on what we’ve got, and sometimes we disagree on how to get it. Hard as it may be to imagine, the experimental approach itself was once controversial. What can we learn from the disputes of the past? How do we manage skepticism and trust and tell the testing story?

On Metrics:

Three Kinds of Measurement (And Two Ways to Use Them) (from Better Software, July 2009): How do we know what’s going on? We measure. Are software development and testing sciences, subject to the same kind of quantitative measurement that we use in physics? If not, what kinds of measurements should we use? How could we think more usefully about measurement to get maximum value with a minimum of fuss? One thing is for sure: we waste time and effort when we try to obtain six-decimal-place answers to whole-number questions. Unquantifiable doesn’t mean unmeasurable. We measure constantly WITHOUT resorting to numbers. Goldilocks did it.

Issues About Metrics About Bugs (Better Software, May 2009): Managers often use metrics to help make decisions about the state of the product or the quality of the work done by the test group. Yet measurements derived from bug counts can be highly misleading because a “bug” isn’t a tangible, countable thing; it’s a label for some aspect of some relationship between some person and some product, and it’s influenced by when and how we count… and by who is doing the counting.

On Coverage:

Got You Covered (from Better Software, October 2008): Excellent testing starts by questioning the mission. So, the first step when we are seeking to evaluate or enhance the quality of our test coverage is to determine for whom we’re determining coverage, and why.

Cover or Discover (from Better Software, November 2008): Excellent testing isn’t just about covering the “map”—it’s also about exploring the territory, which is the process by which we discover things that the map doesn’t cover.

A Map By Any Other Name (from Better Software, December 2008): A mapping illustrates a relationship between two things. In testing, a map might look like a road map, but it might also look like a list, a chart, a table, or a pile of stories. We can use any of these to help us think about test coverage.

How Can a Trainee Improve His/Her Skills?

Thursday, February 12th, 2009

A blogger on TestRepublic asks “How can a trainee improve his/her skill sets in testing?”

This is what I do. I recommend it to all trainees (or “freshers”, as they say in India).

Find something that interests you, or something that would be useful to you or to a client, or something that you must do, or a problem that you need to solve, or something that you think might be fun. Listen, talk, ask questions, read, write, watch, learn, do, practice, teach, study. Solicit feedback. Practice.

Think critically. Monitor your mental and emotional state. Hang around with people who inspire you on some level. Offer help to them, and ask them for help; more often than not, they’ll provide it. Practice.

Think systematically. Seek the avant-garde. Defocus; look elsewhere or do something else for a while.

Practice. Observe the things in your environment; direct your focus to something to which you hadn’t paid attention before. Seek connections with stuff you already know. Look to the traditional. Refocus.

Learn, by practice, to do all of the above at the same time. Tell people about what you’ve discovered, and listen to what they tell you in return. Recognize and embrace how much more you still have to learn. Get used to that; learn to love it. Repeat the cycle continuously.


It’s the same with any skill set. For me, it has worked for testing; it has worked for playing mandolin; it has worked for being a parent—even though there’s a universe of stuff that I still have to learn about all of those things. When I use the approach above, I make progress rapidly. When I don’t, I stall pretty quickly.

My friend and colleague James Bach has a similar approach for living and learning, and he’s written a book about it. It’s called Secrets of a Buccaneer Scholar: How Self-Education and the Pursuit of Passion Can Lead to a Lifetime of Success.

These approaches are at the heart of the Rapid Software Testing mindset. They’re also a big part of what we try to teach people by example and by experience in the Rapid Software Testing course. It may sound as though there are lots of bits and pieces to cover—and there are—but they all fit together, and we give you exercises and practice in them to get you started. And these approaches seem to help people and get them inspired.

At conferences or association meetings, we present some of what we’ve learned in a formal way, but we also get up early in the morning and/or hang out in the pub in the evening, chatting with people, playing games, exchanging puzzles, trading testing stories. When we’re on the road, we try to contact other people in our network, and hang out with them. We blog, and we read blogs. We read in the forums, we write in the forums. We seek out passionate people from whom we can learn and whom we can teach. We point people to books and resources that we think would assist them in their quests to develop skill, and ask them to do the same for us. As a novice, you can do almost all of this stuff right away, and make goals of whatever is left.

In addition to Rapid Software Testing, one of the places that we regularly point new testers is the Black Box Software Testing course, available free for self-study, or in an instructor-led version from the Association for Software Testing. That course, co-authored by Cem Kaner and James Bach, and increasingly refined by collaboration among authors, instructors, and students, will give you lots of excellent knowledge, techniques, and exercises.

The skill part—that comes with practice, and that’s up to you.