Blog Posts for the ‘Certification’ Category

Very Short Blog Posts (23) – No Certification? No Problem!

Wednesday, January 28th, 2015

Another testing meetup, and another remark from a tester that hiring managers and recruiters won’t call her for an interview unless she has an ISEB or ISTQB certification. “They filter résumés based on whether you have the certification!” Actually, people probably go to even less effort than that; more likely, they get a machine to search for a string of characters. So if you’re looking for a testing job, you don’t have a certification, and you have no interest in paying an employment tax to the rent-seekers, here’s one way to get around the filter. At the bottom of your CV, add this sentence:

I do not have an ISEB or ISTQB certification, and I would be pleased to explain why.

An automated filter will put your résumé on the “read it” pile. Then the sentence should attract the attention of anyone who bothers to read it and who is genuinely interested in hiring a tester, at which point they’ll start looking at your real qualifications. And if they don’t contact you, you probably don’t want to work with them anyway.
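
To see just how crude such filters can be, here’s a minimal sketch, in Python, of the kind of string matching described above. The keyword list and résumé texts are hypothetical; the point is that a substring match can’t tell a disclaimer from a qualification:

```python
# A naive keyword filter of the sort described above (hypothetical).
KEYWORDS = ["ISEB", "ISTQB"]

def passes_filter(resume_text: str) -> bool:
    """Pass any resume that mentions a keyword, in any context."""
    return any(keyword in resume_text for keyword in KEYWORDS)

with_disclaimer = ("I do not have an ISEB or ISTQB certification, "
                   "and I would be pleased to explain why.")
no_mention = "Ten years of exploratory, risk-based testing on medical devices."

print(passes_filter(with_disclaimer))  # True: lands on the "read it" pile
print(passes_filter(no_mention))       # False: filtered out, skills and all
```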

A Response to Anne Mette Hass

Saturday, September 20th, 2014

In response to my earlier blog post, I received this comment from Anne Mette Hass. I’m going to reproduce it in its entirety, and then I’ll respond to each point.

I think this ‘war’ against the ISO standard is so sad. Nobody has set out to or want to harm anybody else with this work. The more variation we have in viewpoints and the more civilized be can debate, the wiser we get as times go by.

There is no way the standard ever wanted to become or ever will become a ‘kalifat’.

So why are you ‘wasting’ so much time and energy on this? What are you afraid of?

Best regards,
Anne Mette

PS. I’m likely not to answer if you answer me – I have better things to do.

And now my response:

Anne Mette,

It may surprise you to hear that I agree with many of the conclusions that you’ve given here. The trouble is that I don’t agree with your premises.

I think this ‘war’ against the ISO standard is so sad.

I agree that it’s sad.

It’s sad that a small group of people and/or organizations have decided unilaterally to proclaim an “internationally-recognised and agreed standard”, hiding behind ISO processes and implicitly claiming it to be based on consensus of the affected stakeholders, when it is manifestly not.

It’s sad that the working group has proceeded to declare a “standard” when the convenor of the working group has admitted that it has no evidence of efficacy. Those who claim to be expert testers would raise the alarm about an inefficacious product, would gather evidence, and would investigate. The “standard” has not been tested with actual application in the field.

It’s sad when the convenor of the working group treats “craftsmen” as a word with a negative connotation.

It’s sad that those people producing the “standard” have so stubbornly and aggressively ignored the breadth of ideas in the craft of testing, preferring to adopt a simplistic and shallow syllabus that was developed through a similar process. (“The ISO 29119 is based on the ISTQB syllabi, and, as far as I understand, it is the intention that the ISO 29119 testing process and testing documentation definitions will be adopted by ISTQB over time.” —Anne Mette Hass, https://www.facebook.com/permalink.php?story_fbid=10152731672549009&id=144926394008)

It’s sad that members of the Working Group would issue blatantly contradictory and false statements about their intentions (“There is also no link between the ISO/IEC/IEEE Testing Standards and the ISTQB tester certification scheme.” —Stuart Reid, http://softwaretestingstandard.org/29119petitionresponse.php).

It is sad that ISO’s reputation stands a chance of being tarnished by this debacle. It’s not that the opponents of 29119 are opposed to standards. There is a place for standards in physical things that need to be interoperable. There is a place for standardization in communication protocols. There is no place for standardization of testing when the business of technology development requires variation and deviation from the norm.

War is a predictable response when people attempt to game political systems and invade territories. Wars happen when one group of people imposes its beliefs and its way of life over another group of people. War is what happens when politics fails. And war is always sad.

Nobody has set out to or want to harm anybody else with this work.

I’m aware of several people who worked on the ISTQB certification scheme. They entered into that with good will and the best of intentions, hoping to contribute new ideas, alternative views, and helpful critique. They have reported, both publicly and privately, that their contributions were routinely rejected or ignored until eventually they gave up in frustration. This was a pattern that carried over from the Software Engineering Body of Knowledge, the CMM, IEEE 829 and other standards.

Some people have asked “If you don’t like the way the standard turned out, why didn’t you get involved with the development of the standard?” This is the equivalent of saying “If you don’t like where the hijacked plane landed, why didn’t you put on a mask and join us when we stormed the cockpit?”

Even for people of good will who stuck with the effort, the road to hell is paved with good intentions. Whatever your motives, it is important to consider reasonably foreseeable consequences of your actions. On Twitter, Laurent Bossavit has said “ISO 29119 may be software testing’s Laetrile. Never proven, toxic side effects, sold as ‘better than nothing’ to the desperate and unwary. And I do mean ‘sold’, at $200 each chapter. That’s de rigueur when selling miracle cures. (More on Laetrile: http://t.co/R6J9V6OMIs)” Similarly, I’m sure that Jenny McCarthy meant to harm no one with her ill-informed and incorrect claims that vaccinations caused autism. But Jenny McCarthy is neither doctor, nor scientist, nor tester.

Here’s what we’ve seen from the community’s experience with the ISTQB: whatever good intentions anyone might have had, a lot of money has been transferred from organizations and individuals into the pockets of people who have commercialized the certification. Testers have been not only threatened with unemployment, but prevented from working unless or until they have become certified. And ISO 29119 plows the soil for another crop of certification schemes.

The more variation we have in viewpoints and the more civilized be can debate, the wiser we get as times go by.

I agree with that too. I’m all for variation in viewpoints. The trouble is, by definition, the purpose of a standard is to suppress variation. That’s what makes it “standard”. I, for one, would enthusiastically join a civilized debate (please inform me of any point at which you believe my discourse in this matter has been uncivilized), but it appears that you are dismissing the idea out of hand: I refer readers to your postscript.

There is no way the standard ever wanted to become or ever will become a ‘kalifat’.

I agree there too. The “standard” doesn’t want anything. My concern is about the people who develop and promote the standard, and about what they want.

So why are you ‘wasting’ so much time and energy on this? What are you afraid of?

I don’t think that anyone who is opposing the “standard” is wasting time at all. I’m a tester. It’s my job and my passion. When someone is attempting to claim a “standard” approach to my craft, I’m disposed to investigate the claim. Notice that the opponents of the “standard” are the ones doing vigorous investigative work and looking for bugs; the development team is showing no sign of doing it. When you ask “why are you wasting so much time and energy on this?” it reminds me of a developer who doesn’t believe that his product should be tested.

I’m not worried about the “standard” becoming a caliphate; I’m concerned about it becoming anything more than a momentary distraction. And I’m not afraid; I’m anticipating and I’m protesting. Specifically, I’m anticipating and protesting

  • another round of dealing with uninformed managers and human resource people requiring candidates to have experience in or credentials for yet another superficial body of “knowledge”;

  • another round of bogus certification schemes that pick the pockets of naïve or vulnerable testers, especially in developing countries;

  • another several years of having to persuade innocent managers that intellectual work cannot and should not be “standardised”, turned into bureaucracy and paperwork;

  • another several years of explaining that, despite what some “standard” says, a linear process model for testing (even one that weasels out and says that some iteration may occur) is deeply flawed and farcical;

  • the gradual drift of the “voluntary” “standard” into mandatory compliance, as noted by the National Institute of Standards and Technology (the second-to-last paragraph here) and as helpfully offered by ISO;

  • waste associated with having to decide whether to follow given points in the standard or reject them. (Colleagues who have counted report that there are over 100 points at which a “standard-compliant” organization must identify its intention to follow or deviate from the “standard”. That’s overhead and extra effort for any organization that wants simply to do a good job of testing on its own terms.)

  • goal displacement as organizations orient themselves towards complying with the letter of the standard rather than, say, testing to help make sure that their products don’t fail, or harm people, or kill people.

Best regards,
Anne Mette

PS. I’m likely not to answer if you answer me – I have better things to do.

Since Anne Mette evinces no intention of responding, I will now address the wider community.

There’s an example of a response from the 29119 crowd, folks. This one makes no attempt to address any of the points raised in my post, presents not a single reasoned argument, and offers no supporting evidence. Mind, you don’t need supporting evidence when you don’t present an argument. But at least we get a haughty dismissal from someone who has “better things to do” than to defend the quality of the work.

Rising Against the Rent-Seekers

Monday, August 25th, 2014

At CAST 2014, a quiet, modest, thoughtful, and very experienced man named James Christie gave a talk called “Standards: Promoting Quality or Restricting Competition?”. The talk followed on from his tutorial at EuroSTAR 2013 on working with auditors—James is a former auditor himself—and from his blogs on software standards over the years.

James’ talk introduced to our community the term rent-seeking. Rent-seeking is the act of using political means—the exercise of power—to obtain wealth without creating wealth; see http://www.econlib.org/library/Enc/RentSeeking.html and http://en.wikipedia.org/wiki/Rent-seeking. One form of rent-seeking is using regulations or standards in order to create or manipulate a market for consulting, training, and certification.

James’ CAST presentation galvanized several people in attendance to respond to ISO Standard 29119, the most recent rent-seeking scheme by a very persistent group of certificationists and standards promoters. Since the ISO standard on standards requires—at least in theory—consensus from industry experts, some people proposed a petition to demonstrate opposition and the absence of consensus amongst skilled testers. I have signed this petition, and I urge you to read it, and, if you agree, to sign it too.

Subsequently, a publication named Professional Tester published—under an anonymous byline—a post about the petition, with the provocative title “Book burners threaten (old) new testing standard”. Presumably such (literally) inflammatory language was meant as clickbait. Ordinarily such things would do little to foster thoughtful discussion about the issues, but it prompted some quite thoughtful reactions. Here’s one example; here’s another. Meanwhile, if the author wishes to characterize me as a book burner, here are (selected) contents of my library relevant to software testing. Even the lamest testing books (and some are mighty lame) have yet to be incinerated.

In the body text, the anonymous author mischaracterises the petition and its proponents, of which I am one. “Their objection,” (s)he says, “is that not everyone will agree with what the standard says: on that criterion nothing would ever be published.” I might not agree with what the standard says, but that’s mostly a side issue for the purposes of this post. I disagree with what the authors of the standard attempt to do with it.

1) To prescribe expensive, time-consuming, and wasteful focus on bloated process models and excessive documentation. My concern here is that organizations and institutions will engage in goal displacement: expending money, time and resources on demonstrating compliance with the standard, rather than on actually testing their products and services. Any kind of work presents opportunity cost; when you’re doing something, most of the time it prevents you from doing something else. Every minute that a tester spends on wasteful documentation is a minute that the tester cannot fulfill the overarching mission of testing: learning about the product, with an emphasis on discovering important problems that threaten value or safety, so that our clients can make informed decisions about problems and risks.

I am not objecting here to documentation, as the calumny from Professional Tester suggests. I am objecting to excessive and wasteful documentation. Ironically, the standard itself provides an example: the current version of ISO 29119-1 runs to 64 pages; 29119-2 has 68 pages; and 29119-3 has 138 pages. If those pages follow the pattern of earlier drafts, or of most other ISO documents, you have a long, pointless, and sleep-inducing read ahead of you. Want a summary model of the testing process? Try this example of what the rent-seekers propose as their model of testing work. Note the model’s similarity to that of an (overly complex and poorly architected) computer program.

2) To set up an unnecessary market for training, certification, and consultancy in interpreting and applying the standard. The primary tactic here is to instill the fear of being de-certified. We’ve been here before, as shown in this post from Tom DeMarco (date uncertain, but it seems to have been written prior to 2000).

Rent-seeking is of the essence, and we’ve been here before in another sense: this was one of the key goals of the promulgators of the ISEB and ISTQB. In the image, they’ve saved the best for last.

The well-informed reader will note that the list of organizations behind those schemes and the members of the ISO 29119 international working group look strikingly similar.

If the working group happens to produce a massive and opaque set of documents, and you’re in an environment that claims conformance to the 29119 standards, and you want to get some actual testing work done, you’ll probably find it helpful to hire a consultant to help you understand them, or to help defend you from charges that you were not following the standard. Maybe you’ll want training and certification in interpreting the standard—services that the authors’ consultancies are primed to offer, with extra credibility because they wrote the standards! Good thing there are no ethical dilemmas around all of this.

3) To use the ISO’s standards development process to help suppress dissent. If you want to be on the international working group, it’s a commitment to six days of non-revenue work, somewhere in the world, twice a year. The ISO/IEC does not pay for travel expenses. Where have international working group meetings been held? According to the http://softwaretestingstandard.org/ Web site, meetings seem to have been held in Seoul, South Korea (2008); Hyderabad, India (2009); Niigata, Japan (2010); Mumbai, India (2011); Seoul, South Korea (2012); and Wellington, New Zealand (2013). Ask yourself these questions:

  • How many independent testers or testing consultants from Europe or North America have that kind of travel budget?

  • What kinds of consultants might be more likely to obtain funding for this kind of travel?

  • Who benefits from the creation of a standard whose opacity demands a consultant to interpret or to certify?

Meanwhile, if you join one of the local working groups, there are two ways that the group arrives at consensus.

  • By reaching broad agreement on the content. (Consensus, by the way, does not mean unanimity—that everyone agrees with the content. It would be closer to say that in a consensus-based decision-making process, everyone agrees that they can live with the content.) But, if you can’t get to that, there’s another strategy.

  • By attrition. If your interest is in promulgating an unwieldy and opaque standard, there will probably be objectors. When there are, wait them out until they get frustrated enough to leave the decision-making process. Alan Richardson describes his experience with ISEB in this way.

In light of that, ask yourself these questions:

  • How many independent consultants have the time and energy to attend local working groups, often during otherwise billable hours?

  • What kinds of consultants might be more likely to support attendance at local working groups?

  • Who benefits from the creation of a standard that needs a consultant to interpret or to certify?

4) To undermine the role of skill in testing, and the reputations of people who discuss and promote it. “The real reason the book burners want to suppress it is that they don’t want there to be any standards at all,” says the polemicist from Professional Tester. I do want there to be standards for widgets and for communication protocols, but not for complex, cognitive, context-sensitive intellectual work. There should be standards for designed things that are intended to work together, but I’m not at all sure there should be mandated standards for how to do design. S/he goes on: “Effective, generic, documented systematic testing processes and methods impact their ability to depict testing as a mystic art and themselves as its gurus.” Far from treating testing as a mystic art, appealing to things like “intuition” and “experience-based techniques”, my community has been trying to get to the heart of testing skills, flexible and responsive coverage reporting, tacit and explicit knowledge, and the premises of the way we do testing. I’ve seen no such effort to dig deeper into these subjects—and to demystify them—from the rent-seekers.

Unlike the anonymous author at Professional Tester, I am willing to stand behind my work, my opinions, and my reputation by signing my name and encouraging comments. Feel free.

—Michael B.

What Do You Mean By “Arguing Over Semantics”?

Wednesday, April 3rd, 2013

Commenting on testing and checking, one correspondent responds:

“To be honest, I don’t care what these types of verification are called be it automated checking or manual testing or ministry of John Cleese walks. What I would like to see is investment and respect being paid to testing as a profession rather than arguing with ourselves over semantics.”

My very first job in software development was as a database programmer at a personnel agency. Many times, when I wrote a bit of new code, I got a reality check: the computer always did exactly what I said, and not necessarily what I meant. The difference was something that I experienced as a bug. Sometimes the things that I told the computer were consistent with what I meant to tell it, but the way I understood something and the way my clients understood something were different. In that case, the difference was something that my clients experienced as a bug, even though I didn’t, at first. The issue was usually that my clients and I didn’t agree on what we said or what we meant. That wasn’t out of ignorance or ill will. The problem was often that my clients and I had shallow agreement on a concept. A big part of the job was refining our words for things—and when we did that, we often found that the conversation refined our ideas about things too. Those revelations (Eric Evans calls them “knowledge crunching”) are part of the process of software development.
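
Here’s a hypothetical sketch, in Python, of what shallow agreement can look like in code. Suppose an agent asks for candidates who are “available this week”; the programmer and the client can each hold a perfectly reasonable, and different, interpretation, and neither reading is wrong until the two collide:

```python
import datetime

# Two defensible readings of "available this week" (both hypothetical).

def available_this_week_rolling(available_on: datetime.date,
                                today: datetime.date) -> bool:
    """The programmer's reading: the next seven days, starting today."""
    return today <= available_on <= today + datetime.timedelta(days=7)

def available_this_week_business(available_on: datetime.date,
                                 today: datetime.date) -> bool:
    """The client's reading: the rest of the current Monday-to-Friday week."""
    end_of_week = today + datetime.timedelta(days=4 - today.weekday())
    return today <= available_on <= end_of_week

today = datetime.date(2024, 5, 8)        # a Wednesday
candidate = datetime.date(2024, 5, 13)   # the following Monday
print(available_this_week_rolling(candidate, today))   # True
print(available_this_week_business(candidate, today))  # False: a "bug"
```

Both functions are internally correct; the bug lives in the unexamined agreement between them.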

As the only person on my development team, I was also responsible for preparing end-user documentation for the program. My spelling and grammar could be impeccable, and spelling and grammar checkers could check my words for syntactic correctness. When my description of how to use the program was vague, inaccurate, or imprecise, the agents who used the application would get confused, or would make mistakes, or would miss out on something important. There was a real risk that the company’s clients wouldn’t get the candidates they wanted, or that some qualified person wouldn’t get a shot at a job. Being unclear had real consequences for real people.

A few years later, my friend Dan Spear—at the time, Quarterdeck’s chief scientist, and formerly the principal programmer of QEMM-386—accepted my request for some lessons in assembly language programming. He began the first lesson while we were both sitting back from the keyboard. “Programming a computer,” he began, “is the most humbling thing that you can do. The computer is like a mirror. It does exactly what you tell it to do, and in doing that, it reflects any sloppiness in your thinking or in your way of expressing yourself.”

I was a program manager (a technical position) for the company for four years. Towards the end of my tenure, we began working on an antivirus product. One of the product managers (“product manager” was a marketing position) wanted to put a badge on the retail box: “24 hour support response time!” In a team meeting, we technical people made it clear that we didn’t provide 24-hour monitoring of our support channels. The company’s senior management clearly had no intention of staffing or funding 24-hour support, either. We were in Los Angeles, and the product was developed in Israel. It took development time—sometimes hours, but sometimes days—to analyse a virus and figure out ways to detect and to eradicate it. Nonetheless, the marketing guy (let’s call him Mark) continued to insist that that’s what he wanted to put on the box. One of the programming liaisons (let’s call him Paul) spoke first:

Paul: “I doubt that some of the problems we’re going to see can be turned around in 24 hours. Polymorphic viruses can be tricky to identify and pin down. So what do you mean by 24-hour response time?”

Mark: “Well, we’ll respond within 24 hours.”

Paul: “With a fix?”

Mark: “Not necessarily, but with a response.”

Paul: “With a promise of a fix? A schedule for a fix?”

Mark: “Not necessarily, but we will respond.”

Paul: “What does it mean to respond?”

Mark: “When someone calls in, we’ll answer the phone.”

Sam (a support person): “We don’t have people here on the weekends.”

Mark: “Well, 24 hours during the week.”

Sam: “We don’t have people here before 7:00am, or after 5:00pm.”

Mark: “Well… we’ll put someone to check voicemail as soon as they get in… and, on the weekends… I don’t know… maybe we can get someone assigned to check voicemail on the weekend too, and they can… maybe, uh… send an email to Israel. And then they can turn it around.”

At this point, as the program manager for the product, I’d had enough. I took a deep breath, and said, “Mark, if you put ’24-hour response time’ on the box, I will guarantee that that will mislead some people. And if we mislead people to take advantage of them, knowing that we’re doing it, we’re lying. And if they give us money because of a lie we’re telling, we’re also stealing. I don’t think our CEO wants to be the CEO of a lying, stealing company.”

There’s a common thread that runs through these stories: they’re about what we say, about what we mean, and about whether we say what we mean and mean what we say. That’s semantics: the relationships between words and meaning. Those relationships are central to testing work.

If you feel yourself tempted to object to something by saying “We’re arguing about semantics,” try a macro expansion: “We’re arguing about what we mean by the words we’re choosing,” which can then be shortened to “We’re arguing about what we mean.” If we can’t settle on the premises of a conversation, we’re going to have an awfully hard time agreeing on conclusions.

I’ll have more to say on this tomorrow.

Smoke Testing vs. Sanity Testing: What You Really Need to Know

Tuesday, November 8th, 2011

If you spend any time in forums in which new testers can be found, it won’t be long before someone asks “What is the difference between smoke testing and sanity testing?”

“What is the difference between smoke testing and sanity testing?” is a unicorn question. That is, it’s a question that shouldn’t be answered except perhaps by questioning the question: Why does it matter to you? Who’s asking you? What would you do if I gave you an answer? Why should you trust my answer, rather than someone else’s? Have you looked it up on Google? What happens if people on Google disagree?

But if you persist and continue to ask me, here’s what I will tell you:

The distinction between smoke and sanity testing is not generally important. In fact, it’s one of the most trivial aspects of testing that I can think of, offhand. Yet it does point to something that is important.

Both smoke testing and sanity testing refer to a first-pass, shallow form of testing intended to establish whether a product or system can perform the most basic functions. Some people call such testing “smoke testing”; others call it “sanity testing”. “Smoke testing” derives from the hardware world; if you create an electronic circuit, power it up, and smoke comes out somewhere, the smoke test has failed. Sanity testing has no particular derivation that I’m aware of, other than the common dictionary definition of the word “sanity”. Does the product behave in some crazy fashion? If so, it has failed the sanity test.

Do you see the similarity between these two forms of testing? Can you make a meaningful distinction between them? Maybe someone can. If so, let them make it. If you’re talking to some person, and that person wants to make a big deal about the distinction, go with it. Some organizations make a distinction between smoke and sanity testing; some don’t. If it seems important in your workplace, then ask in your workplace, and adapt your thinking accordingly while you’re there. If it’s important that you provide a “correct” answer on someone’s idiotic certification exam, give them the answer they want according to their “body of knowledge”. Otherwise, it’s not important. Don’t worry about it.

Here’s what is important: wherever you find yourself in your testing career, people will use language that has evolved as part of the culture of that organization. Some consultancies or certification mills or standards bodies claim the goal of providing “a common worldwide standard language for testing”. This is as fruitless and as pointless a goal as a common worldwide standard language for humanity. Throughout all of human history, people have developed different languages to address things that were important in their cultures and societies and environments. Those languages continue to develop as change happens. This is not a bad thing. This is a good thing.

There is no term in testing of which I am aware whose meaning is universally understood and accepted. There’s nothing either wrong or unusual about that. It’s largely true outside the testing world too. Pick an English word at random, and odds are you’ll find multiple meanings for it. Examples:

  • Pick (choose, plectrum for a guitar)
  • English (a language, spin on a billiard ball)
  • word (a unit of speech, a 32-bit value)
  • random (without a definite path, of equal probability)
  • odds (probability, numbers not divisible by two)
  • multiple (more than one, divisible by)
  • meaning (interpretation, significance)

Never mind the shades and nuances of interpretation within each meaning of each word! And notice that “never mind”, in this context, is being used ironically. Here, “never mind” doesn’t mean “forget” or “ignore”; here, it really means the opposite: “also pay attention to”!

Not only is there no universally accepted term for anything, there’s no universally accepted authority that could authoritatively declare or enforce a given meaning for all time. (Some might point to law, claiming that there are specific terms which have solid interpretations. If that were true, we wouldn’t need courts or lawyers.)

If you find yourself in conversation (or in an interview) with someone who asks you “Do you do X?”, and you’re not sure what X is by their definition, a smart and pragmatic reply starts with, “I may do X, but not necessarily by that name.” After that,

  • You can offer to describe your notion of X (if you have one).
  • You can describe something that you do that could be interpreted as X. That can be risky, so offer this too: “Since I don’t know what you mean by X, here’s something that I do. I think it sounds similar to X, or could be interpreted as X. But I’d like to make sure that we both recognize that we could have different interpretations of what X means.”
  • You can say, “I’d like to avoid the possibility that we might be talking at cross-purposes. If you can describe what X means to you, I can tell you about my experiences doing similar things, if I’ve done them. What does X mean to you?” Upon hearing their definition of X, then truthfully describe your experience, or say that you haven’t done it.

If you searched online for an answer to the smoke vs. sanity question, you’d find dozens, hundreds of answers from dozens, hundreds of people. (Ironically, the very post that introduces the notion of the unicorn question includes, in the second-to-last paragraph, a description of a smoke test. Or a sanity test. Whatever.) The people who answer the smoke vs. sanity question don’t agree, and neither do their answers. Yet many, even most, of the people will seem very sure of their own answers. People will have their own firm ideas about how many angels can fit on the head of a pin, too. However, there is no “correct” definition for either term outside of a specific context, since there is no authority that is universally accepted. If someone claimed to be a universally accepted authority, I’d reject the claim, which would put an instant end to the claim of universal acceptance.

With the possible exception of the skill of memorization, there is no testing skill involved in memorizing someone’s term for something. Terms and their meanings are slippery, indistinct, controversial, and context-dependent. The real testing skill is in learning to deal with the risk of ambiguity and miscommunication, and the power of expressing ourselves in many ways.

xMMwhy

Friday, October 28th, 2011

Several years ago, I worked for a few weeks as a tester on a big retail project. The project was spectacularly mismanaged, already a year behind schedule by the time I arrived. Just before I left, the oft-revised target date slipped by another three months. Three months later, the project was deployed, then pulled out of production for another six months to be fixed. Project managers and a CIO, among many others, lost their jobs. The company pinned an eight-figure loss on the project.

The software infrastructure was supplied by a big database company, and the software to glue everything together was supplied by a development organization in another country. That software was an embarrassment—bloated, incoherent, hard to use, and buggy. Fixes were rarely complete and often introduced new bugs. At one point during my short tenure, all effective work stopped for five days because the development organization’s servers crashed and no backups were available. All this despite the fact that the software development company claimed CMMI Level 5.

This morning, I was greeted by a Tweet that said

“Deloittes show how a level 5 CMMi company has bad test process at #TMMi conf in Korea! So CMMi needs TMMi – good.”

The TMMi is the Test Maturity Model Integration. Here’s what the TMMi Foundation says about it:

“The Test Maturity Model Integration has been developed to complement the existing CMMI framework. It provides a structured presentation of maturity levels, allowing for standard TMMi assessments and certification, enabling a consistent deployment of the standards and the collection of industry metrics.”

Here’s what the SEI—the CMMi’s co-ordinator and sponsor—says about it:

“CMMI (Capability Maturity Model Integration) is a process improvement approach that provides organizations with the essential elements of effective processes, which will improve their performance. CMMI-based process improvement includes identifying your organization’s process strengths and weaknesses and making process changes to turn weaknesses into strengths.”

What conclusions could we draw from these three statements?

If a company has achieved CMMI Level 5, yet has a bad test process, then there’s a logical problem here. Either testing isn’t an essential element of effective processes (in which case the TMMi should be unnecessary) or it is (in which case the SEI’s claim of providing the essential processes is unsupportable).

One clear solution to the problem would be to adjudicate all this by way of a Maturity Model Maturity Model (Integrated), the MMMMI, whereby your organization can determine (in a mature fashion, of course) what essential processes are in the first place. Mind you, that could be flawed too. You’d need a set of essential processes to determine how to determine essential processes, so you’ll also need a Maturity Model Maturity Model Maturity Model (Integrated), an MMMMMMI. And in fairly short order, your organization will disappear up its own ass.
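
The regress is easy enough to render in code. Here’s a throwaway sketch in Python, purely illustrative, of what “determining how to determine essential processes” looks like when you actually try to run it:

```python
# Hypothetical: each maturity model needs a further maturity model
# to establish what counts as an essential process.

def determine_essential_processes(model: str) -> str:
    # To apply this model, first determine the essential processes
    # for determining essential processes...
    return determine_essential_processes("M" + model)

try:
    determine_essential_processes("MMI")
except RecursionError:
    print("the organization has disappeared up its own ass")
```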

Jerry Weinberg points in a different direction, using very strong language. This is from Quality Software Management, Volume 1: Systems Thinking, p. 21:

“…cultural patterns are not more or less mature, they are just more or less fitting. Of course, some people have an emotional need for perfection, and they will impose this emotional need on everything they do. Their comparisons have nothing to do with the organization’s problems, but with their own.

“The quest for unjustified perfection is not mature, but infantile.

“Hitler was quite clear on who was the ‘master race’. His definition of Aryan race was supposed to represent the mature end product of all human history, and that allowed Hitler and the Nazis to justify atrocities on “less mature” cultures such as Gypsies, Catholics, Jews, Poles, Czechs, and anyone else who got in their way. Many would-be reformers of software engineering require their ‘targets’ to confess to their previous inferiority. These little Hitlers have not been very successful.

“Very few healthy people will make such a confession voluntarily, and even concentration camps didn’t cause many people to change their minds. This is not ‘just a matter of words’. Words are essential to any change project because they give us models of the world as it was and as we hope it to be. So if your goal is changing an organization, start by dropping the comparisons such as those implied in the loaded term ‘maturity.'”

It’s time for us, the worldwide testing community, to urge Deloitte, the SEI, the TMMi Foundation, and the unfortunate testers in Korea who are presently being exposed to this nonsense to recognize what many of us have known for years: maturity models have it backwards.

Common Languages Ain’t So Common

Tuesday, June 28th, 2011

A friend told me about a payment system he worked on once. In the system models (and in the source code), the person sending notification of a pending payment was the payer. The person who got that notice was called the payee. That person could designate someone else—the recipient—to pick up the money. The transfer agent would credit the account of the recipient, and debit the account of the person who sent notification—the payer, who at that point in the model suddenly became known as the sender. So, to make that clear: the payer sends email to the payee, who receives it. The sender pays money to the recipient (who accepts the payment). Got that clear? It turns out there was a logical, historical reason for all this. Everything seemed okay at the beginning of the project; there was one entity named “payer” and another named “payee”. Payer A and Payee B exchanged both email and money, until someone realized that B might give someone else, C, the right to pick up the money. Needing another word for C, the development group settled on “recipient”, and then added “sender” to the model for symmetry, even though there was no real way for A to split into two roles as B had. Uh, so far.
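
Here’s a rough sketch, in Python, of how such a model might look in code. This is my reconstruction from the story, not the actual system; the strain on the vocabulary shows up as soon as you try to name the parameters:

```python
from dataclasses import dataclass

# Hypothetical reconstruction of the model described above. Party A
# appears under two names (payer, sender); the "payee" may not be the
# person who actually receives the money.

@dataclass
class Party:
    name: str
    balance: int = 0

def notify(payer: Party, payee: Party) -> None:
    """The payer sends notification of a pending payment to the payee."""
    print(f"{payer.name} notifies {payee.name} of a pending payment")

def transfer(sender: Party, recipient: Party, amount: int) -> None:
    """The same party, now called the sender, pays the recipient."""
    sender.balance -= amount
    recipient.balance += amount

a, b, c = Party("A"), Party("B"), Party("C")
notify(payer=a, payee=b)                      # A notifies B...
transfer(sender=a, recipient=c, amount=100)   # ...and pays C, whom B designated
```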

There’s a pro-certification argument that keeps coming back to the discussion like raccoons to a garage: the claim that, whatever its flaws, “at least certification training provides us with a common language for testing.” It’s bizarre enough that some people tout this rationalization; it’s even weirder that people accept it without argument. Fortunately, there’s an appropriate and accurate response: No, it doesn’t. The “common language” argument is riddled with problems, several of which on their own would be showstoppers.

  • Which certification training, specifically, gives us a common language for testing? Aren’t there several different certification tribes? Do they all speak the same language? Do they agree, or disagree on the “common language”? What if we believe certification tribes present (at best) a shallow understanding and a shallow description of the ideas that they’re describing?
  • Who is the “us” referred to in the claim? Some might argue that “us” refers to the testing “industry”, but there isn’t one. Testing is practiced in dozens of industries, each with its own contexts, problems, and jargon.
  • Maybe “us” refers to our organization, or our development shop. Yet within our own organization, which testers have attended the training? Of those, has everyone bought into the common language? Have people learned the material for practical purposes, or have they learned it simply to pass the certification exam? Who remembers it after the exam? For how long? Even if they remember it, do they always and ever after use the language that has been taught in the class?
  • While we’re at it, have the programmers attended the classes? The managers? The product owners? Have they bought in too?
  • With that last question still hanging, who within the organization decides how we’ll label things? How does the idea of a universal language for testing fit with the notion of the self-organizing team? Shouldn’t choices about domain-specific terms in domain-specific teams be up to those teams, and specific to those domains?
  • What’s the difference between naming something and knowing something? It’s easy enough to remember a label, but what’s the underlying idea? Terms of art are labels for constructs—categories, concepts, ideas, thought-stuff. What’s in and what’s out with respect to a given category or label? Does a “common language” give us a deep understanding of such things? Please, please have a look at Richard Feynman’s take on differences between naming and knowing, http://www.youtube.com/watch?v=05WS0WN7zMQ.
  • The certification scheme has representatives from over 25 different countries, and must be translated into a roughly equivalent number of languages. Who translates? How good are the translations?
  • What happens when our understanding evolves? Exploratory testing, in some literature, is equated with “ad hoc” testing, or (worse) “error guessing”. In the 1990s, James Bach and Cem Kaner described exploratory testing as “simultaneous test design, test execution, and learning”. In 2006, participants in the Workshop on Heuristic and Exploratory Techniques discussed and elaborated their ideas on exploratory testing. Each contributed a piece to a definition synthesized by Cem Kaner: “Exploratory software testing is a style of software testing that emphasizes the personal freedom and responsibility of the individual tester to continually optimize the value of her work by treating test-related learning, test design, test execution, and test result interpretation as mutually supportive activities that run in parallel throughout the project.” That doesn’t roll off the tongue quite so quickly, but it’s a much more thorough treatment of the idea, identifying exploratory testing as an approach, a way that you do something, rather than something that you do. Exploratory work is going on all the time in any kind of complex cognitive activity, and our understanding of the work and of exploration itself evolves (as we’ve pointed out here, and here, and here, and here, and here). Just as everyday, general-purpose languages adopt new words and ideas, so do the languages that we use in our crafts, in our communities, and with our clients.

In software development, we’re always solving new problems. Those new problems may require people to work with entirely new technological or business domains, or to bridge existing domains with new interactions and new relationships. What happens when people don’t have a common language for testing, or for anything else in that kind of development process? Answer: they work it out. As Peter Galison notes in his work on trading zones, “Cultures in interaction frequently establish contact languages, systems of discourse that can vary from the most function-specific jargons, through semispecific pidgins, to full-fledged creoles rich enough to support activities as complex as poetry and metalinguistic reflection.” Each person in a development group brings elements of his or her culture along for the ride; each project community develops its own culture and its own language.

Yes, we do need common languages for testing (note the plural), but that commonality should be local, not global. Anthropology shows us that meaningful language develops organically when people gather for a common purpose in a particular context. Just as we need testing that is specific to a given context, we need terms that are that way too. So instead of focusing training on memorizing glossary entries, let’s teach testers more about the relationships between words and ideas. Let’s challenge each other to speak and to listen precisely, and to ask better questions about the language we’re using, and to be wary of how words might be fooling us. And let us, like responsible professionals and thinking human beings, develop and refine language as it suits our purposes as we interact with our colleagues—which means rejecting the idea of having canned, context-free, and deeply problematic language imposed upon us.

Follow-up, 2014-08-24: This post has been slightly edited to respond to a troubling fact: the certificationists have transmogrified into the standardisers. Here’s a petition where you can add your voice to stop this egregious rent-seeking.

Gaming the Tests

Monday, September 27th, 2010

Let’s imagine, for a second, that you had a political problem at work. Your CEO has promised his wife that their feckless son Ambrose, having flunked his university entrance exams, will be given a job at your firm this fall. Company policy is strict: in order to prevent charges of nepotism, anyone holding a job must be qualified for it. You know, from having met him at last year’s Christmas party, that Ambrose is (how to put this gently?) a couple of tomatoes short of a thick sauce. Yet the policy is explicit: every candidate must not only pass a multiple choice test, but must get every answer right. The standard number of correct answers required is (let’s say) 40.

So, the boss has a dilemma. He’s not completely out to lunch. He knows that Ambrose is (how can I say this?) not the sharpest razor in the barbershop. Yet the boss adamantly wants his son to get a job with the firm. At the same time, the boss doesn’t want to be seen to be violating his own policy. So he leaves it to you to solve the problem. And if you solve the problem, the boss lets you know subtly that you’ll get a handsome bonus. Equally subtly, he lets you know that if Ambrose doesn’t pass, your career path will be limited.

You ponder for a while, and you realize that, although you have to give Ambrose an exam, you have the authority to set the content and conditions of the exam. This gives you some possibilities.

A. You could give a multiple choice test in which all the answers were right. That way, anyone completing the test would get a perfect score.

B. You could give a multiple choice test for which the answers were easy to guess, but irrelevant to the work Ambrose would be asked to do. For example, you could include questions like, “What is the very bright object in the sky that rises in the morning and sets in the evening?” and provide “The Sun” as one choice of answer, and the names of hockey players for the other choices.

C. You could find out what questions Ambrose might be most likely to answer correctly in the domain of interest, and then craft an exam based on that.

D. You could give a multiple choice test in which, for every question, one of A, B, or C was the correct answer, and answer D was always “One of the above.”

E. You might give a reasonably difficult multiple-choice exam, but when Ambrose got an answer wrong, you could decide that there’s another way to interpret the answer, and quietly mark it right.

F. You might give Ambrose a very long set of multiple-choice questions (say 400 of them), and then, of his answers, pick 40 correct ones. You then present those questions and answers as the completed exam.

G. You could give Ambrose a set of questions, but give him as much time as he wanted to provide an answer. In addition, you don’t watch him carefully (although not watching carefully is a strategy that nicely supports most of these options).

H. You could ask Ambrose one multiple choice question. If he got it wrong, correct him until he gets it right. Then you could develop another question, ask that, and if he gets it wrong, correct him until he gets it right. Then continue in a loop until you get to 40 questions.

I. This approach is like H, but instead you could give a multiple choice test for which you had chosen an entire set of 40 questions in advance. If Ambrose didn’t get them all right, you could correct him, and then give him the same set of questions again. And again. And over and over again, until he finally gets them all right. You don’t have to publicize the failed attempts; only the final, successful one. That might take some time and effort, and Ambrose wouldn’t really be any more capable of anything except answering these specific questions. But, like all the other approaches above, you could effect a perfect score for Ambrose.

When the boss is clamoring for a certain result, you feel under pressure and you’re vulnerable. You wouldn’t advise anyone to do any of the things above, and you wouldn’t do them yourself. Or at least, you wouldn’t do them consciously. You might even do them with the best of intentions.

There’s an obvious parallel here—or maybe not. You may be thinking of the exam in terms of a certain kind of certification scheme that uses only multiple-choice questions, the boss as the hiring manager for a test group, and Ambrose as a hapless tester that everyone wants to put into a job for different reasons, even though no one is particularly thrilled about the idea. Some critical outsider might come along and tell you point-blank that your exam wasn’t going to evaluate Ambrose accurately. Even a sympathetic observer might offer criticism. If that were to happen, you’d want to keep the information under your hat—and quite frankly, the other interested parties would probably be complacent too. Dealing with the critique openly would disturb the idea that everyone can save face by saying that Ambrose passed a test.

Yet that’s not what I had in mind—not specifically, at least. I wanted to point out some examples of bad or misleading testing, which you can find in all kinds of contexts if you put your mind to it. Imagine that the exam is a set of tests—checks, really. The boss is a product owner who wants to get the product released. The boss’ wife is a product marketing manager. Hapless Ambrose is a program—not a very good program to be sure, but one that everyone wants to release for different reasons, even though no one is particularly thrilled by the idea. You, whether a programmer or a tester or a test manager, are responsible for “testing”, but you’re really setting up a set of checks. And you’re under a lot of pressure. How might your judgement—consciously or subconsciously—be compromised? Would your good intentions bend and stretch as you tried to please your stakeholders and preserve your integrity? Would you admit to the boss that your testing was suspect? If you were under enough pressure, would you even notice that your testing was suspect?
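
To make the parallel concrete, here’s a sketch of a few of the exam-gaming strategies above translated into automated checks. These are hypothetical fragments in Python, not from any real suite; buy_widget() stands in for some product behaviour under test:

```python
import random

def buy_widget() -> str:
    """A stand-in for the product: sometimes it works, sometimes not."""
    return random.choice(["ok", "error"])

def check_every_answer_is_right() -> bool:   # strategy A: cannot fail
    return True

def check_easy_but_irrelevant() -> bool:     # strategy B: a real assertion,
    return 1 + 1 == 2                        # irrelevant to the product

def check_reinterpret_failures() -> bool:    # strategy E: a failure gets
    result = buy_widget()                    # quietly marked as a pass
    return result == "ok" or True            # "another way to interpret it"

def check_retry_until_green() -> bool:       # strategy I: repeat until it
    while buy_widget() != "ok":              # passes; publicize only the
        pass                                 # final, successful attempt
    return True

for check in (check_every_answer_is_right, check_easy_but_irrelevant,
              check_reinterpret_failures, check_retry_until_green):
    print(check.__name__, "PASS" if check() else "FAIL")
```

Every one of these comes up green, and not one of them tells you anything about the product.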

So this story is actually about any circumstance in which someone might set up a set of checks that provide some illusion of success. Can you think of any more ways that you might game the tests… or worse, fool yourself?

The Motive for Metaphor

Friday, September 3rd, 2010

There’s a mildly rollicking little discussion going on in the Software Testing Club at the moment, in which Rob Lambert observes, “I’ve seen a couple of conversations recently where people are talking about red, green and yellow box testing.” Rob then asks “There’s the obvious black and white. How many more are there?”

(For what it’s worth, I’ve already made some comments about a related question here.)

At one point a little later in the conversation, Jaffamonkey (I hope that’s a pseudonym) replies,

If applied in modern context Black Box is essentially pure functional testing (or unit testing) whereas White Box testing is more of what testers are required to do, which is more about testing user journeys, and testing workflows, usability etc.

Of course, that’s not what I understand the classical distinction to be.

The classical distinction started with the notion of “black box” testing. You can’t see what’s inside the box, and so you can’t see how it’s working internally. But that may not be so important to you for a particular testing mission; instead, you care about inputs and outputs, and the internal implementation isn’t such a big deal. You’d take this kind of approach when a) you don’t have source code; or b) you’re intentionally seeking problems that you might not notice so quickly by inspection, but that you might notice by empirical experiments and observation; or maybe c) you may believe that the internal implementation is going to be varied or variable, so no point in taking it into account with respect to the current focus of your attention. I’m sure you can come up with more reasons.

This “black box” idea suggests a contrast: “glass box” testing. Since glass is transparent, you can see the inner workings, and the insight into what is happening internally gives you a different perspective for risks and test ideas. This is especially important when a) your mission involves testing what’s happening inside the box (programmers take this perspective more often than not); or b) your overall mission will be simpler, in some dimension, because of your understanding of the internals; or maybe c) you want to learn something about how someone has solved a particular problem. Again, I’m sure you can come up with lots more reasons; these are examples, not definitive lists.
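
Here’s a small, hypothetical illustration of the two perspectives applied to the same function (dedupe() is my invention for the example; nothing here comes from any particular product):

```python
def dedupe(items):
    """Return the items with duplicates removed, preserving order."""
    seen = set()
    return [x for x in items if not (x in seen or seen.add(x))]

# Black-box perspective: reason only from inputs and outputs.
assert dedupe([3, 1, 3, 2, 1]) == [3, 1, 2]
assert dedupe([]) == []

# Glass-box perspective: seeing that the implementation relies on a set
# suggests a risk the interface alone doesn't advertise: unhashable items.
try:
    dedupe([[1], [2], [1]])
except TypeError:
    print("glass-box insight: lists aren't hashable, so this input fails")
```

Neither perspective is the “right” one; each surfaces test ideas that the other tends to miss.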

Unhelpfully (to me), someone somewhere along the way decided that the opposite of “black” must be “white”; that black box testing was the kind where you can’t see inside the box; and that therefore white (rather than glass) box testing must be the name for the other stuff. At this point, the words and the model begin to part company.

Even less helpfully, people stopped thinking in terms of a metaphor and started thinking in terms of labels dissociated from the metaphor. The result is an interpretation like Jaffa’s above, where he (she?) seems to have inverted the earlier interpretations, for reasons I can’t guess. Who knows? Maybe it’s just a typo.

More unhelpfully still (to me), someone has (or several someones have) apparently come along with colour-coding systems for other kinds of testing. Bill Matthews reports that he’s found

Red Box = “Acceptance testing” or “Error message testing” or “networking, peripherals testing and protocol testing”
Yellow Box = “testing warning messages” or “integration testing”
Green Box = “co-existence testing” or “success message testing”

Sources:
http://www.testrepublic.com/forum/topics/define-red-box-testing-yellow
http://www.geekinterview.com/question_details/27985
http://www.allinterview.com/showanswers/7077.html
http://www.coolinterview.com/interview/10080/

For me, there are at least four big problems here. First, there is already disagreement on which colours map to which concepts. Second, there is no compelling reason that I can see to associate a given colour with any of the given ideas. Third, the box metaphor doesn’t have a clear relationship to what’s going on in the mind or the practice of a tester. The colour is an arbitrary label on an unconstrained container. Fourth, since the definitions appear on interview sites and the sites disagree, there’s a risk that some benighted hiring manager will assume that there is only one interpretation, and will deprive himself of an otherwise skilled tester who read a different site. (To defend yourself against this fourth problem, use safety language: “Here’s what I understand by ‘chartreuse-box testing’. This is the interpretation given by this person or group, but I’m aware there may be other interpretations in your context.” For extra points, try saying something like, “Is that consistent with your interpretation? If not, I’d be happy to adopt the term the way you use it around here.” And meaning it. If they refuse to hire you because of that answer, it’s unlikely that working there would have been much fun.)

All of this paintbox of terms is unhelpful (to me) because it means another 30,000 messages on LinkedIn and QAForums, wherein enormous numbers of testers weigh in with their (mis)understandings of some other author’s terms and intentions—and largely with the intention of asking or answering homework questions, so it seems. The next step is that, at some point, some standards-and-certification body will have to come along and lay down the law about what colour testing you would have to do to find out how many angels can dance on the head of a pin, what colour the pin is, and whether the angels are riding unicorns. And then another, competing standards-and-certification body will object, saying that it’s not angels, it’s fairies, and it’s not unicorns, it’s centaurs, and they’re not dancing, they’re doing gymnastics. And don’t get us started on the pin! Courses and certifications on colour-mapping to mythological figures will be available (at a fee) to check (not test!) your ability to memorize a proprietary table of relationships. Meanwhile, most of the people involved in the discussion will have forgotten—in the unlikely event that they ever knew— that the point of the original black-and-glass exercise was to make things more usefully understandable. Verification vs. validation, anyone? One is building the right thing; the other is building the thing right. Now, quick: which is which? Did you have to pause to think about it? And if you find a problem wherein the thing was built wrong, or that the wrong thing was built, does anyone really care whether you were doing validation testing or verification testing at the time?

Well… maybe they do. So, all that said, remember this: no one outside your context can tell you what words you can or can’t use. And remember this too: no one outside your context can tell you what you can or can’t find useful. Some person, somewhere, might find it handy to refer to a certain kind of testing as “sky testing” and another kind of testing as “ground testing”, and still another as “water testing”. (No, I can’t figure it out either.) If people find those labels helpful, there’s nothing to stop them, and more power to them. But if the labels are unhelpful to you and only make your brain hurt, it’s probably not worth a lot of cycles to try to make them fit for you.

So here are some tests that you can apply to a term or metaphor, whether you produced it yourself or someone else did:

  • Is it vivid? That is (for a testing metaphor), does it allow you to see easily in your mind’s eye (hear in your mind’s ear, etc.) something in the realm of common experience but outside the world of testing?
  • Is it clear? That is, does it allow you to make a connection between that external reference and something internal to testing? Do people tend to get it the first time they hear it, or with only a modicum of explanation? Do people retain the connection easily, such that you don’t have to explain it over and over to the same people? Do people in a common context agree easily, without arguments or nit-picking?
  • Is it sticky? Is it easy to remember without having to consult a table, a cheat sheet, or a syllabus? Do people adopt the expression naturally and easily, and do they use it?

If the answer to these questions is Yes across the board, it might be worthwhile to spread the idea. If you’re in doubt, field-test the idea. Ask for (or offer) explanations, and see if understanding is easy to obtain. Meanwhile, if people don’t adopt the idea outside of a particular context, do everyone a favour: ditch it, or ignore it, or keep it within a much closer community.

In his book The Educated Imagination (based on the Massey Lectures, a set of broadcasts he did for the Canadian Broadcasting Corporation in 1963), Northrop Frye said, “Outside literature, the main motive for writing is to describe this world. But literature itself uses language in a way which associates our minds with it. As soon as you use associative language, you begin using figures of speech. If you say, ‘this talk is dry and dull’, you’re using figures associating it with bread and breadknives. There are two main kinds of association, analogy and identity, two things that are like each other and two things that are each other (my emphasis –MB). One produces a figure of speech called the simile. The other produces a figure called metaphor.”

When we’re trying to describe our work in testing, I think most people would agree that we’re outside the world of literature. Yet we learn most easily and most powerfully by association—by relating things that we don’t understand well to things that we understand a little better in some specific dimension. In reporting on our testing, we’re often dealing with things that are new to us, and telling stories to describe them. The same is true in learning about testing. Dealing with the new and telling stories leads us naturally to use associative language. Frye explains why we have to be cautious: “In descriptive writing, you have to be careful of associative language. You’ll find that analogy, or likeness to something else, is very tricky to handle in description, because the differences are as important as the resemblances. As for metaphor, where you’re really saying ‘this is that,’ you’re turning your back on logic and reason completely because logically two things can never be the same thing and still remain two things.”

Having given that caution, Frye goes on to explain why we use metaphor, and does so in a way that I think might be helpful for our work: “The poet, however, uses these two crude, primitive, archaic forms of thought in the most uninhibited way, because his job is not to describe nature but to show you a world completely absorbed and possessed by the human mind…The motive for metaphor, according to Wallace Stevens, is a desire to associate, and finally to identify, the human mind with what goes on outside it, because the only genuine joy you can have is in those rare moments when you feel that although we may know in part, as Paul says, we are also a part of what we know.”

So the final test of a term or a metaphor or a heuristic, for me, is this:

  • Is it useful? That is, does it help you make sense of the world to the degree that you can identify an idea with something deeper and more resonant than a mere label? Does it help you to own your ideas?

Postscript, 2013/12/10: “A study published in January in PLOS ONE examined how reading different metaphors—’crime is a virus’ and ‘crime is a beast’—affected participants’ reasoning when choosing solutions to a city’s crime problem…. (Researcher Paul) Thibodeau recommends giving more thought to the metaphors you use and hear, especially when the stakes are high. ‘Ask in what ways does this metaphor seem apt and in what ways does this metaphor mislead,’ he says. Our decisions may become sounder as a result.” Excerpted from Salon.

World Agile Qualifications Board; God Help Us

Monday, March 30th, 2009

The World Agile Qualifications Board should be seen as an embarrassment even to regular peddlers of certification.

The WAQB has apparently been established recently—but when? By whom? There is a Web site. I hesitate to link to it, because I don’t want people to see it… but on the other hand, I do want people to see it, so that they can observe from the outset how these things work.

Note that there is no information to be found on who the World Agile Qualifications Board is. No human’s name appears on the Web site. Neither the people whom I respect in the field of Agile development (many) nor the people whom I don’t respect (somewhat fewer, but still numerous) appear willing to identify themselves with this shadowy organization, either on the site or in other forums. There is a LinkedIn group; see below.

Go to the site and you’ll see just how comical this gets. Under the link “Find Training and Certification”, those reading as I write (March 29, 2009) will see a graphic that includes the logo for the London Underground (this trademarked logo is doubtless used without the permission of Transport for London) and the words “After London, you deside where to go next” and “Lets hear from you and your team”. Then there’s a list of countries, preceded by the suggestion “Clik on your chose:” (Yes, everything misspelled above is really spelled that way in the original.)

The WAQB is, apparently, offering a course certificate in Agile Testing. The fee for the course is £990.

What is the course about?

The course provides basic knowledge of agile testing. After the course there is the opportunity to sit an examination for the WAQB-T Agile Testing Foundation Certificate. Agile test is gaining recognition as a specialized field. In thedevelopment of systems and software, testing can account for 30-40% of the total cost. It is possible to reduce this cost significantly and still achieve improved quality by adapting agile testing mindset

Who should attend?

This is for developers and tester. We mention developer’s upfront, because this is NOT only for traditional testers The course is for anyone involved in agile development. It is worthwhile for project leaders and developers, who need an introduction to agile testing or who test software.

The text above, errors and all, including the missing periods, is cut and pasted from the original. That is, I’m not making any of this up. Here’s a little more snipped from the site:

Express courses 2 days* : Participants who hold another formal certification like: Scrum Master, ISTQB/ISEB Foundation, PMP, PMI, Prince2 or similar a 2 days intensive courses is available.

The WAQB-T Foundation Certification – WAQB-T Agile Testing Foundation Certificate – statup May 2009

The WAQB-D Foundation Certification – WAQB-D Agile Development Foundation Certificate – statup May 2009

The WAQB-M Foundation Certification – WAQB-M Agile Team Member Foundation Certificate – statup May 2009

As I mentioned above, there is a LinkedIn group. I am the 27th member. No Agilist that I recognize is a member, and very few members of the list identify themselves as members of other Agile-related LinkedIn groups. No one in my survey of the list members (hey, there are only 26 others, so I looked at most of them) appears to make any claims related to the founding of or involvement with the WAQB.

There is a review board, and you too can join!

WAQB will use the techniques from Open Source to ensure that the quality of the syllabus is of high quality.

You can now become member of the Review Board. As a member of this board you will be asked to review topics in the syllabus. But you will also have the opportunity to suggest new topics, or change of topics.

In this way WAQB are sure that we will get a high quality of the syllabus and course material at any time. The fast changing agile world will force us to be up front and to work in new stuff often.

…but not quite up front enough to identify ourselves from the get-go.

In my opinion, all of this shows signs that the WAQB is a scam and a racket. My opinion is that of an experienced tester, a member of several testing communities, and a teacher of and consultant in testing. I consider the WAQB an even worse racket than the usual certification schemes: even more transparent, even more nakedly a way to separate people from their money. Everyone is free to make his or her own decision, but I believe that one would be a fool to have his or her pocket picked by these people (or this person). And you’re not a fool, right?

In fact, you’re an upstanding member of your community, and so you will warn your colleagues and your managers, and everyone else who might innocently or uncritically seek or support certification, agile or otherwise, that isn’t skills-based—the kind of certification that is roundly and rightly dismissed by many thoughtful people, including Elisabeth Hendrickson, James Bach, Tom DeMarco, and the Agile Alliance itself. Oh, and by me.