
Learning from Little Bugs

I was investigating some oddness in Google Search today. Perhaps I’ll write about that later. But for now, here’s something I stumbled upon as I was evaluating one of the search results. Is this a problem? I think the developers of this site mean to say “Free delivery in the GTA for orders over $99”. (The GTA is the Greater Toronto Area.) Next question: is this a big problem? Will … Read more

Talking About Testing

Frequently, both online and in face-to-face conversations, testers express reservations to me about making a clear distinction between testing and checking when talking to others. It’s true: “test” is an overloaded word. In some contexts, it refers to a heuristic process: evaluating a product by learning about it through experiencing, exploring and experimenting; that’s what testers refer to when they’re talking about testing, and that’s how we describe it in … Read more

“What Tests Should I Automate?”

Instead of asking “What tests should I automate?” consider asking some more pointed questions. If you really mean “how should I think about using tools in testing?”, consider reading A Context-Driven Approach to Automation in Testing, and Testing and Checking Refined. If you’re asking about the checking of output or other facts about the state of the product, keep reading. Really good fact checking benefits from taking account of your … Read more
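
As an illustration only, here is a minimal Python sketch of such an output check. The product function (calculate_shipping), the checking function, and the free-delivery-over-$99 rule are all hypothetical, invented for this example rather than taken from any real product.

# Hypothetical sketch: a programmed check of one fact about the product's output.
# calculate_shipping stands in for the product under test; the $99 free-delivery
# rule is assumed for illustration only.

def calculate_shipping(order_total: float, region: str) -> float:
    # Stand-in for the real product behaviour.
    return 0.0 if region == "GTA" and order_total > 99.00 else 9.99

def check_free_delivery_over_99() -> bool:
    observed = calculate_shipping(order_total=100.00, region="GTA")
    return observed == 0.0  # the decision rule

if __name__ == "__main__":
    print("PASS" if check_free_delivery_over_99() else "FAIL")

A check like this reports a pass or fail; deciding whether a failure matters, and which facts are worth checking in the first place, remains the tester's job.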

Lessons Learned in Grating Cheese

“Lessons Learned in Grating Cheese” by Michael Bolton | TestFlix 2020


About this Talk: “Lessons Learned in Grating Cheese” by Michael Bolton This is a video recording of a conversation between Michael Bolton and Ajay Balamurugadas, after Michael’s first attempt to produce a TestFlix video. It’s about how things can miss the mark when you’re too close to them — and how a tester’s critical eye might be able to help.

Top Takeaways: The takeaways are yours to decide!

Speaker Bio: Michael Bolton is a consulting software tester and testing teacher who helps people to solve testing problems that they didn’t realize they could solve. In 2006, he became co-author (with James Bach) of Rapid Software Testing (RST), a methodology and mindset for testing software expertly and credibly in uncertain conditions and under extreme time pressure. Since then, he has flown over a million miles to teach RST in 35 countries on six continents.

Michael has over 30 years of experience testing, developing, managing, and writing about software. For over 20 years, he has led DevelopSense, a Toronto-based testing and development consultancy. Prior to that, he was with Quarterdeck Corporation for eight years, during which he managed the company’s flagship products and directed project and testing teams both in-house and around the world.

Contact Michael at michael@developsense.com, on Twitter @michaelbolton, or through his Web site, http://www.developsense.com.
Twitter – https://twitter.com/michaelbolton
LinkedIn – https://www.linkedin.com/in/michael-b…

This video is one of the Atomic Talks presented at #TestFlix, the Global Software #Testing Binge, 2020. TestFlix 2020 had:
107 speakers from 44 countries
5,200 registrations from 91 countries
Over 2,100 attendees on the event day

TestFlix 2020 Proud Sponsors:
TestProject – https://testproject.io
AI Appstore – https://www.aiappstore.com
Trigent Software – https://www.trigent.com/services/qa-t…
Sauce Labs – https://saucelabs.com
Testsigma – https://testsigma.com
Testvox – https://testvox.com
Mozark – https://mozark.ai
Moolya Testing – https://moolya.com

#SoftwareTesting #Automation #SoftwareQuality #SoftwareDevelopment

Exploratory Testing on an API? (Part 4)

As promised (at last!), here are some follow-up notes on previous installments in the series that starts here. Let’s revisit the original question: Do you perform any exploratory testing on APIs? How do you do it? To review: there’s a problem with the question. Asking about “exploratory testing” is a little like asking about “vegetarian cauliflower”, “carbon-based human beings”, or “metallic copper”. Testing is fundamentally exploratory. Testing is an attempt … Read more

Very Short Blog Posts (34): Checking Inside Exploration

Some might believe that checking and exploratory work are antithetical. Not so. In our definition, checking is “the algorithmic process of operating and observing a product, applying decision rules to those observations, and reporting the outcome of those decision rules”. We might want to use some routine checks, but not all checks have to be rote. We can harness algorithms and tools to induce variation that can help us find … Read more
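
Here is a minimal Python sketch of that idea; product_round_trip is a hypothetical stand-in for operating and observing the real product, and the random-string generator is just one way a tool might induce variation.

import random

# Hypothetical sketch: a check that induces variation instead of repeating one
# rote input. Each trial operates the (stand-in) product, applies a decision
# rule to the observation, and the outcomes are reported at the end.

def product_round_trip(text: str) -> str:
    # Stand-in for the real product, e.g. save a value and read it back.
    return text

def run_varied_checks(trials: int = 100, seed: int = 0) -> list:
    rng = random.Random(seed)
    failures = []
    for _ in range(trials):
        # Induce variation: strings of random length drawn from a wide range of code points.
        sample = "".join(chr(rng.randint(32, 0x10FF)) for _ in range(rng.randint(0, 40)))
        if product_round_trip(sample) != sample:  # the decision rule
            failures.append(sample)
    return failures

if __name__ == "__main__":
    failing = run_varied_checks()
    print(f"{len(failing)} of 100 checks failed" if failing else "all 100 checks passed")

The check is still algorithmic — operate, observe, apply a decision rule, report — but the varied inputs give it a chance of encountering something a single rote input never would.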

s/automation/programming/

Several years ago in one of his early insightful blog posts, Pradeep Soundarajan said this: “The test doesn’t find the bug. A human finds the bug, and the test plays a role in helping the human find it.” More recently, Pradeep said this: Instead of saying, “It is programmed”, we say, “It is automated”. A world of a difference. It occurred to me instantly that it could make a world … Read more

You Are Not Checking

Note: This post refers to testing and checking in the Rapid Software Testing namespace. This post has received a few minor edits since it was first posted. For those disinclined to read Testing and Checking Refined, here are the definitions of testing and checking as defined by me and James Bach within the Rapid Software Testing namespace. Testing is the process of evaluating a product by learning about it through … Read more

A Context-Driven Approach to Automation in Testing

(We interrupt the previously-scheduled—and long—series on oracles for a public service announcement.) Over the last year James Bach and I have been refining our ideas about the relationships between testing and tools in Rapid Software Testing. The result is this paper. It’s not a short piece, because it’s not a light subject. Here’s the abstract: There are many wonderful ways tools can be used to help software testing. Yet, all … Read more

On Green

A little while ago, I took a look at what happens when a check runs red. Since then, comments and conversations with colleagues emphasized this point from the post: it’s overwhelmingly common first to doubt the red result, and then to doubt the check. A red check almost provokes a kind of panic for some testers, because it takes away a green check’s comforting—even narcotic—confirmation that Everything Is Going Just … Read more