Chris McMahon on Software
Test Cases for Exploratory Testing
Rethinking the test case for a new context
Define A Test Case
A test case is a single experiment run against working software. We expect the software to behave in some specific way, or at least in some general way. If the software meets our expectations, the test case is said to pass. If it does not, the test case fails.
Many testers confuse a test case with the description of a test case. Without working software in hand, a test case might be written down in advance, based on other documentation such as requirements or design, or even on general ideas of what a certain kind of software should do.
Because I happen to be working on such a project right now, let us talk about test cases for a feature that allows users to leave comments. Comments are pretty well understood in the context of software. We could write down a whole lot of test cases in advance based solely on what we know about how comments should work, even in the absence of any working code.
But why? If we have a mutual understanding of how comments should work, why bother to write down test cases in advance? We can simply wait to exercise working code and see what we have when that software arrives in a testable state.
Exploratory Testing
Elisabeth Hendrickson, author of the book on ET, “Explore It!”, once defined Exploratory Testing as “…simultaneously learning about the software under test while designing and executing tests, using feedback from the last test to inform the next.”
We can certainly do ET on our comments project without any test cases created ahead of time. But before we do that, let’s talk about one more aspect of ET:
Charters and Sessions
In the context of ET, a “charter” is a statement about the area of the feature you intend to work on. One likely charter for a comments feature would be “Admin”, exploring how administrators can enable and disable comments, create Moderator comments, delete comments, and so on. Another charter might be “Users”, exploring how users may create comments, reply to comments, etc. And another charter might be “Admin/User interaction”, as you would probably suppose.
I find that when I run out of test ideas for a certain charter, I can usually think up a new charter to get me back to testing the software. And just like the tests themselves, we can make up new charters based on feedback from the previous charters.
Some testers like to devote a “session” to a charter, a dedicated period of time to explore only one charter. I suggest that ET in a session may lead you outside the charter you had intended, and there is nothing wrong with that.
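To make the charter/session idea concrete, here is a minimal sketch of how one might record charters and session notes in code. This is purely illustrative: the `Charter` and `Session` names, fields, and the example mission text are my own invention, not part of any testing tool.

```python
from dataclasses import dataclass, field

@dataclass
class Charter:
    """A short statement of the area we intend to explore."""
    name: str
    mission: str

@dataclass
class Session:
    """One time-boxed period of exploration against a charter."""
    charter: Charter
    minutes: int = 60
    notes: list = field(default_factory=list)  # observations worth telling the team

    def note(self, observation: str) -> None:
        """Record an interesting result as soon as we see it."""
        self.notes.append(observation)

# Hypothetical charters for a comments feature like the one described above
admin = Charter("Admin", "Enable/disable comments, moderate, delete")
session = Session(admin, minutes=90)
session.note("Moderator badge renders on both Admin and User side")
```

The point of the structure is only that a charter is small and cheap to create, so it is easy to spin up a new one when the previous charter's feedback suggests it.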
Test Cases Again
In the context of ET, a test case is not a thing we write down to execute later; it is a thing we do RIGHT NOW with working software. And a test result is not a pass or fail written down on a ticket; it is something we tell our team right away, if the result is interesting. I find that a really nice place to report test results is a chat channel where the information is welcome. Let’s say we’re testing our “Admin” charter. In our chat channel, for passing tests, we could say:
“The Moderator badge on the Moderator comment looks really nice on both Admin and User side”
“The dashboard view of comments is very handy for identifying comments across the application”
For failing tests we could say things like:
“Too many characters in the comment textarea causes a UI error”
“Users are allowed to report abuse on comments made by Moderators; this is probably a bad idea”
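The textarea finding above is a boundary condition, the kind of probe an explorer runs at and just past a limit. Here is a minimal sketch of that idea; the `validate_comment` function and its character limit are hypothetical, assumed for illustration rather than taken from the actual feature.

```python
# Hypothetical validator; the real feature's limit and behavior are unknown.
MAX_COMMENT_LENGTH = 10_000  # assumed limit for illustration

def validate_comment(text: str) -> bool:
    """Return True if the comment is accepted, False if rejected."""
    return 0 < len(text) <= MAX_COMMENT_LENGTH

# Exploratory probes at and around the boundary
assert validate_comment("a" * MAX_COMMENT_LENGTH)           # exactly at the limit: accepted
assert not validate_comment("a" * (MAX_COMMENT_LENGTH + 1)) # one over: rejected, not a UI error
assert not validate_comment("")                             # empty comment: rejected
```

In the failing case reported above, the software presumably raised a UI error instead of rejecting the over-long comment gracefully, which is exactly the kind of result worth telling the team right away.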
Do failing tests require us to make a work ticket to fix them? Maybe, that depends on your team. In my experience, the less time we spend making and resolving work tickets, the more time we have to test and code. Sometimes making a work ticket is the right thing to do, sometimes collecting issues in a document is the right thing to do, sometimes you can just fix the problem on the spot. Your situation will be unique.
Test Cases for Exploratory Testing Again
In summary, when you think about test cases for Exploratory Testing, instead of thinking of a test case as something you plan to do, think of it as something you just did. Was that thing really good? You should tell your team. Did that thing not meet your expectations? You should tell your team, and together you should figure out what to do about that failure.