Prompted by Simon Morley's blog post about showing your thinking, I thought I'd show my thinking on something one of his commenters mentioned - a testing playbook.

Let's set the scene:
I currently work in a scripted shop: lots of procedural manual test scripts, and a waterfall delivery process. It's long bothered me that we have such a colossal amount of duplication in our manual test scripts. Whenever I find myself having to make exactly the same update in a number of different scripts, I have to ask myself whether this is really good test design. When creating a new test script is a matter of cutting and pasting from an existing one and changing some of the data, I have to ask myself: if I don't think this is a good thing to do in code, why am I doing it here? Don't the same drawbacks apply?
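To make the code analogy concrete, here is a minimal sketch in Python. Everything in it - the discount rule, the codes, the totals - is invented for illustration; the point is only the shape of the problem. The first three tests are the copy-paste style: near-identical procedures differing only in data, so a design change means editing every one. The last test is the data-driven alternative, where one procedure reads its variation from a table.

```python
def apply_discount(total, code):
    """Stand-in for the system under test (hypothetical discount rule)."""
    rates = {"GOLD10": 0.10, "SILVER5": 0.05}
    return round(total * (1 - rates.get(code, 0.0)), 2)

# Copy-paste style: three near-identical "scripts" differing only in data.
def test_gold():
    assert apply_discount(100.0, "GOLD10") == 90.0

def test_silver():
    assert apply_discount(100.0, "SILVER5") == 95.0

def test_no_code():
    assert apply_discount(100.0, "NONE") == 100.0

# Data-driven style: one procedure; the variation lives in a table,
# so a change to the steps is made in exactly one place.
CASES = [
    (100.0, "GOLD10", 90.0),
    (100.0, "SILVER5", 95.0),
    (100.0, "NONE", 100.0),
]

def test_all_discounts():
    for total, code, expected in CASES:
        assert apply_discount(total, code) == expected
```

The same separation of procedure from data is what a playbook offers for manual scripts: the steps live once, and the variations are listed where anyone can review them.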
Furthermore, with many tests, the devil really is in the detail. When that detail is buried in some step of a 60-step manual test procedure, and you have hundreds of these test cases for a project, how is anyone going to be able to take a good clear look at that and identify any missing test cases? Essentially, this relies on the test lead for the project having everything in his or her head. While I'm a big fan of relying on the skill of your people, this strikes me as making life a lot harder than it needs to be: it's not showing your thinking, so that other people can comment and contribute.

That's the context - what's the specific problem?
So, as this is my last week in my current workplace, I have a meeting on Wednesday to discuss what we've learned from the last big project I worked on: a first attempt at bringing an area of testing into the test team that has previously been done purely by specialists in that area.
There were a number of challenges with this project, which exacerbated the issues I've noted above. For a start, generating our test data was challenging and in some cases a major pain, requiring painstaking setup over a number of days. Including all this detail in the test scripts made them even more unwieldy, and later releases meant having to change some of our setup procedures. There were also design changes, as there always are with such a large project, and updating test scripts was very time-consuming. Given that we weren't knowledgeable in this new area, we also relied heavily on our specialists to review and comment on our test scripts. This was an enormous time sink, especially as our key technical expert was already overbooked and ended up doing ridiculous hours to get everything done.

And my solution?
What I'm going to propose is a hybrid approach: use James Bach's testing playbook idea to give us one source for all the myriad things important to our test design - data models, mappings, our sketches of business scenarios, SQL queries common to a number of tests, procedural descriptions of how to set up some of our complex test data, and so on.
Essentially, this allows us to take a large amount of detail OUT of the test scripts, making them easier to read, manage, and maintain. (It also gives us an excellent resource for doing more exploratory testing - I would love to see this happen, as we've had some success in the past with exploratory testing on various projects, but for various reasons this has never filtered through to actually change our approach in general.) It also gives us one place to update these things, and one document to hand out to other teams and other testers for review and comment - instead of being scattered across a hundred test scripts, our thinking is visible in one place. (It's also outside our proprietary test management system, which unfortunately has a licensing agreement that means everyone outside the test team only gets to see the defect tracker. This caused additional work when we were asking people outside the test team to comment on our tests, so avoiding that is a good thing in my opinion.)

And over to you
So, now I've shown my thinking - does it make sense? I won't get to see if this works - I'm moving on, but I hope it'll prove useful to my colleagues as something to trial, at any rate (those who I've discussed it with so far sound receptive). Has anyone done anything similar?