The trigger for this post was a comment on my other blog, asking for the results of some research I did on test case management tools at the beginning of 2011. That research was stopped after some months, as I progressively realised that a test case management tool wouldn’t fit my context, but I never officially concluded it. I am not concluding it with this post either; maybe I will resume it in the future. But two years after that research began, the fact is that we have lived and tested properly without test cases or test case management tools. This is how we did it.
Before I start: I am not trying to say what is best regarding this topic. This is just a personal experience, a case study if you want; food for thought, something to consider if you are in this situation too. Hope this helps :-)
2 web-based applications to test
Both are mature applications, each more than 3 years old
+500 use cases (and growing!), ranging from simple CRUD ones to huge and complex internal system integration cases involving several conditional interpretations and business logic rules
A testing department of 2 lone wolves, paired up with 4-5 supersmart developers
An average of 2-3 releases per month
No automation available (at the beginning of that research)
The first number that came to my mind was that we needed a tool managing at least +500 test cases (say, the happy path of each use case). So we were supposed to develop +500 test cases, and to maintain them too, without hiring new teammates or slowing down our release velocity.
We couldn’t afford that investment: our focus would have shifted from testing to creating and maintaining test cases. It also felt like an endless task, as there are new use cases every month, plus revisits of older ones, implying new test cases to write and old test cases to maintain, plus, at some point, exploring unhappy paths if time allowed.
At the very beginning, two years before the research started, I tried to create test cases in Excel at the start of the testing phase (we were doing waterfall at that time), before testing hands-on, and learned the following:
Test cases evolve while testing, so your mindset and the tool you use to manage those cases have to be ready for that.
Test case maintenance and reuse are hot topics, hard to handle in a fast-changing environment like ours, for the reasons stated above.
With this approach, I found that I invested more time creating, editing and adapting my Excel test cases to the current context than actually testing. Progressively, I abandoned ideas: first the idea of reusing test cases (so the maintenance issue disappeared as well), and then the idea of preparing a whole set of test cases before starting to test, embracing a more dynamic test design and execution. I was doing exploratory testing without knowing it, but felt uncomfortable with the idea of having no test documentation. That is why I started researching test case management tools.
While researching specific tools, I also tried to find out what others in similar situations were doing. I discovered that exploratory testing was a real thing, highly respected by several influential testers, and then discovered mind mapping as well. Right after that, I swapped Excel and Notepad in my unofficial exploratory activities for Xmind as a support tool.
I had an “ah-ha moment” the very first time I did this: mind mapping is a perfect tool to design and execute tests while testing, evolving your test ideas as you go, having lots of fun and empowering the creative side of testing, which also felt rewarding. In addition, mind mapping allowed us to start a “test ideas mind map” at the very beginning of new projects and iterations and evolve it as we dug deeper into their content, sharing it with the development team and the stakeholders. By the time we started testing hands-on we had a huge compilation of test ideas, which kept evolving as the testing took place.
...the moment I asked myself: why do you want to invest so much time in scripting those test cases when you are delivering reasonably well without them? The main answers that came to mind concerned test execution records, coverage and reuse. So I analysed each of them in depth.
Having some test execution records is not reason enough to implement a test case management tool, so I kept digging into the other answers.
About coverage... we were novices at exploratory testing, so I was unsure about leaving important areas untested in such a dynamic approach, but a test case management tool would not have solved that; at that point I preferred a simple “then get better at exploratory testing!”. On the other hand, feeding those tools would have left us less time to test effectively, so coverage would have been hurt as well. Also, what coverage does a test case management tool offer you? Coverage over the test cases you have managed to get into it, which is a biased way of looking at coverage. I felt more confident growing coverage of important areas as we tested.
Reuse has been a recent milestone for us: so far we have decided not to reuse anything, in order to recreate and review test ideas when needed, sharpening our senses and exploratory skills at every iteration, but we will probably revisit this conclusion soon...
As you can read, we found no reason to move our current fruitful effort to a scripted approach. I was also afraid of us falling into the false comfort of passed test cases, confusing “scripted test execution finished” with “testing finished”, and decided not to be tempted by it.
Once we officially decided that we were doing exploratory testing with mind maps, and that we were (and are!) happy about it, we started to iterate on this idea to improve our performance. Things we have done already:
Investing some time in automating validation checks, to combine our in-depth exploratory work with some general, lightweight functional checking coverage in other areas.
Scripting test cases for the most critical use cases we have (I am talking about 5 or so), creating a mini scripted test suite to be executed in every testing cycle, before installing a new version. This suite will never be automated: it takes 15 minutes to execute, and I want the seven senses of a thinking tester on it, a safety a robot does not offer me. Smoke tests are scripted too. These are simple proofs that exploratory and scripted testing can coexist.
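As an aside, the automated validation checks mentioned above could look something like this minimal sketch in Python. The check names and their always-passing bodies are placeholders I invented for illustration; in a real setup each check would probe one of the two web applications (with urllib, Selenium, or similar).

```python
# Hypothetical lightweight check runner: each check is a plain function
# returning True/False, and the suite is run before installing a release.

def check_login_page_renders():
    # Placeholder: a real check would request the login page and
    # verify the response status and some expected content.
    return True

def check_search_returns_results():
    # Placeholder: a real check would run a known search query.
    return True

CHECKS = [check_login_page_renders, check_search_returns_results]

def run_checks(checks):
    """Run every check, treating exceptions as failures, and
    collect (name, passed) pairs."""
    results = []
    for check in checks:
        try:
            passed = bool(check())
        except Exception:
            passed = False
        results.append((check.__name__, passed))
    return results

if __name__ == "__main__":
    for name, passed in run_checks(CHECKS):
        print(f"{'PASS' if passed else 'FAIL'}  {name}")
```

The point of keeping it this simple is that the checks stay cheap to write and throw away, so they complement exploratory sessions instead of competing with them for time.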
Things we are bound to do in the near future:
Thinking again about reuse. Maybe we could develop some mind map checklists to seed the test ideas compilation, updating these documents as we use them, accepting that they have to be reviewed before being evolved, but taking advantage of the knowledge and mental effort already invested.
Developing some sort of low-tech testing dashboard, in order to communicate the status of the testing effort more easily to the team and others.
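A common shape for such a dashboard (James Bach’s low-tech testing dashboard is a well-known example) is a simple table of product areas versus test effort, coverage and quality assessment. A minimal sketch of how we might keep one as plain text, with area names and ratings invented for illustration:

```python
# Hypothetical low-tech dashboard kept as plain data, printable to the
# console or pasted into a wiki page. All entries below are made up.

AREAS = {
    # area: (test effort, coverage so far, quality assessment)
    "Login & sessions":    ("deep",  "high", "stable"),
    "Billing integration": ("deep",  "low",  "concerns"),
    "Admin CRUD screens":  ("light", "med",  "stable"),
}

def render_dashboard(areas):
    """Return the dashboard as aligned plain-text lines."""
    header = f"{'Area':<22}{'Effort':<8}{'Coverage':<10}Quality"
    lines = [header, "-" * len(header)]
    for name, (effort, coverage, quality) in areas.items():
        lines.append(f"{name:<22}{effort:<8}{coverage:<10}{quality}")
    return "\n".join(lines)

if __name__ == "__main__":
    print(render_dashboard(AREAS))
```

Deliberately low-tech: no tool to feed, just a snapshot the whole team can read at a glance and argue about.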
That’s it: a 4-year process summarised in less than 1500 words. I hope to write some more about this in the following 4 years; let’s see if we get any better at this endless and fantastic subject!