Unfortunately I don’t know who or what is the source of this image, but I guess most of you have seen it. It’s so true – a lot of the bugs we testers discover never get fixed. It was hard for me to get used to that. It was tempting to escalate them; when that didn’t help – to stop discovering them. Today I think I’ve learned that the way to become a good tester is to understand that my goal is quality, while bug reports are the means to describe and help improve it. In this post I’ll try to describe what I mean by that.
Does every bug deserve to be fixed?
That’s the question behind the image, isn’t it? For each bug you report, do you know – on average – what the consequences of not fixing it would be? What’s the probability that a customer will report it? I don’t know that, but I know something else. On one particular project we used to have about 100 open bugs reported by testers that were not included in the official list of known issues (see further reading for more of my view on that fact). We had about 100 bugs reported by customers in the first year of production. Can you guess how much those two sets overlapped? There were only 3 defects that we were able to match up.
Does that mean nobody ever ran into any of the other 97 problems? I don’t think so. It’s quite possible that some of those problems were not important enough for customers to spend time reporting them. So maybe our users discovered a few more of them, but none of the managers and decision makers did…
Tester’s mistake: overstating the importance of the bug
When you work in a large team on a typical commercial project (where time is short, as always) and you push developers to fix your bugs, it means they will delay fixing bugs reported by other testers, delay developing other features, and introduce more bugs as a result. It means you will find those bugs later.
However, as a tester I’m inquisitive and I do find a lot of bugs. When I find them I “sell” them to developers – I describe the worst consequences of not fixing the bug. But when I have to decide the priority (and/or severity) of the bug, I need to change my negative attitude. I need to change it even more when it’s time to report status to management. I admit my mistake: I do sometimes overstate the importance of particular bugs, or the number of them. It’s quite hard for a tester to resist doing that, don’t you think?
An illustration – a story partially based on real case:
A new project manager is assigned to my project and demands a test report. I give him one, describing the situation and saying something like: there are 1001 test cases so far but only 666 of them are in status passed; there are 101 bugs failing or blocking other test cases, and 40 of them are of trivial priority, 30 minor, 20 major, 10 critical, and 1 blocker.
The PM calls me a few hours later and asks: so what do you think? Is the quality under control? Do you need more testers? Do we need developers to put more effort into bug-fixing, reducing their development speed? Anything else?
My first reaction is: “so, have you even read the test report you requested?” But I calm down and tell him…
OK, I will not continue this story (partially because I don’t remember, partially because I want you to guess).
Disclaimer and Further reading
The numbers and even the story details I’ve shared are made up, but they are based on real experience. I’m not saying those are typical numbers; every project is unique, as the context is unique in every project.
There is a partially related topic: communicating bugs to stakeholders.