I am giving a talk on how to write a good bug report. Now, like me, you probably only ever write really good, full and insightful bug reports... but you might have come across some really bad reports, you know the sort of thing: "I just tried to save a new person record and it didn't work properly." (Yes, that really is the full text of a bug report from one of my projects, I am ashamed to say.)


If you have any real-life examples of bad bug reports you could share with me, I would be most grateful. You don't have to name names (fun though that is), but if you could put a little context around the report, that would help. E.g.:


The project was a CRM system, and one of the main features was a very rich and powerful system that allowed you to link new contacts (person records) with various departments within the organisation, and in so doing give them a tailored set of information. One tester submitted the following as a high-priority bug:


 " I just tried to save a new person record and it didn't work properly".


Many thanks in advance for your help.




Replies to This Discussion

I might not be able to provide you with exact examples, but here are a few different categories of bad bug reports that I have encountered:

1. Vague ones which just report the symptom:
"I just clicked and it crashes."

2. Ones which just use adjectives instead of numbers:
"The system is really slow."

3. Ones which lack legacy behaviour, baseline comparisons, etc.:
"It used to work..." "The system is slower (than what?)"

4. Bugs in which there is no effort to zero in on the root cause, and no effort to remove the variables.

5. Bugs which do not mention the user impact; yes, this includes overly techie bugs :)
"We are using uninitialized variables here... here... and here..."

6. Bugs with poor reproducibility information:
"Happens sometimes."

7. Bugs which just report the problem (and not the steps to reproduce or the manifestation/symptoms of the problem):
"The send mail feature is broken."

8. And the usual deficient-in-information ones, e.g. with no logs, no customer data info, no platform info, no SW/HW info, etc.

I see far too many of (2) and (5).

It's not easy to test the fix for a bug if it's not clear what circumstances caused the original bug to appear. Performance issues without any metrics, or techie bugs which don't even tell you what business scenario is affected, are particular bugbears of mine.

One where an excitable tester (no names) went into great depth about a cool name-field validation defect but neglected to tell the vendor which system the problem was occurring on (out of a possible 7 systems).
Ones where the system under test is wrongly recorded, so that much time is spent head-scratching, trying to replicate a bug in the wrong environment.
There are too many examples I could give you; in fact, it inspired a blog post.

Cem Kaner published his Bug Advocacy paper several years ago, but it is still relevant today, and a lot of testers and test managers would be well advised to read and understand it.
Hi Phil

I had not read Cem Kaner's article, so I am very glad you posted it; it is, as you say, well worth a read.


This happened where I work recently. I kept getting a load of bug reports of the "this doesn't work" calibre, and then, a couple of weeks in, the tester asked what the URL was for the testing server (the one where she was meant to be testing). Yep, she'd been looking at the wrong server the entire time, despite it being the main (and only) testing server for this site for the best part of several months.

I feel that pain...

A legendary one in my old workplace: "programme did not re-open properly when computer is powered down and restarted"
This was one of my favorites from my old work place.

Description: Error message is stupid.

Expected result: Error message is less stupid.

Come on, clearly the message was stupid! You should've taught it a thing or two!



© 2017   Created by Rosie Sherry.