Defect Clustering & the Pesticide Paradox

OVERVIEW:

Many testers have observed that defects tend to cluster. This often happens because an area of the code is especially complex or tricky.

Testers often use this information when making their risk assessment for test planning, and will focus on these known areas, called 'hot spots'.

There is also a contrasting concept to defect clustering, called the pesticide paradox: if the same tests are repeated over and over again, the same set of test cases will no longer find any new defects, and hence the test cases need to be revised.

We need to consider both of the above concepts depending on the situation.

Let us discuss both phenomena.

Defect clustering is based on the Pareto principle, the 80-20 rule: approximately 80 per cent of the problems are caused by 20 per cent of the modules.

When we test new software against the user requirements, defects will be found in large numbers in certain flows or areas of the code that are more complex or critical. When the same software is tested again and again after changes or modifications, we will find defects in the first few iterations of testing by identifying and concentrating on these hot spots.

Testers can focus on these same areas in order to find more defects. This reduces the time and cost of finding defects, because concentrating on a few areas surfaces more bugs and improves the quality of the software.
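
To make this concrete, here is a minimal Python sketch of a Pareto-style hot-spot analysis over a defect log. The record layout (each defect tagged with the module it was raised against) and the 80 per cent threshold are assumptions for illustration, not a prescribed format:

    from collections import Counter

    def find_hot_spots(defects, threshold=0.8):
        """Return the smallest set of modules that accounts for `threshold`
        (e.g. 80 per cent) of all logged defects, per the Pareto principle."""
        counts = Counter(d["module"] for d in defects)
        total = sum(counts.values())
        hot_spots, covered = [], 0
        # Walk modules from most to least defect-prone.
        for module, count in counts.most_common():
            hot_spots.append(module)
            covered += count
            if covered / total >= threshold:
                break
        return hot_spots

    # Hypothetical defect log: each record names the module it was raised against.
    defects = [{"module": m} for m in
               ["billing", "billing", "billing", "auth", "billing", "auth",
                "reports", "billing", "auth", "billing"]]
    print(find_hot_spots(defects))  # ['billing', 'auth'] -> focus testing here

The modules this returns are the hot spots worth weighting more heavily in the next test plan.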

But after a certain number of iterations of testing, as the testing improves, we see the defect numbers dropping: most of the bugs will have been fixed and the hot spot will have been cleaned up. Developers will also be extra careful in the places where testers found more defects, and may neglect other areas. Hence, executing the same test cases will no longer help to find more defects. The test cases need to be revised, and new and different tests need to be written to exercise different parts of the software or system and potentially find more defects.

Now we have two choices:

1. Write a whole new set of test cases to exercise different parts of the software.

2. Prepare new test cases and add them to the existing test cases.

In the first case, we will find more potential defects in areas we did not focus on earlier, or areas where the developer was not extra cautious because the tester was not raising defects there. But by neglecting the earlier identified defect cluster, we take the risk of giving less importance to an area that was very productive for finding defects in earlier iterations of testing.

In the second case, we can find new potential defects in new areas while still covering the earlier identified defect cluster. But on the other hand, the number of test cases will grow so large that it increases the testing time and, in turn, the cost of testing. Too many useless tests become an overhead.

To deal with this, we can take a middle path that balances the disadvantages of both choices: identify and remove the test cases that are not very important and have failed to detect any defect over a certain number of test cycles.

For example, if you have 10 tests that cover the same area and none of them has detected a bug in a number of cycles (say, 5 test cycles), then we should review them and reduce the number of test cases.

If a test has not reported a bug in the last 5 runs, review it and verify its importance and weighting to decide whether to keep or archive it.
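
As an illustration of that review rule, here is a minimal Python sketch. The history layout and the five-cycle threshold are assumptions for the example, not a fixed standard:

    REVIEW_AFTER = 5  # cycles without a defect before a test is flagged (assumed threshold)

    def flag_stale_tests(test_history):
        """Flag tests whose last REVIEW_AFTER runs found no defects.

        `test_history` maps a test id to a list of defect counts,
        one entry per test cycle, oldest first."""
        stale = []
        for test_id, defects_per_cycle in test_history.items():
            recent = defects_per_cycle[-REVIEW_AFTER:]
            # Only flag tests that have run enough cycles to judge fairly.
            if len(recent) == REVIEW_AFTER and sum(recent) == 0:
                stale.append(test_id)
        return stale

    # Hypothetical run history: defect counts per cycle for three tests.
    history = {
        "login_smoke":    [2, 1, 0, 0, 0, 0, 0],  # quiet for 5+ cycles -> review
        "billing_limits": [3, 2, 1, 0, 1, 0, 2],  # still finding bugs -> keep
        "new_report_api": [0, 0, 1],              # too few cycles -> keep for now
    }
    print(flag_stale_tests(history))  # ['login_smoke']

Flagged tests are candidates for review, not automatic deletion; their importance and weighting still decide whether they are kept or archived.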

In this way we can keep a check on the number of useless test cases without compromising quality, since all the test cases covering important areas of the software are retained. We have to follow this method regularly and keep updating or modifying our test cases whenever a change or modification occurs in the software.

SUMMARY:

In the initial iterations of testing, identifying the defect cluster is useful, but it is not good practice to assume that we can create a final set of test cases that will discover all defects once and for all.

Even if the created test cases have a very high coverage percentage and a high rate of finding defects, we still need to keep reviewing them regularly.

Comment by Michael Corum on October 7, 2013 at 14:58

@Jeff Lucas - Yeah, that's lesson #25 in Lessons learned in Software Testing by Kaner, Bach, and Pettichord. I thought you were speaking more in the classical (yet more narrow) sense of model-based testing with respect to finite state machines. I like that both of our interpretations work. I'll be working on identifying the pros and cons of modeling where these clusters occur in the system so that we can test around those areas to better identify the root cause. 

Comment by Jeff Lucas on October 6, 2013 at 13:00

@Michael Corum - I think it was James Bach or Michael Bolton who once said that, in a way, all testing is model based. The point I was making was that thinking about the underlying causes of this phenomenon can be beneficial. When I wrote this comment, I decided to elaborate on defect clustering a bit here:

http://testtooljunkie.blogspot.com/2013/09/down-rabbit-hole-of-clus... .

Comment by Michael Corum on October 6, 2013 at 11:12

@Jeff Lucas - Interesting thought about this being a form of model-based testing. So, by modeling where these clusters occur in the system we can test in and around those areas to better identify the root cause?

Comment by Jeff Lucas on August 31, 2013 at 18:41

An interesting element of the pesticide paradox is its impact in multiple dimensions. Consider this:

  • One may start correlating defects found with test cases and find that only new test cases produce defects. One reason for that could be training: in a quick, session-based test session the tester may identify many defects they are already familiar with from the test cases. Likewise, developers begin to identify the practices that produced those defects.
  • Each defect cluster may identify a specific flaw in the code development. If the coders didn't identify the root cause of the flaws and change their practices, then these clusters may occur many times in different places. Was there a pattern to those clusters? This can be thought of as a form of model-based testing, where we are looking for the underlying cause of the clusters.
  • The clustering may be layered in complexity. New test cases may extend the previous tests down one layer without addressing the root cause of the defects. Example: a tester submits a set of defects in one iteration where the GUI doesn't flag string overflows to the user. That is fixed with GUI field checks in all of the identified places. Next iteration, a middle-tier defect is found when a long string is entered in a field that wasn't addressed. The iteration after that, database errors in the log are found that are due to still other unaddressed fields. Instead, the system architecture as a whole should have been evaluated initially to identify the extent of the problem.

Comment by Ranjeet Jawale on June 17, 2013 at 4:59

Agree with Andreas Kleffel.

The testing process has two basic tasks: 1. finding bugs and 2. verifying.

Comment by Andreas Kleffel on June 16, 2013 at 6:45

Sure, one shouldn't waste time executing the same tests again and again. Unless you are a machine.

I think that's a common problem with "test cases". Back to basics: testing must find bugs, but it must also verify.

When you execute the same test over and over again you do both things with one technique. Why?

I prefer breaking this up (simplifying a little): use exploratory testing only for finding bugs, and use test plans and plan-based execution only for verification. I think this is an efficient solution to the problem you describe.
