Metrics are always good for an argument (sorry, discussion - see here for a discussion on whether one type of metric is ethical/unethical), so I'm trying to keep this to a discussion and not an argument...

I read two posts recently on metrics.

The first one was from Pete Walen - On Metrics & Myths or Your Facts are From the Land of Make Believe - in which he warns:

"Be careful in dealing with metrics - not all is what they appear.
Be careful when playing with dragons for you are crunchy and good with ketchup "


Meanwhile, Matt Heusser says It's Your Time To Shine if you can:

"Provide the community some case studies of metrics programs that don't suck"


Rather than argue about good/bad/ethical, I thought it would be interesting to see what people out there are doing at the moment.


What are the main metrics you are using at the moment in your project?

Did you initiate them or were you asked for them?

Have you been eaten by them or are they providing value - if so, how?



Replies to This Discussion

Err, yeh, that's what I'm asking - what are people actually doing out there in Tester Land?

What are you measuring, how - and why?


Right, sorry.  I got distracted reading some of the links.

I don't really have an issue with metrics as long as I understand what they're measuring. For example, I would not ask for someone's requirements coverage percentage and then use that as a general measure of the quality of the code or project. I would use it to gauge - based on our interpretation of someone else's documented interpretation of someone else's understanding of the business functions - how much of that functionality has been verified.

This is only a small part of our testing and at best, it's a high level measure of effort, not quality, but I have found it useful when I'm trying to get a general pulse of our testing progress.
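For what it's worth, here's a minimal sketch of how I think of that number. It assumes a simple traceability mapping from requirement IDs to test cases; the function name, data shapes and sample IDs are made up purely for illustration. It only tells you how many requirements have at least one passing test against them - nothing about how good those tests are:

```python
# Illustrative requirements-coverage sketch (names and data invented).
def requirements_coverage(trace_matrix, passed_tests):
    """Percentage of requirements with at least one passing test against them."""
    if not trace_matrix:
        return 0.0
    verified = sum(
        1 for tests in trace_matrix.values()
        if any(t in passed_tests for t in tests)
    )
    return 100.0 * verified / len(trace_matrix)

# Example: REQ-1 and REQ-2 each have a passing test, REQ-3 does not -> ~66.7%
trace = {"REQ-1": ["TC-1", "TC-2"], "REQ-2": ["TC-3"], "REQ-3": ["TC-4"]}
print(requirements_coverage(trace, passed_tests={"TC-1", "TC-3"}))
```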

Depending on the size and scope of the project, I've also been able to get the same pulse by asking one of my senior testers, "Hey, how much longer until you're done?" For whatever reason, clients are usually more satisfied when they see numbers as opposed to "Tester A said..."

I've also found that as my relationship with the client improves or becomes established, their need for metrics tends to be reduced.

I think metrics are absolutely essential but really difficult to define and measure. 

At the moment all of the metrics I handle are around our bugs. We use the total number of open bugs to guide us on the overall quality of the product; sure, we might not know about all the bugs, but as a rule of thumb it has been pretty accurate.

The metric I get asked about most is bug turnaround; particularly for critical bugs, people are very keen to know how long it took us to ship a fix once we discovered the bug.
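As an illustration, that turnaround figure is just the elapsed time from discovery to shipped fix, summarised per severity. A rough sketch - the field names, IDs and dates here are invented for the example, not from any real tracker:

```python
# Median "bug turnaround" in days, from discovery to shipped fix (sample data invented).
from datetime import date
from statistics import median

bugs = [
    {"id": "BUG-101", "severity": "critical", "found": date(2014, 3, 3),  "fix_shipped": date(2014, 3, 5)},
    {"id": "BUG-102", "severity": "critical", "found": date(2014, 3, 10), "fix_shipped": date(2014, 3, 18)},
    {"id": "BUG-103", "severity": "minor",    "found": date(2014, 3, 4),  "fix_shipped": date(2014, 4, 1)},
]

def turnaround_days(bugs, severity):
    """Median days from discovery to shipped fix for bugs of the given severity."""
    days = [(b["fix_shipped"] - b["found"]).days
            for b in bugs if b["severity"] == severity and b.get("fix_shipped")]
    return median(days) if days else None

print(turnaround_days(bugs, "critical"))  # -> 5.0 (median of 2 and 8 days)
```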

I couldn't agree more! As a marketer of testing services, I can safely say that what doesn't get measured doesn't get seen as a "value addition"!!
After all, when you are pitching to new clients you need to be able to discuss clearly demonstrable value that you can bring to the table... you can't always say "you're doing a lousy job so give it to us"!!

Of course that's what you need to say (in effect), but through data that shows you really can do a better job!

I think tying any and every project metric back to these basic "winning business" metrics is a good move:

1) Saved time (time is money)

2) Saved money (well, money is everything)

3) Averted disaster (and saved money by doing so)

4) Directly increased business revenue (a rare and very valuable metric)

I distinguish between "project" metrics and "purpose" metrics. A purpose metric is something you collect once to prove or back up something - for example, collecting the % of bugs returned from retesting back to development in order to escalate the low quality of bug-fixing to management. But I never keep this kind of metric recorded on a regular basis.

So I'm assuming you are asking about the project metrics that are collected on a regular basis. I've been involved in three completely different projects over the last year. The number of metrics collected in those three projects was zero, four and more than ten. Why is it so different?

If not asked, I collect no metrics whatsoever (that's why zero). It was not always this way - early in my career I collected and analyzed the following metrics on my own:

- Number of open bugs (current number of bugs that are not yet closed, rejected or postponed)

- Bugs opened weekly (number of new bugs opened during the last week)

- Bugs closed (retested) weekly (number of bugs closed during the last week)

- % of tests not yet executed (but planned to be)

- % of tests failed during a test cycle

- % of bugs rejected (bugs reported by a tester in error, such as duplicates). Actually this was the last metric that lost value in my eyes.

However, I want to distinguish again between test metrics and project metrics. All except the first metric provided value to me (I've discovered better ways since then, however) by helping me understand how we are doing in testing - how our progress is affected by bug reporting and retesting, how test failures are affecting our schedule, whether it is the right time to start another test cycle, etc. I would really hate for anyone to draw conclusions about software quality based on those metrics.
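To make those definitions concrete, here is a rough sketch of how the bug-side numbers (and the % of tests not executed) might be pulled out of tracker data. The statuses, field names and sample records are invented for illustration and are not from any particular tool:

```python
# Illustrative metric calculations over invented bug records.
from datetime import date, timedelta

bugs = [
    {"status": "open",     "opened": date(2014, 4, 1),  "closed": None},
    {"status": "closed",   "opened": date(2014, 3, 20), "closed": date(2014, 4, 2)},
    {"status": "rejected", "opened": date(2014, 3, 28), "closed": date(2014, 3, 29)},
]

NOT_OPEN = {"closed", "rejected", "postponed"}

def open_bugs(bugs):
    """Current number of bugs not yet closed, rejected or postponed."""
    return sum(1 for b in bugs if b["status"] not in NOT_OPEN)

def opened_last_week(bugs, today):
    """Number of new bugs opened during the last week."""
    return sum(1 for b in bugs if b["opened"] > today - timedelta(days=7))

def closed_last_week(bugs, today):
    """Number of bugs closed (retested) during the last week."""
    return sum(1 for b in bugs if b["closed"] and b["closed"] > today - timedelta(days=7))

def rejected_percentage(bugs):
    """% of bugs rejected (duplicates, tester mistakes and so on)."""
    return 100.0 * sum(1 for b in bugs if b["status"] == "rejected") / len(bugs) if bugs else 0.0

def tests_not_executed_percentage(planned, executed):
    """% of planned tests not yet executed."""
    return 100.0 * len(planned - executed) / len(planned) if planned else 0.0

today = date(2014, 4, 4)
print(open_bugs(bugs), opened_last_week(bugs, today), closed_last_week(bugs, today))  # 1 1 2
print(rejected_percentage(bugs))                                                      # 33.3...
print(tests_not_executed_percentage({"TC-1", "TC-2", "TC-3"}, {"TC-1"}))              # 66.6...
```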

Anyway, on the other project I've been forced to collect many more metrics - metrics that I don't even want to list here as they seem pointless to me, such as the # of (regression) test cases we "executed" during a week.

In previous companies I've mostly used metrics based around bug counts - probably too many. Also run rate/pass rate and some other weighted metrics based upon software stability. Taken together, I find they give a good enough indicator of software quality, together with the ability to compare against historical data from previous products.

A burnup chart is the one metric I have found useful so far.
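For anyone who hasn't used one: the data behind a burnup chart is just two running series per iteration - cumulative work completed and total scope - so progress and scope growth show up on the same picture. A tiny sketch, with all the numbers invented for illustration:

```python
# Burnup data: cumulative completed work vs. total scope per iteration (sample numbers invented).
from itertools import accumulate

completed_per_iteration = [5, 8, 6, 7]      # work finished in each iteration
scope_per_iteration     = [30, 30, 34, 34]  # total planned scope, which may grow

burnup = list(accumulate(completed_per_iteration))  # cumulative "completed" line
for i, (done, scope) in enumerate(zip(burnup, scope_per_iteration), start=1):
    print(f"Iteration {i}: {done}/{scope} complete")
```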
