Hi, as I'm new here I suppose it would be rude of me not to introduce myself...my name is Alastair and I'm a recovering...wait, wrong group!

Actually, I'm a 28-year-old .NET developer who has somehow ended up being the sole member of my company's software testing team!

I am currently writing a short document about where I see the company going in terms of its software testing process, and I got to thinking about what would be classed as an ideal ratio of testers to developers.

I remember reading an article (which I didn't bookmark!) which said Microsoft employs the equivalent of a 1:1 ratio. As my company isn't the size of Microsoft, nor has the budget, I don't think that will go down well!

What are other people's thoughts? What sorts of factors lead to smaller or larger test teams?

Thanks!


Replies to This Discussion

Ah, that old chestnut, which gets an "it depends" answer because it really is an elusive number, though people seem to want a formal one.

And even though Microsoft may have a 1:1 ratio, some people would argue they have the wrong mix of testers.
That Johanna Rothman article was the one I'd read...wonderful! I'll take a look at the others later.

Thanks.
I've also talked directly about ratios here.
Testers are developers. You may be thinking about the ratio between programmers and testers. Whoops! Except most programmers test sometimes, some programmers test a lot, some testers write programs, and some testers write a lot of programs.

Instead of an ideal ratio of bodies with labels on them, you might choose to think in terms of how you know what you know about your products; in terms of how you come to achieve that knowledge; in terms of the effort applied to studying testing; and in terms of how your development process emphasizes collaboration and technical review.

Have a look at this paper, "Managing the Proportion of Testers to (Other) Developers" by Cem Kaner, Elisabeth Hendrickson and Jennifer Smith-Brock: http://www.kaner.com/pdfs/pnsqc_ratio_of_testers.pdf

At Microsoft, it's practically impossible to get a job in testing these days unless you're an SDET (Software Development Engineer in Test; as of this writing, 93 open reqs worldwide, compared to 0 for "Software Test Engineer"), and each open req that I've surveyed today requires C/C++ development experience and prefers candidates with a degree in computer science. I would contend that the near-exclusive focus on computer science degrees represents a threat to diversity of background, and thus a threat to a more inclusive approach to testing. That is, they'll tend to find a lot of programming errors and problems that are revealed by automated tests (good), but might be biased towards missing parafunctional problems that matter (not so good).

A Microsoft employee talks about being an SDET here: http://aliabdin.wordpress.com/2007/03/31/what-does-an-sdet-do/. (As an aside, it's interesting to me that Ali (writing in 2007) relies on testing books that were more than 10 years old at the time, books that mostly predated the rise of the Web, mobile technology, open source products and tools, the agile movement, and so on.)

---Michael B.
In case anyone is interested in more on the tangent of what SDETs do, there's a whole book about it (only a month old) that you can read.
Mine's still stuck at Amazon! I ordered it weeks ago!

---Michael B.
Alastair, why do you want to include an ideal ratio between testers and developers/programmers in your short test process document?

If you can define the ratio, how would that information be used?

I have worked in small teams, and what I think Michael is implying is that developers/programmers can and do test. So should they be included in the ratio?
Other people also 'test' without knowing they are 'testing' (sales, marketing, etc.).
I guess I really just wanted to put a ball-park figure on paper to show that I (as the test team) may need some extra resources allocated to me, maybe just a few hours a week.

This is a major project for our company; money is tight and everyone's hours need to be accounted for and fit into the budget.

Thanks for your replies everyone, I'll get reading!
"I guess I really just wanted to put a ball-park figure on paper to show that I (as the test team) may need some extra resources allocated to me, maybe just a few hours a week."

Why not base this on the work that you have to do, rather than the work (say) Microsoft has to do? Since the work that you do is almost certainly different, the same ratio would be a coincidence.

---Michael B.
Agreed 100% - there's no evidence that MS has it right (seriously)!

I have a formula - PQ = (De + Te)
where
PQ = Product Quality
De = Development effort
Te = Test effort

If you have fewer testers, the developers will (generally) do more to contribute to product quality. If you have more testers, you may find that the developers do less to help product quality. As long as everyone understands what PQ is, you have a chance.
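To make the tradeoff concrete, here's a minimal Python sketch. Treating PQ as a fixed target is my reading of the formula, and the effort numbers are invented purely for illustration:

    # PQ = De + Te treated as a fixed quality budget: as development
    # effort falls, test effort must make up the difference.
    # (All numbers are made up for the example.)
    PQ_TARGET = 10  # agreed level of product quality, in effort units

    for de in range(9, 3, -1):      # development effort shrinking
        te = PQ_TARGET - de         # test effort picks up the slack
        print(f"De = {de}, Te = {te} -> PQ = {de + te}")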
We are going through a similar exercise at the moment, as testing is currently a major "bottleneck", and we are trying to work out why, and what number of testers is required to improve the situation.

We went back and reviewed the hours spent on the last 9 or 10 projects (by all staff) and discovered some consistencies. As a rule, new-software-only projects with all new code had much higher dev time than test time. Projects that were enhancements to existing products had a hugely higher testing component than development (due to lengthy regressions). Projects with a large hardware component and little software had a fairly low testing requirement. So it boils down to the type of projects your company does.

Our current ratio is 3 dev to 1 test, and in most cases this is not enough (especially when some of the testers are new and not quite a full resource, and some of your developers write the bugs of two men). I am hoping to adjust the ratio to more like 2:1. It's a juggling act - you don't want to overstaff and then have to lay people off when the work dries up, or when you hit those inevitable peaks and troughs...although we only seem to have peaks these days.
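If anyone wants to repeat that kind of review, the arithmetic is simple enough to script. Here's a rough Python sketch; the project types and hours are invented, so substitute whatever your timesheet system actually records:

    # Total logged hours by project type and report the dev:test ratio.
    # All project data below is made up for the example.
    from collections import defaultdict

    projects = [
        {"type": "new software", "dev_hours": 800, "test_hours": 250},
        {"type": "enhancement",  "dev_hours": 200, "test_hours": 600},
        {"type": "hardware-led", "dev_hours": 400, "test_hours": 80},
    ]

    totals = defaultdict(lambda: [0, 0])
    for p in projects:
        totals[p["type"]][0] += p["dev_hours"]
        totals[p["type"]][1] += p["test_hours"]

    for ptype, (dev, test) in totals.items():
        print(f"{ptype}: dev:test = {dev / test:.1f}:1")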
I have a question:

When companies evaluate "testing time", they tend to look at it this way:

Planning phase begins: January 1
Planning phase ends: January 31 (one month)
Development phase begins: February 1
Development phase ends: March 31 (two months)
Testing phase begins: April 1
Testing phase end (projected): April 30 (one month)
Testing phase ends (actual, due to lengthy regressions): May 30 (two months)

Is this how the development and testing times were calculated?

---Michael B.
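For anyone checking that schedule arithmetic, here's a small Python sketch of the elapsed time per phase, using the standard datetime module. The year is arbitrary; the dates are the ones in the question:

    # Elapsed calendar days for each phase (inclusive of both endpoints).
    from datetime import date

    phases = [
        ("planning",         date(2011, 1, 1), date(2011, 1, 31)),
        ("development",      date(2011, 2, 1), date(2011, 3, 31)),
        ("testing, planned", date(2011, 4, 1), date(2011, 4, 30)),
        ("testing, actual",  date(2011, 4, 1), date(2011, 5, 30)),
    ]

    for name, start, end in phases:
        print(f"{name}: {(end - start).days + 1} days")

On those dates the actual testing phase comes out at roughly double the planned one, which is the discrepancy the question is pointing at.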
