I'm contributing to ISO/IEC 29119:2010 Software Testing, which is an ISO standard on the software testing process. One task within the process is to create a test policy.
I thought it would be an interesting question to throw to the software testing club.
So, what do you think a test policy should contain?
My thoughts are that it's a short executive-level document used by a company to describe its testing aims: a bit like a detailed mission statement for testing combined with some marketing information.
So what type of content do you think this document ought to contain? A couple of ideas I have are:
It outlines the company's long-term testing goals
It outlines the risks and benefits of testing
It outlines the company's commitment to testing
It outlines key members' involvement
It outlines the benefits testing can bring to the company
It examines cost vs. benefit.
It documents some definitive goals
Such an interesting topic; people should have added their thoughts.
I know you will answer this all by yourself, but even so, I would consider additional things like those below in the test policy.
What type of testing is conducted
What kind of QA cycle is conducted
What aptitude they have towards test tools (by this I mean their preference for open-source or commercial tools)
How they mentor or prepare their testers
How they allocate resources, and their developer-to-tester ratio
Who will report what, to whom, and how
Do let us know when you have finalized your opinion.
Looking forward to hearing from you.
I think you have got a good list up there. I would like to see specific phases of testing given some focus, such as minimum requirements for developer testing. Responsibility and authority is another area that could be touched upon.
A company that employs software testers should be very clear about how much QC and how much QA testers are expected to do, where QC is finding defects and QA is influencing the process to prevent bugs.
Typically, junior testers are better suited to pure QC, and senior testers should be doing more and more QA.
When would this document be appropriate and what problems would it solve?
I was asking myself the same question. What problems is such a document trying to solve? Are those problems the same across all companies in a way that it can be codified into an ISO standard? Are those problems and solutions even common enough within a company for a document proposed by a standard to be useful?
I also question whether there is anything standard enough -- on which people can agree -- to create a Software Testing Process ISO standard. That seems like a pretty broad topic on which to find agreement.
Software development is mostly design work, not manufacturing work. Are there other industries that have internationally standardized testing processes for design work? If there are, I suggest we software people look there for ideas.
However, I go back to Matt's question. If the standard and the documents suggested/mandated by the standard are not being created to solve a problem, then why are they being created? What's the problem that needs solving?
Yikes, I posted this discussion so long ago, I've forgotten what the final outcome was.
The reason I contributed to this document was that one of the problems I face when rolling out a testing process is the general lack of understanding 'high-level' management have of testing and the benefit it provides. I think there is a need for a document that speaks at this high level, that justifies and even 'sells' testing to those with the purse strings.
Personally, I don't think the majority of software testers are able to market themselves very well. Testing is seen as a necessity by testers, but I don't think everyone necessarily thinks the same way!
I see this document as providing a link between test management and CIOs etc. For example, by linking test concepts to mission statements, it can easily demonstrate to a CIO the benefit that testing has at a company level.
On a broader level, I don't think this standard will ever be the definitive guide to a software testing process. I would see it more as a set of handy guidelines which I could adopt (or not) depending on my needs. But that's just me.
I think one of the key comments made about this document is "marketing". This document is the first link in the chain that sells QA/QC within an organisation, be it large or small.
Any policy document must answer a question, be a living document that provides direction on both a principle and practical level. It must also set a target for those within the organisation to try to achieve.
What this basically means is that the document should start out with a statement of intent to solve (or stop) a problem. Big Hairy Audacious Goals are all well and fine: sweeping statements of the "No software shipped before its time" kind. The challenge then is to link that to a set of ways in which the organisation can stand over that statement, e.g. no showstoppers found during testing remain open. The next link is to say how the organisation can support that activity, e.g. regular triage will make sure that we focus on solving the right problems. Statements of financial reality are also important, e.g. we must ensure the ongoing financial viability of our company and may have to put out software that is not 100% ready. It may seem a contradiction, but the previous statements will support it.
I could go on, and on. The main thing is this document sells test as a service to the organisation plus provides a compass for test to steer by. Stating some of the scary stories around what happens when QA/QC isn't in place is good. Make them scary, in the organisation and outside of it.
But the one and only message anyone should get out of a test policy document is "We need to test!".
Hello all. I hope I am not too late in joining the discussion. I think this is a great question and it's one we are attempting to address right now at Cornell University.
We, Cornell Information Technologies, have very recently begun to define a QA program. I characterize our efforts as trying to get from a somewhat ad hoc approach to more of an enterprise approach. What has been interesting to me is how testing and QA have different meanings for different people! So for us, I think this discussion is very useful, as we are in the early stages of trying to decide what these things mean to us.
I think your list is a good start, and some of the other suggestions are good too. One other thing I might add is a standardized list of definitions. For example, I've found that words like stress testing, load testing, performance testing, and many others may mean different things to different people.
There are some older IEEE standards documents out there:
IEEE Std 610.12-1990 - IEEE Standard Glossary of Software Engineering Terminology
IEEE Std 829-1983 - IEEE Standard for Software Test Documentation
IEEE Std 1008-1987 - IEEE Standard for Software Unit Testing
IEEE Std 1012-1986 - IEEE Standard for Software Verification and Validation Plans
It would be great to see updates and additions to this sort of thing. I'll be keeping an eye on this thread as I'm interested in your progress. Please feel free to contact me off line if you prefer.
I may be wrong, but I think you guys are all losing the plot here.
A Mission Statement is supposed to be a statement: one sentence.
A Company or Organisation Vision should be a paragraph, best expressed as a set of bullet points following the "seven, plus or minus two" rule: between 5 and 9 points, preferably no more than 7.
The best Mission Statement around these days is Google's company motto: Don't be evil.
If you make these things too long or complex, no-one reads them.
I believe a Test Policy should be concise, and I don't have anything to improve on Anne-Marie's original list of bullet points.
To me, the goals of testing are the same as what should be the goals of software development: to deliver a quality product, which satisfies requirements, on time and on budget. A Test Policy in my opinion should be a concise statement affirming these goals.
The Who, How and What of attaining the goals belongs in programme and/or project-specific documents.
I agree! Any company-wide test policy needs to be concise; although I think Anne-Marie's original list is not concise enough. There are so many differences in testing missions and strategy at a project level that it is difficult to define these things in any useful detail for a company as a whole.
This is why I find it counter-productive to create an ISO standard defining what goes into a test policy document. I want a short and simple policy, not something complex enough that I need an international standard to help me create my documents.
Look, international standards are great - for products. They are the reason that I can pick up any AA battery and slot it into any AA battery compartment - they clarify the required form factor, voltage, and tolerances for both.
For _process_, however ... not so much. *HOW* the 1.5 volts is delivered is entirely up to the manufacturer. If we *had* had a standard for batteries in the early 1970s, it would have been zinc-carbon. In that case, alkaline would not have been "standards compliant", we'd never have had the Duracell bunny, and our batteries would run out in about 1/10th the time. Oh, and forget about rechargeables (NiCd and later NiMH), that's just crazy talk.
The next level of abstraction is to define the process by which those products are made. While we do have some process standards (ISO or GMP), let's be very careful not to stifle innovation in software testing.
So to be direct: if, as a CIO, I had some *problem* that I thought a test policy might solve, I suspect I would post the problem on a wiki page and ask my direct reports to solve it.
To quote the one-minute manager - "I don't make decisions for my people."
(PS - Credit where it's due - the seeds of the battery analogy come from Peopleware, 2nd Edition, pg. 187. Yes, I looked it up.)
This is kind of hijacking the thread, but I thought I would make a point here, as many posts in this thread use the words "testing", "QC", and "QA" interchangeably and in differing contexts.
Let us set the context right. Anne, we are talking about "testing", right, not "QA"?
>>>> We, Cornell Information Technologies, have very recently begun to define a QA program.
Tony, I hope you meant to say "testing" program but ended up saying "QA".
Traditionally, these two words have been used interchangeably, causing lots of confusion to all involved by setting wrong expectations about what to expect from testing.
To quote Dr Cem Kaner: "whatever is QA, that is not testing". It is my personal opinion that bringing in QA and QC and linking them to testing (Testing = QA + QC) is not a useful thing to do, since testing is never QA. So we are better off leaving out the "Q*" terms and calling what testers do "testing".
If there exists a separate function within the organization (typical of IT organizations) whose main role is to "police" developers and testers so that "standards" are followed, that function can keep the "QA" label.
For discussions related to "testing" (getting our hands dirty actually doing testing, not auditing whether testing is done "properly"), how about keeping "Q*" phrases out of it?
Since we are discussing the creation/definition of a "software testing" policy (not a QA or QA/QC policy) that helps executives understand the importance, benefits, costs, and risks of doing, or not doing, "enough" testing, we had better be precise about what we are creating the policy on.