There’s been a lot of talk in the testing community about ‘Best Practice’. From my perspective, the conversation usually runs along these lines: best practice is typically something promoted by tool vendors or consulting companies, and since in practice there is no one-size-fits-all best practice, anyone touting it should be treated with wariness.
I can understand the argument. Vendors selling tools that work a certain way will, of course, (need to) say their tool works that way because it’s the best way of doing it. The steps, processes, data, reporting and whatever else the tool does are designed to represent best practice. It’s hard for vendors not to say this; what’s the alternative? Admitting they have a tool that’s pretty nifty but isn’t based on insightful research or unique experience, one that will do the job, just perhaps not in a way that gives you an edge? They’re not likely to say that, are they?
Likewise, consulting companies need a unique message. They generally want to say they have insight and experience that no one else has, and so their way of doing things is the best of current practice. It gives them the air of success, of being the go-to crew for solving your testing problems.
But there’s a problem, especially in our profession: the problems we try to solve with these tools or consulting approaches are not static, while the ‘best practice’ they publish, promote and enforce is. Static, codified best practice can’t take into account the unique context of the testing problems we face project to project, day to day.
The only practical way to address the change and unexpected circumstances we encounter on projects is to adapt the best practice: a context-sensitive application of the practices used. In doing so, we acknowledge that there isn’t really a single definition of best practice. There is only what we’ve decided to call at Test Hats (the consultancy I own and work in) Best in-Context Practices.
Best in-Context Practices
Let’s briefly explore what best in-context practices look like in the wild. Every time we engage in a project, we discuss with the Client their specific needs for that project. Sure, they come to us for testing services and consultancy, but the question is: what do they need those services for? Typical things we need to consider include: what is it that needs testing? What constraints or risks is the project under? How confident are they about the level of quality of the development?
Do these seem obvious? They are. Now consider the static, enforced nature of best practices and how they would address the above considerations. In short, they can’t; it’s mostly ‘project management’ detail, yet it directly impacts how we approach testing. So best practice is, at best, reduced to a framework around which the real best in-context practice is developed. The tools become just what they are: tools that form one element of the best in-context practices shaped and agreed upon to meet the unique needs of the project.
Best practice as it is often thought of doesn’t exist; context is everything, and rightly so. The professional testing consultancy (and the competent individual professional) combines proven practices, techniques and approaches, blended with experience and utilising vendor tools, all in the context of the unique needs of the Client, to produce an approach that is, by definition, “best in-context”.