Testing vs Checking. Do you experience problems when communicating about testing?

If you haven't read Michael Bolton's series on Testing vs. Checking, take a look - the latest post is linked here, and the rest of the series is linked from it.

I'm not agreeing with the motivation for the distinction yet(!), and to help me (and hopefully others) evaluate its value I'd like to start a poll - I want to see the distinction in action across a range of examples. So:

Do you experience problems communicating about your testing activities in your daily work? (This can be with other testers or with non-testers.)

If so, can you give an example?

(I'm thinking of situations where you communicate one thing about testing and the other person doesn't seem to understand it, or responds in a way you wouldn't expect given the information. Example: see Elisabeth's post, here.)

Note: This isn't a scientific poll - I haven't researched how I should formulate the question to avoid bias (no bias intended - I'm curious and want more information to test the hypothesis).

You may also think I'm asking the question in the wrong way. But if you can understand the point I'm after (maybe with the help of the example), then I've succeeded in getting the intention of the question across.

Any takers?

Replies to This Discussion

I don't generally experience too many problems communicating about testing with non-testers.

Other than a few notable incidents when I have opened my mouth and barraged my poor victim with a high-bandwidth stream of principles and hard questions that resulted in them assuming the fetal position and mumbling “argh, wibble, meep” (OK, so I accept this was a dumb move in an interview - I get it, really I do), I normally try to adopt the language of my audience. For example:

Project managers: engage on balancing the dimensions of time, cost and quality, and how the last of the three can only be measured subjectively. How can that feedback be made more responsive to the needs of the project? What kind of investment in time and effort is justified when it comes to providing information about quality?

Business stakeholders and users: engage on product risk. “Well, we could release now, but that'd mean skipping xyz”, “Sure, we could ‘test’ 100% of the requirements, but at the cost of NOT testing xyz”, “OK, so if that bit doesn't work, we're screwed - right?”.

Developers: engage with isolation, replication and repeatable steps, seasoned liberally with dumb questions - “er, is it meant to do this?”, “what if I…”, “so what exactly does that do?”.

Testers, I find much harder. I think I want to think this part of the question through a little more...

...back to the question as it relates to communicating with other testers.

On a superficial level, communication with testers is easy. You know - talk standards, process, methodology, technique, tools, and favourite bugs.

Lately, I've been trying to dig deeper, and find ways to communicate with other testers about how they create new test ideas, their observation strategies for noticing odd behaviour, how they feel and react when they find something interesting.

In essence I've been looking for clues that I can access linguistically which will give me some indication as to whether a given individual will be a good tester.

Only I'm not getting very far. I don't have the language for it yet. Is it even possible to identify - through conversation alone, through static review if you like - whether someone possesses the vital spark that will make them an excellent tester? Or must we rely on dynamic execution to test that?

Bemused,

Iain

Hi Simon,

I don't really understand what the question has got to do with the article on Testing vs. Checking, but I can answer it regardless.

The short answer is: constantly! I don't know if this makes me a bad communicator. Sometimes I work with people and I have no idea what they are talking about - it's as if they're talking a different language. It also sometimes works the other way: I talk, and I can see that the person I'm talking to has no idea what I'm trying to convey.

Personally, my theory is that part of this is down to which side of the brain you use. When a person with a dominant left brain communicates with a dominant right-brained person, there is conflict in the way they approach a topic. The right-brained person wants the big picture (perhaps the context?), while the left-brained person requires logical sequence and steps.

Hi Anne-Marie,

I didn't want to give too many reasons away up-front, as I didn't want to bias the answers around my reasoning (or the way people interpreted my reasoning...). However, I can understand that the question may appear obscure, so:

I have been thinking about the motivation behind the testing/checking distinction (as well as my own observations and comments). What's the root problem - why make the distinction, and is there a need for one? I'm sort of using the 5 Whys as a start to get to the problem (people can criticise the 5-Whys approach, but that's another story :) ).

Once I understand the problem (the need for the distinction), I can determine (in my own mind) whether or not the distinction addresses that problem. I'm trying to do this open-mindedly...

I'm working on the hypothesis that the need for this distinction originates in communication problems to and from testers about testing. It might be a weakness in the communication style, in how it's articulated, in whether it's tailored to the receiver, or something else.

This is my hypothesis (so far - the discussion is ongoing), and I hope this will add to it. Whether you're for it, against it, or just couldn't care less, I think the way in which testers communicate (or don't) is a key factor in how they experience their work, how they develop in their daily work, and also in how their colleagues experience them as testers.

I want to "test" this hypothesis by hearing about people's experiences of communication problems - and essentially whether there would be a need for a testing/checking distinction. Those are the cases I'd like to dig into further...

My approach to this analysis is to use divergent thinking and emergent learning - maybe that's more right-sided than left-sided brain activity, I don't know - but I do like context and logical steps!

The bottom line is, I'm a tester testing my hypothesis and also Michael's definition.

Someone might argue that my hypothesis is wrong or that I've totally missed the boat in some other way - that's allowed.

As long as I release test results every time a commit has been made, I rarely have problems - though there are times my error reports are considered flawed, and developers will just say that the issues I raised are OK and mark them as "wontfix" types. But most of the time they agree without a second thought.

Anyway, IMHO, I hope testing activities, like any activity, would be judged based on their output. So if I'm a tester who tests manually, I'm not sure it's right for me to be compared to one who utilizes sophisticated tools. It's like comparing the activities of a person using pen and paper with those of a person using a computer with internet access. Processes and systems change when new tools are introduced. An Ishikawa diagram can attest to the effect of varying just one of the 4Ms (man, machine, method, material).

Unless you're at a controlled-environment company that provides the same environment for all testers, I believe your output cannot be determined from your testing activity or lack of it. Hmmmm. In short, there shouldn't be a reason for you to communicate or explain your daily testing activity. Right? In a PRINCE2 environment, perhaps. But I'm not in one. Yay!

Note: I'd like to get trained (on the job) in one, though.

"Unless you're at a controlled-environment company that provides the same environment for all testers, I believe your output cannot be determined from your testing activity or lack of it. Hmmmm. In short, there shouldn't be a reason for you to communicate or explain your daily testing activity."

I don't understand this at all. Are you suggesting that there's no link between activity and results? Or that you shouldn't be accountable for your work?

Hopelessly confused,

---Michael B.

Nope, I meant you can't be called a less efficient tester because you have a different activity from the rest.
And yes and no on the link between activity and result: your activity, for example, may vary from the way others do it, but it still ends up in a result.

It's like cooking: some would cook rice in a rice cooker while some use a pot - different activities but the same result. The rice-cooker cook won't be called less efficient just because he doesn't add extra effort in monitoring the rice while cooking.

Accountability would suggest you be responsible enough to get the system tested and determine its flaws - whether it's ready for client use or not. Whether you do it manually or with tools depends on your environment.

I believe Google's offices show this a little more clearly: you can skate around while working, as long as you get your work done.

Hi,
If anyone is arriving at this page because of an article on "Testing vs Checking" published in Automated Software Testing Magazine (Vol. 3, Issue 2, July 2011), then I should state:

This question, which I started in 2009, had nothing to do with me wanting to have my "own mind changed" - it was an (open-minded) investigation into communication problems, as a potential reason for the "test vs check" discussion, and into whether the distinction was a good/adequate way to address those communication problems.

For more background (context and perspective) on this line of investigation, read the blog posts I made at that time:

http://testers-headache.blogspot.com/2009/08/to-test-or-not-to-test...

http://testers-headache.blogspot.com/2009/09/sapient-checking.html

http://testers-headache.blogspot.com/2009/09/more-notes-on-testing-...

In these you'll see my hypothesis that testing is integral to checking.
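
As a rough sketch of the distinction in code terms (the Login class below is invented purely for illustration - it isn't from Michael's posts or mine):

```python
# A "check" in the testing-vs-checking sense: a machine-decidable
# confirmation of one existing belief. Login is a made-up stand-in.

class Login:
    def attempt(self, user: str, password: str) -> str:
        return "accepted" if password == "s3cret" else "rejected"

def check_bad_password_is_rejected() -> None:
    # The check itself: one assertion, one expected outcome, pass or fail.
    assert Login().attempt("alice", "wrong-pass") == "rejected"

if __name__ == "__main__":
    check_bad_password_is_rejected()
    print("check passed")
```

The testing is everything around that assertion: deciding it was worth writing, noticing whether the rejection message leaks which accounts exist, wondering what the thousandth failed attempt should do - which is part of what I mean when I say testing is integral to checking.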
