Automated Scripts Are Evil Teddy Bears

When you use scripted test automation to verify an application, the requirement itself is not being verified. Instead, the tool is verifying a specific parameter at a specific interface point: a database table, a server application interface for BDD tests, or screen objects for GUI-based automation tools. I have even written a paper proposing an intermediate abstraction layer between tests and requirements (June 2011 edition of Testing Experience magazine). That paper was meant to highlight how much the implementation of a test shapes what it actually verifies, though in practical terms such a layer may not be possible to build in the field. When I think this through, it amazes me that scripted automation is used to mark functional requirements as “passed” through automated links, with no human tester validation.
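
To make that concrete, here is a minimal sketch (all names and values are hypothetical, not drawn from any particular tool or project) of what a “passing” scripted check actually establishes:

```python
# Hypothetical sketch: what a "passing" automated script actually proves.
# The function below stands in for the application code behind one
# interface point.

def order_total_cents(items):
    """Stand-in for the code under 'test' at one interface point."""
    return sum(price_cents for _, price_cents in items)

def check_order_total():
    # Requirement: "The customer is charged the correct amount."
    # What this script verifies: one function returns one expected
    # value for one fixed input at one interface point. Tax rules,
    # discounts, rounding, and the on-screen display are all beyond
    # its reach -- yet a green run gets reported against the requirement.
    assert order_total_cents([("widget", 1999), ("gadget", 2201)]) == 4200

if __name__ == "__main__":
    check_order_total()
    print("all tests passed")  # ...and everyone goes home
```

A human tester would still have to ask whether that single assertion says anything at all about the requirement as written.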

That leads me to the conclusion that the purpose of most scripted test automation in software development today is to allay the fears of testers, developers, and managers that something was missed. Reporting that a suite of scripts was run and that “all tests passed” lets everyone go home and get a good night's sleep (until, of course, the application is deployed and a major flaw that testing missed is discovered).

Perhaps a better response to that pronouncement is “So?”. You have probably heard my view in previous discussion comments that automated scripts should be treated as “trip wires” rather than tests. For that reason, I like the distinction Michael Bolton makes between tests and checks: what automated scripts really perform are checks.

Scripted automation should instead be viewed for what it really is: a trickster. It is the dog that starts digging holes in your flower bed as soon as you turn your back. It is the advertisement that lures you into the store, only for you to find that “limited quantities” really means the two items on display. It is the paper-thin veneer of paint on your house that may be covering rotted wood underneath.

I have learned to know, and to clearly publish, the limits of every script in my arsenal. Before starting a script development effort, the purpose, implementation, and validation of the automation run should be clarified. If it turns out that there is no way to validate whether an automated script is producing valid results, it should be rejected. If the time a script takes to run during a time-boxed smoke test would be better spent on other scripts that will uncover more critical defects, it should be archived. This leaves automation script developers perpetually out of time, sitting on a long backlog, implementing modules only to have them rejected at the last minute. It is not a profession for the easily discouraged.
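
One lightweight way to publish those limits (my own convention, sketched with hypothetical field names, not a standard format) is to keep a manifest right next to the script:

```python
# Hypothetical manifest published alongside an automated check, so no
# one mistakes a green run for verification of the full requirement.
CHECK_LIMITS = {
    "name": "checkout_total_check",
    "purpose": "trip wire on the order-total calculation",
    "interface_point": "order_total field, checkout service response",
    "verifies": "one expected total for one fixed cart",
    "does_not_verify": ["tax rules", "discounts", "display layout"],
    "last_human_validation": "2011-06-01",   # hypothetical date
    "smoke_test_runtime_seconds": 40,        # weighed against the time box
}
```

A manifest like this makes the reject-or-archive decision explicit: if the "verifies" entry cannot be validated, the script goes; if the runtime crowds out more valuable checks in the time box, it gets archived.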

Automated scripts are necessary for testing. But if you start hearing phrases like “enjoy scripting”, “love the ease of running”, or “comfortable with the results” associated with them, it is time to take a step back, put on your best jaded-distrustful-suspicious-piercing-judgmental look, and ask when the scripter last validated the results.

Oh ... and have a good night's sleep.
