I'm not an expert on automation, but I've been around long enough to see the discussions and debates.
This is currently expressed in Chris McMahon's latest blog: Reviewing "Context Driven Approach to Automation in Testing".
My question is, should we, as a community, be talking about this? Is there any way we can make progress on this idea/theme of test automation without stressful arguments? How can we communicate our experiences and ideas in a way that helps move our software testing craft forward?
(Note/edit: Alister Scott has also added to the conversation).
I think we as a community need to talk about this subject, but at a non-emotional, non-opinionated level (good luck with that, though, as the testing community is very emotional and opinionated, and I plead guilty to a degree). The whole argument over the semantics of terminology is a key factor that needs to be removed. I don't care about the nit-picking of "tests" versus "checks"; it digresses from what we should be discussing, which is how best to approach a problem from a technical aspect. It digresses from how best to resolve the continuing problem of myths and misconceptions we have to correct with unknowing/inexperienced people in relation to this work. We need to focus on how best to do the work, provide a solid, reusable, and maintainable solution, and get proper buy-in from other groups in order to have some chance of success.
Just for argument's sake, Webster's dictionary defines "Test" as: a critical examination, observation, or evaluation; the procedure of submitting a statement to such conditions or operations as will lead to its proof or disproof or to its acceptance or rejection <a test of a statistical hypothesis>; a basis for evaluation.
Synonyms for "Test" are words like "Inspection" which have definitions of "a checking or testing of an individual against established standards", or "Checkup" with a definition of "an instance of looking at the parts of a machine to make sure it is working properly".
The word "Check" itself is defined by Webster's as: a standard for testing and evaluation, examination, or inspection / investigation, the act of testing or verifying.
So, according to some of the people arguing this, the whole point of an automated "Check" is that it is "verifying" something. But if it is doing that, it is a form of a "Test". In other words, a "rose by any other name". And in my opinion (based on my background in zoology and taxonomy), I don't care if it is a Lithobates pipiens or a Lithobates blairi; they are both frogs. Why split hairs? Most people outside our realm will be confused by the argument over differentiating the terms, and they won't care. So don't muddy the waters.
And off to the races now.
Jim, I agree that we should be discussing "how best to approach a problem from a technical aspect". I would add, though, that this is only part of what we need to discuss.
I want to know how, in a given situation with all of its relevant context, we choose the optimal mix of manual and automated testing to provide the value our customers require, within the cost/time they can afford.
What questions should we ask? What expertise do we need to have available to ask intelligent questions and understand the opportunities?
The technical opportunities and risks in that particular software/platform must certainly be part of the overall strategic discussion. So must the customers' quality requirements (primarily business risk) and budget. Plus (off the top of my head)...
With current technology, there are some things manual testing is better for, and others where automation is best. Let's explore and test those boundaries.
I almost forgot to say what I tweeted this morning. Let's (please!) not start quibbling about the terms "manual" and "automated" testing. Or I will never regain the will to live!
The short version: Yes.
The longer answer: I think no matter who we are or where we're coming from, there are going to be differences of opinion. I'd hope as a community we can be mature enough to discuss those differences without the discussion becoming a flame war - but knowing how invested a lot of people are in their views, that's always going to be a challenge.
The original article would probably make a decent argument to use as evidence to a Speaker to Management type: for a great many workplaces, "tester" does mean "any warm body capable of following a script" - and there are many people who consider themselves experienced testers and have never worked in anything other than an "any warm body" environment. I recently had an unfortunate experience with someone from that kind of environment who was unable to adapt to the rather more dynamic environment I work in. I'm certain this person would be extremely valuable in a highly structured situation where every scenario was covered by detailed test scripts, but one where it's necessary to create the test oracles yourself through exploration was not a good fit.
The message I have been giving to my seniors at my workplace has been that test automation at all levels of the software development stack will help to prevent regression issues and help to keep new code clean and well-designed. I've also been evangelizing the living daylights out of automated deployment processes and other forms of tool use to help eliminate pain points in our internal processes.
Even though my manager agrees, there is still limited time, limited budget, and only one of me. Things happen when I get time to make them happen. I'm certain that's a common scenario - in a small team it's a higher priority to get things done than to fix sub-optimal processes, especially when there's nothing budgeted for improvements.
So yes. Let's have the discussion (again) and try to get the word out past our community - because really, outside the relatively small group of interested test professionals, there doesn't seem to be much awareness of anything beyond the old paradigms.
Chris seems to have an axe to grind, esp. related to James B. There also seems to be a lot of resentment towards RST and Michael B.
Chris has valid points in this blog post, especially regarding Case 3 in the article. However, he dismisses the rest with one word/acronym: FUD. The rest of the article is very good, the usual Bach/Bolton writing/thinking. Given that he ignores the rest of the article and adds a vendetta towards Bach/Bolton/RST, I am not sure why anyone would take the blog post seriously.
I'm also not sure why no one objects to his vilifying of Michael, who seems blameless.
I also find that the CDT community has embraced a lot of agile and automation. Sometimes I feel a bit too much. However, I don't see the agile and automation followers showing the slightest interest in CDT, other than lip service. I don't think there is any conspiracy. They just don't get it.
More on automation later....
Automation has been a hot topic for a long time. Its success depends on the maturity of the testing process in the team/company. Our testing team is going through a transition, and we are looking for ideas on how to improve our automation process. At the moment we use automation only for quick regression tests. They are quite simple and straightforward, and they do not bring as much benefit as we would like. I am keen to expand my skills in automation, and that's my plan for the near future.
I find the language of “test” and “check” useful to highlight the limitations of automation to senior project stakeholders. Effective automation should arrive on the tail end of human based “blood, toil, tears, and sweat” software testing. If applied proportionately, automation subcontracts the daily drudge of repeatable testing with confidence (because robots work efficiently, don’t get bored, never lose concentration, and playback each check faithfully). Once senior management understand that robots are programmed by humans and scripts merely perform dumb, binary right / wrong checks without any understanding of the underlying product under test, management start to value human based software testing more and, importantly, understand the trap of automation overreliance.
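The "dumb, binary right / wrong checks" point can be made concrete with a minimal sketch (plain Python; the function and values are hypothetical, not from any real project): an automated check verifies exactly the one expectation it was programmed with, and nothing else.

```python
# A minimal automated "check" (hypothetical example): it replays one
# programmed comparison and reports a binary pass/fail.

def total_price(unit_price, quantity):
    # Hypothetical function under test.
    return unit_price * quantity

def check_total_price():
    # The robot faithfully replays this single comparison. It has no
    # understanding of the underlying product: a garbled label, a
    # confusing layout, or a missing currency symbol would all pass
    # unnoticed, because none of them are part of the check.
    expected = 30
    actual = total_price(unit_price=10, quantity=3)
    return actual == expected  # binary: True (pass) or False (fail)

print(check_total_price())  # True
```

A human tester looking at the same screen would notice far more than this one comparison; the check, by design, cannot.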
I was acting as a QA Lead supervising five offshore manual testers. It took the two weeks before each Production Release to perform regression testing across a few different iPads, a few different iPhones, and a few Android devices, comparing the results with the web app.
If only I had Jenkins and Appium back then! We could have offloaded the same-ole-same-ole regression test suite, and focused on the fun new features!
Automation, in regards to browser testing, enhances my goal of being an end-user advocate.
What I think we need to do is talk about how we can best remove the misconceptions surrounding test automation.
Most people who actually do test automation will tell you that they don't see test automation replacing testers. I work as a testing consultant, and if I had a nickel for every client who tells me that they want to automate testing in order to reduce the number of testers they need I'd have a heck of a lot of nickels :) I hear that argument *all the time*.
I think a lot of the ire that's raised when people talk about *test automation* comes from exactly that misconception. As professional testers we know that what we do can't be done by automation - and more to the point, those of us in the community who are test automators know that test automation can't do it all. But that misconception is out there, it's pretty pervasive, and it's natural that it would create a backlash among people who have been fighting for years to gain respect for what testers do.
So if we are all agreed, broadly, that a strategy of "all automation all the time" isn't desirable - including those who focus on it - then why does the misconception exist? It's not a strawman invented by the testing community (or even the sizable part of it that reacts negatively to calling automated testing "testing") - it is a very real concern that we can't counter by infighting about the definition among ourselves.
We should, IMHO. Agile software development methods aim to automate everything, but the promises of automation often fail to materialize. Most of the points Dorothy Graham and Mark Fewster made in their 1998 book "Software Test Automation" are still true in 2016.
There is also the problem that most people think web application test automation means Selenium, but Selenium has many issues that make it painful for testers and developers to automate their tests with it. We have to think about the purposes of test automation and the lessons we have learned in the last decades. "Every fool with a tool is just a fool" :-)
I couldn't agree more that Selenium is painful to work with!
That's why I developed my own automated testing tool.
My tool only tests the server-side functions of your web applications, but it does that in a much more streamlined way than Selenium.
Give it a go; you'll be running your own custom test suite in no time and achieving high test coverage fairly quickly.
Your feedback is very much appreciated :-)
There are some good replies to this, and I would like to add a different slant.
We ought to be talking about Automation as a means to an end, and not the end itself.
We are guilty of looking at Automation as a panacea for all our problems. Having a set of automated tests to validate that code has not been broken by the latest build is of course going to save time, BUT an automated test is only as good as the person who wrote that test condition in the first place.
There is so much focus on technical skills and automation (look at job specs: must have Selenium, C#, Java, etc., with no mention of anything else); however, if someone can write code but not test conditions, then they are effectively acting as a developer and not a tester.
Automated testing is a noble goal, but should be seen as part of a tester's skillset (not their ONLY required skill), alongside the ability to write good test conditions, and provide the insight that testers can bring to a team. Sadly this doesn't seem to be the case, and we need to strike the right balance and develop good all round testers.