I was just reading Mike Talks' recent blog post "Just because you can automate everything...doesn't mean you should", and it made me wonder about the process testers go through when deciding to automate something.

At what point are decisions made to automate something?

Where do automation efforts normally start?

How aware/supportive is your team of casual scripts/automation to help make testing/checking more efficient?

Regression is a common area talked about as a place to start.  Are there other common areas?  

And what barriers (and solutions) do you normally come across?

How do you deal with automation efforts that already exist but aren't working well?

It would be great to hear 'real stories' from you all :)


Replies to This Discussion

One place I automate things is in the arena of real-time responses. If I need to nail an output response to a resolution of +/- 300 microseconds, I can use a scope to measure the waveform as a one-off, but it starts to get tricky when there are multiple inputs acting as the trigger for the measuring period, or multiple readings I need to make within a set time window. Here I use tools like LabVIEW with FlexRIO FPGA hardware to drive inputs and measure outputs in real time, then feed the results into National Instruments TestStand (which also acts as a test sequencing tool) to verify the results are within expected tolerances.
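Stripped of the hardware and the tooling, the core of each of those checks is simple - something like this (a rough C# sketch purely for illustration; the names, numbers, and units are made up, and the real checks live in TestStand rather than code like this):

    using System;

    public static class ToleranceChecks
    {
        // Pass/fail: does the measured response fall within expected +/- tolerance?
        public static bool WithinTolerance(double measuredMicroseconds,
                                           double expectedMicroseconds,
                                           double toleranceMicroseconds = 300)
        {
            return Math.Abs(measuredMicroseconds - expectedMicroseconds) <= toleranceMicroseconds;
        }
    }

    // e.g. WithinTolerance(1250, 1000) is true; WithinTolerance(1350, 1000) is false.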

I'm generally quite wary of automating things, but if the results are clearly black and white (communication interfaces, for example) I sometimes create a raft of tests to give me some breadth of coverage, whilst still falling back on manual testing in the areas that have changed or that just set my radar twitching.

I will often automate lengthy setup procedures just so I can start testing. I worked on a project where each test needed around 300 signals set to certain values before the test could proceed. Doing it manually just wasn't feasible, so we created automated setup scripts.
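The setup scripts didn't need to be fancy. A minimal sketch of the idea in C#, assuming a hypothetical ISignalController interface over the rig and a CSV of "signalName,value" pairs (both invented for the example):

    using System;
    using System.IO;

    // Hypothetical interface over whatever actually drives the signals (rig, simulator, etc.).
    public interface ISignalController
    {
        void SetSignal(string name, double value);
    }

    public static class TestSetup
    {
        // Reads "signalName,value" pairs from a CSV and applies each one,
        // so every test run starts from the same known state without manual clicking.
        public static void ApplySignals(ISignalController controller, string csvPath)
        {
            foreach (var line in File.ReadLines(csvPath))
            {
                if (string.IsNullOrWhiteSpace(line)) continue;
                var parts = line.Split(',');
                controller.SetSignal(parts[0].Trim(), double.Parse(parts[1].Trim()));
            }
        }
    }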

When we are getting frequent builds and need to test the entire app in each build, the features we test repeatedly should be automated.

Short but nice answer. I agree with you.

At what point are decisions made to automate something?

It depends on the type of application. In my experience:

Web UI - when the UI is stable.

API - when the specs are stable.

Desktop app UI - when the controls, core, and UI are stable enough.

Integration testing - whenever the architecture of the features being integrated is defined.

Unit - as soon as one small feature or part of a feature (even just a module, if meaningful) is implemented.

Where do automation efforts normally start?

Where do they start? My experience:

Usually when testers have too much spare time and are expected to do something extra.

Or a new test manager wants to automate as much of the testing as possible.

Or a project/product manager wants regression testing on something.

Or a project manager wants every new feature to have automated tests.

Where should they start?

I'd first consider the investment involved. Is it worth spending 6 months automating a feature if it will be rewritten in 8-12 months? Could the automation engineer's budget be better spent on manual testing, or even on a developer who actually writes unit tests or integration tests?

How aware/supportive is your team of casual scripts/automation to help make testing/checking more efficient?

Usually I've noticed that you have to be an expert with the code to get things done fast. Otherwise, the devs have no time to teach you, and you have no time to learn and then implement. You have to pick between manually testing the new feature that's due for release in 2-4 days or automating the old one. There's also often no further use for a script once it has been run against the current feature.

Regression is a common area talked about as a place to start.  Are there other common areas?  

Yes, see above. 

And what barriers (and solutions) do you normally come across?

As a tester with limited experience in automation (I haven't spent 2-5 years working in automation), I find it hard to find new jobs.

As a tester, I also find it hard to take on most of the testing across multiple projects/features, plus legacy/client issues and regression issues, and prepare them all for release, while the automation engineer gets to work on one, with no real or impactful expectations or deadlines (just "automate as much as possible") and probably an even higher salary than mine.

Where I worked, we had an automation engineer and a "manual" tester. Over two years the automation engineer worked on 3 projects; the tester worked on ~100 projects/features, plus all the other things a serious tester has to deal with.

Unfortunately I don't have solutions; I just complain and everyone else agrees the situation isn't good. We try to get the automation engineer to do manual work, and he's not great at it (think 1/10 to 3/10 of the tester's speed). And we move on, hoping for better times and fewer new features/projects.

How do you deal with automation efforts that already exist but aren't working well?

I don't care. I didn't ask for the automation, I didn't build it, and I didn't take on the legacy code.

I don't think any of those reasons for starting to add automation are great. Do you try to push back on work for work's sake?

My biggest issue with automation is the maintenance cost. Often we use quiet times to write loads of automated tests, and then later in the project, when we have less free time, the maintenance cost bites us.

Hi Rosie -


When attempting automation, I would first determine whether the automation effort is worth it. As Stephan Papusoi put it, "Is it worth spending 6 months automating a feature if it will be rewritten in 8-12 months?"


Second, determine if the UI and API specifications are stable. I once wasted months writing automation tests for an application that was just too buggy to automate. Instead, I should have just started writing test cases and focused on smoke and regression tests (which would have been more effective at finding bugs).


If the QA team does have bandwidth and the application is stable, then automation can be a valuable tool. When I first start an automation project, I look for repetitive but stable tasks, such as a workflow through a web site (assuming the site will not be going through major updates). For example, I recently created an automation project using C# and Selenium WebDriver.

Here are the highlights of the project:

(a) Uses a page object framework, which allows better scalability (a minimal sketch of the idea is below).
(b) Uses a CSV file reader and writer class to read from a CSV file and then write test results to a CSV file.
(c) Uses a library called Faker.dll to generate unique strings.

The project can be downloaded from the following Dropbox location:

C# Web Driver Automation Project
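To illustrate highlight (a), here is a minimal page object sketch. It is not code from the project above; the LoginPage class, locators, and URL are invented for the example:

    using OpenQA.Selenium;

    // A page object wraps one page's locators and actions behind a small API,
    // so tests read as intent ("log in as X") and locator changes only need fixing in one place.
    public class LoginPage
    {
        private readonly IWebDriver _driver;

        public LoginPage(IWebDriver driver) => _driver = driver;

        public void Open() => _driver.Navigate().GoToUrl("https://example.com/login");

        public void LogIn(string user, string password)
        {
            _driver.FindElement(By.Id("username")).SendKeys(user);
            _driver.FindElement(By.Id("password")).SendKeys(password);
            _driver.FindElement(By.Id("login-button")).Click();
        }
    }

A test then becomes: open the page, call LogIn, and assert against the next page object - the test itself never touches locators.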

Further, if the application makes API calls, then those calls (GET, PUT, POST, DELETE) could be automated using a tool like JMeter.
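JMeter handles this through its own test plans; if you would rather keep such checks in code alongside the Selenium tests, the same kind of check can be written with HttpClient. A rough C# sketch (the URL is just a placeholder):

    using System.Net;
    using System.Net.Http;
    using System.Threading.Tasks;

    public static class ApiChecks
    {
        // Issues a GET and verifies the status code - a black-and-white check that suits automation well.
        public static async Task<bool> GetReturnsOk(string url)
        {
            using var client = new HttpClient();
            HttpResponseMessage response = await client.GetAsync(url);
            return response.StatusCode == HttpStatusCode.OK;
        }
    }

    // e.g. bool ok = await ApiChecks.GetReturnsOk("https://example.com/api/items");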

In sum, automation can be a valuable tool when used appropriately. However, if the application or APIs are not stable or in a state of constant flux, then automation would not be an appropriate use of QA time and resources.

Thanks for this Rosie!

I'm actually working on this right now - the response to that article has been so big I've gone, "well, people are interested". It's easy to say what automation is bad for, but obviously what people would like to know is how best to use it.

Michael Bolton and James Bach have done some good work on this. But I really feel Lisa Crispin and Janet Gregory did a decent job of this in their first Agile Testing book. That said, I'd like to explore the concepts a little more - I think that, much like Asimov's three laws, there are three laws of automation:

1. Ensure the check you want to perform can be reduced to a simple pass/fail criterion (there's a small example of this after the list)

2. Ensure the script is maintainable and reliable

3. Ensure the approach for your check is as simple as possible
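For the first law, "reduced to a simple pass/fail criterion" in practice usually means the whole check collapses into a single assertion. A tiny NUnit-style sketch (the PriceCalculator class is invented for the example):

    using NUnit.Framework;

    [TestFixture]
    public class PriceCalculatorChecks
    {
        // One boolean question: does a 10% discount on 200.00 come out at 180.00?
        [Test]
        public void TenPercentDiscountIsApplied()
        {
            var calculator = new PriceCalculator();            // hypothetical class under test
            decimal result = calculator.ApplyDiscount(200.00m, 0.10m);
            Assert.That(result, Is.EqualTo(180.00m));
        }
    }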

Actually, this could be a good place to use as a sounding board for the next few blog posts.

This is how I'm planning to cover the subject:

  • The iron rules of automation (pretty much mentioned above)
  • Automation as a service - what do different roles want out of it?
  • Zoology of automation
  • Unit testing example
  • Checking vs testing
  • API testing example
  • GUI testing example
  • Let's define some checks
  • Maintainability
  • Handling automation for data
  • Help - I'm not technical, what's my role?
  • Automated checking / manual testing symbiosis

Hi all!

There are many valid answers here :)

My five cents about what to start testing, applying to both manual and automated testing. The regression tests should try to cover areas where:

  • your company could lose money
  • your software could have a security issue
  • your company's brand may be damaged

Apart from the regression tests, every severe bug found in the production environment should have an associated test, to stop it from reappearing.

It would be nice to read your conclusions :)

In addition to all the great answers already posted, I wanted to add my thoughts on the types of automated tests.

http://googletesting.blogspot.co.uk/2010/12/test-sizes.html explains how Google talks about their types of tests. You have probably heard other names used too.

I think testers should know what levels are in use at their company and know when to recommend adding those types of tests. For example, unit (or small) tests are easy for developers to write, good at catching mistakes in code, and very fast to run.

UI tests are slow to run, hard to write and maintain, and may not be good at catching the sort of issues you're experiencing. UI tests, and indeed all tests, should only be added if they're going to add value. So think about the risks of your product and the sorts of issues you're currently seeing, then decide on the most appropriate tests.

What I find confusing in your paragraphs, though, is who you're addressing: "testers should know", "think about the risks of your product... and then decide on the most appropriate tests".

For automation checks, there can be scenarios like these (just examples from previous/similar experiences):

- A CTO asks for them; the test manager says he will give it a try, and they invest in hiring 10 automation engineers to experiment (no manual tester available).

- There's no test manager, but a project manager wants automated "tests" for his project. So they hire 5 automation engineers - no testers - but call them automation "testers" who have to write test plans, test strategies, and test cases, and implement automated "tests" from those.

- A country manager decides it is too expensive to keep 10 testers and 5 automation engineers, so he fires the automation engineers and "forces" the testers to maintain and write automated "tests".

- A dev lead or team lead wants to raise the level of unit testing. So they "force" the existing devs to write more unit tests for existing code and to take the time to write them for new code as well. Besides this, they "encourage" the existing testers to learn the programming language and help the developers with the unit tests.

- A project manager starts a new project with no delivery date available. He asks the test manager for his opinion on the testing resources needed, because they need to plan the monthly budget. They settle on 2 automation engineers and 1 manual tester. As an existing resource you get assigned to this project - do you argue, do you consider the risks, or do you just... start the work you were hired for?

The weirdest scenarios can happen. You can't put the automation decision on the shoulders of a tester hired to do a job. You can't say he didn't analyse the level of automation needed for the project when he wasn't even asked for his opinion - or when his unsolicited opinion was delivered but not considered.

This is exactly what is happening in my project :(
