I was just reading Mike Talks' recent blog post "Just because you can automate everything...doesn't mean you should", and it made me wonder about the process testers go through when deciding to automate something.

At what point are decisions made to automate something?

Where do automation efforts normally start?

How aware and supportive is your team of casual scripts and automation that help make testing/checking more efficient?

Regression is commonly talked about as a place to start. Are there other common areas?

And what barriers (and solutions) do you normally come across?

How do you deal with automation efforts that already exist but aren't working well?

It would be great to hear 'real stories' from you all :)


Replies to This Discussion

I've started work on this on my blog - part 3 might be of interest ...

http://testsheepnz.blogspot.co.nz/2016/06/automation-3-automation-a...

This is an excellent topic and one experience comes to mind:

I joined a guild of testers who perceived a tonne of gaps in their end-to-end automated checks. Individuals had created checks as features were built by each squad. The guild felt compelled to get a handle on the scenarios that were "missing". We took the following steps:

  • We gathered round a whiteboard and captured any scenario
  • After about 50 or so ideas we stopped
  • We affinity-grouped the scenarios into High/Medium/Low importance, ranked by how much each scenario mattered to the business and its customers
  • I mapped these scenarios against the existing set of end-to-end checks and shared on a wiki
  • We called them "The Top 50 Scenarios"

It was a good starting point and helped the guild and the rest of the development team feel good about their checks. The guild had focus, and within about three weeks the gaps were filled. I think the Top 50 Scenarios still exist.

I do a lot of system or complete-system testing where I work. Today, the things that help me most when deciding what to automate, in order of importance, are:

- Use my experience to determine how deeply to test (maintenance cost, effort required, customer impact). To this end we created a simple formula with weightings to help us prioritize what to automate first; when we run out of time, we stop implementing new tests. The formula weights two things highest: how much time the scenario wastes when tested manually, and how likely a customer is to get *###%@! with us if it fails. (A rough sketch of this kind of scoring appears after this list.)

- Will automating to validate this behavior be doable before we ship? If not, and the code is unlikely to change, we may miss the boat and end up with a great regression test only, in which case automate it later. Automating anything while the interface is still being built is another risk, but automating early has huge benefits for the pipeline, the product design, and performance verification if it can be achieved. So think about how late or early you need to be.

- Lastly, think about all of your automation as a whole. These days a lot of system-level automation gets done because it is sometimes easy to create a few cases and get really great coverage, but at a high triage cost. Would creating some cheaper unit/whitebox tests that run faster and are less fragile be a better spend of our time?
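
To make the weighting idea above concrete, here is a minimal sketch of that kind of prioritization score. It is not the poster's actual formula; the factor names and weights are invented for illustration.

```python
# Rough sketch of a weighted "what to automate first" score.
# Factor names and weights are illustrative, not a real team's formula.

WEIGHTS = {
    "manual_time_wasted": 5,   # minutes spent every time we check this by hand
    "customer_anger": 5,       # 1-5: how upset a customer would be if it broke
    "maintenance_cost": -2,    # 1-5: how fragile/expensive the check is to keep alive
    "effort_to_automate": -1,  # 1-5: effort required to build the check
}

def automation_score(candidate: dict) -> float:
    """Higher score = automate sooner. Stop when the time runs out."""
    return sum(WEIGHTS[factor] * candidate.get(factor, 0) for factor in WEIGHTS)

candidates = [
    {"name": "checkout happy path", "manual_time_wasted": 30, "customer_anger": 5,
     "maintenance_cost": 2, "effort_to_automate": 3},
    {"name": "rarely used admin report", "manual_time_wasted": 5, "customer_anger": 1,
     "maintenance_cost": 4, "effort_to_automate": 4},
]

for c in sorted(candidates, key=automation_score, reverse=True):
    print(f"{c['name']}: {automation_score(c)}")
```

The point is simply that once every candidate has a score, "we ran out of time" becomes a clean cut-off in a ranked list rather than a judgement call made test by test.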

TestsheepNZ, you have lined up a great argument, and at a high level what you are saying is brilliant. When it comes to the detail, the hidden costs creep in, and deciding how your team will work as a whole unit becomes more important: can the developer switch roles with the automation tester? Can the manual tester look after the automation system for a few days? If the answer is "not without lots of training", you have a high-cost and possibly fragile solution. A large number of the detractors of automated testing will call us out on this cost area, so regularly stepping back becomes critical. For me that means my automation must be easy and simple, and most of all not fragile.

Here are a few principles that I use:

- Customer analytics: what features do customers actually use? One system had 150 features, but analyzing the usage logs showed that 93% of all usage was in just 3 features. We started automation on those 3 (a rough log-analysis sketch appears after this list).

- Test areas with many variants: for example, a billing system with many permutations of discounts, free trial period, renewal frequency, subscription level, etc. Parts of the product with many data variations are good candidates for data-driven testing to save time and increase coverage (see the parameterized-test sketch after this list).

- Sign up, sign in, and money transactions. Access to the system is foundational; without access you don't have a product. Money (shopping cart, subscription signup, etc.) inherently carries a lot of risk, and thus benefits from additional test coverage.

- Available APIs. API-driven tests are generally more durable than UI-driven tests. Ongoing test maintenance is too often a killer, so having durable tests is beneficial.
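
On the customer-analytics point above, a trivial sketch of the usage-log analysis might look like this. The log format, file name (usage.log), and feature=<name> tokens are assumptions made up for illustration.

```python
# Hypothetical sketch: count feature usage from a simple access log to find the
# handful of features that account for most of the traffic.
from collections import Counter

def feature_usage(log_lines):
    """Assumes each log line contains a 'feature=<name>' token somewhere."""
    counts = Counter()
    for line in log_lines:
        for token in line.split():
            if token.startswith("feature="):
                counts[token.split("=", 1)[1]] += 1
    return counts

with open("usage.log") as f:
    counts = feature_usage(f)

total = sum(counts.values()) or 1
for feature, n in counts.most_common(5):
    print(f"{feature}: {n} hits ({n / total:.0%} of all usage)")
```

And on the data-variants point, a minimal data-driven sketch using pytest's parametrize. The billing function, plans, and figures are invented stand-ins, not a real system.

```python
# Minimal data-driven test sketch: one test body, many billing permutations.
import pytest

def calculate_renewal_price(plan: str, discount: float, trial: bool) -> float:
    """Stand-in for the real billing logic under test."""
    base = {"basic": 10.0, "pro": 25.0}[plan]
    if trial:
        return 0.0
    return round(base * (1 - discount), 2)

@pytest.mark.parametrize("plan,discount,trial,expected", [
    ("basic", 0.0,  False, 10.0),
    ("basic", 0.25, False, 7.5),
    ("pro",   0.0,  True,  0.0),
    ("pro",   0.1,  False, 22.5),
])
def test_renewal_price(plan, discount, trial, expected):
    assert calculate_renewal_price(plan, discount, trial) == expected
```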
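Adding a new permutation then costs one row of data rather than a new UI walkthrough, which is where the time saving and extra coverage come from.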
