I was just reading Mike Talks' recent blog post "Just because you can automate everything... doesn't mean you should", and it made me wonder about the process testers go through when deciding to automate something.

At what point are decisions made to automate something?

Where do automation efforts normally start?

How aware/supportive is your team of casual scripts/automation to help make testing/checking more efficient?

Regression is a common area talked about as a place to start.  Are there other common areas?  

And what barriers (and solutions) do you normally come across?

How do you deal with automation efforts that already exist but aren't working well?

It would be great to hear 'real stories' from you all :)

Replies to This Discussion

I've started work on this on my blog - part 3 might be of interest ...

http://testsheepnz.blogspot.co.nz/2016/06/automation-3-automation-a...

This is an excellent topic and one experience comes to mind:

I joined a guild of testers who perceived a tonne of gaps in their end-to-end automated checks. Individual testers had created checks as features got built by each squad, and the guild felt compelled to get a handle on the scenarios that were "missing". We took the following steps:

  • We gathered round a whiteboard and captured any scenario we could think of
  • After about 50 or so ideas we stopped
  • We affinity-grouped the scenarios into High/Medium/Low importance, based on how much each scenario mattered to the business and its customers
  • I mapped these scenarios against the existing set of end-to-end checks and shared on a wiki
  • We called them "The Top 50 Scenarios"

It was a good starting point and helped the guild and the rest of the development team feel confident about their checks. The guild had focus, and within about three weeks the gaps were filled. I think the Top 50 Scenarios still exist.

I do a lot of system and complete-system testing where I work. Today, the things that help me most when deciding what to automate, in order of importance, are:

- Use my experience to determine how deeply to test (maintenance cost, effort required, customer impact). To this end we created a simple formula with weightings to help us prioritize what to automate first; when we run out of time we stop implementing new tests. The formula weights two things highest: how much time it wastes to test manually, and how likely a customer is to get *###%@! with us if it fails. (A sketch of this kind of scoring follows this list.)

- Will automating a check for this behavior be doable before we ship? If not, and the code is unlikely to change, we may miss the boat and end up with a great regression test only, in which case automate it later. Automating against an interface that is still being built is another risk, but automating early has huge benefits for the pipeline, product design, and performance verification if you can pull it off. So think about how late or early you need to be.

- Lastly, think about all of your automation as a whole. These days a lot of system-level automation gets done because it is sometimes easy to create a few cases and get really broad coverage, but at a high triage cost. Would creating some cheaper unit/whitebox tests that run faster and are less fragile be a better spend of our time?
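
A minimal sketch of what a weighted prioritization formula like the one described above might look like; the factor names, scales, and weights here are illustrative assumptions, not the actual formula:

```python
# Illustrative weighted scoring for "what do we automate first?"
# The factors and weights are assumptions for this sketch, not the poster's real formula.

# Manual-effort saved and customer impact are weighted highest, as described above.
WEIGHTS = {
    "manual_minutes_per_run": 3.0,   # time wasted testing this by hand
    "customer_impact": 3.0,          # how upset a customer would be if it broke (1-5)
    "change_frequency": 1.5,         # how often the area changes (1-5)
    "automation_effort": -2.0,       # cost to build/maintain the check (1-5, subtracted)
}

def automation_score(candidate: dict) -> float:
    """Higher score = automate sooner."""
    return sum(WEIGHTS[factor] * candidate.get(factor, 0) for factor in WEIGHTS)

candidates = [
    {"name": "login", "manual_minutes_per_run": 5, "customer_impact": 5,
     "change_frequency": 2, "automation_effort": 1},
    {"name": "report export", "manual_minutes_per_run": 20, "customer_impact": 2,
     "change_frequency": 4, "automation_effort": 4},
]

# Work down the ranked list until time runs out, as the reply describes.
for c in sorted(candidates, key=automation_score, reverse=True):
    print(f"{c['name']}: {automation_score(c):.1f}")
```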

TestsheepNZ, you have lined up a great argument, and at a high level what you are saying is brilliant. When it comes to the detail, the hidden costs creep in, and deciding how your team will work as a whole unit becomes more important: can the developer switch roles with the automation tester? Can the manual tester look after the automation system for a few days? If the answer is "not without lots of training", you have a high-cost and possibly fragile solution. A large number of the detractors of automated testing will call us out on this cost area, so regularly stepping back becomes critical. For me that means my automation must be easy, simple, and most of all not fragile.

Here are a few principles that I use:

- Customer analytics: what features do customers actually use? One system had 150 features, but analyzing the usage logs showed that 93% of all usage was in just 3 features. We started automation on those 3 (see the sketch after this list).

- Test areas with many variants. For example, a billing system with many permutations of discounts, free trial periods, renewal frequencies, subscription levels, etc. Parts of the product with many data variations are good candidates for data-driven testing, to save time and increase coverage.

- Sign up, sign in, and money transactions. Access to the system is foundational: without access you don't have a product. Money (shopping cart, subscription signup, etc.) inherently carries a lot of risk, and thus benefits from additional test coverage.

- Available APIs. API-driven tests are generally more durable than UI-driven tests. Ongoing test maintenance is too often a killer, so having durable tests is beneficial.
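
As a rough illustration of the customer-analytics point above, here is a minimal sketch that ranks features by their share of usage in a log file; the log format and file name are assumptions made up for the example:

```python
from collections import Counter

# Assumed log format: one "timestamp,user_id,feature_name" line per feature use.
def feature_usage(log_path: str) -> list[tuple[str, float]]:
    """Return (feature, share-of-total-usage) pairs, most used first."""
    counts = Counter()
    with open(log_path) as log:
        for line in log:
            parts = line.strip().split(",")
            if len(parts) == 3:
                counts[parts[2]] += 1
    total = sum(counts.values()) or 1
    return [(feature, count / total) for feature, count in counts.most_common()]

# Start automation on the handful of features that cover most real usage.
for feature, share in feature_usage("usage.log")[:3]:
    print(f"{feature}: {share:.0%} of all usage")
```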

Hello.

I've just been reading through this thread and there are some excellent points. John Ruberto puts the main points I follow in a very succinct way.

Especially the points around automating APIs and looking for variants that you can cover with data-driven tests. I'm a big fan of data-driven tests: once you have a working test for a simple case, you can easily refactor it to run for every row in a spreadsheet. This way you can cover an awful lot of test cases very quickly.
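
A minimal sketch of that spreadsheet-driven approach, assuming pytest and a CSV file; the file name, columns, and the stand-in function under test are all illustrative:

```python
import csv
import pytest

def load_rows(path="billing_cases.csv"):
    # Assumed columns: subscription_level, discount, expected_total.
    # The file must exist at collection time for this sketch to run.
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def calculate_total(subscription_level, discount):
    # Stand-in for the real system under test.
    prices = {"basic": 10.0, "pro": 25.0}
    return prices[subscription_level] * (1 - float(discount))

# One test case per spreadsheet row.
@pytest.mark.parametrize("row", load_rows())
def test_billing_total(row):
    total = calculate_total(row["subscription_level"], row["discount"])
    assert total == pytest.approx(float(row["expected_total"]))
```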

This could almost be the same reply as the 'Do we need to talk about automation' thread!

When I started at my current company, it became apparent quite early on that 'testing' activities were not adding much value, because the objective for testing was all wrong. The 'powers that be' were only concerned with viewing the little green ticks in Quality Center and not at all concerned with (or aware of) the lack of reliability that solution provided. A single functional check against a requirement does absolutely nothing to prove that an application under development is a) as free of defects as possible, b) fit for purpose, or c) of a high quality standard. Other practices need to be carried out in order to build that bigger picture.

The first thing we knew we should move to automate was those single functional checks against the requirements; they were pretty much all designated as user interface performance tests. Because our application under development manages a system from conception to completion, we also knew that running our AUD automatically, end to end, would be beneficial for seeing how it handles constant, real-world use. It's a long and tiring process performing this manually, so having another tester that is never going to tire, never going to complain, never going to make a mistake (unless it's programmed wrong by a human), and never going to phone in sick was of great value.

Most automation tools have both a record-and-playback feature and the means to use your own code. Starting our learning of the automation tool with recording and playing back taught us a great deal. It made us so much more intimate with the AUD, scrutinising those fine performance details over and over, recording and watching, again and again. When we discovered that not everything works with simple record and playback, we had to work with the developers to understand why. That introduced us to automation IDs, and it has resulted in us having to learn bits of code to force things along. At first I was a bit resentful of this: I don't want to be a programmer. But I've learned it for the sake of automation, and it's a lot of fun.
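
As a minimal sketch of why those automation IDs matter, assuming Selenium with Python (the element ID and URL are made up for the example):

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/login")  # placeholder URL for the sketch

# A stable automation ID agreed with the developers makes the check far less
# fragile than a locator tied to the page layout.
driver.find_element(By.ID, "login-submit").click()

# Compare with a layout-coupled locator that breaks whenever the markup moves:
# driver.find_element(By.XPATH, "/html/body/div[2]/form/div[3]/button")

driver.quit()
```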
