Pros and cons of different automation frameworks

I've been thinking lately about the different automation frameworks that folks discuss on the various blogs.  But I've never seen anyone discuss the pros and cons of the different ones.

I'm not talking about Selenium vs. QTP.  Those are the tools/engines that a framework is built around.

I'm thinking more along the lines of what the automation engineer builds on top of the tool, which the testers then use to actually build their tests.

Included in this discussion would be things like how the automation engineer "documents" (word used loosely) what is available to the testers who build the actual tests.  Let's call it the 'interface' into the framework.

I have my own ideas on the pros and cons of different frameworks and this 'interface', but I don't want to muddy the waters; I'd rather see what the community has to say.


Replies to This Discussion

Go hybrid.

Considering "traditional" automation, I'd suggest a combination of Data- and Keyword-driven plus Functional Decomposition. A database repository (as opposed to discrete files/scripts/spreadsheets) and user-interface (to allow non-automators to participate) are nice-to-haves, too.

Hi there, I'm the test automation manager for the Equities department of a large investment bank. We have built three frameworks to cover most of the test automation requirements; two are strictly for testing, one is ancillary...

The first is a functional testing IDE, a GUI which allows test scripts to be written in the Groovy scripting language, with a whole library of functions for common activities (common to banking order management flows). The scripts are written and run in the tool, which provides test script persistence and versioning in a common backing store, as well as test results capture... The tool also has a simple plugin framework that allows dev teams to extend it to cover bespoke requirements, with the new functions automatically added to the scripting function set. The editor provides code completion, syntax highlighting, code folding, etc. The tool also runs as a command line interface, allowing integration into continuous integration and cron...

The second is a performance testing framework which self-deploys across a Linux hardware estate and allows remote task invocation, automatic data loading, and web presentation of metrics and statistics from any remote task that can generate CSV output.

The third is a generic latency measurement visualisation tool which can take any data in a standard 'common latency format' and store it in a kdb database with a nifty web front end that allows dynamic zooming into data...

The last is not strictly a testing tool, but has turned out to be a generally useful tool as part of performance and capacity testing...

Regards,
Simon

I should also have noted that we keep a very detailed wiki on all the tools, providing access to, e.g., the scripting function documentation generated at build time from the tool code. This is accessible from within the tools as a help function.

Si

"Frameworks..."

Okay, here is one for Android only, but I think it shows a pretty fair comparison between Robotium and uiautomator:

http://testdroid.com/testdroid/4684/the-pros-and-cons-of-different-...

Damian has a good point - hybrid helps a lot. Also the kind of thing Simon refers to, with libraries for commonly used functions.

My preference - which I use for complex applications - is something like this:

Tests and data stored in a database repository with a front-end interface (I'm in the process of making this available via CodePlex - it's heavily Microsoft because my workplace is Microsoft-based: a SQL Server 2008 database tightly integrated with an ASP.NET MVC web site).

I'm partway through building the automation application to consume this - ultimately I plan to make it available open source as well. The goal is that it will be possible for people who want to use this kind of structure to plug in their preferred tool for their specific application handling.

The basic structure I like to use is one where each component in the application being tested is referenced in scripts in exactly one place. I've gone one layer further with the data management app and put that one place in the database rather than in script code. Ultimately, the goal is that if the kind of test is supported by the script application, anyone can add new tests.
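
To give a rough idea of what I mean by that 'one place' (this is only a sketch - the table and method names are invented for illustration, not the actual schema):

using System.Data.SqlClient;

// Sketch only: tests ask for a component by logical name; the locator
// itself lives in exactly one row of a (hypothetical) Components table.
public class ComponentRepository
{
    private readonly string _connectionString;

    public ComponentRepository(string connectionString)
    {
        _connectionString = connectionString;
    }

    public string GetLocator(string screenName, string componentName)
    {
        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand(
            "SELECT Locator FROM Components WHERE Screen = @screen AND Name = @name",
            connection))
        {
            command.Parameters.AddWithValue("@screen", screenName);
            command.Parameters.AddWithValue("@name", componentName);
            connection.Open();
            return (string)command.ExecuteScalar();
        }
    }
}

A script that needs the login button asks for ("Login", "LoginButton") and never embeds the locator; if the control changes, only that one row changes and the scripts stay untouched.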

The last place I worked we were close to this using TestComplete. At my current workplace, it's all Microsoft, so I'm working with C# and the CodedUI framework.

My personal view is that most tools operate on the assumption that most tests will be self-contained, which causes a ridiculous amount of repetition in any large, complex application. It makes more sense to break tests up and string smaller items together by passing parameters to them - but too many commercial test tools either don't allow this or make it unnecessarily difficult.

I appreciate everyone's input. It has been informative to see how folks have structured their various frameworks.  But so far I have just seen a list of "here's what I do" and not much discussion on under x circumstance approach x works better and under y circumstance approach y works better.

<not much discussion on under x circumstance approach x works better>

The problem is that "x" is a very, very long list of things.


While I could describe "x circumstance and x approach" to you, unless you have exactly (or mostly) "x circumstance", considering "x approach" is mostly meaningless.


At one company, "x circumstance" was (in part):

1) the airline industry with many regulatory requirements

2) a 4-to-1 dev to qa ratio

3) a 3-to-1 manual tester to automated tester ratio

4) expert automated tester skillset

5) largely autonomous automation team

6) 30K+ manual regression tests

7) automation for regression testing only (expense activity)

8) waterfall development methodology

9) bad requirements (and even worse manual tests) as input

10) client-server and web-based applications

11) apps developed using a variety of languages and technologies

12) many third-party controls

13) Oracle backend that could not be accessed directly


That is a very incomplete list of "circumstances". Truly, there are many, many other things that needed to be considered before determining the best automation "approach". Of course, I could continue and describe the "approach", but unless your circumstances exactly (or mostly) match the one above, it probably won't be of much help.


That said, I am more than willing to give my opinion on an "approach" given a "circumstance".


@Kate: "Wicked problem" - Interesting. Thanks for turning me onto something new.

Hi, Allen,

It's actually rather difficult to quantify something like "Under X circumstances Y approach works better than Z approach" for a number of reasons.

- Those like me who are working with large, complex business-to-business software rarely see any other scenarios, so we tend to structure our frameworks to suit the specific needs of our environment
- An approach that would otherwise be ideal can be ruled out by non-technological constraints (available people, the internal dynamics of an organization, budget, time constraints, you name it)
- As a general rule, the more powerful and effective a test automation tool is, the more coding experience is required to use it successfully - so the harder it is to find people with the necessary skills to maintain and extend it
- Most mature test automation efforts I know of use some variety of hybrid approach, usually tailored to the needs of the organization and application(s) in test. I've said this elsewhere but it bears repeating here: the kind of automation and testing required for business-facing software is very different from the kind of automation and testing required for consumer-facing software. Regulatory requirements add their own complexities to automation (Simon would undoubtedly recognize this) and the approach typically has to be customized for the regulation in question.
- In my experience there really is little difference between frameworks and approaches. Anything can be effective if it has the expertise and support it needs, and anything can be horrible if it lacks expertise or support.
- Operational constraints can make some approaches impossible: in my current environment it is not possible to begin each test with a fully known data environment - but the circumstances around my environment mean that the best way for me to adapt to this is not necessarily the best way for someone else with the same constraint.

The short version is that test automation design is, like software design, so much a "wicked problem" that it's almost impossible to say "in this situation, that kind of testing works best".

(More information about wicked problems, from Wikipedia (http://en.wikipedia.org/wiki/Wicked_problem):

1. The problem is not understood until after the formulation of a solution.
2. Wicked problems have no stopping rule.
3. Solutions to wicked problems are not right or wrong.
4. Every wicked problem is essentially novel and unique.
5. Every solution to a wicked problem is a 'one shot operation.'
6. Wicked problems have no given alternative solutions.
)

I am still relatively unclear about what your definition of framework is.

It appears not to be simply a method or means to an end, as many people have described how they do testing.

I have worked for six companies in the capacity of arranging software test automation, and in each company we did it differently.

  • Desktop Educational Software written in Java: Automation was internal to the application using the Java Robot APIs.
  • Networked Home Automation System written using Linux, C#, ASP.NET: Automation was a lot of macros using telnet as a transport device in order to interact with remote systems.
  • Automotive Lead/Advertising Website written using C# and ASP.NET: A lot of HTTP-based front-end automation, built on a custom continuous integration server that read from a database what test cases to execute, updating daily, and some custom built load tests (C# mostly)
  • Multiple PHP websites backed by MySQL and Oracle: A lot of HTTP-based front-end automation.
  • eCommerce product and custom administration portal, per client, written using C# and ASP.NET: A lot of Selenium, Watir, and HTTP-based front-end automation, some C# SQL testing automation.
  • Very large dynamic system written mostly in Java, with a lot of small web services, backed by Oracle, Oracle ERP, a lot of different teams: Various types of automation - Selenium in Java for front-end testing, unit-test distributed back-end/web service automation, a myriad of database automated tests - every team is different, every project is different, every piece of test automation is different - not much unity except where it absolutely matters, but we do a great job at automation

To me, none of these scenarios falls into your descriptions and ideas of what a framework is.

So what is a framework?

There are at least hundreds of ways to conquer automation for various projects - tools sometimes dictate how we do things, and a lack of tools might dictate how we do things differently.

In my experience, too, I have never really worked in a company where we had one group designing a system or API for another group to write tests on - I suppose I've written libraries that we all later used to write our automation, but we've all always written code. No simplified framework, such as one that takes an XML script, has ever worked very well.

Are libraries what you mean by a framework?

Isn't this still like Selenium, or Watir, or QTP, just another layer of abstraction more specific to your company's needs for automation?

A lot of folks, when they first get into automating tests, will use a tool like Selenium, Watir, or QTP and write their test cases.  They may end up with hundreds of these.  Usually this first pass will not modularize anything, so if you have to log into the app, every test case has all of the steps to log in.  Of course, this approach is the most fragile.  One change in the log-in process and all of the tests fail.  Unless you are manually running these tests, the "framework" would be whatever is set up to select which tests to run and then, at the proper time, start up the tool (Selenium or whatever) and tell the tool to execute the test case.

So the next step is to break out functions like the log-in into their own test function that accepts arguments for the log-in criteria, and all of your test cases call this test function. Now, if the log-in process changes, you fix one test function and all of your test cases keep working. This is functional decomposition. As you expand this concept you have all of these test functions, each of which does one thing. Some may require data passed to them and some don't.
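
As a rough sketch of that (names here are purely for illustration, not tied to any particular tool):

// Sketch: the log-in steps live in exactly one test function that accepts
// the log-in criteria as arguments.
public static class TestFunctions
{
    public static void LogIn(string userName, string password)
    {
        // The tool-specific steps (open the login screen, type the user name
        // and password, click the button) go here, in one place only.
    }
}

public static class TestCases
{
    public static void AdminCanViewReports()
    {
        TestFunctions.LogIn("admin", "adminPassword");   // shared function, not copied steps
        // ...the rest of this test case...
    }

    public static void ClerkCannotViewReports()
    {
        TestFunctions.LogIn("clerk", "clerkPassword");
        // ...the rest of this test case...
    }
}

If the log-in flow changes, only TestFunctions.LogIn changes.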

At the very lowest level, these test functions could specify what you can do to an object on a given screen. For instance, for a button you could have functions like Click, ValidateEnabled, and ValidateText. Now the test case doesn't need to know how to read the text label off the button, so if one developer has it on the 'caption' property but another has it on a 'text' property, it won't matter.
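
A sketch of that object level (again illustrative only - the point is the property handling, not the API):

// Sketch: one wrapper knows how to deal with a button, whichever
// property the developer happened to put the label on.
public class ButtonWrapper
{
    private readonly object _control;   // whatever the tool hands back for the button

    public ButtonWrapper(object control)
    {
        _control = control;
    }

    public void Click()
    {
        // Invoke the control's own Click via reflection (a real framework
        // would use its tool's API here).
        _control.GetType().GetMethod("Click")?.Invoke(_control, null);
    }

    public bool ValidateEnabled(bool expected)
    {
        return Equals(GetProperty("Enabled"), expected);
    }

    public bool ValidateText(string expected)
    {
        // One developer exposes 'Caption', another exposes 'Text';
        // the test cases never need to know which.
        object label = GetProperty("Caption") ?? GetProperty("Text");
        return Equals(label, expected);
    }

    private object GetProperty(string name)
    {
        var property = _control.GetType().GetProperty(name);
        return property == null ? null : property.GetValue(_control, null);
    }
}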

Now your test cases call all of these test functions.

How this usually works is you will have written a "driver" that could be written in the language of the tool or in some scripting language. In any case it reads your test cases, pulls the data from the data source specified, passes it to the test function, and then handles pass/fail reporting. This driver and all of the support (data storage, error reporting, etc.) constitute the "framework".
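
A stripped-down sketch of such a driver (the file format and function names here are invented for illustration):

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;

// Sketch of a driver: read the test case file, hand each step's data to the
// matching test function, and record pass/fail.
public static class Driver
{
    // Map from the name used in the test case file to the test function it drives.
    // The real functions would call the tool; here they are placeholders.
    private static readonly Dictionary<string, Func<string[], bool>> Functions =
        new Dictionary<string, Func<string[], bool>>
        {
            { "LogIn",        args => true },   // would call the shared log-in function
            { "ClickButton",  args => true },   // would call the button wrapper
            { "ValidateText", args => true },   // would call the validation function
        };

    public static void Run(string testCaseFile)
    {
        foreach (string line in File.ReadAllLines(testCaseFile))
        {
            if (string.IsNullOrWhiteSpace(line)) continue;

            // Each line: FunctionName, arg1, arg2, ...  (purely illustrative format)
            string[] fields = line.Split(',').Select(f => f.Trim()).ToArray();
            string functionName = fields[0];
            string[] args = fields.Skip(1).ToArray();

            bool passed = Functions.ContainsKey(functionName) && Functions[functionName](args);
            Console.WriteLine("{0}: {1}", functionName, passed ? "PASS" : "FAIL");
        }
    }
}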

Your Desktop Educational Software sounds like an example, except that the test functions were part of the application itself and you had an external Java Robot calling a list of APIs to perform all of the internal tests for that test case.

In some cases you have some folks writing these test functions and other, non-automation testers building and executing the test cases.  They need a way to know what functions are available, what data may need to be passed in, etc.  Folks have solved this a number of different ways, from creating wiki pages listing the functions to building a front end tied to a database of all of the available functions that will build/edit the test case as the user selects what they want to do.  The latter is very expensive, but if you have an app with hundreds of screens (or API interfaces) that might be the way to quickly build the thousands of test cases that would be needed.

Thanks for clarifying, Allen.

That explains what I was asking for very well.

And regarding documentation, in the past I have tried a few methods of documenting APIs, and the most valuable method for me in most cases has been to use documentation tags - .NET has doc tags, and Java uses a superset of ScriptDoc tags - and then I generate API documentation using a tool.

JavaDocs and MSDN-like docs can be updated at build time automatically and placed in a central location.

I have also used ScriptDoc with PHP, and I suppose there are similar documentation standards for other programming languages.
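
For example, in C# the tags look something like this (the function is just an illustration), and the documentation tool turns them into MSDN-style pages at build time:

/// <summary>
/// Logs into the application under test with the supplied credentials.
/// </summary>
/// <param name="userName">The user name to log in with.</param>
/// <param name="password">The password for that user.</param>
/// <returns>True if the main screen is displayed after logging in.</returns>
public static bool LogIn(string userName, string password)
{
    // Tool-specific log-in steps would go here.
    return true;
}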

Hi, Allen,

This is exactly the evolution test automation ran through at my previous employer. When I started there, the first iteration of the driver with data source for defined tests was being implemented - using CSV files with defined structure.

That evolved into two relational databases maintained in CSV files, one for the test drivers and one for the application data they used. The framework itself was remarkably tolerant of application changes, but poorly documented and rather opaque to new users. Since there was never enough time to maintain the script framework, you can imagine what happened to documenting the framework.

The method used there was to use batch files to copy the test driver data set into the expected locations before starting the automation tool. A machinename.ini file in a set location in the directory structure defined the test suite the data set ran for, as well as other machine-specific options (we were running more than a dozen script systems at this point). That file would be read into the tool once it started, then the test run file would be read in and the fun began.
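
From memory, the ini file was something along these lines (the key names here are invented - I don't recall the exact ones):

; machinename.ini - sketch only, key names illustrative
[General]
TestSuite=SmokeRegression
ResultsPath=C:\AutomationResults
Browser=IE8
DatabaseSynch=Off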

The structure looked a bit like this (from memory, since I haven't worked there in 9 months):

XXXTests.csv
TestNumber, Description
1, "Run the application under test and logon as the sysadmin user"
2, "Do basic configuration for the system"

XXXTestSteps.csv
LineNumber, TestNumber, TestType, DataID, Description
1, 1, 1, 1, "Run the application"
2, 1, 2, 1, "Logon as user XXX"
3, 2, 3, 1, "Perform config option 1 - turn on database synch"
4, 2, 3, 2, "perform config option 2 - set the application printer to use the windows default"

and so forth... The combination of test type and data ID determined which CSV file and line number would be accessed to read details for the test step - in the script code there was a case statement which would pass control to the appropriate routine for the test type, and that routine would read the data it needed.
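
In rough C# terms (the real thing was in the tool's own scripting language, so this is only a sketch with made-up routine names), the dispatch looked something like this:

// Sketch of the dispatch: the TestType from XXXTestSteps.csv picks the routine,
// and the DataID tells that routine which line of its own data CSV to read.
public static class StepDispatcher
{
    public static void ExecuteStep(int testType, int dataId)
    {
        switch (testType)
        {
            case 1:
                RunApplication(dataId);
                break;
            case 2:
                LogOn(dataId);
                break;
            case 3:
                SetConfigurationOption(dataId);
                break;
            default:
                ReportFailure("Unknown test type: " + testType);
                break;
        }
    }

    // Placeholders: each routine would read line 'dataId' from its own CSV file
    // and then drive the application accordingly.
    private static void RunApplication(int dataId) { }
    private static void LogOn(int dataId) { }
    private static void SetConfigurationOption(int dataId) { }
    private static void ReportFailure(string message) { System.Console.WriteLine(message); }
}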


The whole system worked very well - it was testing a suite of applications with over 3 million lines of code (by now it's probably topped 4 million) and several hundred configuration flags. The biggest problem with it was that it was extremely opaque to new users, and no-one had the time to document it.
