I've been thinking lately about the different automation frameworks that folks discuss on the various blogs. But I've never seen anyone discuss the pros and cons of the different ones.
I'm not talking about Selenium vs. QTP. Those are the tools/engines that a framework is built around.
I'm thinking more along the lines of what the automation engineer builds using the tool that the testers then use to actually build their tests with.
Included in this discussion would be things like how the automation engineer "documents" (word used loosely) what is available to the testers who build the actual tests. Let's call it the 'interface' into the framework.
I have my own ideas on the pros and cons of different frameworks and this 'interface', but I don't want to muddy the waters; I'd rather see what the community has to say.
Okay, here is one for Android only, but I think it shows a pretty fair comparison between Robotium and uiautomator:
I appreciate everyone's input. It has been informative on how folks have structured their various frameworks. But so far I have just seen a list of "here's what I do" and not much discussion on under x circumstance approach x works better and under y circumstance approach y works better.
<not much discussion on under x circumstance approach x works better>
The problem is that "x" is a very, very long list of things.
While I could describe "x circumstance and x approach" to you, unless you have exactly (or mostly) "x circumstance", then "x approach" is mostly meaningless to you.
At one company, "x circumstance" was (in part):
1) the airline industry with many regulatory requirements
2) a 4-to-1 dev to qa ratio
3) a 3-to-1 manual tester to automated tester ratio
4) expert automated tester skillset
5) largely autonomous automation team
6) 30K+ manual regression tests
7) automation for regression testing only (expense activity)
8) waterfall development methodology
9) bad requirements (and even worse manual tests) as input
10) client-server and web-based applications
11) apps developed using a variety of languages and technologies
12) many third-party controls
13) Oracle backend that could not be accessed directly
That is a very incomplete list of "circumstances". Truly, there are many, many other things that needed to be considered before determining the best automation "approach". Of course, I could continue and describe the "approach", but unless your circumstances exactly (or mostly) match the one above, it probably won't be of much help.
That said, I am more than willing to give my opinion on an "approach" given a "circumstance".
@Kate: "Wicked problem" - Interesting. Thanks for turning me onto something new.
I am still relatively unclear about what your definition of framework is.
It appears not to be a method or means to get to the end, as many people have described how they do testing.
I have worked for six companies in the capacity of arranging software test automation, and in each company we did it differently.
To me, none of these scenarios falls into your descriptions and ideas of what a framework is.
So what is a framework?
There are at least 100s of ways to conquer automation for various projects - tools sometimes dictate how we do things, and lack of tools might dictate how we do things differently.
In my experience, I have never worked at a company where one group designed a system or API for another group to write tests on. I suppose I've written libraries that we all used later to write our automation, but we've always written code. No simplified framework, such as one that takes an XML script, has ever worked very well.
Are libraries what you mean by a framework?
Isn't this still like Selenium, or Watir, or QTP, just another layer of abstraction more specific to your company's needs for automation?
When folks first get into automating tests, they will often use a tool like Selenium, Watir, or QTP and write their test cases. They may end up with hundreds of these. Usually this first pass does not modularize anything, so if you have to log into the app, every test case contains all of the log-in steps. Of course, this approach is the most fragile: one change in the log-in process and all of the tests fail. Unless you are manually running these tests, the "framework" is whatever is set up to select which tests to run and then, at the proper time, start up the tool (Selenium or whatever) and tell it to execute the test case.
So the next step is to break out functions like the log-in into their own test function that accepts arguments for the log-in criteria, and all of your test cases call this function. Now, if the log-in process changes, you fix one test function and all of your test cases pass. This is functional decomposition. As you expand this concept, you end up with many test functions, each of which does one thing. Some require data passed to them and some don't.
At the very lowest level, these test functions could specify what you can do to an object on a given screen. For instance, for a button you could have functions like Click, ValidateEnabled, and ValidateText. Now the test case doesn't need to know how to read the text label off the button, so if one developer puts it on the 'caption' property and another puts it on a 'text' property, it won't matter.
Now your test cases call all of these test functions.
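Here is a hedged sketch of that object-level wrapper idea: the test case asks for the button's label without knowing whether the developer exposed it as 'caption' or 'text'. The property names and the dict-based control are illustrative, not any particular tool's API.

```python
class ButtonWrapper:
    """Hides which property a given control uses for its label."""
    def __init__(self, raw_control):
        self.raw = raw_control  # here, just a dict of property -> value

    def get_label(self):
        # Look in whichever property this control actually uses.
        for prop in ("caption", "text"):
            if prop in self.raw:
                return self.raw[prop]
        raise KeyError("no label property found")

    def validate_text(self, expected):
        return self.get_label() == expected

# Two developers exposed the label differently; the test case never notices.
old_style = ButtonWrapper({"caption": "Submit"})
new_style = ButtonWrapper({"text": "Submit"})
print(old_style.validate_text("Submit"), new_style.validate_text("Submit"))  # True True
```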
How this usually works is that you will have written a "driver", which could be written in the language of the tool or in some scripting language. In any case, it reads your test cases, pulls the data from the specified data source, passes it to the test function, and then handles pass/fail reporting. This driver and all of the supporting pieces (data storage, error reporting, etc.) constitute the "framework".
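A minimal sketch of such a driver, assuming a CSV test-case source. The function names, CSV layout, and pass/fail convention are all invented for illustration; a real driver would add error handling, logging, and a richer data source.

```python
import csv
import io

# Two hypothetical test functions the driver can dispatch to.
def login(user):
    return user == "admin"

def check_title(title):
    return title == "Home"

TEST_FUNCTIONS = {"login": login, "check_title": check_title}

# Stand-in for a test-case file: which function to call, with what data.
TEST_CASES = io.StringIO(
    "function,data\n"
    "login,admin\n"
    "check_title,Home\n"
    "login,guest\n"
)

def run(source):
    """Read test cases, dispatch to the named function, record pass/fail."""
    results = []
    for row in csv.DictReader(source):
        func = TEST_FUNCTIONS[row["function"]]
        ok = func(row["data"])
        results.append((row["function"], row["data"], "pass" if ok else "fail"))
    return results

for name, data, outcome in run(TEST_CASES):
    print(f"{name}({data}): {outcome}")
```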
Your Desktop Educational Software sounds like an example, except that the test functions were part of the application itself and you had an external Java Robot calling a list of APIs to perform all of the internal tests for this test case.
In some cases you have some folks writing these test functions and other, non-automation testers building and executing the test cases. The testers need a way to know what functions are available, what data may need to be passed in, etc. Folks have solved this a number of different ways, from creating wiki pages listing the functions to building a front end tied to a database of all of the available functions that builds/edits the test case as the user selects what they want to do. The latter is very expensive, but if you have an app with hundreds of screens (or API interfaces), it might be the way to quickly build the thousands of test cases that would be needed.
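One cheap middle ground between a hand-maintained wiki page and a full database front end is to generate the listing straight from the code with introspection, so the catalog can't drift from the framework. A sketch in Python, with invented test functions:

```python
import inspect

# Hypothetical test functions exposed to the testers.
def login(username, password):
    """Log in to the application under test."""

def click_button(name):
    """Click the named button on the current screen."""

def catalog(*funcs):
    """Build one line per function: name, parameters, and description."""
    lines = []
    for f in funcs:
        sig = inspect.signature(f)
        lines.append(f"{f.__name__}{sig} - {f.__doc__}")
    return lines

for line in catalog(login, click_button):
    print(line)
```

The output of `catalog` could be dumped to the wiki on every build, so the testers' view of the 'interface' is regenerated whenever the functions change.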
Thanks for clarifying, Allen.
That explains what I was asking for very well.
And regarding documentation: in the past I have tried a few methods of documenting APIs, and the most valuable method for me in most cases has been documentation tags - .NET has doc tags and Java has Javadoc tags - and then I generate the API documentation using a tool.
JavaDocs and MSDN-like docs can be updated at build time automatically and placed in a central location.
I have also used ScriptDoc-style tags with PHP, and I suppose there are similar documentation standards for other programming languages.
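The same build-time idea carries over to other languages. A sketch in Python, where the docstring plays the role of the doc tags and the standard-library pydoc module plays the role of the generation tool (the function itself is invented for illustration):

```python
import pydoc

def login(username, password):
    """Log in to the application under test.

    :param username: account to authenticate as
    :param password: matching password
    :returns: True if the home screen is displayed
    """

# Render plain-text API documentation straight from the source,
# the way Javadoc or a .NET doc tool would at build time.
print(pydoc.render_doc(login, renderer=pydoc.plaintext))
```

Running a step like this in the build and publishing the result to a central location keeps the documentation from going stale.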
This is exactly the evolution test automation ran through at my previous employer. When I started there, the first iteration of the driver with data source for defined tests was being implemented - using CSV files with defined structure.
That evolved into two relational databases maintained in CSV files, one for the test drivers and one for the application data they used. The framework itself was remarkably tolerant of application changes, but poorly documented and rather opaque to new users. Since there was never enough time to maintain the script framework, you can imagine what happened to documenting the framework.
The method used there was batch files that copied the test-driver data set into the expected locations before starting the automation tool. A machinename.ini file in a set location in the directory structure defined the test suite the data set ran for, as well as other machine-specific options (we were running more than a dozen script systems at this point). Once the tool started, it would read that file in, then the test run file would be read in and the fun began.
The structure looked a bit like this (from memory, since I haven't worked there in 9 months):
Test run file:

1, "Run the application under test and logon as the sysadmin user"
2, "Do basic configuration for the system"

Test driver file:

LineNumber, TestNumber, TestType, DataID, Description
1, 1, 1, 1, "Run the application"
2, 1, 2, 1, "Logon as user XXX"
3, 2, 3, 1, "Perform config option 1 - turn on database synch"
4, 2, 3, 2, "Perform config option 2 - set the application printer to use the windows default"
and so forth... The combination of test type and data ID determined which CSV file and line number would be accessed to read details for the test step - in the script code there was a case statement which would pass control to the appropriate routine for the test type, and that routine would read the data it needed.
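The dispatch described above can be sketched in a few lines of Python. The file contents, handler names, and data layout here are invented to match the structure shown; the original was implemented inside the automation tool's script language, not Python.

```python
import csv
import io

# Stand-in for the test driver file shown above.
RUN_FILE = io.StringIO(
    "LineNumber,TestNumber,TestType,DataID,Description\n"
    '1,1,1,1,"Run the application"\n'
    '2,1,2,1,"Logon as user XXX"\n'
)

# Stand-in for the per-test-type CSV data files, keyed by (TestType, DataID).
DATA_FILES = {
    1: {1: {"exe": "app.exe"}},                 # data for test type 1
    2: {1: {"user": "XXX", "pwd": "secret"}},   # data for test type 2
}

def run_app(data):
    return f"started {data['exe']}"

def logon(data):
    return f"logged on as {data['user']}"

# The "case statement": TestType selects the routine that handles the step.
HANDLERS = {1: run_app, 2: logon}

for row in csv.DictReader(RUN_FILE):
    ttype, did = int(row["TestType"]), int(row["DataID"])
    # The chosen routine reads its own data row, located by (TestType, DataID).
    print(HANDLERS[ttype](DATA_FILES[ttype][did]))
```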
The whole system worked very well - it was testing a suite of applications with over 3 million lines of code (by now it has probably topped 4 million) and several hundred configuration flags. The biggest problem with it was that it was extremely opaque to new users, and no one had the time to document it.