An application test plan should contain a minimum set of optimized test cases with maximum test coverage of all critical application functions. It should be executed using a tool that easily adapts to changing data and requirements.
To create an effective automated test, first review the application test plan provided by the application owner and evaluate its suitability for automation. You don’t want to automate a bad test case. Consider the exact intent of the test plan and determine whether you can achieve the same coverage more simply with more reliable automation. It is not acceptable to write all the automation scripts directly from the manual test plans; that approach has the same inherent limitation as doing record/replay for every script: the resulting tests are unreliable.
Design test cases and test scripts to be modular. Instead of using one script to perform multiple functions, break the script into separate functions.
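As a minimal sketch of this idea, the monolithic script below is broken into single-purpose functions. The function names and the login/cart workflow are hypothetical placeholders, not part of any specific application:

```python
# Hypothetical example: each function performs exactly one step, so steps
# can be reused and maintained independently.

def login(session, user, password):
    """Perform only the login step; return the updated session."""
    session["user"] = user
    session["authenticated"] = (password == "secret")
    return session

def add_item(session, item):
    """Perform only the add-to-cart step."""
    session.setdefault("cart", []).append(item)
    return session

def checkout(session):
    """Perform only the checkout step; return an order summary."""
    return {"user": session["user"], "items": list(session["cart"])}

# A test case then becomes a composition of small, reusable steps:
session = login({}, "alice", "secret")
session = add_item(session, "widget")
order = checkout(session)
```

Because each step is separate, a change to the login flow touches one function rather than every script that happens to log in.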
Design test cases and test scripts to be generic in terms of process and repeatable in terms of data. Read test data from a separate source: keep the scripts free of test data so that when you do have to change the process or the data, you only have to maintain one item in one place.
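A minimal data-driven sketch of this separation is shown below. The `authenticate` function and the CSV rows are illustrative stand-ins; the in-memory CSV plays the role of an external data file:

```python
import csv
import io

# Test data lives in a separate source; the script only knows the process.
TEST_DATA = io.StringIO(
    "username,password,expected\n"
    "alice,secret,True\n"
    "alice,wrong,False\n"
)

def authenticate(username, password):
    # Placeholder for the function under test.
    return username == "alice" and password == "secret"

# One generic loop runs every data row through the same process.
results = []
for row in csv.DictReader(TEST_DATA):
    actual = authenticate(row["username"], row["password"])
    results.append(actual == (row["expected"] == "True"))

print(all(results))
```

Adding a new scenario means adding a data row, not another script, so process changes and data changes each live in exactly one place.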
Consider the goal of the test
Don’t just blindly follow a manual test plan. See if there is a simple way to accomplish the objective stated in the test plan.
Focus on modularity and reusability. Create a set of evaluation criteria for deciding which functions to automate with the test tool. These criteria may include:
- Repeatability of tests
- Criticality/Risk of applications
- Simplicity of operation
- Ease of automation
- Level of documentation of the function (requirements, specifications, etc.)
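Criteria like these can be turned into a simple weighted score to rank automation candidates. The weights, the 0–5 rating scale, and the sample ratings below are illustrative assumptions, not prescribed values:

```python
# Hypothetical weighted-score sketch of the criteria above.
# Weights reflect a team's priorities and would be tuned per project.
CRITERIA_WEIGHTS = {
    "repeatability": 3,
    "criticality": 3,
    "simplicity": 2,
    "ease_of_automation": 2,
    "documentation": 1,
}

def automation_score(ratings):
    """ratings maps each criterion to a 0-5 rating; returns a weighted total."""
    return sum(CRITERIA_WEIGHTS[c] * r for c, r in ratings.items())

score = automation_score({
    "repeatability": 5,
    "criticality": 4,
    "simplicity": 3,
    "ease_of_automation": 4,
    "documentation": 2,
})
# Compare the score against a team-chosen cutoff to shortlist candidates.
```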
Targeting Test Plans for Automation
A test plan representing a good candidate for automation would have the following characteristics:
- Contains a repeatable sequence of actions
- The sequence of actions is repeated many times
- It is technically feasible to automate the sequence of actions (tool is capable, no external hardware actions)
- The behavior of the application under test is the same with automation as without
- Testing involves non-UI aspects of the application (almost all non-UI functions can and should be executed using automated tests)
- The same tests must be run on multiple hardware configurations
- The same tests must be run with varied combinations of other applications to verify compatibility (i.e., Interoperability Testing)
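One way to apply these characteristics is as a checklist: treat the first few as hard requirements and the rest as value indicators. The field names and the required/valuable split below are assumptions for illustration, not a standard rule:

```python
# Hypothetical checklist sketch: a plan qualifies only if every hard
# requirement holds and at least one value indicator applies.
def is_automation_candidate(plan):
    required = (
        plan["repeatable_sequence"]
        and plan["technically_feasible"]
        and plan["behavior_unchanged_under_automation"]
    )
    valuable = (
        plan["run_many_times"]
        or plan["non_ui"]
        or plan["multiple_hw_configs"]
        or plan["interoperability"]
    )
    return required and valuable

plan = {
    "repeatable_sequence": True,
    "technically_feasible": True,
    "behavior_unchanged_under_automation": True,
    "run_many_times": True,
    "non_ui": False,
    "multiple_hw_configs": False,
    "interoperability": False,
}
print(is_automation_candidate(plan))
```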