Traditionally, scripted automation has been used to run checks that verify established application functionality (e.g., regression checks). An alternative use for automated scripts is to assist in executing the overall test workflow. Two methods of accomplishing this are presented here:
- Performing smoke tests
- Creating complex configurations to support test sessions
Smoke Tests
Smoke tests are a special type of test that does not belong in the category of regression testing. Regression tests are intended to thoroughly verify functions across a broad spectrum of interface points. The smoke test has a single, specific purpose: to act as a minimum gateway for allowing development builds into the QA environment.
The smoke test performs a quick check of the application's overall "happy paths" to identify major functional failures. Here, a "major" failure is one that prevents the testing of a significant portion of the application. Unlike regression checks, the smoke test is not intended to thoroughly verify any particular function; in fact, if properly designed, it should not be susceptible to minor failures at all. Instead, it should interact with as few interface objects as possible to limit the likelihood of a minor failure.
In addition, the time-box limitation of a smoke test puts an emphasis on getting "the biggest bang for the buck". The smoke test should be continually tuned to cover as many major functions as possible while still completing within the designed time limit (typically 30 minutes to 1 hour), especially for GUI test automation.
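To make the idea concrete, the sketch below shows what a minimal happy-path smoke suite might look like at the API level, written with pytest and requests. The base URL, endpoints, and credentials are hypothetical placeholders rather than part of any particular application; the point is the shape of the checks: broad, shallow, and few enough to finish well inside the time box.

```python
# smoke_test.py -- a minimal happy-path smoke suite (illustrative sketch).
# The base URL, endpoints, and credentials are hypothetical placeholders;
# substitute the real entry points of the application under test.
import requests

BASE_URL = "https://qa.example.com"  # assumed QA environment URL
TIMEOUT = 10  # seconds per request; keeps the suite fast


def test_application_is_reachable():
    # A major failure if the application does not even respond.
    response = requests.get(f"{BASE_URL}/health", timeout=TIMEOUT)
    assert response.status_code == 200


def test_login_happy_path():
    # Exercise the most critical path with as few fields as possible,
    # so the check is not sensitive to minor UI or validation changes.
    response = requests.post(
        f"{BASE_URL}/api/login",
        json={"user": "smoke_user", "password": "smoke_pass"},
        timeout=TIMEOUT,
    )
    assert response.status_code == 200


def test_core_search_returns_results():
    # One representative query through the main workflow -- breadth, not depth.
    response = requests.get(
        f"{BASE_URL}/api/search", params={"q": "widget"}, timeout=TIMEOUT
    )
    assert response.status_code == 200
    assert response.json()  # any non-empty result is acceptable for a smoke check
```

A suite like this would typically run as a build gate, for example `pytest smoke_test.py --maxfail=1`, so the first major failure rejects the build; a hard wall-clock limit can be layered on with a plugin such as pytest-timeout if the time box must be enforced mechanically.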
Creating Complex Configurations to Support Test Sessions
When considered as a workflow framework, scripted automation takes on the role of performing a set of tasks rather than verifying the functionality of the application. The size and complexity of the automated scripts then become critical, because the likelihood of a critical stoppage grows rapidly with the size of the script. For example, a critical stoppage early in the processing of a single large script would impact the entire flow. Dividing the overall test workflow into ten separate scripts may limit the impact of a critical stoppage in one script to roughly 10% of the overall testing.
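One way to realize this division is a thin runner that executes each workflow slice as its own process, so a crash in one script is contained rather than aborting the whole flow. The script names below are hypothetical placeholders; this is a sketch of the pattern, not a prescribed layout.

```python
# run_workflow.py -- sketch of running the test workflow as independent scripts,
# so a critical stoppage in one script does not halt the rest of the flow.
# The script names are hypothetical placeholders.
import subprocess
import sys

WORKFLOW_SCRIPTS = [
    "setup_accounts.py",
    "create_orders.py",
    "configure_pricing.py",
    # ... one script per independent slice of the workflow
]


def main() -> int:
    failures = []
    for script in WORKFLOW_SCRIPTS:
        # Each script runs in its own process; a crash here is contained
        # to this slice of the workflow instead of aborting everything.
        result = subprocess.run([sys.executable, script])
        if result.returncode != 0:
            failures.append(script)

    if failures:
        print(f"{len(failures)} of {len(WORKFLOW_SCRIPTS)} workflow scripts failed: {failures}")
    return 1 if failures else 0


if __name__ == "__main__":
    sys.exit(main())
```

Running each slice in a separate process keeps the scripts independent: an unhandled exception, a hung driver, or a killed process in one slice shows up as a non-zero exit code, and the runner simply moves on to the next slice.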
The size and placement of the scripts in the overall test process should balance usability, run time, maintenance effort, and the frequency of script stoppages. In general, scripted automation should be targeted at tests that have very complicated setup procedures or involve a large number of redundant setup steps that would be a heavy burden on the tester if performed manually.
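As an illustration, a small setup script along the following lines could build the data a manual test session depends on, such as a batch of users with orders in a known state, so the tester starts directly on the behavior under test. The endpoints and payload fields are assumptions made for the sake of the example and would need to be mapped onto the application's actual administration API.

```python
# build_test_config.py -- sketch of automating a complicated test setup.
# Endpoint paths and payload fields are hypothetical; adapt them to the
# application's actual administration API.
import requests

BASE_URL = "https://qa.example.com/api"  # assumed QA environment URL
TIMEOUT = 30  # seconds per request


def create_session_data(session_name: str, user_count: int = 5) -> None:
    # Create a batch of test users that the manual session will rely on,
    # instead of asking the tester to key each one in by hand.
    for i in range(user_count):
        requests.post(
            f"{BASE_URL}/users",
            json={"name": f"{session_name}_user_{i}", "role": "customer"},
            timeout=TIMEOUT,
        ).raise_for_status()

    # Seed each user with an order in a known state so the tester can begin
    # directly on the scenario under test rather than on data entry.
    for i in range(user_count):
        requests.post(
            f"{BASE_URL}/orders",
            json={"user": f"{session_name}_user_{i}", "status": "pending", "items": 3},
            timeout=TIMEOUT,
        ).raise_for_status()


if __name__ == "__main__":
    create_session_data("exploratory_session_01")
```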
These are just two examples of using scripted automation to support the test workflow. If implemented properly, the injection of small, well-designed scripts into the test process can provide a significant improvement in overall test quality.