Test Script Maintenance

Automation Testing and Script Maintenance


Debug Scripts Incrementally
Recorded test scripts, like other software development efforts, can become quite large. Hundreds of lines of code will need debugging for successful playback, and a parameterized, data-driven test script might also contain several sets of data. The common approach to debugging a test script is to first record all the business processes and requirements, then have the tester play back the script to identify and correct any problems. The tester continues debugging until the script plays back successfully with a single set of data and/or multiple sets of data.


Debugging and troubleshooting test scripts becomes extremely tedious and intractable when a script has hundreds of lines of code, verification points, branching logic, error handling, parameters, and data correlation among the various recorded business processes. A much more manageable approach to debugging complex, lengthy test scripts is to record portions of the script and debug each portion individually before recording the other parts. After testing individual portions, you can determine how one portion of the test script works with another and how data flows from one recorded process to the next. After all sections of the test script have been recorded, you can play back the entire script and ensure that it plays back properly from beginning to end with one or more sets of data.


As an example, I recorded and automated a complex test script that performed the following business processes:
1. Check the inventory in the warehouse,
2. carry out an MRP run,
3. replenish inventory,
4. pick items for a delivery and process the delivery,
5. confirm the orders were transferred for the delivery, and
6. verify the delivery items arrived at their destination.


This test script contained many lines of code, parameters, verification points, and data correlations that needed to work as a cohesive unit. First, I recorded each individual process and verified that each could play back successfully on its own. Then I integrated all the recorded processes into one large test script and verified that it could play back successfully with multiple sets of data. As previously stated, a key objective is to ensure that each recorded process plays back successfully before proceeding to record the remaining portions of the entire test script. I did not record all the processes mentioned (1 through 6) and string them together for playback without first verifying that each process could play back successfully on its own.
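To make the incremental approach concrete, here is a minimal sketch, assuming each recorded business process is wrapped in its own routine (all names and data values below are hypothetical, not the actual recorded script):

```python
# Illustrative sketch: each recorded business process lives in its own routine
# (all names and data values here are hypothetical, not the actual recording).

def check_warehouse_inventory(data):
    ...  # recorded steps for process 1

def run_mrp(data):
    ...  # recorded steps for process 2

def replenish_inventory(data):
    ...  # recorded steps for process 3

def pick_and_process_delivery(data):
    ...  # recorded steps for process 4

def confirm_transfer_orders(data):
    ...  # recorded steps for process 5

def verify_delivery_arrival(data):
    ...  # recorded steps for process 6

def full_delivery_scenario(data_set):
    # Assembled only after each step above has played back cleanly on its own.
    check_warehouse_inventory(data_set)
    run_mrp(data_set)
    replenish_inventory(data_set)
    pick_and_process_delivery(data_set)
    confirm_transfer_orders(data_set)
    verify_delivery_arrival(data_set)

if __name__ == "__main__":
    # Data-driven playback: run the same scenario against multiple data sets.
    for data_set in ({"material": "M-01"}, {"material": "M-02"}):
        full_delivery_scenario(data_set)
```

Because each routine was verified independently, a playback failure in the assembled scenario points directly at the integration or the data flow between steps rather than at any single recording.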


The lesson here is simple: do not wait until the entire test script has been recorded before you start debugging it.


Test Script Synchronization
Test tools can play back recorded test scripts at rates much faster than an end user's manual keystrokes. Consequently, playback can overwhelm the application under test, which might not display data or retrieve values from the database quickly enough for the script to proceed. When the application cannot respond to the test script, script execution can terminate abruptly, requiring user intervention. To synchronize the application under test and the test script during playback, testing teams often introduce artificial wait times into the recorded scripts. Wait times embedded in a script to slow down execution are at best arbitrary, estimated through trial and error, and their main problem is that they wait either too long or not long enough.


For instance, a tester might notice that the test script is playing back too fast for the application under test and decide to slow it down, adjusting the delays until script execution appears synchronized with the application. This technique can backfire, and even fail outright, if the application runs slower than the newly introduced wait times during test execution because of external factors such as LAN delays or system maintenance. In that scenario, the tester has to keep guessing a new "reasonable" wait time, each and every time. Slowing down a script with wait times is not very scientific and does not contribute to a robust automated test script that plays back successfully without user intervention.


If possible, testers should avoid introducing artificial wait times or arbitrary sleep variables to synchronize test scripts with the application.


"While" statements or nested "loops" are appropriate techniques used to synchronize test scripts that require synchronization points and will playback successfully regardless of the response times of the application under test. Inserting "nested" loops or "while" statements within a test script also reduces user intervention during the test script playback. For example, I insert "while" statements in recorded test scripts that continually press the Enter button until a scheduling agreement is created no matter how long the application under test takes to generate the agreement. The test script works independently of the response times for the application under test.


Signed-off, Peer Reviewed
As part of the test readiness review criteria, test scripts should be formally accepted and approved prior to the start of the test cycle. Subject matter experts (SMEs), business analysts, and developers should be involved in approving recorded test scripts. The tester who wrote the automated test script should demonstrate that it plays back successfully in the QA environment and, if possible, with various sets of data.

Recording, Playing Back Against Hidden Objects
Scripts might be recorded to populate or double-click a field within a table grid or an array where the field's location is not fixed. If the field's location within the table grid or array changes between recording and playback, the script might fail. Test scripts often fail during playback because the location of objects that are not displayed or visible on the screen has changed.


To play back scripts that are location sensitive, or where the location is subject to change, it might be necessary to enhance the script with functionality such as "scroll down," "next page," or "find." Including such utilities ensures that hidden objects required for playback are identified, populated, and/or double-clicked regardless of their location within an array, table grid, or the displayed screen.


As an example, I once recorded a script in which I scrolled down twice during the initial recording to find an empty field in a table grid where data could be entered. When I played the script back a few weeks later, I had to scroll down four times to find an empty field instead of the two scrolls previously recorded, and the script failed. I then embedded logic that instructs the script to scroll down as many times as necessary to find an empty field, by placing the "next page" function inside a "while" loop so that the script pages down until an empty field is found.
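A sketch of that paging loop, with grid_has_empty_field and press_next_page standing in for whatever calls your test tool actually provides:

```python
MAX_PAGES = 50   # safety cap so the loop cannot run forever

def find_empty_field(grid_has_empty_field, press_next_page):
    """Page down until the table grid shows an empty field, however far down it is."""
    pages = 0
    while not grid_has_empty_field():   # placeholder for the tool's grid check
        if pages >= MAX_PAGES:
            raise RuntimeError("No empty field found in the table grid")
        press_next_page()               # placeholder for the recorded "next page" action
        pages += 1
```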


Schedule Recurring Scripts/Store Execution Logs
To circumvent the limitation of test tools that cannot schedule test scripts on a recurring basis, you can schedule test scripts via the NT scheduler, which supports various command-line options. Execution logs for test scripts should be stored on a shared drive or within the test management tool whenever test results are subject to audit.
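One way to wire this up is a small wrapper that launches the playback from the command line and appends a timestamped result to a log on a shared drive; the wrapper itself can then be scheduled with the NT scheduler. The tool path, script path, and log location below are placeholders, not a real tool's CLI:

```python
import datetime
import subprocess

# Hypothetical command line for the test tool and a hypothetical shared log path;
# substitute your tool's actual CLI options and your project's shared drive.
PLAYBACK_CMD = [r"C:\TestTool\playback.exe", r"C:\Scripts\delivery_scenario"]
LOG_FILE = r"\\fileserver\qa\execution_logs\delivery_scenario.log"

def run_and_log():
    started = datetime.datetime.now().isoformat(timespec="seconds")
    result = subprocess.run(PLAYBACK_CMD)
    status = "PASS" if result.returncode == 0 else f"FAIL (rc={result.returncode})"
    with open(LOG_FILE, "a") as log:
        log.write(f"{started}  delivery_scenario  {status}\n")

if __name__ == "__main__":
    run_and_log()
```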


Create Automatic Notification for Critical Scripts
Test scripts can be enhanced with error-handling logic that instantly sends an error message to a wireless device or an email address when problems occur. Some test scripts are business critical and might run as batch jobs in the middle of the night. The proper and successful execution of these business-critical test scripts can serve as a dependency or precondition for other automated tasks.


Always include logic in business critical test scripts that automatically sends notification in the event of a failure.
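A minimal sketch of such notification logic, using email as the channel; the addresses and SMTP host are placeholders for your organization's values:

```python
import smtplib
import traceback
from email.message import EmailMessage

# Placeholder addresses and SMTP host; use your organization's values.
ALERT_TO = "oncall-tester@example.com"
ALERT_FROM = "test-automation@example.com"
SMTP_HOST = "mail.example.com"

def notify_on_failure(subject, details):
    msg = EmailMessage()
    msg["Subject"] = subject
    msg["From"] = ALERT_FROM
    msg["To"] = ALERT_TO
    msg.set_content(details)
    with smtplib.SMTP(SMTP_HOST) as server:
        server.send_message(msg)

def run_critical_script(playback):
    """Wrap a business-critical playback so any failure sends an alert."""
    try:
        playback()
    except Exception:
        notify_on_failure("Overnight test script failed", traceback.format_exc())
        raise   # still fail the batch job so dependent tasks do not start
```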


Documentation
To make test scripts reusable and easier to maintain, document all the information relevant to executing the script: a test script header and any special conditions for execution. For example:
1. Dates that must be adjusted within the application under test for closing of the books,
2. fields that require unique data,
3. display settings for context-sensitive/analog/bitmap recording,
4. other test scripts that are dependencies,
5. authorization levels or user roles required to execute the script,
6. conditions under which the script can fail and workarounds for re-launching it,
7. applications that need to be either open or closed during script execution, and
8. specific data formats, for instance European versus US date format.


Furthermore, scripts should contain a header with a description (i.e., what the script is used for) and its particular purpose (i.e., regression testing). The script header should also include the script's author and owner, creation and modification dates, the requirement identifiers the script traces back to, the business area the script supports, and the number of variables and parameters it takes. Providing this information in the test script header facilitates the execution, modification, and maintenance of the script in future testing efforts.
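A sample header along these lines might look like the following (every value shown is illustrative, not from a real project):

```python
# Example header only: all names, dates, and identifiers below are illustrative.
"""
Script name   : GSI_R2_SC_014_Create_Scheduling_Agreement
Description   : Creates a scheduling agreement for an external vendor
Purpose       : Regression testing
Author/Owner  : J. Tester / Procurement test team
Created       : 2008-03-01      Last modified : 2008-04-15
Traces to     : REQ-PROC-112, REQ-PROC-118
Business area : Procurement
Parameters    : vendor_id, material, plant (3 variables, data-driven)
Preconditions : User role Z_PROC_TESTER; QA client available; US date format
Dependencies  : GSI_R2_SC_010 must run first to create the vendor master
Known failures: Times out if the MRP run holds a lock; re-launch after the lock clears
"""
```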


Perform Version Control on Test Scripts
Many corporations spend tens of thousands of dollars on test tools but ignore the by-product of those tools, namely the recorded test scripts. For companies building libraries and repositories of automated test scripts, it is highly advisable to place the scripts under version control. Version control helps track changes made to test scripts and maintain multiple versions of the same script.


Adhere to Test Script Naming Standards and Storage
Test scripts should follow the project's accepted naming standards and should be stored in the designated repository such as a shared drive or test management tool.


The test manager should designate naming standards for test scripts that include information for these areas; a short sketch that assembles a name from these components follows the list:
1. Project name (e.g., GSI, which stands for Global SAP Implementation),
2. release number (e.g., the version or release number that will be released/deployed),
3. subject area or test category (e.g., SC for Security Testing, LT for Load Testing),
4. sequential test case number, and
5. title or function that will be tested (e.g., procurement from external vendors).
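Putting those components together, a helper like the sketch below (purely illustrative values and naming pattern) would produce names such as GSI_R2_SC_001_Procurement_from_External_Vendors:

```python
def build_script_name(project, release, category, number, title):
    """Compose a test script name from the naming-standard components."""
    return f"{project}_{release}_{category}_{number:03d}_{title.replace(' ', '_')}"

# Illustrative values only: prints "GSI_R2_SC_001_Procurement_from_External_Vendors"
print(build_script_name("GSI", "R2", "SC", 1, "Procurement from External Vendors"))
```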


Following these tips enables testers to build more robust test scripts for their organizations. Developing maintainable test scripts also maximizes the benefits of automated test tools: companies realize ROI from these tools when automated test scripts are reused in future testing efforts, reducing the time needed to complete a testing cycle. The techniques above will help companies build test scripts that meet these objectives.
