Software Test Scripting
This essay explores test scripting in terms of software development, as the two processes are very similar and share many of the same techniques and pitfalls. It is primarily aimed at manual test script construction, because automated test script construction is software development.
An exploration of, and notes on, the test scripting process:
- Introduction
- Definitions
- The Development Life Cycle
- Requirements
- Design
- Executable Models
- Iterations in Test Scripts
- Use of the Design
- Coding
- Testing
- The Challenge of Expected Results
- Conclusion
- Recommended Further Reading
Introduction
This text will explore the process of test scripting. It will do this through analogy to the development process, and will try to show some of the things that testers can learn by studying development methodologies and best practices. The essay is not a complete description of test scripting; it is an introduction to test scripting and to the study of development techniques. But if you take only one thing from this essay, take this:
Test script development involves the same processes and techniques used when constructing software programs; any experience that testers have had when scripting is an experience that has been shared by development teams when developing. Testers and developers are more alike than different. Testers should study developers and their techniques; developers should study testers and theirs. We are all software engineers, we just have different areas of specialisation, and it is foolish to specialise without knowing the general techniques of your discipline.
Definitions
A test script is the executable form of a test. It defines the set of actions to carry out in order to conduct a test, and it defines the expected outcomes and results that are used to identify any deviation of the actual behaviour of the program from the logical behaviour in the script (errors during the course of that test). In essence it is a program written for a human computer (the tester) to execute.
Testing uses a lot of terminology. In this text I will use the following definitions:
- Test case:
- a logical description of a test. It details the purpose of the test and the derivation audit trail.
- Test script:
- the physical, executable description of the test case.
- Automated test script:
- a program that implements a test.
The Development Life Cycle
The development life cycle has a number of processes and tasks that the development community is involved in:
- Requirements
- Design
- Coding
- Testing
Testers are familiar with each of these stages in the context of system development and its relationship to the construction of tests. However, a test script is a program, and as such it has a life cycle that parallels that of the system development in microcosm.
Requirements
Fortunately for testers, tests are derived before scripts are built. The test case description itself should contain the requirements for the test script. (The process of test case construction and the corresponding requirements analysis techniques are outside the scope of this text.)
Design
Test script design involves the construction of an executable model which represents the usage of a system. It is an executable model because it contains enough information to allow the tester to work through it and, at any point, to know unambiguously what they can do next.
Executable Models
Executable models use three main constructs:
- Sequence: one action after another.
- Selection: a choice between two or more actions.
- Iteration: a repeated sequence or selection.
Sequence:
- The model consists of three main stages done one after the other: Initialise, Body and Terminate.
Selection:
- The model consists of a selection between 'Action 1', 'Action 2' or 'Action 3'.
Iteration:
- The model will iterate while condition C1 is satisfied.
Representing the above model as a graph provides us with the following two meta paths:
- Initialise [Body (Action 1 | Action 2 | Action 3)]* Body Terminate
- Initialise Body Terminate
A test script is an interesting executable model in that it embodies only the sequence construct. This leads to the familiar situation where testers write numerous scripts around the same area of the program, each script differing slightly from the one before:
- Script 1: Initialise, Action 2, Action 1, Terminate
- Script 2: Initialise, Action 1, Action 2, Action 2, Terminate
- Script 3: Initialise, Terminate
This situation occurs because each test script is an instantiation of one of the script model's meta paths; each script is a single sensitised model path.
Test scripts avoid the constructs of selection and non-deterministic iteration because each test script should be run in exactly the same way each time it is run, in order to aid repeatability. The tester is given no choice when executing a test script but to follow it exactly and consistently. This allows errors, once identified, to be demonstrated repeatedly, and it aids the correction of errors because the exact circumstances surrounding the execution of that test were specified.
Computer programs do not avoid selection and iteration; therefore the scope of a computer program is larger than that of a single script, and a number of scripts will be required to cover the selections and iterations modelled in the program. A computer program does not represent the instantiations of the model paths; a computer program provides an alternative executable model.
Iterations in Test Scripts
Having pointed out that test scripts do not use iteration constructs, it is worth acknowledging that in the real world testers do write test scripts using iteration constructs, and examining why.
One of the software development tenets is the avoidance of repeated code. This aids maintainability, can often aid readability and allows re-use, which increases the speed of construction of similar procedures.
Iteration constructs are used when constructing test scripts for the same reasons.
It is perhaps unfortunate that the test tools which testers use, particularly when constructing manual scripts, do not make re-use or iteration simple. This leads to more informal implementations than would be found in program code:
- Example 1: repeat steps 3-12 four times, but this time enter the details for Joe Bloggs, Mary Smith, John Bland and Michael No-one.
- Example 2: press the Enter key 6 times.
Example 1 above uses iteration to avoid repeating the same set of steps within the script. However, the same steps (3-12) will typically be repeated in other scripts, so re-use isn't facilitated; this is primarily because most tools which testers use for manual testing do not support sub-scripts or procedures.
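If the tools did support sub-scripts, the informal "repeat steps 3-12" instruction could become an explicit, re-usable procedure driven by a deterministic data list. A hypothetical sketch of what that might look like; the step wording and function names are invented for illustration:

```python
# A hypothetical sketch of sub-script re-use with better tool support.
# The step wording and names are illustrative only.
def enter_customer_details(name):
    """The re-usable sub-script: the steps the manual script refers
    to informally as 'steps 3-12'."""
    return [
        "Open the 'New Customer' form",
        f"Enter '{name}' in the Name field",
        "Press Save and check the new record appears in the customer list",
    ]

# Deterministic iteration: the data list is fixed, so the script runs
# identically every time it is executed.
CUSTOMERS = ["Joe Bloggs", "Mary Smith", "John Bland", "Michael No-one"]

script = []
for customer in CUSTOMERS:
    script.extend(enter_customer_details(customer))

for number, step in enumerate(script, start=1):
    print(f"{number}. {step}")
```

Because the data list is fixed, the expanded script is still a single sensitised path: the loop is a notational convenience, not a source of non-determinism.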
When testers do use loops, it is obvious from the meta model path description of the test script what they are doing. The tester has identified a particular instance of a meta model path and, in order to increase the maintainability of the script, finds it more appropriate to use a higher-order description as the actual test script itself.
However, the test script must not implement a non-deterministic loop or an ambiguous selection condition, otherwise the test script will not be implementing an instantiation of a meta path; it will be implementing an actual meta path.
Use of the Design
A design that represents the flow of control, from which scripts select particular paths, serves a number of purposes:
- Oracle: expected results can be predicted.
- Coverage: can be assessed.
If no design is produced then testers have to assess coverage by examining the discrete set of tests and identifying missing paths or actions not taken. Typically, missing paths won't be noticed until the tests have been executed a number of times and the tester, having built up a model in their head, realises that they have never executed Action 3.
- Automatic Transformation from Design to Script.
This obviously depends upon the design technique used and the availability of tool support. Test models, particularly for scripting, can use notations that allow automatic code generation: Jackson diagrams, flow charts, state transition diagrams. Tools typically exist to draw these models, and tools exist which can take the models and produce entire programs. Test scripting, however, requires tools that can produce a program for the chosen path through the model; test teams may have to design and build their own tool to support this.
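No standard tool provides this, so the following is only a sketch of what such a transformation might look like, under the assumption that the design model records an action and an expected result against each node. The annotation text is invented for illustration:

```python
# A sketch of automatic transformation from design to script. The model
# annotations (action/result text) are assumed, not taken from any tool.
NODE_DETAILS = {
    "Initialise": ("Start the application", "The main window is displayed"),
    "Body":       ("Open the customer screen", "The customer screen appears"),
    "Action 2":   ("Delete the selected customer", "The customer is removed"),
    "Terminate":  ("Close the application", "The application exits cleanly"),
}

def generate_script(path):
    """Turn one sensitised path through the model into a numbered
    test script of (step id, action, expected result) rows."""
    return [
        (step_id, *NODE_DETAILS[node])
        for step_id, node in enumerate(path, start=1)
    ]

for step_id, action, result in generate_script(
        ["Initialise", "Body", "Action 2", "Body", "Terminate"]):
    print(f"Step {step_id}: {action} -> expect: {result}")
```

Given such a generator, maintaining the model and regenerating the scripts replaces maintaining each script by hand.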
In practice, the relationship between the design model and the test scripts involves less interpretation than the relationship between a software design model and the software program; therefore the test script design model must be maintained before any maintenance is carried out on the derived test scripts.
Coding
The coding of a test script refers to the writing of a test script.
Each test script should follow the path identified from the design and as such should be fairly easy to construct if a design has been produced.
Test scripts are typically represented as a series of steps, each step being given an id or sequence number, an action and an expected result.
Some test scripts are laid out with extra columns for pass/fail attributes to be completed during execution. Pass/fail is not actually an attribute of the test script but of a specific execution instantiation of that script; given the crude nature of the tool support in testing, however, it is often easier to add the column to the script.
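To make the distinction concrete, here is a minimal sketch of the two structures; the names and fields are illustrative, not drawn from any tool, and the steps borrow the word-processor example discussed in the next section. The script holds only the step id, action and expected result, while pass/fail lives in a separate record of one specific execution:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass(frozen=True)
class Step:
    """One row of the test script: sequence number, action, expected result."""
    step_id: int
    action: str
    expected_result: str

@dataclass
class ExecutionRecord:
    """Pass/fail belongs here: to one specific run of a script,
    not to the script itself."""
    script_name: str
    executed_at: datetime
    results: dict = field(default_factory=dict)  # step_id -> "pass" / "fail"

script = [
    Step(1, "Click on the File option of the main menu bar",
         "A drop-down menu appears; one of the options is 'Save'"),
    Step(2, "Click on the 'Save' option of the drop-down menu",
         "The file's date/time stamp matches the time of the save"),
]

run = ExecutionRecord("Save current file", datetime.now())
run.results[1] = "pass"
run.results[2] = "fail"
```

One script can then accumulate many execution records, rather than the record of each run overwriting the last.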
Artfully Vague
Example: a script to save the current file in a word processor, where the file has been saved before:
Step 1
- Action: Click on the File option of the main menu bar.
- Result: A drop-down menu appears; one of the options is 'Save'.
Step 2
- Action: Click on the 'Save' option of the drop-down menu.
- Result: The disk whirs and the date/time stamp of the file in Explorer matches the time that the file was saved.
Test scripts are often artfully vague, as can be seen from the example above. The script is written in English and a number of presuppositions are embodied in it, i.e. that the tester knows:
- what a main menu bar is,
- what a drop-down menu is,
- that 'click' means to manoeuvre the mouse pointer over the text and then click,
- what Explorer means,
- how to check the date/time stamp in Explorer.
When writing a test script the writer must take into account the level of knowledge that the tester (the person executing the test) will have. If there is not enough information then the tester may not be able to run the test; or they may run the test but, having misunderstood some of the actions, actually run an entirely different test, which may lead to a false positive or false negative result being reported.
There are also time pressures that affect the writing of scripts. Unlike a computer program, a script can be poorly written and still execute, provided the human computer has the correct knowledge.
The above script could be written as: "save the file using the file menu and then check the date". This is possible when the person executing the script is the same person writing it. It doesn't aid re-use, repeatability or maintainability, but it can still be executed. A computer program cannot do this: a program can skimp on the documentation and the computer can still execute it, but the computer has no more information than that presented in the program, and the instructions presented to the computer cannot be artfully vague.
Other Concerns
Development teams are rightly concerned about maintenance, ensuring that their code makes sense now and will still make sense 18 months in the future when they have to update it.
Testing should have it easier, as the transition from design model to test script should be an automated process; but typically testers don't have an automated mechanism for this and end up doing the translation manually.
It can be a lot of work to document each individual test script to a precise level.
Testers probably write as much source as the development teams and yet have fewer tools to support the development and maintenance process.
Testing
Testers are aware of the importance of testing software. They should also be aware of the importance of testing their testware.
The process of constructing tests and executing them should give testers an appreciation of the difficulties of program construction. Defects slip into test scripts as often as they slip into programs. This should make testers more sympathetic towards the trials and tribulations of their development peers; yet for some reason, some testers look at systems with disgust and wonder what on earth the developer could have been thinking to allow such an obvious defect to slip through. As a corollary, developers know how hard it is to program and how easy it is to let defects slip into systems, yet are often scornful towards the tester who has a bug in their test script.
We can blame these attitudes on human nature, but they are also symptomatic of a competitive environment in which the test team and the development team feel that they are in opposition to one another. Both sides have the same problems, and each can learn from the other.
Testing a test script can be tricky:
- Test scripts are often constructed when there is no system available. At this time, quality control involves checking the design model used and double-checking the mapping of the script back to the model (see the sketch after this list).
- Test scripts are constructed to test a new version of the system, but the only system available is the old, buggy version. Again the script must be validated against the design model, but it may also be possible to execute portions of the script against the old version of the system.
- Testing the script with the desired version of the system is the most important of the situations, but it does not refer to the execution of the script. It refers to the validation of the design model against the system, in essence testing the meta paths identified by the model.
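The mapping check in the first situation can be sketched in code, re-using the graph representation from earlier: validate that the script's step sequence is a legal walk through the design model. All names here are illustrative:

```python
# A sketch of checking a script's mapping back to the design model,
# re-using the graph representation from earlier. Names are illustrative.
MODEL = {
    "Initialise": ["Body"],
    "Body": ["Action 1", "Action 2", "Action 3", "Terminate"],
    "Action 1": ["Body"],
    "Action 2": ["Body"],
    "Action 3": ["Body"],
    "Terminate": [],
}

def is_valid_path(steps):
    """True if the script's step sequence is a legal walk through the
    model, starting at Initialise and ending at Terminate."""
    if not steps or steps[0] != "Initialise" or steps[-1] != "Terminate":
        return False
    return all(b in MODEL.get(a, []) for a, b in zip(steps, steps[1:]))

assert is_valid_path(["Initialise", "Body", "Terminate"])
assert not is_valid_path(["Initialise", "Action 1", "Terminate"])
```

A check like this can be run over every script in a pack long before any version of the system exists.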
Testers are often under time pressure, and when under time pressure errors creep in far more quickly. This is why the design model must be as accurate and thorough as possible.
The Challenge of Expected Results
There is an interesting challenge set by the writing of test scripts: that of expected results.
The model of system usage upon which a test script is built may not model the steps that have to be taken to check whether an expected result passed or failed.
The system usage model may tell the user how to create a customer but it may not tell the user how to check that a customer has been added to the system correctly. This may have to be done via an SQL query on a database. But this information must be present in the test script in order to execute the script and determine its pass/fail status.
This suggests that the construction of test scripts is not done through only one model. There is at least one other model available which describes the conditions of the test and how to validate the successful implementation of those conditions.
The challenge to the tester is the integration of these two models, the condition model and the system usage model, into a single test script.
Current testing best practice involves the construction of test conditions; these are typically developed from an analysis of specification documentation. (A thorough discussion of test conditions is beyond the scope of this text.)
There would appear to be no standard usage of test conditions:
- Test conditions are used to define the domain scope of tests: customer type = “Male”, currency = “USD”
- Test conditions are used to document ’things’ that testing must concentrate on: “The system must allow the creation of customers”, “The system must allow the deletion of customers from the system when they have no active transactions”.
No doubt there are other uses of test conditions that I have not been exposed to.
Current testing tools, if they support test conditions at all, tend to model them as textual descriptions. These descriptions are then cross-referenced to test cases in order to determine coverage of the fundamental features of the system analysed from the specification documents. There is no support in the tools for linking the conditions to the scripts or script models, and no support for modelling the steps taken to validate those conditions that require validation, i.e. the second usage given above.
The test script construction process is analogous to the inversion, correspondence and structure clash processes presented so long ago by Jackson Structured Programming, and more typically to the informal mapping from design and specification to program code that developers do on a routine basis.
There are techniques to be learnt from these processes and tool support is required.
In the absence of tool support, testers must document these models as effectively as possible.
This has the effect of expanding our modelling of test conditions to include instructions on how to validate that those conditions have been satisfied.
These conditions then have to be cross-referenced to the elements of the script design model at the points where those conditions would be satisfied. The conditions also have to be cross-referenced to the test cases so that the correct condition validation instructions are performed in the correct test scripts.
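In the meantime, even a lightweight, hand-rolled representation helps. The following is a hypothetical sketch of a condition that carries its own validation steps and is cross-referenced to the model elements and test cases where it applies; every identifier here is invented for illustration, and the SQL check echoes the customer example above:

```python
from dataclasses import dataclass

@dataclass
class TestCondition:
    """A condition from the specification, the steps needed to validate
    it, and cross-references to the model and to test cases. All field
    values below are hypothetical."""
    condition_id: str
    description: str
    validation_steps: list   # how to check the condition held
    model_elements: list     # where in the usage model it applies
    test_cases: list         # which test cases must validate it

create_customer = TestCondition(
    condition_id="COND-001",
    description="The system must allow the creation of customers",
    validation_steps=[
        "Run: SELECT * FROM customer WHERE name = :name",
        "Confirm exactly one row is returned with the entered details",
    ],
    model_elements=["Create Customer"],
    test_cases=["TC-012", "TC-013"],
)
```

With the cross-references held as data, the correct validation instructions can be pulled into each generated script rather than copied by hand.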
Conclusion
Test scripting is a time consuming, error prone and difficult process.
There is much that the tester learns from experience; many of these experiences have already been shared, and documented, by software development teams.
Models are important in testing. They form the basis for all aspects of the testing process. It should be appreciated that the models used to derive tests are different from those used to construct test scripts and that in order to construct scripts effectively the overlap and intersection of these models must be identified and controlled.
Recommended Further Reading
This is a small section listing recommended reading for some of the development activities discussed in this text.
The Pragmatic Programmer, Andrew Hunt and David Thomas, Addison-Wesley, 2000
- This is a set of examples, discussions and stories which illustrate the problems and best practice solutions associated with the development process.
Software Requirements & Specifications, Michael Jackson, Addison-Wesley, 1995
- This is another set of small essays each of which provides insight and triggers contemplation of the various aspects of software development. The discussions of problem frames are particularly relevant.
Any programming manual for any programming language
- It is important to attempt to learn a programming language, even at a rudimentary level, in order to appreciate the difficulties of software construction and the knowledge that is ready to be assimilated into your testing.