QUALITY deals with the quality standards to be applied to the test plan itself, not to the software being tested. The plan provides the framework for how the system will be evaluated and the circumstances under which it will be released.
Section 6, Approach, outlines the testing process to be applied and can be considered to have six steps, as illustrated in the diagram below. The four core steps are: Develop Tests, Prepare to Test, Run Tests and Review Test Results. These four steps are controlled by Plan Testing and Change Management.
Develop Tests
Developing the formal tests required for any form of testing involves a number of activities, shown in the diagram:
- Analyse Requirements - Understanding the requirements and identifying what is missing from, or inconsistent with, what is actually required.
- Develop Scenarios - Preparing scenarios that exercise the system, using techniques such as Use Cases.
- Derive Acceptance Criteria - Once the previous two activities are under way or complete, preparing the set of questions to be asked of the system to see whether it matches the capability needed.
- Construct Test Cases - Test Cases are the set of specific inputs and expected results which enable one or more Acceptance Criteria to be proved.
- Write Test Scripts - Test scripts are the operational instructions for running Test Cases including: what has to be done to the system, what has to be measured, and how to do the measurement.
- Review Documents - This is a key quality process of checking all documentation produced during the development of the system.
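The relationship between Acceptance Criteria, Test Cases and Test Scripts described above can be sketched as simple data structures. This is a hypothetical illustration only; the class and field names are assumptions, not part of any standard:

```python
from dataclasses import dataclass

# Hypothetical sketch of the artefacts described above; names are
# illustrative assumptions, not taken from IEEE 829 or any other standard.

@dataclass
class AcceptanceCriterion:
    ident: str
    question: str      # the question asked of the system

@dataclass
class TestCase:
    ident: str
    criteria: list     # the Acceptance Criteria this case helps to prove
    inputs: dict       # the specific inputs to apply
    expected: dict     # the expected results

@dataclass
class TestScript:
    ident: str
    case: TestCase
    steps: list        # operational instructions: actions and measurements

# Example: one criterion, proved by one case, run via one script
ac = AcceptanceCriterion("AC-1", "Can a registered user log in?")
tc = TestCase("TC-1", [ac], {"user": "alice", "password": "secret"},
              {"logged_in": True})
ts = TestScript("TS-1", tc, ["Open login page", "Enter credentials",
                             "Measure: login succeeds within 5 seconds"])
print(ts.case.criteria[0].question)
```

The key point the sketch captures is the traceability chain: a script runs a case, and a case proves one or more criteria.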
Prepare To Test
These are the activities other than developing the tests that are required to allow testing to take place:
- Preparing the environment to run the tests - Making sure that the people, processes, hardware, software etc. are all in place to enable the testing to take place.
- Preparing Test Data - Building the data files that are required to run the test cases.
- Creating three key documents:
  - Entry Criteria - What must be done before testing starts.
  - Test Procedure - The instructions for how to run the tests on the test system, including the Test Scripts.
  - Test Item Transmittal Report - The instructions from the developers about the system version released for testing.
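As a sketch, the Entry Criteria can be treated as a checklist that must be fully satisfied before Run Tests begins. The specific criteria below are examples invented for illustration, not a mandated set:

```python
# Hypothetical sketch: Entry Criteria as a checklist that must all be met
# before testing starts. The criteria listed are illustrative examples.
entry_criteria = {
    "test environment prepared": True,
    "test data loaded": True,
    "Test Procedure written": True,
    "Test Item Transmittal Report received": False,
}

ready = all(entry_criteria.values())
if not ready:
    outstanding = [name for name, met in entry_criteria.items() if not met]
    print("Cannot start testing; outstanding:", outstanding)
```

Treating the criteria as data rather than prose makes the go/no-go check mechanical and auditable.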
Run Tests is about running the tests and recording the results:
- Running the tests involves using the input and expected results from the Test Cases and applying the Test Scripts and other elements of the Test Procedure to run them.
- Recording the results involves noting in the Test Log which activities were carried out, in what order, and what events occurred when the test was run. Any test whose actual results differ from the expected results has the details recorded in an Incident Report; the Incident Severity is also decided at this point.
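A minimal sketch of the Run Tests step, comparing actual against expected results, logging everything, and raising an Incident Report on a mismatch. The function and field names are assumptions made for illustration:

```python
# Minimal sketch of Run Tests: apply inputs, compare actual with expected
# results, record everything in the Test Log, and raise an Incident Report
# on any mismatch. Names are illustrative assumptions.

def run_test(case, system_under_test, test_log, incidents):
    actual = system_under_test(case["inputs"])
    # Test Log: record what was done and what happened
    test_log.append({"case": case["id"], "inputs": case["inputs"],
                     "actual": actual})
    if actual != case["expected"]:
        incidents.append({                    # Incident Report
            "case": case["id"],
            "expected": case["expected"],
            "actual": actual,
            "severity": "to be decided",      # decided at this point
        })

# Example: a stub system that always reports a successful login
stub = lambda inputs: {"logged_in": True}
log, incidents = [], []
run_test({"id": "TC-1", "inputs": {"user": "alice"},
          "expected": {"logged_in": True}}, stub, log, incidents)
run_test({"id": "TC-2", "inputs": {"user": "bob"},
          "expected": {"logged_in": False}}, stub, log, incidents)
print(len(log), len(incidents))   # two log entries, one incident
```

Note that every run is logged, whether it passes or not; only mismatches additionally produce an Incident Report.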
Review Test Results
When the tests have been completed, the acceptability of the system is assessed. A simple method is to check how many outstanding Incidents there are and their severity. However, this is not sufficient: a simple count of Incidents gives no idea of their impact on what the organisation wants to achieve with the system. A flawed system that delivers capability to an organisation is much better than a perfect system that does not. The test results therefore need to be checked and traced to see what effect they have on:
- the Requirements, and
- their Business or System Impact.
This analysis enables a balanced decision to be made about whether the system passes these particular tests, and recommendations to be made about its use. The results of all this activity are then recorded in a Test Summary Report.
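The balanced decision described above can be sketched as weighting outstanding Incidents by their business impact rather than simply counting them. The weights and threshold here are invented purely for illustration:

```python
# Hypothetical sketch: assess acceptability by weighting outstanding
# Incidents by business impact instead of merely counting them.
# The weights and threshold are invented for illustration.
IMPACT_WEIGHT = {"critical": 100, "major": 10, "minor": 1}

def assess(incidents, threshold=50):
    score = sum(IMPACT_WEIGHT[i["impact"]] for i in incidents)
    return ("pass" if score < threshold else "fail"), score

verdict, score = assess([
    {"id": "INC-1", "impact": "major"},
    {"id": "INC-2", "impact": "minor"},
    {"id": "INC-3", "impact": "minor"},
])
print(verdict, score)
```

Under this scheme three low-impact incidents still allow a pass, while a single critical one would not: the decision reflects business impact, not raw counts.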
There are two processes controlling the other four. The first is Plan Testing, where the scope, timescales, resources, quality and risk are decided in advance and kept up to date throughout the UAT, including the Entry Criteria and Exit Criteria.
The other controlling process is Change Management, where any changes are assessed for their impact on the system. The nearer testing gets to starting, or even once it is under way, the greater the influence changes can have on how effective the testing will be.
Other Articles on QUALITY Sections
- 2. Introduction gives an overview of the QUALITY process.
- 7. Item Pass/Fail Criteria gives an overview of how to evaluate the testing data to make a pass/fail decision.
- 9. Test Deliverables describes what is produced by the testing process.
- Basics of Project Plans gives further information on item 2: "Long or Unrealistic Timescales" and item 4: "Scope Creep" as two of the five key characteristics of a project plan and the importance of balancing them.
- Verification and Validation has further information on item 6: "Poor Testing" by explaining what the terms Verification and Validation mean.