CHAPTER 5

TESTING

5.0 Introduction

Testing is the stage in which the validation of all system components occurs. Testing is the basis for accepting a system, and its completion represents one of the more significant developmental milestones. Testing at any level may result in the re-design of a portion of the system.

The extent of testing required will vary considerably with the size of the system, but it remains one of the most critical points in the system life cycle, and one that often causes the most trouble and lost time. It must be a constant consideration throughout all prior stages and their relevant activities. Listed below are the activity products which serve as background information to the testing activities:

· Subsystem design documentation

· Subsystem and program design documentation

· Debugged programs and procedures

It is advantageous to think of testing as falling into three major categories:

· Manual procedures and facilities testing

· Subsystem/system testing

· Acceptance testing

Initially, the designers have responsibility for manual procedures and facilities testing. The supervisor charged with implementation and conversion responsibilities will assume prime responsibility during the last phase of testing.

Program debugging is primarily the responsibility of the programmer who wrote the program and was accomplished in the previous stage.

Subsystem and system tests are the responsibility of the designers and a specialized team, although programmers may execute the tests under their supervision. It is the designers who created the system logic controlling information flow between programs and subsystems, and it is they who know in detail the objectives towards which the system design was directed.

Acceptance testing is a joint responsibility of the testing team and of user representatives. Acceptance testing is critical, as any failures subsequent to this activity will have a major impact on the user organization.

All of the testing categories - manual procedures and facilities, subsystem/system, and acceptance - provide permanent materials and tools for continued system maintenance and enhancement. It is essential that this work be viewed as the creation of a permanent means of testing throughout the system life and not as a set of temporary, disposable materials and tools.

The standard activities for this task are to:

· Develop detailed test plan and procedures (5.1)

· Prepare site and install hardware and facilities (5.2)

· Develop job stream and job control (5.3)

· Test training course, work aids, and human procedures (5.4)

· Build test data base and transaction files (5.5)

· Test subsystem/system (5.6)

· Perform acceptance test (5.7)

· Produce test results report (5.8)

The only activities which could be considered optional are site preparation and hardware installation, and the testing of training courses, work aids, and human procedures. These are not always carried out, due to the nature of the system or the acceptance of hardware and site as constraints.

The specific products of the testing standard activities are:

· Detailed test plan

· Operations testing procedures

· Delivered and installed equipment and facilities (report)

· Hardware, software, and physical facilities checkout (report)

· Object run job control (listed)

· Load modules

· Tested training courses (report)

· Tested on-the-job training programs (report)

· Approved work aids (report)

· Tested aids report

· Tested manual procedures (report)

· Training schedule for conversion and implementation

· Test data base (documented)

· Test transaction files (documented)

· Tested job stream or transaction paths (report)

· Transaction path analysis

· Tested subsystem (report)

· Tested system (report)

· List of acceptance test discrepancies

· Accept/reject system (report)

· Testing stage results report

During testing, three cycles of documentation will take place. The first is to establish what is planned in detail, which takes the form of a test plan. The next is to record the results of each activity. The third is to summarize the testing stage as a whole, citing overall methods, results, and conformance to the test plan. The documentation itself will reflect the progressive, cumulative approach to testing.

 

Exhibit 5.0-A. Suggested Testing Activity Network.

(Diagram: activities 5.1, 5.3, 5.5, 5.6, 5.7, and 5.8 form the main path, with activities 5.2 and 5.4 as parallel branches.)

5.1 Develop detailed test plan and procedures

5.2 Prepare site and install hardware and facilities

5.3 Determine run time environment

5.4 Test training course, work aids, and human procedures

5.5 Build test data base and transaction files

5.6 Test system/subsystem

5.7 Perform acceptance test

5.8 Produce test results report

 

 

5.1 Develop Detailed Test Plan and Procedures

While extensive planning for testing was accomplished in Detail Design, more detailed planning will be required at this point by those responsible for conducting the tests. A detailed test plan is prepared for testing as a whole, and for each identifiable test which will take place. For the overall test plan, the information will be summary in nature while, for the specific test, it will be as detailed as possible. Certain parts will remain relatively unchanged, whereas other parts, such as schedules, may have to be drastically revised as testing proceeds. The results of each testing activity will be evaluated relative to this detailed test plan.

In addition, indoctrination is required for the computer room and associated personnel as to what the system plans expect of them. Standard procedures to follow when a program halts, such as "always dump core" or "always write out register contents", must be presented.

This activity should be executed before the first test run is sent to the computer center. The absence of sufficient information from the computer operator, or a test package lost in scheduling, can nullify a perfectly good test shot and cause a significant delay in the completion of the system. This activity should result in a smooth relationship between operations and software personnel.

5.1.1. Methods

1. Project impacts. Cite any unusual impact which the test or operation of the system will impose on equipment, personnel, operating procedures, or on the remainder of testing. The purpose is to highlight where potential problems may arise. Typical examples are requirements for special equipment, long test running times, need for unusually qualified personnel to be available, and so forth.

2. Software. Make a list of software packages used during, and in support of, the test which are not part of the system under test. This list should be annotated to explain why these packages are being used and record any special requirements they may generate. Some examples are special dump programs, utilities to extract selected records from files, data reduction used to analyze test data, test data generating programs, and so forth.

3. Test site. Give location(s), date(s), and participating organizations. For many tests, this will be at one standard site with no special requirements. However, testing may be taking place at several sites and, especially if new equipment is being used, the site may be controlled by the company. In most cases, the final acceptance tests must be run on the equipment and at the site where the operational system will function.

4. Test milestone chart. Prepare this chart to depict testing activities in chronological order with supporting narrative as required. For an individual series of tests, the starting and ending dates will be highlighted. When the chart covers many interrelated tests, the dependencies will be shown.

It is recommended that PERT-type diagrams be prepared for each level of testing. A PERT or similar network is a useful graphic device for illustrating dependencies between tests and for preparing effective test schedules. The impact of slippage of a given set of tests can be more readily evaluated, and resources allocated, so as to keep the testing program as a whole on schedule.
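
As a rough illustration, the dependency information in such a network can be held in a simple table and used to derive a feasible test order and earliest finish dates. The sketch below is a minimal example in Python; the test identifiers, durations, and dependencies are hypothetical, not taken from any exhibit in this chapter.

    # Sketch: derive a feasible test order and earliest finish times from
    # a PERT-style dependency table. Test IDs and durations are hypothetical.
    from graphlib import TopologicalSorter

    duration = {"T1": 5, "T2": 3, "T3": 4, "T4": 2}             # days per test
    depends_on = {"T1": [], "T2": ["T1"], "T3": ["T1"], "T4": ["T2", "T3"]}

    order = list(TopologicalSorter(depends_on).static_order())  # feasible sequence

    earliest_finish = {}
    for test in order:
        start = max((earliest_finish[d] for d in depends_on[test]), default=0)
        earliest_finish[test] = start + duration[test]

    for test in order:
        print(test, "finishes no earlier than day", earliest_finish[test])

Slippage can be evaluated by increasing a single test's duration and re-running the computation to see which downstream finish dates move.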

5. Personnel requirements. Provide a horizontal bar type chart to depict the number and period of use of each skill type. Note requirements such as multishift operations, and the assignment or retention of key skills to ensure continuity and consistency in extensive test programs. Where unique individuals are required, their availability must be determined. Particularly in the later stages of testing, it may be necessary to have access to personnel who designed the system but are no longer employed full-time on the project.

6. Equipment requirements. Provide a horizontal bar type chart to depict the period of usage and quantity required of each item of equipment employed throughout the test period, including test data reduction equipment.

7. Applicable documents. Itemize, by document title, number, and use, the final documentation produced for the project that is applicable to this test. Include any other documentation describing systems or procedures which supplement, or provide for, interaction with the subject system during the course of normal operation or at any point in the test phase.

8. Test materials. Itemize the articles and apparatus associated with the conduct of the test(s). All items not deliverable as a part of the operational system should be included under separate headings for clear identification.

Examples of test materials are:

· Data base inputs identified by media, file type, file identifier, record size, and other pertinent information. The source for each data base input should be clearly annotated.

· Processing inputs identified by media, test sequence, usage, record size, source, and other relevant information.

· Support programs identified by media, name, type, and other relevant information.

· Test control programs or other special test programs with full identification.

· The modules, programs, etc., to be tested with proper identification.

· Test worksheets and other forms and instructions specifically prepared to control and expedite the test activity, identified by type and quantity.

· Test reporting forms which will be used to collect data for analysis of the test.

· Apparatus required during, or in support of, the test which is not normally part of the system. Apparatus would include extra peripherals (tape drives, printers, plotters, etc.), test message generators, test timing devices, test event recorders, and performance measuring devices. Such apparatus will be identified by name, type, and quantity required.

· All required human subsystem testing apparatus.

9. Specific test objectives. List the individual, detailed objectives to be demonstrated by the test as derived from the performance requirements. It is critical that there be clearly defined specific objectives which are directly measurable. Poor testing is the predictable result of ill-defined objectives lacking measurable results.

10. Test conditions. Indicate whether the test is to be made using the normal inputs (type, magnitude, or frequency) and data base, or whether a special set of exercise inputs and an exercise data base is to be used, or some combination of these. Also, describe any unique organization or structure used in testing. For example, if the I/O modules are to be tested with a dummy processing module, then this condition should be stated.

11. Extent of test. Indicate the extent of testing employed. Where total testing is not employed, the test requirements could be presented either as a percentage of some well-defined quantity, or as a number of samples of discrete operating conditions or values. Also indicate the rationale for adopting limited testing.

12. Test progression. Indicate, in cases of progressive or cumulative tests, the manner in which progression is made from one test to another, so that the style of activity for each test is completely established. When a highly modular approach is used, it is important that testing of the modules together (after their separate tests are completed) be accomplished with the same thoroughness as the individual tests. It should never be assumed that, because the separate parts function as specified, they will continue to do so when linked together. Indicate whether each test step is to be performed without interruption for evaluation, or whether each step is evaluated before testing continues.

13. Test constraints. Indicate the anticipated limitations imposed on any test due to system or test conditions, such as limitations on timing, interfaces, equipment, personnel, and data base. If any of these constraints may affect the validity of the testing, then written agreements must be reached with the design group and users on the acceptability of these constraints.

14. Test data criteria. Describe the rules by which test results will be evaluated:

· Tolerances: the range over which a data value output or a system performance parameter can vary and still be considered acceptable.

· Samples: the minimum number of combinations or alternatives of input conditions and output conditions that can be exercised to constitute an acceptable test of the parameters involved.

· Counts: the maximum number of interrupts, halts, or other system breaks which may occur due to non-test conditions.

Describe the technique to be used for manipulation of the raw test data into a form suitable for evaluation. The available techniques could include manual, automatic, or semi-automatic means of reducing and validating data.

In all cases, it is imperative that the expected results of the item being tested be known. One cannot determine incorrect output unless the correct output is known.
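
The criteria above lend themselves to mechanical application. The following sketch shows one way the tolerance, sample, and count rules might be checked; the parameter names and limits are hypothetical examples, not values from this chapter.

    # Sketch: evaluate raw test results against tolerance, sample, and
    # count criteria. Parameter names and limits are hypothetical.
    TOLERANCES = {"response_seconds": (0.0, 3.0)}  # acceptable range per parameter
    MIN_SAMPLES = 25   # minimum input/output combinations to exercise
    MAX_BREAKS = 2     # maximum interrupts/halts due to non-test conditions

    def evaluate(results, samples_exercised, breaks_observed):
        failures = []
        for name, value in results.items():
            low, high = TOLERANCES[name]
            if not (low <= value <= high):
                failures.append(f"{name}={value} outside [{low}, {high}]")
        if samples_exercised < MIN_SAMPLES:
            failures.append(f"only {samples_exercised} samples (need {MIN_SAMPLES})")
        if breaks_observed > MAX_BREAKS:
            failures.append(f"{breaks_observed} system breaks (limit {MAX_BREAKS})")
        return failures  # an empty list means the criteria are met

    print(evaluate({"response_seconds": 2.4}, samples_exercised=30, breaks_observed=1))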

15. Test inputs and outputs. List the input(s) with their associated data characteristics, which should include:

· Type of inputs (data rate, volume, period),

· Source,

· Media and device(s),

· Where they occur in testing.

List the output(s) from the test with their associated data characteristics, which should include:

· Type of output (data presentation, status indication),

· Transfer (data rate, volume, period),

· When and where produced in the test.

16. Test conditions. List the conditions which exist within the test or as a part of the input or output to the test. Conditions should include:

· Settings of controlling parameters for the test (minimums, maximums, thresholds),

· Enabling conditions for receipt or output of data,

· Priority conditions for processing of data,

· Availability of file data types and elements,

· Availability of data, control, and status messages from interacting functions not under test,

· Mode of system operation (normal, emergency),

· System configuration to include type of partitioning for a multiprogramming system.

Describe the manner in which input data are controlled in order to:

· Perform the test with a minimum of data types and values,

· Exercise the test with a range of bona fide data types and values which test for overload, saturation, and other worst case effects,

· Exercise the test with bogus data types and values which test for rejection of irregular inputs.

The use of a matrix format comparing each input item against the portion to be tested is recommended (a sketch of such a matrix follows this item). During testing, at some point:

· Every instruction should be executed,

· Every possible combination of branch paths should be executed,

· Every data field should be tested with significant values.

Describe the manner in which output data are controlled in order to:

· Detect occurrence (or ultimate non-occurrence) of output data for indication of test completion,

· Record or identify permanent locations of output data for indication of test performance,

· Include temporary outputs not normally saved during operation,

· Evaluate outputs as a basis for continuation of test sequences,

· Evaluate test output against required output to assess performance of test.
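
A minimal sketch of the matrix recommended above, recording which input items exercise which portions of the system so that coverage gaps stand out; the input and module names are hypothetical.

    # Sketch: matrix comparing input items against the portions of the
    # system they exercise. Input and module names are hypothetical.
    inputs = ["TXN-ADD", "TXN-DELETE", "TXN-BOGUS"]
    portions = ["edit module", "update module", "error handler"]

    covers = {
        ("TXN-ADD", "edit module"): True,    ("TXN-ADD", "update module"): True,
        ("TXN-DELETE", "edit module"): True, ("TXN-DELETE", "update module"): True,
        ("TXN-BOGUS", "edit module"): True,  # nothing exercises the error handler
    }

    # Flag any portion that no input exercises -- a gap in the test data.
    for portion in portions:
        if not any(covers.get((item, portion)) for item in inputs):
            print("uncovered:", portion)     # prints: uncovered: error handler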

17. Test set-up procedures. If not stated elsewhere, or by standard operating procedures, itemize the activities associated with the set-up of the computer facilities to conduct the test. Set-up activities encompass all routine preparations and should be reviewed with the operations personnel to ensure that the instructions are clear and complete.

18. Test steps. Itemize the test(s) into test steps in test sequence order (this includes special operations), including:

· Visual inspection of test conditions,

· Data dumps,

· Control area dumps,

· Instructions for data recording,

· Modifications of data base,

· Interim evaluation of test results.

19. Test termination. In test sequence order, itemize the activities associated with the termination of the test, including:

· Read-out of critical data from indicators and locations for reference purposes,

· Termination of operations of time-sensitive test support programs and test apparatus,

· Collection of system and operator records of test results,

· Disposition of input data and files,

· Disposition of intermediate and final outputs,

· Take down procedures,

· Dissemination of test output and documentation (filled out forms, etc.) to appropriate parties.

5.1.2 Products

· Detailed test plan (see Exhibits 5.1-A through 5.1-D).

· Operations testing procedures.

 

5.1.3 Background

· Subsystem test plan (3.8).

· Test schedule (3.8).

· Debugged programs (4.7).

· Job specifications (4.1).

 

Exhibit 5.1-A. Example of Test Milestone Chart.

(To be supplied)

NOTES: This chart will be prepared for each level of testing.

a. The assigned test name and ID will be given. All tests will have a test identification number.

b. Time scale may be based on either a calendar system or on a relative time scale. The first versions will probably be in a relative time scale of weeks, based on 1st, 2nd, 3rd, etc., work weeks. The last, prepared before commencement of actual testing, will have actual calendar dates.

c. A coding system can be used:

S: Testing starts.

C: Testing continues.

F: Final date on which testing must be completed.

E: Expected time at which test will be complete.

This chart must be prepared in conjunction with the test dependencies chart.

 

 

 

Exhibit 5.1-B. Example of Test Data Function Chart.

(To be supplied)

Exhibit 5.1-C. Example of Program Test Plan.

(To be supplied)

Exhibit 5.1-D. Example of Test Dependency Matrix.

(To be supplied)

NOTES: The purpose of this matrix is to graphically illustrate the dependencies between tests (a sketch interpreting the codes follows the legend below).

a, b. Test names and identifications in the same sequence. Sequence determined by test team leader.

c. Coded relationship:

Blank: Same test or independent.

T: Test on right cannot begin before test on left has been completed and evaluated.

P: Test on right can begin prior to completion of test on left, but cannot be completed until test on left has been completed and evaluated.
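
The coded relationships above lend themselves to a mechanical check of a proposed schedule. A sketch, with hypothetical test identifiers and dates expressed as day numbers:

    # Sketch: check a proposed schedule against T/P dependency codes.
    # Test IDs, codes, and day numbers are hypothetical.
    # code[left][right] = "T": right cannot START before left completes.
    # code[left][right] = "P": right may start early but cannot COMPLETE first.
    code = {"T1": {"T2": "T", "T3": "P"}}

    start = {"T1": 0, "T2": 6, "T3": 3}     # proposed start days
    complete = {"T1": 5, "T2": 9, "T3": 8}  # proposed completion days

    for left, row in code.items():
        for right, c in row.items():
            if c == "T" and start[right] < complete[left]:
                print(f"violation: {right} starts before {left} is complete")
            if c == "P" and complete[right] < complete[left]:
                print(f"violation: {right} completes before {left} does")
    # The sample dates satisfy both constraints, so nothing is printed.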

 

 

 

5.2 Prepare Site and Install Hardware and Facilities

In some instances, this activity may have occurred earlier in the design cycle, but this is the latest point at which it should take place, since testing should be done, if possible, on the actual implementation configuration. In some installations, all hardware and facilities procurement is made external to system development and then becomes a basic constraint on that development; in that case, this activity would not apply.

A delay in the preparation of the site or installation of the hardware can disrupt the entire schedule. In many cases, new systems will require only the installation of some additional equipment at an existing site. This installation of additional equipment should be examined in detail to ensure that it does not cause an unexpected disruption.

For an existing installation, a checklist will be used which contains:

· Impact on storage areas,

· Potential overcrowding of processing area,

· Overloads of power supplies,

· Overloads of air conditioning,

· Equipment with more stringent requirements as to control of temperature, humidity, dust, or power supplies, than the existing equipment,

· Additional support equipment required,

· Space and work room for additional operator personnel,

· Additional work room for hardware maintenance personnel,

· Shielding against mutual interference between equipment.

The addition of a relatively minor amount of equipment can have a serious impact on the system if the existing equipment is in high load conditions: e.g., new disk drives causing channel overload.

If there is to be extensive replacement of one computer hardware configuration by a succeeding one, the change-over can be very difficult, due to:

· Different characteristics of the two hardware configurations,

· Phase-over, or other cut-over, of operations from the old to the new.

The installation of a computer hardware configuration in a new site may involve the construction of a new building or special area in a building, or modification of an area previously used for other purposes.

The site must be fully ready to receive the equipment when it is available for installation. Particular care should be taken to ensure that the equipment is moved into the site without damage. Many items of equipment are vulnerable to damage when subjected to very low levels of overstrain, overheating, being moved through a dusty area, etc.

Hardware and operating system checkout is often left as the sole responsibility of the vendor, but the vendor's checkout is not normally sufficient. This activity is crucial to the future performance of the system and should take place under the direct supervision of qualified personnel from the purchasing organization. If qualified personnel are not available, it may be advisable to use an independent contractor who has experience with the hardware and software and who is able to provide an objective evaluation of the checkout.

The facilities considered here can range from minor equipment additions to existing facilities, to completely new hardware systems and up to the inclusion of new buildings.

The magnitude of this task is obviously related to the above parameters. For the minor equipment changes, simple and brief acceptance procedures may be used. For completely new and massive facilities, extensive acceptance procedures will be required over a much longer period.

5.2.1. Methods

1. Continuously monitor site construction and hardware delivery dates. Any changes in schedules must be presented to the development team at the earliest possible moment. No changes in configuration may be accepted without the concurrence of the developmental team. Any changes must be examined for their potential impact upon the system operation.

2. Prepare an agreed upon checklist between the construction contractor and installation team as to the items to be covered in the acceptance of the site. Each item will have defined measurable parameters. For instance:

· The soundproofing will be checked by test equipment, measuring sound absorption against a known source.

· Overload conditions will be tested on the power supplies and air conditioning.

· Flooring will be checked to establish that it can take specified weights.

· Filter systems will be tested against known inputs, etc.

· All safety controls will be tested insofar as practicable by creating conditions to activate alarm sensors.

Prior to delivery of the equipment, the site will have been accepted at least as regards those areas in which the equipment is to be installed.

3. Implement detailed plans for equipment delivery, ensuring that:

· Adverse weather conditions will not damage equipment,

· Unloading equipment is available,

· Any required test equipment is available,

· The site is clean and all systems are functioning when the equipment is delivered. A written understanding will be reached with the hardware vendor(s) that equipment acceptance will be contingent upon the successful completion of hardware and software checkout.

4. Prepare a report of site acceptance and equipment delivery for the developmental team and for the conversion and installation team.

5. Prepare a test plan for hardware and system software checkout in detail and as agreed upon by both vendor and organization, including:

· Running the manufacturer provided test programs,

· Running "benchmark" programs resembling possible future developments,

· Running any system or subsystem modules previously prepared on other equipment,

· Running test data through the system, similar to that for use in actual operations (particularly for on-line data communication equipment),

· Joint agreements as to testing procedures between various vendors, if more than one vendor’s equipment is being installed,

· A check list of system features to be tested, form of test results, and an agreed upon method of evaluating those results,

· A special provision for testing unusual equipment configurations, equipment not in use for some time, and new or modified features of the operating system.

· Provision for running final tests with the organization's personnel (not only on hardware and operating system software, but also for the training of operations personnel and directives for computing center operations).

6. Use not only the inputs from the vendor(s) while preparing the test plan, but also:

· If there is a users' group for the particular hardware, consult it for recommendations,

· Have personnel ask colleagues in professional associations whether there are any problem areas which require particular attention,

· Review the professional literature for recommendations on testing new hardware and software.

7. The results of any portion of testing should be documented and reviewed following completion. In the event of unsatisfactory results, prepare a plan for re-testing.

8. Make appropriate evaluations such as temperature variation, sound proofing, etc., for environmental facilities.

9. Establish provision for routine maintenance. Maintenance of computer configuration may be done by the vendor(s) or other specialists. Equally important is proper maintenance of air-conditioning and other required support equipment.

10. Compare the resulting facilities with the requirements set forth at the time of the order.

11. Establish whether the operation of the facilities is within the limits set forth by the system objectives and performance criteria. If not, negotiations with the vendor are in order.

5.2.2 Products

· Delivered and installed equipment and facilities (report).

· Hardware, software, and physical facilities checkout (report).

5.2.3 Background

· System objectives (1.4).

· System performance criteria (1.4).

· Hardware requirements (2.11).

· Equipment order or reservation (2.11).

· Schedules for equipment delivery (2.11).

· Software specifications (2.12).

· Communications network layout (2.10).

· Hardware and facilities installation plan (2.13).

 

5.3 Determine Run Time Environment

In this activity, the best operational arrangement for the program modules will be developed. The controls for this activity can include system generation modifications, run time job control procedures, and/or compilation and edit procedures.

Whatever combination is used, the system designer must carefully consider the architecture of the system used (virtual memory, hierarchy of memories, size of cache memory, I/O characteristics, etc.) as well as specific software requirements such as:

· Real time transaction systems with background batch,

· Partition size,

· Number of priorities used,

· Amount of processing required compared with I/O performed.

In any case, the best source of information regarding the specific parameters which must be taken into account, and the flexibility of the system, is the manufacturer's manuals. These manuals are also a guide for evaluating the trade-offs involved in different approaches.

Additional functions which must be considered at this time include:

· Required accounting data,

· Device assignments,

· Core allocation,

· Job stream control,

· Utility linkage,

· Multiprogramming,

· Module run time limit,

· etc.

This is an ongoing activity in that the testing activities will tend to affect the size of the individual modules and the load module structure itself. In some instances, the load module structure and the job control will have to be modified many times before the testing is complete.

5.3.1. Methods

1. Based on allocated core size and requirements for the program control modules, determine how the various program modules must be linked together.

2. Develop the load module structure with prime consideration to:

· Sequence of processing,

· Potential repetitive processing of specific logic,

· Data pass characteristics,

· Frequency of use of specific logic,

· Length of time to load a module into core,

· Utility intervention requirements.

3. Code job control for all modules in the job streams. Be sure to include all steps within the job (a sketch follows this list of methods).

4. Code all linkages to system utilities. Be sure to invoke the right utility. Often, there are different utilities for similar activities with slight, but important differences.

5. Develop core allocation parameters. Be sure to reserve only that core which is necessary, since multiprogramming may be occurring.

6. Develop processing time limits to avoid wasting computer time while in loops, etc.

7. Code file and I/O assignments, including file cataloguing parameters. Make sure that all files are referred to correctly. Use "refer backs" to prior program modules whenever possible.
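
As a rough modern analogue of items 3 through 6, a job stream can be represented as an ordered list of steps, each with a run time limit, and each step run under that limit so a looping module cannot waste machine time. The step names, commands, and limits below are hypothetical; this is a sketch, not vendor job control language.

    # Sketch: a job stream as ordered steps with hypothetical per-step
    # run time limits (cf. items 3 through 6 above).
    import subprocess

    job_stream = [
        # (step name, command, time limit in seconds)
        ("EDIT", ["./edit_module"], 300),
        ("UPDATE", ["./update_module"], 600),
        ("REPORT", ["./report_module"], 120),
    ]

    for name, command, limit in job_stream:
        try:
            # Enforce the time limit so a looping step cannot run on forever.
            subprocess.run(command, timeout=limit, check=True)
        except subprocess.TimeoutExpired:
            print(f"step {name} exceeded its {limit}s limit; job stream aborted")
            break
        except subprocess.CalledProcessError as err:
            print(f"step {name} failed with return code {err.returncode}")
            break
        except FileNotFoundError:
            print(f"step {name}: module not present (hypothetical command)")
            break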

5.3.2 Products

· Object run job control (listed).

· Load modules (see Exhibit 5.3-A).

5.3.3 Background

· Debugged programs (4.7).

· Vendor job control language specifications.

5.4 Test Training Courses, Work Aids, and Human Procedures

In this activity, previously developed training programs are tried out on groups of subjects in a practice training environment. All courses to be used should have several practice dry runs in order to firm up the content and smooth the presentation.

One of the required items is an agreed-upon test (or tests) for any training program. To be able to evaluate any training, it must be possible to test the results. A before/after pair of tests may be useful to measure the results of training. It is desirable that the tests be oriented to the required behavior rather than to documented knowledge; for example, the ability to prepare punching documents should be tested on actual inputs (selected for comprehensiveness) against a known standard based on actual worker averages.

For formal courses, this testing involves conducting practice sessions of the courses; formal training includes workshops. The two major benefits realized are the correction and improvement of the course material, and the development of style and expertise on the part of the instructors.

For on-the-job training, this involves the conducting of several training programs to determine whether the trainee can, in fact, learn the required skills in this manner. Specification of ways to improve the on-the-job training will result from this activity. It must be noted that the individuals conducting on-the-job training will themselves need training.

Here, work aids will be exercised to determine whether, in fact, they support the task in the manner anticipated. Suggested modifications and improvements should result from this activity.

All the above evaluations will recur throughout the life cycle of the training device. At this point, the particular device must be proved suitable for application to naive subjects.

In this activity, all human procedures completed in detail design are validated in a simulated work environment. Primarily, these procedures refer to alternative human response patterns in system operation. Although the specific details may differ, this validation is directly analogous to program testing. In general, validation should be done under the most realistic conditions possible, using operational personnel as test subjects. One prime purpose of this activity is to identify and correct human "bottlenecks". The level of testing required depends directly on the complexity of the system; regardless of the level, however, the actual conducting of these tests is extremely important. Another important feature of this testing is the fine tuning of the man/machine interface: subtle changes to a screen format, etc., may prove to have a strong benefit, from a human engineering standpoint, towards improving overall human performance. This testing would normally follow directly from testing the training courses, in that it can be an outgrowth of on-the-job training, and trainees from those test training courses can be used to test the manual procedures.

5.4.1. Methods

1. For formal courses, choose people (for trainee roles) who are familiar with the subject being taught. They will be much more critical of the course than the real class would be, but will add enhancements to the material as well as help to perfect the presentation.

2. Revise the course as necessary; re-iterate the teaching cycle until acceptable course material and delivery evolve.

3. For on-the-job training, instruct the individuals who will do the training.

4. Pick non-experts to undertake the on-the-job training programs as trainees and conduct practice training programs.

5. Note the results of all training programs and formulate suggested improvements; re-iterate the process until suitable results are obtained.

6. For work aids, perform the steps as for on-the-job training until appropriate work aids are tested and modified.

7. Collect real data, where possible, to use in the human procedure testing. Where this is not possible, generate data as close to real life as possible.

8. If necessary, develop a manual simulation laboratory for the conducting of the tests. This laboratory should be as close to the real environment as possible under the constraints of time and money. For small-scale systems, the establishment of a manual test laboratory is not necessary, but the larger and more complex the system is (with regard to the human subsystems), the more elaborate the manual testing will have to be. The conduct of the testing, whether on a large or small scale, will be a human simulation (with appropriate work aids) in which the controllers of the simulation or test play the role of the world external to the system. They will input, distribute, and process information in such a way as to stimulate and exercise the human subsystem. Where possible, actual computer processing may be used to assist the simulation environment. As the various simulations are conducted, the control (test) group will log and take note of the stimulus/response activities, and of the general accuracy, efficiency, and effectiveness of the processes being monitored.

9. Exercise all procedural practices, job descriptions, work aids, and manual forms to the level of determining whether they are really usable and effective. As in system testing, develop and present unusual and overload conditions to stress the system. Ensure adherence to system objectives and performance criteria.

10. Modify all procedures, job descriptions, work aids, training courses and manual forms on the basis of the test results and re-test all modifications until all problems are resolved.

11. Prepare initial training schedules for conversion and implementation.

5.4.2 Products

· Tested training courses (report).

· Tested on-the-job training programs (report).

· Approved work aids (report).

· Tested manual procedures (report) (see Exhibit 5.4-A).

· Training schedules for conversion and implementation.

 

5.4.3 Background

· Training courses (4.2).

· Procedural practices (3.1).

· Job specifications (4.1).

· Work aids (4.2).

· Manual operating forms (3.2).

 

Exhibit 5.4-A. Example of Report on Tested Manual Procedures.

(To be supplied)

5.5 Build Test Data Base and Transaction Files

This is a critical activity because the entire testing of the automated system and future reliability of that system will probably directly relate to the adequacy of the test data used. There is, of course, considerable difference between developing test data for a large, complicated system and doing so for a small, simple system. For the complex case it could take a large team of specialists many months to develop adequate test data, while the simple case could involve only one man for a few days.

Quite often, the services of an automated test data generator are employed. The test data generator may be an "in house" product, or a package obtained from outside the company at a reasonable cost; its use is most appropriate for large systems efforts. If this is done, it should have been planned during a previous stage (3.8), as a last-minute decision in an area as critical as this could be dangerous. In any event, a test data generator, if used, will have to be carefully analyzed and its parameters fully understood before use. The generator itself will also require testing. Even if a test data generator is used for much of the system testing, certain tests will have to be run using live data, since there is often some difference between what data are supposed to be and what they actually are.
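
A minimal sketch of what a test data generator does: produce records from field specifications, deliberately seeding a proportion of invalid values so the edit logic is exercised. All field names, value ranges, and the error rate are hypothetical.

    # Sketch: generate test records from field specifications, seeding a
    # proportion of deliberately invalid values. Names/ranges hypothetical.
    import random

    FIELDS = {
        "account_no": lambda: random.randint(100000, 999999),
        "amount": lambda: round(random.uniform(0.01, 9999.99), 2),
        "txn_code": lambda: random.choice(["ADD", "CHG", "DEL"]),
    }

    def generate(n, error_rate=0.1, seed=42):
        random.seed(seed)                     # reproducible test data
        records = []
        for _ in range(n):
            rec = {name: make() for name, make in FIELDS.items()}
            if random.random() < error_rate:  # seed a deliberate error
                rec["txn_code"] = "???"       # bogus code the edits must reject
            records.append(rec)
        return records

    for rec in generate(5):
        print(rec)

Note that, as the paragraph above cautions, generated data of this kind must itself be verified before it is trusted for testing.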

Quite often, programs developed to do the conversion which builds the live data base will be used to build a sample of that base for testing purposes. The control information required by the data base design and the data management techniques being employed must be inserted into the transformed records as the conversion procedures and logic are subjected to testing. The importance of the test data base cannot be over-stressed. Enough data must be used to create the anticipated operational conditions and enough volume to cause overloading of the system. An inadequate test data base could lead to the acceptance of a system that could not survive under a loaded operational environment.

At the same time, test transaction files should be created. This can be accomplished through a test data generator or by collecting a representative sample of live transactions. As in building the data base, there must be enough transaction data to test the load capacity of the software and facilities, as well as to test every possible path through the system. Coordination between the test data base and the test transactions is required.

5.5.1. Methods

1. For all test data that are to be manually encoded, select the personnel to effect the encoding and instruct them, if necessary, in the procedures required. Begin the encoding process and monitor it through to completion to ensure that the data conform to what is needed. This may apply both to data base type information and to transaction type information. The manual procedure of using transaction forms can be effectively tested by utilizing these forms to create test transactions. People who will actually be involved in production should assist in preparing the test transactions.

2. For all test data that are to be extracted from live sources, establish the procedure, whether manual or automated, and begin the extraction process.

3. If a test data generator is used, establish the parameters required for the process and test it out to ensure that proper data are being generated.

4. Build the test data base from the extracted or encoded data, or generate it with the test data generator. Building the test data base may incorporate some of the system testing of maintenance functions in order to build up data content within the data base.

5. Check the test data base to determine:

· If there are enough data to represent all conditions that would be likely to occur, and enough against which to apply the prescribed transactions,

· If there is at least one sample for each record type,

· If it contains all possible error conditions.

6. Document the contents, organization, and creation of the test data base.

7. Build the test transaction files from the extracted or encoded data, or generate them with the test data generator. Maintenance transactions should be created first, if needed, to build up test conditions within the data base. Control transactions are then created, keeping in mind the conditions established by the maintenance transactions. Every transaction path must be considered, including illogical ones which would result in error conditions. In many cases, conversion is accomplished via transactions created from current files; these transactions can also be included in the transaction test file. In all cases, develop enough transactions to simulate overload conditions. Transaction files representing multiple work days must be built to simulate day-to-day production.

8. Check the test transaction files to determine whether (a sketch of such a check follows this list):

· There are enough data to stress the system, both in variety of transactions and in load,

· All transaction types are represented and all transaction paths through the system are exercised,

· All possible errors are represented, including those that should never happen (they usually do in real life).
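
A sketch of the kind of mechanical check these three points imply, using hypothetical transaction types and error conditions:

    # Sketch: verify a test transaction file represents every transaction
    # type and error condition. Types and conditions are hypothetical.
    REQUIRED_TYPES = {"ADD", "CHG", "DEL"}
    REQUIRED_ERRORS = {"bad-code", "missing-key", "duplicate-key"}

    def check(transactions):
        seen_types = {t["type"] for t in transactions}
        seen_errors = {t["error"] for t in transactions if t.get("error")}
        gaps = []
        if REQUIRED_TYPES - seen_types:
            gaps.append(f"types never exercised: {sorted(REQUIRED_TYPES - seen_types)}")
        if REQUIRED_ERRORS - seen_errors:
            gaps.append(f"errors never exercised: {sorted(REQUIRED_ERRORS - seen_errors)}")
        return gaps

    print(check([{"type": "ADD"}, {"type": "CHG", "error": "bad-code"}]))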

9. Document the contents and creation of the test transaction files.

5.5.2. Products

· Test data base (documented).

· Test transaction files (documented).

5.5.3. Background

· Data base specification (3.3).

· Data base conversion logic (3.3).

· Program logic flowcharts (3.6).

· Program decision tables (3.6).

· Utility requirements (3.7).

· Detail Design Report (3.9).

· Detailed test plans (5.1).

 

5.6 Test Subsystem/System

Subsystem testing includes a careful testing of a program in its environment, namely, the adjacent programs. In order to ensure that nothing has been overlooked, each program will process inputs from prior programs and pass its outputs to subsequent programs. All interfaces will be tested.

In the case of batch systems, this includes job stream testing. Testing a job stream involves exercising all the programs, in sequence, required to transform initial subsystem inputs into ultimate subsystem results.

For on-line or real time systems, the process is to test transaction paths. Testing a transaction path involves exercising all the programs or processes required for a unique input transaction type.
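
One way to picture both cases: each program consumes the previous program's output, so the stream or path can be exercised end to end and every interface checked. A sketch with hypothetical module names and record shapes:

    # Sketch: exercise a job stream end to end, passing each program's
    # output to the next and checking each interface. Names hypothetical.
    def edit(records):     # validates and tags input transactions
        return [dict(r, valid=r["amount"] > 0) for r in records]

    def update(records):   # applies only the valid transactions
        return [r for r in records if r["valid"]]

    def report(records):   # summarizes the processing stream
        return {"processed": len(records)}

    stream = [edit, update, report]
    data = [{"amount": 25.0}, {"amount": -3.0}]  # initial subsystem inputs
    for program in stream:
        data = program(data)  # output of one program is input to the next
    print(data)               # ultimate subsystem result: {'processed': 1}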

For both types of testing, several purposes are served, which are to:

· Check program to program interfaces,

· Verify the accuracy of a whole processing stream,

· Verify the adherence to the established performance criteria,

· Check all module calls,

· Check the job control.

This activity can be accomplished only when the subsystem can be assembled and operated as an organized entity. The objectives of subsystem testing are to test for weaknesses not found in program debugging and to assure the designers that the subsystem will function as required under conditions closely related to those encountered in the operational environment. A considerable amount of manual procedure testing can be executed within subsystem testing. Attempts are made at this point to "break" the system with overload and unusual conditions.

In this activity, the project team has to create and execute a test procedure to ensure that the completed subsystem can operate without disturbance, that its performance is up to standard, and that it meets the objectives originally set forth. Depending on the size of the system being developed, the subsystem test could turn out to be the system test. The primary concern is whether all the programs and processes can function together smoothly and whether any incompatibilities exist. The work load will be as in a normal work day and will then be expanded to overload conditions.

Of explicit importance in this phase of testing is the verification of all internal subsystem interfaces.

Of special significance are the man/machine interfaces which have not been tested prior to this point.

Once the subsystems have been independently tested, they must be brought together into a system test. This step is concerned with the testing of all relationships between subsystems and of the relationship between the system and the outside environment.

System testing takes on two different meanings, depending on whether the implementation is phased by subsystem or not.

In the case where the system is to be implemented as an entity, it must be tested as a complete entity before proceeding. That is, when subsystem testing is completed, all those subsystems will undergo an integrated testing effort.

In the case where the system is to be implemented in phases by subsystem, each new subsystem need only be tested relative to the subsystems that have been, or are being, implemented. That is, when each subsystem is internally sound (has completed subsystem testing), it is tested in the environment of the subsystems which have already been implemented. In this event, only when the last subsystem within the system is undergoing system test can this test be considered complete in terms of the system designed in the Preliminary Design stage. The first approach would normally be used in a small systems effort, whereas the second would be used in a large effort.

5.6.1. Methods

1. Test individual job streams or transaction paths. Close coordination among the programmers is required. It is not simply a matter of whether the new data will "blow up" a program or not; the same exhaustive examination of the output produced must be accomplished as for the program test activity. Additional files and transactions may be produced using a test data generator but, wherever possible, manual procedures should be utilized to further satisfy the primary objective of cross-checking the manual versus EDP interpretation of the system being developed. In addition to the application logic, all logic and procedures involved in control and security, fallback, reconstruction, recovery, and administration must be tested in a controlled manner during this activity. These items require careful planning and, in some cases, special talent to produce meaningful results.

2. Verify that the following conditions are satisfied:

· All input transaction types have been processed through the program control modules and related worker modules,

· The inputs developed using the manual procedures produce acceptable results,

· All documents (visual or printed) produced by the system are complete, accurate, and properly interface with the manual procedures,

· All logic and procedural problems have been corrected,

· All processing conforms to the specified performance criteria,

· The test team is satisfied that the processes are ready for subsystem testing.

3. Test the individual subsystems. Except for using test input and test files, subsystem testing should be run as if it were in actual operation. People who will actually be involved in the production runs of the system should participate. The programmers and designers should be on hand to provide assistance in actual run preparation and for rapidly locating the cause of errors. The output of the subsystem test should be used for testing the related manual procedures. Timing runs should be made using input volumes representing typical run cycles.

4. Conduct capacity testing to determine the level and fluctuations of system performance from minimum to maximum system loads throughout the range of rated capacity (a load sweep sketch follows this list of methods).

5. Test the compatibility of the various subsystem components working together.

6. Accomplish design verification by ensuring that all outputs reflect the system objectives and performance criteria described in previous stages.

7. Prove reliability by running multiple cycles, representing day-to-day work flow.

8. Test fallback, recovery, and reconstruction procedures to ensure that they are in effect.

9. Test load module arrangements for maximum performance under ideal and controlled conditions.

10. Test the subsystems together in a system test. In a phased development, the new subsystem can sometimes be system-tested in the live environment, though extreme care should be used not to compromise existing operations. The first tests should be at low volume loads, when the correctness of the subsystem interfaces is evaluated.

11. Stress test the system by bringing it up to the normal production level to ensure normal integrated processing capabilities. Within these tests, the prime concerns are these:

· Are the subsystems compatible?

· Do subsystem fluctuations cause fluctuations in system performance?

· Does the system perform reliably?

· Does any subsystem contribute to system degradation?

· Does any subsystem adversely affect fallback and recovery?

· Is the system performance within the established criteria and are the systems objectives being met?

· Do run time statistics gathered provide sufficient information to analyze the system performance to the degree required?

12. Expand the test load to approximate system overload conditions. Observe the reactions of all subsystem operations relative to the prime concerns expressed in 11.

13. Operate all tests over a suitable time period to allow proper observation of the results.

14. Direct any efforts required to re-design, re-program or re-test any portion of the system found deficient.

15. Report on and make recommendations based on the subsystem tests and system test.
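
A sketch of the load sweep behind capacity testing (method 4) and overload testing (method 12): drive the system from minimum load to beyond rated capacity and record throughput at each level. The load levels and the timed stand-in for the real system are hypothetical.

    # Sketch: sweep load from minimum to beyond rated capacity and record
    # throughput at each level (cf. methods 4 and 12). Values hypothetical.
    import time

    RATED_CAPACITY = 1000                   # transactions per run (hypothetical)

    def run_system(n_transactions):         # stand-in for driving the real system
        time.sleep(0.001 * n_transactions)  # pretend processing cost
        return n_transactions               # transactions completed

    for load in (100, 500, 1000, 1500):     # minimum .. overload
        t0 = time.perf_counter()
        done = run_system(load)
        elapsed = time.perf_counter() - t0
        marker = "  (overload)" if load > RATED_CAPACITY else ""
        print(f"load {load:>5}: {done / elapsed:,.0f} txn/s{marker}")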

5.6.2 Products

· Tested job streams or transaction paths (report).

· Transaction path analysis (see Exhibit 5.6-A).

· Tested subsystems (report).

· Tested system (report) (refer to Exhibit 5.8-A).

5.6.3 Background

· System objectives (1.4).

· Performance criteria (1.4).

· Test data base (5.5).

· Test transaction files (5.5).

· Detailed test plan (5.1).

· Job specifications (4.1).

· Debugged programs (4.7).

· Tested manual procedures (5.4).

 

Exhibit 5.6-A. Example of a Transaction Path Analysis Form.

(To be supplied)

NOTES: The purpose of this report is to diagrammatically present the flow of a transaction through the system.

a. The transaction ID will vary depending on the type of system. It may be the key field ID of an input card or set of cards. The type of transaction should be specified with the ID.

b. All the modules/programs to be tested by the transaction should be listed. Where a transaction tests all the modules in a program, the individual identifiers may be omitted.

c. The intersection codes will include:

· A sequential number to indicate which element is entered in which sequence.

· A processing code to indicate function of module in regard to that transaction:

X: Enters the module for control purposes only.

E: Edit.

U: Updates file with transaction information.

R: Uses information from transaction to prepare immediate report.

F: Formats input into new structure for further processing.

One or more of the above can occur for a module/program.

 

 

5.7 Perform Acceptance Test

This constitutes a verification to the user that the system meets the system objectives and performance criteria, and is operationally sound.

Acceptance testing can lead to unnecessary problems if the parameters of the acceptance test have not been clearly defined. Acceptance test requirements were first specified in the Definition Study, were expanded in Preliminary Design, and were finalized in Detail Design. A typical difficulty arises when the user asserts that the system does not do "X" and the designers and developers claim that the system was never supposed to do "X". Early thought about the acceptance test can avoid these problems to a large extent.

Acceptance testing, like system implementation, may be phased or occur as one large formal test. As with phased implementation, a phased test must not only demonstrate the proper functioning of newly released functions or subsystems, but also demonstrate the continued integrity of the system as a whole. A phased acceptance test has the advantages that:

· The user is given a phased demonstration of progress,

· The user retains a greater degree of control,

· The load of acceptance testing on the user organization is distributed more evenly over time,

· The user has a larger number of smaller decision points, thus easing the user decision process.

In many large systems the process of conversion and implementation is so large that it is worth the additional effort and expenditure required to perform a pre-implementation acceptance test. This may be done by using a systems exerciser or simulation methods. It will also train personnel and, in general, create a simulated working environment that reflects the real environment as closely as possible.

This activity could be satisfied in a number of ways. Normally, a parallel run with the current production methods would be appropriate. Whatever the procedure, it should be designed to do the following:

· Fit the system to the physical environment.

· Train operational personnel in operating methods and manual procedures.

· Verify that the system performs in the operating environment as defined in the design specifications.

It is imperative that the user arrange and conduct his own acceptance test. This test should take place following the system tests performed by the design and programming teams. It should be noted that an acceptance test may be conducted for the system as a whole or for any specific subsystem, depending on the degree to which the subsystems are phased.

Part of acceptance testing should be running the system during the timeframes proposed for its normal operation, so that the impact on the computer center operations can be evaluated as soon as possible. Prior to this time, all development and testing will probably have been conducted in such a way as to minimize the impact on production work, with the impact of the system estimated by the designers. Now is the opportunity to see what problems may be caused by running the system during the actual timeframe. Prior to this point, estimates have been made concerning the effect of running the system in a production environment; now these estimates are validated.

Successful completion of the acceptance tests is the main criterion for final turnover of the system for implementation. The acceptance tests should therefore be thorough and complete. No part of the system should be implemented before the acceptance test has been completed. The resources used in conducting properly organized and comprehensive acceptance tests are well invested, because they lessen the possibility of trouble occurring after system turnover and verify to the user that the system performs as advertised.

The test will identify differences between what was expected of the system and what it actually does. Each unplanned difference should be examined and resolved. If the subsystem tests were thorough, the number of differences should be minimal.

The number and effect of differences on the system can be considered a direct reflection on any, or all, of the activities which preceded this activity. The experience during the acceptance test should be reviewed and resolutions explained. During this period, future plans for the introduction of the system in the company can be reviewed and recommendations made.

5.7.1. Methods

1. The user must develop an acceptance procedure designed to test not only the final system, but all documentation delivered during the development of the system. This acceptance procedure must be agreed upon by both the user and the developer. If a parallel run is not in order, the acceptance test should be more closely observed and monitored by the user of the system.

2. Augment live data with artificial transactions which will test the extreme high or low limits of quantities or sizes (a sketch of such boundary transactions follows this list).

3. Process all data errors flagged by validation programs through the clerical review and correction procedures, then back again through the system.

4. Use pre-defined results where possible.

5. Distribute management and statistical report output to users and request a written review from them as to content, effectiveness, and correctness.

6. Ensure that system objectives and performance criteria have been met. Identify all system or subsystem inadequacies; prompt decisions must be made in order to resolve differences.

7. Re-cycle solutions to major differences through the testing activities.

8. Make changes to documentation immediately.

9. Give an appointed group of people the task of resolving system inadequacies in order to obtain a thorough examination, as well as the best solution to any problems.

10. Correct inadequacies at the lowest possible level in the company organization and as quickly as possible.

11. Prepare a written report citing all inadequacies found in the acceptance test, together with a description of how they were solved. The responsible person must approve this report and, in so doing, accept the system. It is the role of the system developers to prove the adequacy of the system. Acceptance testing is completed when the user is prepared to convert fully to the new system and agrees that it meets the design objectives.
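The following Python sketch illustrates method 2 above: live data is augmented with artificial transactions that probe the declared limits of each field. The field limits shown are hypothetical; in practice they would come from the system's data element specifications.

# Sketch of boundary-value transaction generation for acceptance testing.
# FIELD_LIMITS is illustrative only.

FIELD_LIMITS = {
    "quantity":   (0,    99999),     # (lowest legal value, highest legal value)
    "unit_price": (0.01, 9999.99),
}

def boundary_transactions(limits):
    """Generate values at each limit and just beyond it."""
    cases = []
    for field, (low, high) in limits.items():
        for value, expect_valid in [(low, True), (high, True),
                                    (low - 1, False), (high + 1, False)]:
            cases.append({"field": field, "value": value,
                          "expected_valid": expect_valid})
    return cases

for case in boundary_transactions(FIELD_LIMITS):
    print(case)      # each case is fed through the validation programs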

5.7.2 Products

· List of acceptance test discrepancies.

· Accepted/Rejected system (report) (refer to Exhibit 5.8-A).

5.7.3 Background

· Tested (sub) system (5.6).

· System objectives (1.4).

· Performance objectives (1.4).

· Acceptance criteria (1.9).

 

5.8 Produce Test Results Report

The test report is prepared when all activities are complete. The report should follow the format established for reporting to management. All changes to the system and discrepancies should be reviewed and the impact on the system evaluated. All documentation should be updated.

Prior to this activity, all of the products are considered working papers and are highly subject to change. The test results report is a final, technical report that should reflect the test results in a formal, concrete way: changes should be few and made only after careful consideration.

The report should be circulated for review by all affected parties and submitted for managerial approval. Management will:

· Approve implementation and conversion,

· Request modifications and changes, or

· Postpone or re-direct the effort.

5.8.1 Methods

1. Appoint a review committee made up of representatives from each responsible group. Each member should express his evaluation of the completed system in draft form.

2. Review all valid design changes or discrepancies and ensure that:

· They are still consistent with the system objectives.

· They have caused no significant performance degradation.

3. Review and, where necessary, modify the results of any previous stages to reflect the proposed changes.

4. Update the overall plan and re-compute milestones.

5. Update all cost estimates.

6. Produce an outline for the contents of the final reports according to company standards. The outline should allow for the distribution of the work on sections of the report.

7. Plan the production of the various components of the report.

8. Produce a draft of the report. Depending on the size and complexity of the system, the report may vary from a few concise pages to an extensive publication of many volumes.

9. Carefully edit and publish the final report, expressing the final resolution of the system and including a synopsis of all activities to date, for submission to management. Ensure that the presentation is of high quality and represents a professional approach.

10. Provide a briefing for the user, management, and other interested parties in order to clear up any residual confusion.

5.8.2 Products

· Testing stage results (report) (see Exhibit 5.8-A)

5.8.3 Background

· All testing activities results

Exhibit 5.8-A. Testing Stage Results Report Outline.

This analysis will be performed for testing as a whole and for each identifiable test. For individual tests, this information will form the basis for making corrections to system components, and for modifying test plans for tests not yet conducted. The complexity of the analysis will depend on the level being tested and on the nature of any problems which may have arisen. Suggested contents of this report now follow.

Title sheet

Management summary

Table of contents

Introduction

Test I/O performance. Compares the I/O performance of the tests with the I/O capabilities described in the test plan (where applicable). Projections may need to be made to see whether an extrapolation of the actual times to full volume loads is still within the specification determined during system design. Manual procedures, as well as computer inputs and outputs, are evaluated here.
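As a minimal illustration of such a projection, the Python sketch below extrapolates the run time measured at test volume to the full production volume, assuming (for simplicity) roughly linear scaling, and compares the result with the limit fixed during system design. All figures are invented.

# Sketch of a full-volume run-time projection against the design limit.

test_volume  = 5_000                 # transactions processed in the test run
test_minutes = 12.0                  # measured elapsed time for the test run
full_volume  = 60_000                # expected daily production volume
design_limit_minutes = 180.0         # maximum run time allowed by the design

projected = test_minutes * (full_volume / test_volume)
print(f"projected full-volume run time: {projected:.0f} minutes")
print("within specification" if projected <= design_limit_minutes
      else "EXCEEDS specification - investigate before acceptance")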

Test parameter processing. Compares the parameter processing of the tests with the parameter processing described in the test plan.

System function performance. Describes the functional capability as demonstrated in one or more system/subsystem tests. Also assesses the manner in which the test environment may differ from the operational environment, and the effect of this difference on functional capability.

System performance limits. Describes the range of data and parameter values tested. Also identifies any functional deficiencies, limitations, or constraints inherent in the system detected during the testing process.

Demonstrated capability. A general statement on the capability of the system/subsystem as demonstrated by the test, compared with the performance requirements contained in the system functional description. For complex systems, a discussion of conformance with specific requirements will be included.

All the performance requirements established in Preliminary Design will be reviewed in conjunction with test results to see that the system functions in accordance with the initial specifications (as amended).

Deficiency identification. An individual statement for each deficiency detected during testing, to be determined by a comparison of actual results with expected results. A clear statement of the nature of the deficiency, in precise terms, is required. Once identified, the source(s) of the deficiency must be traced in detail. An intuitive "hunch" as to the cause of the deficiency is not sufficient; a precise and detailed logical analysis must show why the deficiency occurred.
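A minimal Python sketch of the expected-versus-actual comparison underlying deficiency identification is given below; the record contents are illustrative only.

# Sketch: match each actual output record against its expected record and
# report every difference in precise terms.

expected = {"R1": {"status": "posted",   "balance": 120.00},
            "R2": {"status": "posted",   "balance": 75.50}}
actual   = {"R1": {"status": "posted",   "balance": 120.00},
            "R2": {"status": "rejected", "balance": 75.50}}

def identify_deficiencies(expected, actual):
    """Yield a precise statement for each divergence from expected results."""
    for key in sorted(expected.keys() | actual.keys()):
        exp, act = expected.get(key), actual.get(key)
        if exp is None or act is None:
            yield f"{key}: present in only one result set"
            continue
        for field in exp:
            if exp[field] != act.get(field):
                yield f"{key}.{field}: expected {exp[field]!r}, got {act.get(field)!r}"

for statement in identify_deficiencies(expected, actual):
    print("deficiency:", statement)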

Deficiency resolution. A resolution must be described for all detected deficiencies. In some cases the deficiency may have been corrected; in others, the test team and design group, with user participation, may have determined that the deficiency would not be corrected before acceptance. Parameters influencing the decision are:

· The impact of the deficiency on system performance if not corrected,

· The effort expected in correcting the deficiency,

· The expected effect on schedules, etc., if time and effort are expended in correcting the deficiency.

 


Action taken. If a deficiency has been corrected, extreme care must be taken to ensure that:

· Any changes made are reflected in updated documentation,

· The correction of the deficiency did not, in itself, cause more problems than it solved,

· The testing cycle was re-iterated to test the correction,

· Any necessary schedule changes were made,

· The original design was not compromised in any way.

System refinements:

· There may be deficiencies which were not corrected prior to turnover and which remain as defects in the system. These should be thoroughly documented, including the proposed solutions, the expected costs of those solutions, and the extent to which system performance is degraded.

· An itemization of improvements which can be realized in system design or operation, as determined during the test period, should be given. Each suggested improvement should be accompanied by a discussion of the potential added capability, the expected impact on system design, and the expected effort required to accomplish the improvement.

Lessons learned. All testing is a learning process and this knowledge should be documented in such a way as to assist further tests and system design efforts. In particular:

· For each deficiency, state how the problem could have been avoided by improvements to preliminary design, detailed design, or development. These facts should be stated in terms of specific methodological or other recommendations.

· The same approach should be used for refinements as that used for deficiencies.

· The testing approach and the testing tools being used should be examined for effectiveness, and recommendations made for their improvement. This is particularly the case when a deficiency is identified later in testing than it would have been had earlier tests been properly conducted.