Chapter 12. Step 6: Acceptance and Operational Testing

Acceptance testing is formal testing conducted to determine whether a software system satisfies its acceptance criteria and to enable the buyer to determine whether to accept the system. Software acceptance testing at delivery is usually the final opportunity for the buyer to examine the software and to seek redress from the developer for insufficient or incorrect software. Frequently, the software acceptance test period is the only time the buyer is involved in acceptance and the only opportunity the buyer has to identify deficiencies in a critical software system. (The term critical often implies the potential for economic or social catastrophe, such as loss of life; as used in this chapter, it implies strategic importance to an organization’s long-term economic welfare.) The buyer is thus exposed to the considerable risk that a needed system will never operate reliably (because of inadequate quality control during development). To reduce the risk of problems arising at delivery or during operation, the buyer must become involved with software acceptance early in the acquisition process.

Overview

At the conclusion of Step 5, developers and testers have tested the system and reported their conclusions. If the report follows the proposed format, it will list not only strengths and weaknesses but also recommendations. The customer/users of the software have one of three decisions to make:

  1. The software system is usable as is and can be placed into a production state.

  2. The software has some deficiencies; when corrected, the software can be placed into an operational state.

  3. The software is severely deficient; whether it is ever placed into an operational state depends on the type of deficiencies, the alternatives available to the customer/users, and the cost to correct the deficiencies.

The user may or may not choose to conduct acceptance testing before making one of these three decisions. If acceptance testing is conducted, its results, together with the tester’s report, become the input to that decision.

The tested software system at the conclusion of Step 5 should be ready to move to an operational state. This does not mean that all requirements have been implemented, or that the software is free of defects, but rather that a decision point has been reached regarding placing the software into operation. This decision point occurs for both the initial version of the software system and changed versions of the software system. Normally, moving the initial version into an operational state is more complex than moving a changed version into an operational state.

Testers need to be heavily involved in ensuring that the software as tested can be moved effectively into an operational state. The activities performed vary based on the risks associated with placing the software into production.

Acceptance testing is designed to determine whether the software is “fit” for the user to use. The concept of fit is important in both design and testing. Design must attempt to build the application to fit into the user’s business process; the test process must ensure a prescribed degree of fit. Testing that concentrates on structure and requirements may fail to assess fit, and thus fail to test the value of the automated application to the business. The four components of fit are as follows:

  • Data. The reliability, timeliness, consistency, and usefulness of the data included in the automated application

  • People. The skills, training, aptitude, and desire to properly use and interact with the automated application

  • Structure. The proper development of application systems to optimize technology and satisfy requirements

  • Rules. The procedures to follow in processing the data

The system must fit into these four components of the business environment. If any of the components fails to fit properly, the success of the application system will be diminished. Therefore, testing must ensure that all the components are adequately prepared and/or developed, and that the four components fit together to provide the best possible solution to the business problem.
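The point that any single poor-fitting component diminishes the whole system can be sketched as a simple gate. This is an illustrative sketch only: the component names come from the text, while the 1-to-5 scoring scale and the threshold are assumptions.

```python
# Hypothetical "fit" gate: the component names come from the chapter;
# the 1-5 scoring scale and threshold are illustrative assumptions.

FIT_COMPONENTS = ("data", "people", "structure", "rules")

def assess_fit(scores, threshold=3):
    """Return (fits, weakest) given per-component scores on a 1-5 scale.

    The application "fits" only if every component meets the threshold,
    mirroring the text's point that one poor-fitting component
    diminishes the whole system.
    """
    missing = [c for c in FIT_COMPONENTS if c not in scores]
    if missing:
        raise ValueError(f"unscored components: {missing}")
    weakest = min(FIT_COMPONENTS, key=lambda c: scores[c])
    fits = all(scores[c] >= threshold for c in FIT_COMPONENTS)
    return fits, weakest
```

For example, a system with well-prepared data, structure, and rules still fails the gate if the people component (skills, training, desire to use the system) scores poorly.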

Objective

The objective of acceptance testing is to determine, throughout the development cycle, that all aspects of the development process meet user needs. There are many ways to accomplish this. The user may require that the implementation plan be subject to an independent review, in which the user may choose to participate or simply supply the acceptance criteria.

Acceptance testing should occur not only at the end of the development process but also as an ongoing activity that tests both interim and final products, so that time is not wasted on corrections that prove unacceptable to the system user.

The overall objective of testing for software changes is to ensure that the changed application functions properly in the operating environment. This includes both the manual and automated segments of the computerized application. The specific objectives of this aspect of testing include the following:

  • Develop tests to detect problems prior to placing the change into production.

  • Correct problems prior to placing the change in production.

  • Test the completeness of training material.

  • Involve users in the testing of software changes.

Concerns

When considering acceptance testing, users must be aware of the following concerns:

  • Acceptance testing must be integrated into the overall development process.

  • Cost and time for acceptance testing will not be available.

  • The implementers of the project plan will be unaware of the acceptance criteria.

  • The users will not have the skill sets needed to perform acceptance testing.

Typically, the user/customers conduct acceptance testing only for the initial release of the software system. However, if extensive changes are made to the software system, the customer/user may repeat this task before a new version of the software is placed into an operational state.

Pre-operational testing ensures that software that operated effectively and efficiently in the test environment continues to do so in the production environment. Post-operational testing tests changes made to the new system, which create new operational versions of the software system.

Placing the initial version of the software into an operational state may involve all three tasks because the movement of the software into an operational state identifies defects that must be corrected. When new versions are created that incorporate changes in the software, normally just pre-operational and post-operational testing need to be performed.

The installation phase testing does not verify the functioning of the application system, but rather the process that places that application system into a production status. The process is attempting to validate the following:

  • Proper programs are placed into the production status.

  • Needed data is properly prepared and available.

  • Operating and user instructions are prepared and used.

An effective test of the installation phase cannot be performed until the results expected from the phase have been identified. The results should be predetermined, and tests then performed to validate that what is expected has happened. For example, a control total of records updated during installation might be determined, and an installation phase test then performed to validate that the detailed records in the file support the control total.
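The control-total check can be sketched as a small validation routine. The record layout and field name here are illustrative assumptions; the technique is simply comparing predetermined control figures against the installed file.

```python
# Sketch of the control-total check: before installation, a control
# total of records to be converted is predetermined; after installation,
# the detailed records in the production file must support that total.
# The record layout and "amount" field are illustrative assumptions.

def control_total(records, field="amount"):
    """Sum a numeric field across detailed records."""
    return sum(r[field] for r in records)

def validate_installation(expected_count, expected_total, records):
    """Compare predetermined control figures against the installed file."""
    errors = []
    if len(records) != expected_count:
        errors.append(f"record count {len(records)} != expected {expected_count}")
    actual = control_total(records)
    if actual != expected_total:
        errors.append(f"control total {actual} != expected {expected_total}")
    return errors  # an empty list means the installation check passed
```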

IT management should be concerned about the implementation of the testing and training objectives. These concerns need to be addressed during the development and execution of the testing and training for software changes. The first step in addressing control concerns is identifying the concerns that affect these software changes:

  • Will the testing process be planned? Inadequate testing is synonymous with unplanned testing. Unless the test is planned, there is no assurance that the results will meet change specifications.

  • Will the training process be planned? People rarely decide on the spur of the moment to hold a training class or develop training material. What tends to happen is that training is given one on one after problems begin to occur. This is a costly method of training.

  • Will system problems be detected during testing? Even the best training plans rarely uncover all the potential system problems. What is hoped is that the serious problems will be detected during testing.

  • Will training problems be detected during testing? How people will react to production situations is more difficult to predict than how computerized applications will perform. Thus, the objective in training should be to prepare people for all possible situations.

  • Will already-detected testing and training problems be corrected prior to the implementation of the change? An unforgivable error is to detect a problem and then fail to correct it before serious problems occur. Appropriate records should be maintained and controls implemented so that detected errors are immediately acted on.

Workbench

The acceptance testing workbench begins with software that has been system tested for the system specifications. The tasks performed in this step lead to an acceptance decision, which does not necessarily mean that the software works as desired by the user, or that all problems have been corrected; it means that the software user is willing to accept and use the software in its current state. The acceptance test workbench is illustrated in Figure 12-1.


Figure 12-1. Acceptance testing workbench.

Input Procedures

The three inputs to Task 1 are as follows:

  • Interim work products

  • Tested software

  • Unresolved defect list

Task 2, the installation phase, is the process of getting a new system operational. The process may involve any or all of the following areas:

  • Changing old data to a new format

  • Creating new data

  • Installing new and/or changed programs

  • Updating computer instructions

  • Installing new user instructions

The installation process may be difficult to execute within the time constraints. For example, many system installations are performed over a weekend. If the installation cannot be successfully completed within this two-day period, the organization may face serious operational problems Monday morning. For this reason, many organizations have adopted a fail-safe method. They pick a deadline by which the new system must be successfully installed; if it is not, they revert to the old system.
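The fail-safe method can be sketched as a deadline-guarded installation loop. The step names and the injected clock are assumptions for illustration, not part of any prescribed procedure.

```python
# Illustrative sketch of the fail-safe method: attempt installation
# steps until a deadline; if the deadline is reached before all steps
# complete, revert to the old system. Step names and the clock
# injection are assumptions for demonstration.

def install_with_failsafe(steps, deadline, clock):
    """Run installation steps; roll back if the deadline is reached.

    steps    -- list of (name, action) pairs; action() performs one step
    deadline -- cutoff time (comparable with clock())
    clock    -- callable returning the current time
    Returns ("installed", done) or ("reverted", done).
    """
    done = []
    for name, action in steps:
        if clock() >= deadline:
            # Deadline reached: revert and run Monday on the old system.
            return "reverted", done
        action()
        done.append(name)
    return "installed", done
```

Injecting the clock keeps the sketch testable; a real installation would use wall-clock time and would also have to undo any partially completed steps before reverting.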

Much of the test process will be evaluating and working with installation phase deliverables. The more common deliverables produced during the installation phase include the following:

  • Installation plan

  • Installation flowchart

  • Installation program listings and documentation (assuming special installation programs are required)

  • Test results from testing special installation programs

  • Documents requesting movement of programs into the production library and removal of current programs from that library

  • New operator instructions

  • New user instructions and procedures

  • Results of installation process

Testers need four inputs to adequately perform testing on a changed version of software, as follows:

  • Change documentation

  • Current test data/test plan

  • Changed version of software

  • Prior test results

Task 1: Acceptance Testing

Software acceptance is an incremental process of approving or rejecting software systems during development or maintenance, according to how well the software satisfies predefined criteria. In this chapter, for the purpose of software acceptance, the activities of software maintenance are assumed to share the properties of software development. “Development” and “developer” include “maintenance” and “maintainer.”

Acceptance decisions occur at pre-specified times when processes, support tools, interim documentation, segments of the software, and finally the total software system must meet predefined criteria for acceptance. Subsequent changes to the software may affect previously accepted elements. The final acceptance decision occurs with verification that the delivered documentation is adequate and consistent with the executable system and that the complete software system meets all buyer requirements. This decision is usually based on software acceptance testing. Formal final software acceptance testing must occur at the end of the development process. It consists of tests to determine whether the developed system meets predetermined functionality, performance, quality, and interface criteria. Criteria for security or safety may be mandated legally or by the nature of the system.

Defining the Acceptance Criteria

The user must assign the criteria the software must meet to be deemed acceptable. (Note: Ideally, this is included in the software requirements specifications.) In preparation for developing the acceptance criteria, the user should do the following:

  • Acquire full knowledge of the application for which the system is intended.

  • Become fully acquainted with the application as it is currently implemented by the user’s organization.

  • Understand the risks and benefits of the development methodology that is to be used in correcting the software system.

  • Fully understand the consequences of adding new functions to enhance the system.

Acceptance requirements that a system must meet can be divided into these four categories:

  • Functionality. Internal consistency of documents and code and between stages; traceability of functionality; adequate verification of logic; functional evaluation and testing; preservation of functionality in the operating environment.

  • Performance. Feasibility analysis of performance requirements; correct simulation and instrumentation tools; performance analysis in the operating environment.

  • Interface quality. Interface documentation; interface complexity; interface and integration test plans; interface ergonomics; operational environment interface testing.

  • Overall software quality. Quantification of quality measures; criteria for acceptance of all software products; adequacy of documentation and software system development standards; quality criteria for operational testing.

Assessing the criticality of a system is important in determining quantitative acceptance criteria. By definition, all safety criteria are critical; and by law, certain security requirements are critical. Some typical factors affecting criticality include the following:

  • Importance of the system to organization or industry

  • Consequence of failure

  • Complexity of the project

  • Technology risk

  • Complexity of the user environment

For specific software systems, users must examine their projects’ characteristics and criticality to develop expanded lists of acceptance criteria for those software systems. Some of the criteria may change according to the phase of correction for which criteria are being defined. For example, for requirements, the “testability” quality may mean that test cases can be developed automatically.

The user must also establish acceptance criteria for individual elements of a product. These criteria should be the acceptable numeric values or ranges of values. The buyer should compare the established acceptable values against the number of problems presented at acceptance time. For example, if the number of inconsistent requirements exceeds the acceptance criteria, the requirements document should be rejected. At that time, the established procedures for iteration and change control go into effect.
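This numeric comparison can be sketched as a simple per-criterion check. The criterion names and limits below are hypothetical; the point is only that the observed problem count is compared against the established acceptable value.

```python
# Minimal sketch of the numeric acceptance check: each criterion
# carries an acceptable limit, and the product is rejected when the
# problems observed at acceptance time exceed that limit. Criterion
# names and limits are hypothetical.

def evaluate_criteria(criteria, observed):
    """Return per-criterion accept/reject decisions.

    criteria -- {name: maximum acceptable problem count}
    observed -- {name: problems actually found}
    """
    return {
        name: ("accept" if observed.get(name, 0) <= limit else "reject")
        for name, limit in criteria.items()
    }
```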

Work Paper 12-1 is designed to document the acceptance criteria. It should be prepared for each hardware or software project within the overall project. Acceptance criteria requirements should be listed and uniquely numbered for control purposes. After defining the acceptance criteria, a determination should be made as to whether meeting the criteria is critical to the success of the system.

Work Paper 12-1. Acceptance Criteria

Field Requirements

  • Hardware/Software Project. The name of the project being acceptance-tested. This is the name the user/customer calls the project.

  • Number. A sequential number identifying acceptance criteria.

  • Acceptance Requirement. A user requirement that will be used to determine whether the corrected hardware/software is acceptable.

  • Critical. Indicates whether the acceptance requirement is critical, meaning that it must be met, or noncritical, meaning that it is desirable but not essential.

  • Test Result. Indicates after acceptance testing whether the requirement is acceptable or not acceptable, meaning that the project is rejected because it does not meet the requirement.

  • Comments. Clarifies the criticality of the requirement, or indicates the meaning of a test result rejection. For example, the software cannot be run, or management will judge after acceptance testing whether the project can be run.

Hardware/Software Project: ________________________________________

Columns: Number | Acceptance Requirement | Critical (Yes/No) | Test Result (Accept/Reject) | Comments

Developing an Acceptance Plan

The first step to achieve software acceptance is the simultaneous development of a software acceptance plan, general project plans, and software requirements to ensure that user needs are represented correctly and completely. This simultaneous development will provide an overview of the acceptance activities, to ensure that resources for them are included in the project plans. Note that the initial plan may not be complete and may contain estimates that will need to be changed as more complete project information becomes available.

After the initial software acceptance plan has been prepared, reviewed, and approved, the acceptance manager is responsible for implementing it and for ensuring that its objectives are met. The plan may have to be revised before this assurance is warranted.

Table 12-1 lists examples of information that should be included in a software acceptance plan.

Table 12-1. Acceptance Plan Contents

Project Description

Type of system; life cycle methodology; user community of delivered system; major tasks system must satisfy; major external interfaces of the system; expected normal usage; potential misuse; risks; constraints; standards and practices.

User Responsibilities

Organization and responsibilities for acceptance activities; resource and schedule requirements; facility requirements; requirements for automated support, special data, training; standards, practices, and conventions; updates and reviews of acceptance plans and related products.

Administrative Procedures

Anomaly reports; change control; recordkeeping; communication between developer and manager organizations.

Acceptance Description

Objectives for entire project; summary of acceptance criteria; major acceptance activities and reviews; information requirements; types of acceptance decisions; responsibility for acceptance decisions.

The plan must include the techniques and tools that will be utilized in acceptance testing. Normally, testers will use the organization’s current testing tools, which should be oriented toward specific testing techniques.

Two categories of testing techniques can be used in acceptance testing: structural and functional. (Again, acceptance testing must be viewed in its broadest context; it should not be the minimal testing that some users perform after the information system professionals have concluded their testing.)

The functional testing techniques help ensure that the requirements/specifications are properly satisfied by the software system. Functional testing is not concerned with how processing occurs, but with the results of processes.

Structural testing ensures sufficient checking of the implementation of the function by finding test data that will force sufficient coverage of the structure present in the implemented software. It evaluates all aspects of this structure to verify that the structure is sound.

Executing the Acceptance Plan

The objective of this step is to determine whether the acceptance criteria have been met in a delivered product. This can be accomplished through reviews, which involve looking at interim products and partially developed deliverables at various points throughout the developmental process. It can also involve testing the executable software system. The determination of which (or both) of these techniques to use will depend on the criticality of the software, the size of the software program, the resources involved, and the time period over which the software is being developed.

Software acceptance criteria should be specified in the formal project plan. The plan identifies products to be tested, the specific pass/fail criteria, the reviews, and the types of testing that will occur throughout the entire life cycle.

Acceptance decisions need a framework in which to operate; items such as contracts, acceptance criteria, and formal mechanisms are part of this framework. Software acceptance must state or refer to specific criteria that products must meet to be accepted. A principal means of reaching acceptance in the development of critical software systems is to hold a periodic review of interim software documentation and other software products.

A disciplined acceptance program for software of any type may include reviews as a formal mechanism. When the acceptance decision requires change, another review becomes necessary to ensure that the required changes have been properly configured and implemented, and that any affected segments are acceptable. For large or complicated projects, several reviews may be necessary during the development of a single product.

Some software acceptance activities may include testing pieces of the software; formal software acceptance testing occurs at the point in the development life cycle when the user accepts or rejects the software. This means a contractual requirement between the user and the project team has been met. Rejection normally means additional work must be done on the system to render it acceptable to the user. Final software acceptance testing is the last opportunity for the user to examine the software for functional, interface, performance, and quality features prior to the final acceptance review. The system at this time must include the delivered software, all user documentation, and final versions of other software deliverables.

Developing Test Cases (Use Cases) Based on How Software Will Be Used

Incomplete, incorrect, and missing test cases can cause incomplete and erroneous test results, which, at minimum, means that rework is necessary, and at worst, means that a flawed system is developed. It is necessary to ensure that all required test cases are identified so that all system functionality requirements are tested.

A use case is a description of how a user (or another system) uses the system being designed to perform a given task. A system is described by the sum of its use cases. Each instance or scenario of a use case will correspond to one test case. Incorporating the use case technique into the development life cycle will address the effects of incomplete, incorrect, and missing test cases. Use cases represent an easy-to-use approach applicable to both conventional and object-oriented system developments.

Use cases provide a powerful means of communication between customer, developers, testers, and other project personnel. Test cases can be developed with system users and designers as the use cases are being developed. Having the test cases this early in the project provides a baseline for the early planning of acceptance testing. Another advantage of having test cases early on is that if a packaged software solution is indicated, the customer can use them to evaluate purchased software earlier in the development cycle. Using the use case approach helps ensure that the system meets not only requirements but also expectations.
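The one-to-many relationship between a use case and its test cases can be sketched with a small data structure: one success-path test plus one test per exception and alternative course. The dataclasses and ID scheme below are illustrative, not part of the chapter's work papers.

```python
# Hedged sketch of the use case / test case relationship: one use case
# yields a success-path test case plus one test case per exception and
# alternative course. The dataclass and ID scheme are illustrative.

from dataclasses import dataclass, field

@dataclass
class UseCase:
    uc_id: str
    name: str
    exceptions: list = field(default_factory=list)
    alternatives: list = field(default_factory=list)

def derive_test_cases(uc):
    """Enumerate (test id, description) pairs for a use case."""
    cases = [(f"T-{uc.uc_id}-01", f"Successful {uc.name}")]
    for i, scenario in enumerate(uc.exceptions + uc.alternatives, start=2):
        cases.append((f"T-{uc.uc_id}-{i:02d}",
                      f"Unsuccessful {uc.name}: {scenario}"))
    return cases
```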

Building a System Boundary Diagram

A system boundary diagram depicts the interfaces between the software being tested and the individuals, systems, and other entities with which it interacts. These interfaces, or external agents, are referred to in this work practice as “actors.” The purpose of the system boundary diagram is to establish the scope of the system and to identify the actors (i.e., the interfaces) that need to be developed.

An example of a system boundary diagram for an automated teller machine for an organization called “Best Bank” is illustrated in Figure 12-2.


Figure 12-2. System boundary diagram for an automated teller machine (ATM) example.

Work Paper 12-2 is designed to document a system boundary diagram for the software under test. For that software, each system boundary needs to be defined. System boundaries can include the following:

  • Individuals/groups that manually interface with the software.

  • Other systems that interface with the software.

  • Libraries.

  • Objects within object-oriented systems.

Each system boundary should be described. For each boundary, an actor must be identified.

Work Paper 12-2. System Boundary Diagram

Software Under Test: ________________________________________________

Columns: System Boundary | Boundary Description | Actor Description | Name of Individual/Group Representing Actor

Two aspects of actor definition are required. The first is the actor description, and the second is the name of an individual or group who can play the role of the actor (i.e., represent that boundary interface). For example, in Figure 12-2 the security alarm system is identified as an interface. The actor is the security alarm company. The name of a person in the security alarm company or the name of someone who can represent the security alarm company must be identified. Note that in some instances the actor and the individual may be the same, such as the ATM system administrator listed in Figure 12-2.

Defining Use Cases

An individual use case consists of the following:

  • Preconditions that set the stage for the series of events that should occur for the use case

  • Post-conditions that state the expected outcomes of the preceding process

  • Sequential narrative of the execution of the use case

Use cases are used to do the following:

  • Manage (and trace) requirements

  • Identify classes and objects (OO)

  • Design and code (non-OO)

  • Develop application documentation

  • Develop training

  • Develop test cases

The use case definition is done by the actor. The actor represents the system boundary interface and prepares all the use cases for that system boundary interface. Note that this can be done by a single individual or a team of individuals.

Work Paper 12-3 is used for defining a use case. An example of a completed Work Paper 12-3 for an ATM system is illustrated in Figure 12-3: a bank customer making a withdrawal from a checking account at an ATM.

Work Paper 12-3. Use Case Definition

Use Case Name: _______________________    UC ID: ___________

Actor: _______________________

Objective: _______________________

Preconditions: _______________________

Results (Postconditions): _______________________

Detailed Description: numbered Action / Model (System) Response pairs (1 through 5)

Exceptions: _______________________

Alternative Courses: _______________________

Original Author: ___________    Original Date: ___________

Last Updated By: ___________    Last Updated On: ___________

Figure 12-3. Example of completed Work Paper 12-3 (use case definition) for an ATM system.

Use Case Definition

Last Updated By:    Last Updated On:

Use Case Name: Withdraw From Checking    UC ID: ATM-01

Actor: Bank Customer

Objective: To allow a bank customer to obtain cash and have the withdrawal taken from their checking account.

Preconditions: Bank customer must have an ATM cash card, valid account, valid PIN and their available checking account balance must be greater than, or equal to, withdrawal amount. ATM in idle mode with greeting displayed (main menu).

Results (Postconditions): The cash amount dispensed must be equal to the withdrawal amount. The ATM must print a receipt and eject the cash card. The checking account is debited by amount dispensed.

Detailed Description

Action:

  1. Customer inserts ATM cash card.

  2. Customer enters PIN.

  3. Customer selects Withdraw From Checking transaction.

  4. Customer enters withdrawal amount.

  5. Customer takes cash.

  6. Customer indicates not to continue.

  7. Customer takes card and receipt.

Model (System) Response:

  1. ATM reads cash card and prompts customer to enter PIN.

  2. ATM validates PIN and displays menu with a list of transactions that can be selected.

  3. ATM validates account and prompts customer for withdrawal amount.

  4. ATM validates account balance is greater than, or equal to, withdrawal amount. ATM dispenses cash equal to withdrawal amount and prompts customer to take cash.

  5. ATM asks customer whether they want to continue.

  6. ATM prints receipt, ejects cash card, prompts customer to take card, sends debit message to ATM Control System, returns to idle mode and displays main menu.

Exceptions:

 

If ATM cannot read cash card, then ATM ejects cash card.

If incorrect PIN is entered, then customer is given two additional chances to enter correct PIN.

If correct PIN not entered on third try, then ATM keeps cash card and informs customer that they must retrieve card from bank personnel during business hours.

If account is not valid, ATM ejects card and informs customer that they must contact bank personnel during business hours regarding their invalid account.

If account balance is less than withdrawal amount, ATM informs customer that the withdrawal amount exceeds their account balance and to reenter a withdrawal amount that does not exceed account balance. If amount reentered still exceeds account balance, ATM ejects card, informs customer that amount requested still exceeds account balance and bank policy does not permit exceptions.

Alternative Courses:

At any time after reaching the main menu and before finishing a transaction, including before selecting a transaction, the customer may press the cancel key. If the cancel key is pressed, the specified transaction (if there is one) is canceled, the customer’s cash card is returned, the ATM returns to idle mode and the main menu is displayed.

Original Author: Larry Creel

Original Date: 9-25-X
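As a hedged illustration, the main flow and exceptions of the use case above can be modeled as a small decision function. The account data, PIN handling, and outcome strings are simplified assumptions; a real ATM would also handle card ejection on a repeated over-balance request and the cancel key.

```python
# Illustrative model of the "Withdraw From Checking" use case, covering
# the main flow plus the unreadable-card, PIN-retry, and balance
# exceptions. Account data, PIN handling, and outcome strings are
# simplified assumptions.

def withdraw(card_readable, pin_attempts, correct_pin, balance, amount):
    """Return an outcome string for one withdrawal attempt.

    pin_attempts -- PINs entered in order (up to three tries allowed)
    """
    if not card_readable:
        return "eject card"                      # exception: unreadable card
    tries = pin_attempts[:3]
    if correct_pin not in tries:
        return "keep card"                       # exception: three bad PINs
    if amount > balance:
        return "amount exceeds balance"          # exception: insufficient funds
    return f"dispense {amount}; debit checking"  # main success scenario
```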

 

Developing Test Cases

A test case is a set of test inputs, execution conditions, and expected results developed for a particular test objective. There should be a one-to-one relationship between use case definitions and test cases. There need to be at least two test cases for each use case: one for successful execution of the use case and one for unsuccessful execution of the use case. However, there may be numerous test cases for each use case.

Additional test cases are derived from the exceptions and alternative courses of the use case. Note that additional detail may need to be added to support the actual testing of all the possible scenarios of the use case.

The use case description is the input into Work Paper 12-4. The actor who prepared the use case description also prepares the test case work paper. There will be at least two test conditions for each use case description, and normally many more. The actor tries to determine all of the possible scenarios that occur for each use case. Figure 12-4 is an example of a test case work paper designed to test the function “Withdraw From Checking” at an ATM. Note that this is Action 3 from Figure 12-3. Work Paper 12-3 becomes the input for Work Paper 12-4, as shown in Figure 12-5.

Work Paper 12-4. Test Case Work Paper

Test Case ID:

 

Original Author:

Last Updated By:

 

Parent Use Case ID:

 

Original Date:

Last Updated On:

 

Test Objective:

     
       

Item No.

Test Condition

Operator Action

Input Specifications

Output Specifications (Expected Results)

Pass or Fail

Comments

       
       
       
       
       
       
       
       
       
       

Figure 12-4. Example of completed Work Paper 12-4 (test case work paper) for an ATM withdrawal.

Test Case Worksheet

Test Case ID: T-ATM-01

Original Author: Larry Creel

Last Updated By:

Parent Use Case ID: ATM-01

Original Date: 9-26-XX

Last Updated On:

Test Objective: To test the function Withdraw From Checking, the associated exceptions and alternative courses.

       

ITEM NO.

TEST CONDITION

OPERATOR ACTION

INPUT SPECIFICATIONS

OUTPUT SPECIFICATIONS (EXPECTED RESULTS)

PASS OR FAIL

COMMENTS

1

Successful withdrawal.

1-Insert card.

2-Enter PIN.

3-Select Withdraw From Checking transaction.

4-Enter withdrawal amount.

5-Take cash.

6-Indicate not to continue.

7-Take card and receipt.

1-ATM can read card.

2-Valid account.

3-Valid PIN.

4-Account balance greater than, or equal to, withdrawal amount.

1-ATM reads card and prompts customer to enter PIN.

2-ATM validates PIN and displays menu with a list of transactions that can be selected.

3-ATM validates account and prompts customer to enter withdrawal amount.

4-ATM validates account balance greater than, or equal to, withdrawal amount. ATM dispenses cash equal to withdrawal amount and prompts customer to take cash.

5-ATM asks customer whether they want to continue.

6-ATM prints receipt, ejects cash card, prompts customer to take card, sends debit message to ATM Control System. ATM returns to idle mode and displays Main Menu.

 

Re-execute test and use the Continue option

Verify correct debit message received by ATM Control System.

2

Unsuccessful withdrawal due to unreadable card.

1-Insert card.

2-Take card.

1-ATM cannot read card.

2-Valid account.

3-Valid PIN.

4-Account balance greater than, or equal to, withdrawal amount.

1-ATM ejects card, prompts customer to take card and displays message “Cash Card unreadable. Please contact bank personnel during business hours.” ATM returns to idle mode and displays Main Menu.

  

3

Unsuccessful withdrawal due to incorrect PIN entered three times.

1-Insert Card.

2-Enter PIN.

3-Reenter PIN.

4-Reenter PIN.

1-ATM can read card.

2-Valid account.

3-Invalid PIN.

4-Account balance greater than, or equal to, withdrawal amount.

1-ATM reads card and prompts customer to enter PIN.

2-ATM does not validate PIN and prompts customer to reenter PIN.

3-ATM does not validate PIN and prompts customer to reenter PIN.

4-ATM does not validate PIN, keeps card, displays message “For return of your card, please contact bank personnel during business hours.” ATM returns to idle mode and displays Main Menu.

  

4

Unsuccessful withdrawal due to invalid account.

1-Insert card.

2-Enter PIN.

3-Select Withdrawal transaction.

4-Enter withdrawal amount.

5-Take card.

1-ATM can read card.

2-Invalid account.

3-Valid PIN.

4-Account balance greater than, or equal to, withdrawal amount.

1-ATM reads card and prompts customer to enter PIN.

2-ATM validates PIN and displays menu with a list of transactions that can be selected.

3-ATM prompts customer for withdrawal amount.

4-ATM does not validate account, ejects card, prompts customer to take card and displays message “Your account is not valid. Please contact bank personnel during business hours.” ATM returns to idle mode and displays Main Menu.

  

5

Unsuccessful withdrawal due to account balance less than withdrawal amount.

1-Insert card.

2-Enter PIN.

3-Select Withdraw From Checking transaction.

4-Enter withdrawal amount that is greater than account balance.

5-Reenter withdrawal amount that is greater than account balance.

6-Take card.

1-ATM can read card.

2-Valid account.

3-Valid PIN.

4-Account balance less than withdrawal amount.

1-ATM reads card and prompts customer to enter PIN.

2-ATM validates PIN and displays menu with a list of transactions that can be selected.

3-ATM prompts customer for withdrawal amount.

4-ATM ejects card and displays message informing customer that the withdrawal amount exceeds their account balance and to reenter a withdrawal amount that does not exceed account balance.

5-ATM ejects card, prompts customer to take card and displays message “Amount requested still exceeds account balance and bank policy does not permit exceptions.” ATM returns to idle mode and displays Main Menu.

  

6

Unsuccessful withdrawal due to customer pressing Cancel key before entering PIN.

1-Insert card.

2-Press Cancel key.

3-Take card.

1-ATM can read card.

2-Valid account.

3-Valid PIN.

4-Account balance greater than, or equal to, withdrawal amount.

1-ATM reads card and prompts customer to enter PIN.

2-ATM ejects card and prompts customer to take card. ATM returns to idle mode and displays Main Menu.

  

7

Unsuccessful withdrawal due to customer pressing Cancel key after entering PIN.

1-Insert card.

2-Enter PIN.

3-Press Cancel key.

4-Take card.

1-ATM can read card.

2-Valid account.

3-Valid PIN.

4-Account balance greater than, or equal to, withdrawal amount.

1-ATM reads card and prompts customer to enter PIN.

2-ATM validates PIN and displays menu with a list of transactions that can be selected.

3-ATM ejects card and prompts customer to take card. ATM returns to idle mode and displays Main Menu.

  

8

Unsuccessful withdrawal due to customer pressing Cancel key after entering PIN and selecting Withdrawal transaction.

1-Insert card.

2-Enter PIN.

3-Select Withdraw From Checking transaction.

4-Press Cancel key.

5-Take card.

1-ATM can read card.

2-Valid account.

3-Valid PIN.

4-Account balance greater than, or equal to, withdrawal amount.

1-ATM reads card and prompts customer to enter PIN.

2-ATM validates PIN and displays menu with a list of transactions that can be selected.

3-ATM validates account and prompts customer to enter withdrawal amount.

4-ATM ejects card and prompts customer to take card. ATM returns to idle mode and displays Main Menu.

  
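The eight test conditions above can be cross-checked against a small executable model of the withdrawal rules. The sketch below was written for this discussion, not taken from any bank's code; the parameter names are invented, and the Cancel-key scenarios (items 6 through 8) are omitted to keep the model short. pin_ok_on_attempt is the attempt number on which a valid PIN is entered, or None if every attempt fails:

```python
# Hypothetical model of the withdrawal rules exercised by test conditions 1-5.
def atm_withdrawal(card_readable, pin_ok_on_attempt, account_valid,
                   balance, amount):
    if not card_readable:
        return "eject: unreadable card"          # test condition 2
    if pin_ok_on_attempt is None or pin_ok_on_attempt > 3:
        return "retain card"                     # test condition 3
    if not account_valid:
        return "eject: invalid account"          # test condition 4
    if balance < amount:
        return "eject: exceeds balance"          # test condition 5
    return "dispense cash and print receipt"     # test condition 1
```

Running the model once per row of the work paper gives the expected-results column a mechanical cross-check.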

Table 12-5. Acceptance Criteria

  

Critical

Test Result

 

No.

Acceptance Requirement

Yes

No

Accept

Reject

Comments

1

The system must execute to end of job during a payroll production run after January 1, 20xx.

X

   

Payroll will not be run in a production status until this requirement has been met.

2

The results of payroll must be correct even if there are date problems in the report or other processing components.

X

   

Payroll will not be run in a production status until this requirement is met.

At the conclusion of acceptance testing, a decision must be made for each acceptance criterion as to whether it has been achieved.

Reaching an Acceptance Decision

Final acceptance of software based on acceptance testing usually means that the software project has been completed, with the exception of any caveats or contingencies. Once final acceptance occurs, the developer has no further development obligations (except, of course, for maintenance, which is a separate issue).

Typical acceptance decisions include the following:

  • Required changes are accepted before progressing to the next activity.

  • Some changes must be made and accepted before further development of that section of the product; other changes may be made and accepted at the next major review.

  • Progress may continue and changes may be accepted at the next review.

  • No changes are required and progress may continue.

The goal is to achieve and accept “perfect” software, but usually some criteria will not be completely satisfied for each product, in which case the user may choose to accept less-than-perfect software.
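The roll-up from individual criteria to an overall decision can be sketched in a few lines. The three outcomes below correspond to the three decisions listed in the chapter overview; the tuple layout is invented for illustration, with one (met, critical) pair per acceptance requirement (compare the acceptance criteria table):

```python
# Hedged sketch: roll individual criterion results up into one of the three
# acceptance decisions. The data shape is illustrative, not prescribed.
def acceptance_decision(criteria):
    """criteria: list of (met, critical) boolean pairs."""
    unmet = [(met, critical) for met, critical in criteria if not met]
    if not unmet:
        return "usable as is"
    if any(critical for met, critical in unmet):
        return "severely deficient"
    return "operational after deficiencies corrected"
```

A real decision would also weigh the cost to correct and the alternatives available, which no simple rule captures.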

Task 2: Pre-Operational Testing

If the decision is made to place the software system into an operational state, pre-operational testing should occur. This testing validates that the developed/acquired system will operate as intended in the production environment. Much of it involves ensuring that the configuration management system has placed the right configuration items in the production environment. It also includes high-level integration testing, which validates that the tested software system integrates effectively and efficiently with other systems and other parts of the production environment.
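Much of the configuration-item checking reduces to comparing an expected manifest against what is actually deployed. A minimal sketch, assuming both are simple name-to-version mappings (the shapes are this example's assumption, not part of the method):

```python
# Illustrative sketch: verify that the configuration items deployed to
# production match the versions the configuration management system expects.
def verify_configuration(expected, deployed):
    problems = {}
    for item, version in expected.items():
        if deployed.get(item) != version:
            problems[item] = (version, deployed.get(item))
    return problems        # empty dict: production matches the CM records
```
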

The installation phase is the primary responsibility of the IT department. Specifically, computer operations personnel have the responsibility for getting the system into operation. However, the project team and the users may share responsibility for developing the appropriate data files and user and operator instructions for the application system.

As with other aspects of the life cycle, many parties are involved in the installation. Assigning one of those parties to be responsible for the installation pinpoints both accountability and action. The recommended party for that responsibility would be a key individual in computer operations.

However, in some online systems the user operations personnel may have primary operating responsibilities because they initiate work at terminals, and in that instance, it may be more appropriate to assign user operations personnel installation responsibility than to assign responsibility to a centralized operations group.

The installation team performs a standalone, one-time process. This keeps the team independent of the development team, so installation tasks can proceed concurrently with development. It does not, however, prevent the two teams from being composed of the same individuals.

Most phases in the systems development life cycle are sequential in nature, and the execution of the installation phase is part of this sequential life cycle process. However, preparing for the installation can overlap with any or all of the previous phases. This installation process may encompass requirements, design, programming, and testing, all of which become the responsibility of the individual in charge of the installation process.

Placing a system under development into an operational status may require a mini-system to handle the process. The installation phase specifications need to be determined and the mechanism developed to install the new system. Programming may be required to convert files from an old format to a new format. Those programs should be tested prior to executing the actual system conversion. However, because this is a one-time process the attention to detail and control exhibited in the system being developed may not exist in the development of the installation system.
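A conversion program of the kind described above benefits from verification built into the conversion itself, for example matching record counts and a control total before the converted file is trusted. The sketch below assumes made-up field names ('amt' in cents in the old format, 'amount' in dollars in the new):

```python
# Sketch of a one-time file conversion with its verification pass built in.
def convert_records(old_records):
    new_records = [{"amount": r["amt"] / 100} for r in old_records]
    # Verification: same record count, same control total.
    assert len(new_records) == len(old_records), "record counts differ"
    old_total = sum(r["amt"] for r in old_records)
    new_total = round(sum(r["amount"] for r in new_records) * 100)
    assert new_total == old_total, "control totals differ"
    return new_records
```

Even though the conversion runs once, testing it against known totals before the actual cutover catches exactly the class of error that is hardest to undo afterward.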

Testing New Software Installation

The installation phase poses two difficulties for the test team. First, installation is a process separate from the rest of the application development. Its function relates not to satisfying user needs, but to placing a completed and tested application into production. In many instances, this test will be performed by a different group than the one that tested the other portions of the application system. Second, installation normally occurs in a very short time span. It is not uncommon for an installation to occur within an hour or several hours. Therefore, tests must be well planned and executed if they are to be meaningful and helpful to the installation process.

Test results that are not available until hours or days after the installation are worthless. It is important that the test results be available prior to the completion of the installation. The objective of testing is to determine whether the installation is successful; therefore, the results must be available as quickly as possible. In many instances, this means that the test results must be predetermined before the test starts.
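Predetermining the results is what makes an installation check fast enough to matter. A hedged sketch, with invented transaction IDs and totals, of a smoke test that compares live output against results computed before the installation began:

```python
# Sketch: predetermined expected results let an installation smoke test
# report pass/fail within minutes. Transaction IDs and totals are made up.
EXPECTED = {"TXN-001": 425, "TXN-002": 1300}

def installation_smoke_test(run_transaction):
    """run_transaction: callable that executes one transaction by ID."""
    failures = {}
    for txn_id, expected in EXPECTED.items():
        actual = run_transaction(txn_id)
        if actual != expected:
            failures[txn_id] = (expected, actual)
    return failures        # an empty dict means the smoke test passed
```
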

Work Paper 12-5 lists the installation test process. A test program is provided for each of the installation phase concerns. Each test program describes the criteria that should be evaluated through testing and the recommended tests, including suggested test techniques and tools. This generalized installation phase test program may need to be customized for a specific installation. The individual responsible for the test should take into account unique application characteristics that may require special testing.

Work Paper 12-5. Installation Phase Test Process

 

TEST CRITERIA

ASSESSMENT

RECOMMENDED TEST

 

Very Adequate

Adequate

Inadequate

N/A

1.

Have the accuracy and completeness of the installation been verified?

    

Examine the completeness of, and the results from, the installation plan.

2.

Have data changes been prohibited during installation?

    

Compare old and new versions of important data files.

3.

Has the integrity of the production files been verified?

    

Confirm their integrity with the users of the production files.

4.

Does an audit trail exist showing installation activity?

    

Verify the completeness of the audit trail.

5.

Will the integrity of the previous system/version be maintained until the integrity of the new system/version can be verified?

    

Perform parallel processing.

6.

Is a fail-safe installation plan used for the installation?

    

Determine that the option always exists to revert to the previous system/version.

7.

Will adequate security be provided during installation to prevent compromise?

    

Review the adequacy of the security procedures.

8.

Has the defined installation process been followed?

    

Confirm compliance on a sampling basis.

9.

Is the proper system/version placed into production on the correct date?

    

Determine the adequacy of the version control procedures.

10.

Can user personnel understand and use the documentation provided with the new system/version?

    

Confirm with users during acceptance testing that their user documentation is adequate.

11.

Has all the needed documentation been prepared in accordance with documentation standards?

    

Verify on a sampling basis that specified documentation exists and meets standards.

12.

Are all those involved with the installation aware of the installation dates and their installation responsibilities?

    

Confirm with a sample of involved parties their knowledge of installation date(s) and responsibilities.

13.

Will the installation performance be monitored?

    

Examine the monitoring process.

14.

Are the needed operating procedures complete and installed when needed?

    

Examine the operating procedures and process for placing those procedures into operation.

Testing the Changed Software Version

IT management establishes both the software maintenance changes for its department and the objectives for making those changes. Clear-cut objectives help the software maintenance analyst and operations personnel understand the procedures they are asked to follow. This understanding often results in a better-controlled operation.

The specific objectives of installing the change are as follows:

  • Put changed application systems into production. Each change should be incorporated through a new version of a program. The production system should have the capability to move these versions in and out of production on prescribed dates. To do this, it is necessary first to uniquely identify each version of a program, and second to pinpoint the dates when individual program versions are to be placed into and taken out of production.

  • Assess the efficiency of changes. If a change results in extensive time and effort to do additional checking, or to locate information not provided by the system, additional changes may be desirable.

  • Monitor the correctness of the change. People should not assume that testing will uncover all of the problems. For example, problems may be encountered in untouched parts of the application. People should be assigned the responsibility to review output immediately following changes. If this is a normal function, then those people should be notified that a change has occurred and should be informed where the change is in the system and what potentially bad outputs might be expected.

  • Keep systems library up to date. When programs are added to the production and source library, other versions should be deleted. This will not happen unless specific action is taken. The application system project team should ensure that unwanted versions in the source and object code libraries are deleted when they have fulfilled their purposes.
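The first and last objectives above, moving uniquely identified versions in and out of production on prescribed dates and clearing out superseded versions, can be sketched together. The data shapes and the keep-two policy (current version plus one fallback) are this example's assumptions:

```python
# Illustrative sketch: install a new program version on its effective date
# and delete superseded versions from the library.
def install_version(library, program, new_version, today, effective_date):
    """library maps program -> list of (version, in_date) tuples.
    Dates are ISO strings, so plain string comparison orders them."""
    if today < effective_date:
        return False                   # not yet the prescribed date
    versions = library.setdefault(program, [])
    versions.append((new_version, effective_date))
    del versions[:-2]                  # keep only fallback + current version
    return True
```
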

When the change is put into production, IT management can never be sure what type of problems may be encountered shortly thereafter. The concerns during the change process deal with properly and promptly installing the change. It is during the installation that the results of these change activities become known. Thus, many of the concerns culminate during the installation of the change.

IT management must identify the concerns so that they can establish the proper control mechanisms. The most common concerns during the installation of the change include the following:

  • Will the change be installed on time?

  • Is backup data compatible with the changed system?

  • Are recovery procedures compatible with the changed system?

  • Is the source/object library cluttered with obsolete program versions?

  • Will errors in the change be detected?

  • Will errors in the change be corrected?

Testing the installation of the changes is divided into three tasks, some of which are manual and others heavily automated. Each is explained in detail in the following subsections.

Testing the Adequacy of the Restart/Recovery Plan

Restart and recovery are important stages in application systems processing. Restart means computer operations begin from a point of known integrity. Recovery occurs when the integrity of the system has been compromised. In a recovery process, the system's processing must be backed up to a point of known integrity; thereafter, transactions are rerun up to the point at which the problem was detected.
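The recovery process just described can be sketched as restore-then-replay: restore the last checkpoint (the point of known integrity), then rerun the logged transactions up to where the problem was detected. The (account, delta) log format is invented for the example:

```python
# Sketch of recovery: restore a checkpoint, then replay logged transactions
# up to the point at which the problem was detected.
def recover(checkpoint_state, transaction_log, detected_at):
    state = dict(checkpoint_state)         # back up to known integrity
    for seq, (account, delta) in enumerate(transaction_log):
        if seq >= detected_at:
            break                          # stop at the failure point
        state[account] = state.get(account, 0) + delta
    return state
```
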

Many aspects of system changes affect the recovery process. Among those to evaluate for their impact on recovery are the following:

  • Addition of a new function

  • Change of job control

  • Additional use of utility programs

  • Change in retention periods

  • Change in computer programs

  • Change in operating documentation

  • Introduction of a new or revised form

The testers should assess each change to determine its impact on the recovery process. If a program is changed, the tester must ensure that those changes are included in backup data. Without the latest version of the program, the tester may not be able to correctly recover computer processing.

If the tester determines that recovery has been affected by the change, the recovery plan must be updated to reflect that impact. The tester can use Work Paper 12-6 to document the restart/recovery planning data and forward it to the person responsible for recovery.

Work Paper 12-6. Restart/Recovery Planning Data

Field Requirements

 

FIELD

INSTRUCTIONS FOR ENTERING DATA

Application System

The name by which the application is known.

Ident. Number

The application numerical identifier.

Change Ident. #

The sequence number that uniquely identifies the change.

 

Note: Restart/recovery planning data, necessary to modify the recovery procedures, comprises the remainder of the form.

Impact on Estimated Total Downtime

If the change affects the downtime, the entire recovery process may have to be reevaluated.

Impact on Estimated Downtime Frequency

The number of times the recovery process will probably have to be executed. An important factor in determining backup data and other procedures. If the change will affect the frequency of downtime, the entire recovery process may have to be reevaluated.

Change in Downtime Risk

The probable loss when a system goes down. May be more important than either the total downtime or downtime frequency. If the loss is potentially very high, management must establish strong controls to lessen the downtime risk. If the change will probably cause a loss, the entire recovery process may have to be reevaluated.

New Program Versions for Recovery

Each new program version must be included in the recovery plan. This action documents the needed changes.

New Files/Data for Recovery

Changes in data normally impact the recovery process. This section documents those changes.

New Recovery Instructions/Procedures

If operating procedures or instructions have to be modified, this section provides space to document those changes.

Date New Version Operational

The date the new programs, files, data, recovery instructions, and procedures must be included in the recovery process.

Comments

Any additional information that may be helpful in modifying the recovery program to better reflect the changed application system.

Application System: ___________ Ident. Number: _________ Change Ident. # _________

Impact on Estimated Total Downtime

  

Impact on Estimated Downtime Frequency

  

Change in Downtime Risk

  

New Program Versions for Recovery

  

New Files/Data for Recovery

  

New Recovery Instructions/Procedures

  

Date New Version Operational

  

Comments

  

Verifying the Correct Change Has Been Entered into Production

A positive action must be taken to move a changed program from test status to production status. This action should be taken by the owner of the software. When the user department is satisfied with the change, the new program version can be moved into production.

The production environment should be able to control programs according to production date. Each version of a program in production should be labeled according to when it is to go into and be taken out of production. If there is no known replacement, the date to take that version out of production is the latest date that can be put into that field. When a new version has been selected, that date can be changed to the actual date.
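The date-controlled selection described above can be sketched directly. When a version has no known replacement, its out-date holds the latest representable date, as the text suggests; the tuple layout is illustrative:

```python
from datetime import date

# Sketch of date-controlled version selection in a production library.
NO_REPLACEMENT = date.max   # "latest date that can be put into that field"

def version_in_production(versions, today):
    """versions: list of (version_id, in_date, out_date) tuples."""
    for version_id, in_date, out_date in versions:
        if in_date <= today < out_date:
            return version_id
    return None
```
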

A history of changes should be available for each program, to provide a complete audit trail of everything that has happened to the program since first written. The change history, together with a notification to operations that a change is ready for production, provides the necessary controls during this step.
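One concrete use of the change history as a control: any version present in the production library with no matching history entry is a candidate unauthorized change. A hedged sketch, assuming both inputs map a program name to its version numbers:

```python
# Sketch: flag production-library versions that have no entry in the
# program change history (a sign of a possible unauthorized change).
def unauthorized_versions(production_library, change_history):
    flagged = {}
    for program, versions in production_library.items():
        known = set(change_history.get(program, []))
        extra = set(versions) - known
        if extra:
            flagged[program] = sorted(extra)
    return flagged
```
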

To verify that the correct change has been placed into production, the tester should answer the following two questions:

1.

Is a change history available?

Changes to an application program should be documented using a work paper similar to Work Paper 12-7. The objective of this history-of-change form is to show all of the changes made to a program since its inception. This serves two purposes: First, if problems occur, this audit trail indicates whether the changes have been made; and second, it discourages unauthorized changes. In most organizations, changes to programs/systems are maintained in source code libraries, test libraries, and production libraries. Work Paper 12-7 is a hardcopy format of the type of information that testers should be looking for in software libraries.

Work Paper 12-7. Program Change History

Field Requirements

FIELD

INSTRUCTIONS FOR ENTERING DATA

Application System

The name by which the application is known.

Ident. Number

The numerical application identifier.

Program Name

A brief description of the program or its name.

Ident. Number

The program identifier.

Coded by

The programmer who originally coded the program.

Maintained by

The programmer who now maintains the program.

Date Entered into Production

The date on which the program was first used in production.

Version #

The original version number.

 

Note: The program change history provides an audit trail of changes to a program and is contained in the following fields.

Change ID #

The sequence number that uniquely identifies the change.

New Version #

The program version number used to code the change.

Coded by

The name of the programmer who coded the change.

Date Entered into Production

The date on which this version went into production.

Comments

Additional information valuable in tracing the history of a change to a program.

Application System: _________________________ Ident. Number _______________

___________________________________________________________________

Program Name: _______________________ Ident. Number _____________________

____________________________________________________________________

Coded by: _________________________________________________________

____________________________________________________________________

Maintained by: _______________________________________________________

     

Date Entered into Production: __________________ Version # ________________

Program Change History

Change ID #

New Version #

Coded by

Date Entered into Production

Comments

     
     
     
     
     
     
     
     
     
     

2.

Is there a formal notification of production changes?

The procedure to move a version from testing to production should be formalized. Telephone calls and other word-of-mouth procedures are not sufficient. The formal process can be enhanced to prevent the loss of notification forms by using a prenumbered form. The project leader should prepare the notification of production change form, which should then be sent to the computer operations department, which installs the new version. A sample form is illustrated in Work Paper 12-8.
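The value of prenumbering is that a lost form shows up as a gap in the sequence of numbers operations has received, which an unnumbered form never would. A minimal sketch:

```python
# Sketch: detect lost prenumbered forms as gaps in the received sequence.
def missing_form_numbers(received):
    if not received:
        return []
    expected = range(min(received), max(received) + 1)
    return sorted(set(expected) - set(received))
```
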

Work Paper 12-8. Production Change Instructions

Field Requirements

FIELD

INSTRUCTIONS FOR ENTERING DATA

Sent To

The name of the person in operations who controls the application system being changed.

Application Control #

A number issued sequentially to control the changes to each application system.

Application Name

The name by which the application is known.

Number

The numerical application identifier.

Change Ident. #

The sequence number that uniquely identifies the change.

 

Note: The following production change information includes instructions to computer operations to move programs, job control statements, operator manual procedures, and other items associated with the change to production status. The specific instructions provide both for adding and deleting information.

Resource

The resource that needs to be added to or deleted from the production environment. The most common resources involved in a production change include programs, job statements, and operator manual procedures.

Task

Indicates whether the resource is to be added to or deleted from production status. The Add column indicates that it is to be moved from test status to production status; the Delete column indicates that it is to be removed from production status.

Effective Dates

The date on which the tasks are to be performed.

Comments

Additional instructions that help operations personnel perform their assignments. For example, this column might include the location or the source of new pages for the operator’s manual.

Prepared By

Usually, the name of the project leader.

Date

The date on which the form was prepared.

Sent To: _____________________ Application Control #: _________________

Application Name _________ Number: _______ Change Ident. #: ________

Production Change Instructions

Resource

Task

Effective Dates

Comments

Add

Delete

Program #

    

Program #

    

Program #

    

Program #

    

Job Statements #

    

Job Statements #

    

Operator Manual procedure #

    

Operator Manual procedure #

    

Other: ________________

    

Other: ________________

    

Other: ________________

    

Prepared By: ________________________ Date: __________________

The owner of the software decides when a new version of the software will be placed into production. This approval gives operations the go-ahead to initiate its procedures for notifying the appropriate staff that changes are to be installed. Once the owner has approved, the tester must verify that the appropriate notification has been given and that the information in it is correct.

Verifying Unneeded Versions Have Been Deleted

It may or may not be desirable to delete old versions of programs when a new version is entered. The most obvious argument against doing so is to maintain a fallback version in case the new version proves defective. Organizations should establish standards regarding when old versions should be automatically deleted from the library. Some, while not automating this function, periodically notify the project team that older versions will be deleted unless the project team takes specific action to have them retained in the library. Other organizations charge the projects a fee for retaining old versions.

In any case, programs should not be deleted from libraries without authorization. Some type of form should be prepared to authorize computer operations personnel to delete programs from a library. This form also provides a history of additions to the libraries. A source/object library deletions notice form is illustrated in Work Paper 12-9. This form becomes a more effective control if a sequential number is added, so that its loss is more likely to be detected. The form should be filled out by the software maintenance project leader and sent to computer operations for action.
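The authorization control described above amounts to a simple precondition: no deletion without a written instruction from the item's owner covering that exact version. A hedged sketch, with invented data shapes:

```python
# Sketch: refuse to delete a library member unless a written deletion
# authorization from the item's owner exists for that exact version.
def delete_version(library, authorizations, program, version):
    """authorizations: set of (program, version) pairs approved in writing."""
    if (program, version) not in authorizations:
        raise PermissionError("no written deletion authorization on file")
    library[program].remove(version)
```
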

Work Paper 12-9. Deletion Instructions

Field Requirements

FIELD

INSTRUCTIONS FOR ENTERING DATA

Application Name

The name by which the application is known.

Ident. Number

The numerical application identifier.

Deletion Control Number

A number sequentially issued to control the form.

Sent To

Typically, the person in operations responsible for deleting a program from the application.

Date

The date on which the form was prepared.

From

Usually, the name of the project leader.

Department

The organization or department authorizing the deletion of the program.

 

Note: Deletion instructions guide operations personnel to delete unwanted program versions, as follows:

Library

The name or number that identifies the library in which the program resides.

Program Version to Delete

The program number and version of that program that is to be deleted.

Deletion Date

The date on which the program version may be deleted.

Comments

Any additional information helpful to operations staff in performing the required tasks.

Prepared By

The name of the person who prepared the form.

Date

The date on which the form was prepared.

Application Name: ________ Ident. Number: _______ Deletion Control #: _______

Sent To: __________________________________ Date: ___________________

From: _____________________________________ Department: _____________

 

Deletion Instructions

 

Library

Program Version to Delete

Deletion Date

Comments

    
    
    
    
    
    
    
    
    
    

Prepared By: _______________________________ Date: _________________

The computer operations department should have a process for deleting unneeded versions from source, test, and production libraries, after receiving authorization to do so. It is recommended that those authorizations be in writing from the owner of the item. The type of information needed to delete programs from a library, together with the deletion instructions, is contained in Work Paper 12-9.

The objective of the entire correction process is to satisfy the need that prompted the change. This is accomplished by incorporating that need into the application system and running it in production status. If all parts of the software change process have been properly performed, the production step is mechanical: the program library automatically calls in the correct version of the program on the proper day. However, if there are special operator instructions, the operator should be alerted to the change on the appropriate day. Most information services organizations have procedures for this purpose.

Monitoring Production

Application systems are most vulnerable to problems immediately following the introduction of new program versions. For this reason, many organizations monitor the output immediately after a new program version is introduced. In organizations that normally monitor output, extra effort or attention may be applied the first time a changed program version is run.

The following groups may monitor the output of a new program version:

  • Application system control group

  • User personnel

  • Software maintenance personnel

  • Computer operations personnel

Regardless of who monitors the output, the software maintenance analyst and user personnel should provide clues about what to look for. User and software maintenance personnel must attempt to identify the specific areas where they believe problems might occur.

The types of clues that could be provided to monitoring personnel include the following:

  • Transactions to investigate. Specific types of transactions, such as certain product numbers, that they should monitor

  • Customers. Specific customers or other identifiers to help them locate problems on specific pages of reports

  • Reports. Specific outputs that should be reviewed

  • Tape files. Data records or files that have been changed that they may need to examine by extracting information to determine if data was properly recorded

  • Performance. Anticipated improvements in the effectiveness, efficiency, and economy of operations that they should review

This process is normally more effective if it is formalized. This means documenting the type of clues to look for during the monitoring process. A program change monitor notification form is illustrated in Work Paper 12-10. This form should be completed by the user and/or software maintenance personnel and then given to the people monitoring the transaction. The information contained on the program change monitor notification form is outlined on the form’s completion instructions sheet.
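One way to formalize the clue categories above is a small record builder that rejects categories the form does not define, so every notification sent to the monitors is complete and consistent. The function and field names are hypothetical:

```python
# Sketch of the program change monitor notification (Work Paper 12-10) as a
# simple record. The category names come from the clue list in the text; the
# builder itself is an illustrative assumption, not a real system's API.

MONITOR_CLUE_TYPES = ("transactions", "customers", "reports", "tape_files", "performance")

def make_notification(application, change_ident, clues):
    """Build a notification; reject clue categories the form does not define."""
    for category in clues:
        if category not in MONITOR_CLUE_TYPES:
            raise ValueError(f"unknown clue category: {category}")
    return {"application": application, "change_ident": change_ident, "clues": clues}

note = make_notification(
    "ACCOUNTS-RECEIVABLE", "CH-0042",
    {"transactions": ["product numbers 700-799"],
     "reports": ["daily aging report, page totals"]},
)
```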

Table 12-10. Form Completion Instructions: Program Change Monitor Notification

Field Requirements

FIELD

INSTRUCTIONS FOR ENTERING DATA

Application System

The name by which the application is known.

Number

The application identifier.

Change Ident. #

The sequence number that uniquely identifies the change.

Description of Change

A description which helps the people monitoring the application gain perspective on the areas impacted.

Date of Change

The date on which the change goes into production. This is the date when the monitoring should commence.

Monitoring Guidelines

The description of the types of problems to be anticipated. The information should be descriptive enough to tell the monitors both what to look for and what action to take if they find problems. The potential problems identified here are those most likely to occur; however, the monitors should stay alert to any type of problem that might arise immediately following introduction of a new program version. The information about the high-probability items is:

  • Area potentially impacted: the report, transactions, or other area in which the individuals monitoring should be looking.

  • Probable impact: this section describes the type of problems that are most likely to occur within the impacted area.

  • Action to take if problem occurs: the people to call, correction to make, or any other action that the individual uncovering the problem should take.

  • Comments: any additional information that might prove helpful to the monitors in attempting to identify problems associated with the program change.

Prepared By

The name of the person who prepared the form, normally the software maintenance analyst.

Date

The date on which the form was prepared.

Application System: _____________ Number: ___________ Change Ident. # ___________

Description of Change

 

Date of Change

  

_____________

    
    
 

Monitoring Guidelines

 

Area Potentially Impacted

Probable Impact

Action to Take If Problem Occurs

Comments

    
    
    
    
    
    
    
    
    
    

Prepared By: _____________________________________ Date: ___________________

Documenting Problems

Individuals detecting problems when they monitor changes in application systems should formally document them. The formal documentation process can be made even more effective if the forms are controlled through a numbering sequence. This enables software maintenance personnel to detect lost problem forms. The individual monitoring the process should keep a duplicate copy of the form on hand, in case the copy sent to the project is lost.
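The numbering-sequence control described above lends itself to a simple gap check: any control number missing from the received sequence flags a problem form that may have been lost in transit. A minimal sketch:

```python
# Gap detection over problem-form control numbers. A missing number in the
# sequence 1..max indicates a form that was issued but never received.

def missing_control_numbers(received):
    """Return control numbers absent from the sequence 1..max(received)."""
    if not received:
        return []
    expected = set(range(1, max(received) + 1))
    return sorted(expected - set(received))

gaps = missing_control_numbers([1, 2, 4, 7])  # forms 3, 5, and 6 never arrived
```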

The person monitoring the process should be asked both to document the problem and to assess the risk associated with that problem. Although this individual may not be the ideal candidate to make a risk assessment, a preliminary assessment is often very helpful in determining the seriousness of the problem. If the initial estimate about the risk is erroneous, it can be corrected at a later time.

Reporting a system problem caused by a system change through the program change monitor notification enables the individual to associate the problem with a specific program change. This additional piece of information is usually invaluable in correcting the problem.

A form to record a system problem caused by a system change is illustrated in Work Paper 12-11. This form should be completed by the individual monitoring the application system. The completed form should be given to the software maintenance analyst for correction. The information contained on the system problem caused by system change form is described on the form’s completion instructions sheet.

Table 12-11. Form Completion Instructions: System Problem Caused by System Change

Field Requirements

FIELD

INSTRUCTIONS FOR ENTERING DATA

Application name

The name by which the application is known.

Number

The application identifier.

Change Ident. #

The sequence number that uniquely identifies the change.

Problem Date

The date on which the problem was located.

Problem Time

The time the problem was encountered.

Problem Control #

A sequential number that controls the form.

Description of Problem

A brief narrative description. Normally, examples of the problem are attached to the form.

Area of Application Affected

This segment is designed to help the software maintenance analyst identify the source of the problem. If it is one of the problems outlined on the program change monitor notification form, the individual completing the form can be very specific regarding the affected area. Otherwise, the individual should attempt to identify areas such as report writing or input validation where the problem seems to originate.

Impact of Problem

The individual identifying the problem should attempt to assess the impact of that problem on the organization. This information is very valuable in determining how fast the problem must be fixed. Ideally, this risk would be expressed in quantitative units, such as number of invoices incorrectly processed, dollar loss, number of hours lost because of the problems. It is often helpful to divide the problem into various time periods. This is because some risks are not immediately serious but become serious if they are not corrected by a certain time or date. Some suggested time spans included on the form are:

  • If not fixed within one hour

  • If not fixed within one day

  • If not fixed within one week

Recommendation

The suggestions from the individual uncovering the problem as to what should be done to fix it. This recommendation can either be to correct the errors that have occurred and/or to correct the problems in the application system.

Prepared By

The name of the person who uncovered the system problem caused by the system change.

Date

The date on which the form was prepared.

Application Name: __________ Number: ________ Change Ident. # _________

Problem Date _________ Problem Time _______ Problem Control # _______

Description of Problem

  
  

Area of Application Affected

  
  

Impact of Problem

If not fixed within 1 hour:

  
  

If not fixed within 1 day:

  
  

If not fixed within 1 week:

  
  

Recommendation

  
  

Prepared By: ________________________________ Date: _______________

Task 3: Post-Operational Testing

Post-operational testing, as used in this book, means testing changed versions of the software system. The process presented is equally applicable to testing changed versions during development and to testing changed versions after the system has been placed into an operational state. If the IT organization has well-developed change management, version control, and an effective configuration management system, the extensiveness of testing new versions will be significantly reduced. In those instances, much of the testing of a new version will center on the specific change made to the software system.

Testing and training are as important to software maintenance as they are to new systems development. Frequently, even small changes require extensive testing and training. It is not unusual to spend more time testing a change and training users to operate a new facility than incorporating the change into the application system. This task explains the process that should be performed when testing system changes.

Too frequently, software maintenance has been synonymous with "quick and dirty" programming, a shortcut that is rarely worth the risk. It often takes considerable time to correct problems that adequate testing and training could have prevented. If testing is properly conducted, doing the job right should not take significantly longer.

IT management has the responsibility for establishing the testing and training procedures for software changes. Many organizations establish change control procedures but do not carry them through testing and training. A checklist is provided for management to review the effectiveness of their testing.

The process outlined in this task is designed to be used two ways. First, it is written as if changes occur after the software has been placed into production. The second and perhaps equally important use will be testing changes during the development of software.

Both of these uses of the process for testing changes require some reiteration of previous steps. For example, the test plan will need to be updated, and the test data will need to be updated. Because those activities are presented in previous chapters, they are not reiterated in this task.

The following five tasks should be performed to effectively test a changed version of software.

Developing and Updating the Test Plan

The test plan for software maintenance is a shorter, more directed version of a test plan used for a new application system. Whereas new application testing can take many weeks or months, software maintenance testing often must be done within a single day or a few hours. Because of time constraints, many of the steps that might be performed individually in a new system are combined or condensed into a short time span. This increases the need for planning so that all aspects of the test can be executed within the allotted time.

The types of testing will vary based upon the implemented change. For example, if a report is modified, there is little need to test recovery and backup plans. On the other hand, if new files are created or processing procedures changed, restart and recovery should be tested.

The preparation of a test plan is a two-part process. The first part is the determination of what types of tests should be conducted, and the second part is the plan for how to conduct them. Both parts are important in software maintenance testing.

Elements to be tested (types of testing) are as follows:

  • Changed transactions

  • Changed programs

  • Operating procedures

  • Control group procedures

  • User procedures

  • Intersystem connections

  • Job control language

  • Interface to systems software

  • Execution of interface to software systems

  • Security

  • Backup/recovery procedures

The test plan should list the testing objective, the method of testing, and the desired result. In addition, regression testing might be used to verify that unchanged segments have not been unintentionally altered. Intersystem connections should be tested to ensure that all systems are properly modified to handle the change.
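The regression check described above can be sketched as a comparison of outputs from the two program versions, flagging differences in any field the change was not meant to affect. `process_old` and `process_new` are hypothetical stand-ins for the prior and changed versions:

```python
# A minimal regression-testing sketch: run the same transactions through the
# prior and the changed program version and flag output differences outside
# the fields the change was intended to affect.

def regression_differences(transactions, process_old, process_new, changed_fields):
    """Report (transaction id, field) pairs that differ but were not supposed to."""
    unexpected = []
    for txn in transactions:
        old_out, new_out = process_old(txn), process_new(txn)
        for field in old_out:
            if field not in changed_fields and old_out[field] != new_out[field]:
                unexpected.append((txn["id"], field))
    return unexpected

# Example: the change was intended to affect only the "discount" field.
def old_version(txn):
    return {"total": txn["qty"] * 10, "discount": 0}

def new_version(txn):
    return {"total": txn["qty"] * 10, "discount": 2}

diffs = regression_differences([{"id": 1, "qty": 3}], old_version, new_version,
                               changed_fields={"discount"})
```

An empty result means only the intended field changed; any entry pinpoints an unintentionally altered segment.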

An acceptance test plan is included as Work Paper 12-12. This work paper should be completed by the software maintenance analyst and countersigned by the individual responsible for accepting the changed system.

Table 12-12. Form Completion Instructions: Acceptance Test Plan

Field Requirements

FIELD

INSTRUCTIONS FOR ENTERING DATA

Application Name

The name by which the application is known.

Number

The application identifier.

Change Ident. #

The sequence number that uniquely identifies the change.

Individual Responsible

The name of the individual or individuals who will be conducting the test. This normally is the user and the application systems analyst/programmer.

Test Plan

The steps that need to be followed in conducting the test. For the functional, regression, stress, and performance types of testing, these test characteristics need to be defined:

  • Change objective: the description of the objective of the change that was installed. This should be specific so that test planning can be based on the characteristics of the objective.

  • Method of testing: the type of test that will be conducted to verify that the objective is achieved.

  • Desired result: the expected result from conducting the test. If this result is achieved, the implementation can be considered successful, while failure to meet this result means an unsuccessful implementation.

Regression Test Plan

The tests and procedures to be followed to ensure that unchanged segments of the application system have not been inadvertently changed by software maintenance.

Intersystem Test Plan

The tests to be conducted to ensure that data flowing from and to other systems will be correctly handled after the change.

Comments

Additional information that might prove helpful in conducting or verifying the test results.

Individual Who Accepts Tested Application

The name of the individual who should review this test plan because of the responsibility to accept the change after successful testing.

Date

The date on which the form was completed.

Application Name: ________________ Number: _____________ Change Ident. # _____________

______________________________________________________________________________

Individual Responsible for Test: ____________________________________________________

   

TEST PLAN

Change Objective

Method of Testing

Desired Results

   
   
   
   
   

Regression Test Plan

  
   
   
   

Intersystem Test Plan

  
   
   
   

Comments

   
   

Individual Who Accepts Tested Application

Date

_______________________________________________

____________________

Developing and Updating the Test Data

Data must be prepared for testing all the areas changed during a software maintenance process. For many applications, the existing test data will be sufficient to test the new change. However, in many situations new test data will need to be prepared.

In some cases, the preparation of test data can be significantly different for software maintenance than for new systems. For example, when the system is operational it may be possible to test the application in a live operational mode, eliminating the need for technical test data and enabling software maintenance analysts to use the same input the users of the application prepare. Special accounts can be established to accumulate test data processed during testing in a production mode. The information in these accounts can then be eliminated after the test, which negates the effect of entering test data into a production environment.
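The special-account technique above might look like the following sketch, where a reserved account range (an illustrative assumption) lets test records be separated from genuine production records after a live-mode test:

```python
# Sketch of the special-account technique: test transactions run in production
# mode are posted to a reserved account range and stripped out afterward so
# production totals are unaffected. The range 99900-99999 is an assumption.

TEST_ACCOUNTS = range(99900, 100000)  # reserved for test data

def split_test_records(records):
    """Separate test-account records from genuine production records."""
    production = [r for r in records if r["account"] not in TEST_ACCOUNTS]
    test = [r for r in records if r["account"] in TEST_ACCOUNTS]
    return production, test

records = [{"account": 10231, "amount": 50}, {"account": 99901, "amount": 7}]
prod, test = split_test_records(records)
```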

It is important to test both what should be accomplished, as well as what can go wrong. Most tests do a good job of verifying that the specifications have been implemented properly. Where testing frequently is inadequate is in verifying the unanticipated conditions. Included in this category are the following:

  • Transactions with erroneous data

  • Unauthorized transactions

  • Transactions entered too early

  • Transactions entered too late

  • Transactions that do not correspond with master data contained in the application

  • Grossly erroneous transactions, such as transactions that do not belong to the application being tested

  • Transactions with larger values in the fields than anticipated

These types of transactions can be designed through a simple risk analysis scenario, which involves brainstorming with key people involved in the application, such as users, maintenance systems analysts, and auditors. These people attempt to ask all the "what if" questions, such as, "What if this type of error were entered? What would happen if too large a value were entered in this field?"
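The "what if" scenarios above can be turned into concrete validation checks. The field maximums and the valid entry window below are illustrative assumptions, not values from the text:

```python
from datetime import date

# Sketch turning risk-analysis "what if" questions into checks: oversized
# field values, transactions entered too early, and transactions entered too
# late. The limits and date window are illustrative assumptions.

FIELD_MAX = {"quantity": 9999, "amount": 1_000_000}
VALID_WINDOW = (date(2024, 1, 1), date(2024, 12, 31))

def validation_errors(txn):
    """Return the error-scenario categories a transaction falls into."""
    errors = []
    for field, limit in FIELD_MAX.items():
        if txn.get(field, 0) > limit:
            errors.append(f"{field} exceeds anticipated maximum")
    if txn["entry_date"] < VALID_WINDOW[0]:
        errors.append("entered too early")
    elif txn["entry_date"] > VALID_WINDOW[1]:
        errors.append("entered too late")
    return errors

early = {"quantity": 1, "amount": 10, "entry_date": date(2023, 12, 30)}
oversized = {"quantity": 100000, "amount": 10, "entry_date": date(2024, 6, 1)}
```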

The three methods that can be used to develop/update test data are as follows:

  • Update existing test data. If test files have been created for a previous version, they can be used for testing a change. However, the test data will need to be updated to reflect the changes to the software. Note that testers may wish to use both versions in conducting testing: the previous version to verify that the unchanged portions perform as they did before, and the new version to test the changes. Updating the test data should follow the same processes used in creating new test data.

  • Create new test data. The creation of new test data for maintenance follows the same methods as creating test data for a new software system.

  • Use production data for testing. Tests are performed using some or all of the production data for test purposes (date-modified, of course), particularly when there are no function changes. Using production data for test purposes may result in the following impediments to effective testing:

    • Missing test transactions. The transaction types on a production data file may be limited. For example, if the tester wants to test an override of a standard price, that transaction may not occur on the production data file.

    • Multiple tests of the same transaction. Production data usually represents the production environment, in which 80 to 90 percent of the transactions are of approximately the same type. This means that some transaction types are not tested at all, while others are tested hundreds of times.

    • Unknown test results. An important part of testing is to validate that correct results are produced. When testers create test transactions, they have control over the expected results. When production data is used, however, testers must manually calculate the correct processing results, perhaps causing them to misinterpret the intent of the transaction and thereby to misinterpret the results.

    • Lack of ownership. Production data is owned by the production area, whereas test data created by testers is owned by the testers. Some testers are more involved and interested in test data they created themselves than in test data borrowed from another owner.

Although these potential impediments might cause production data testing to be ineffective, steps can be taken to improve its usefulness. Production data should not be completely excluded as a source of test data.
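One step that improves the usefulness of production data, offsetting the "multiple tests of the same transaction" impediment above, is to keep at most a few records of each transaction type so rare types are not swamped by the majority type. A minimal sketch:

```python
from collections import defaultdict

# Stratified sampling of production records by transaction type: keep at most
# per_type records of each type, in original order. Field names are illustrative.

def stratified_sample(records, per_type=2):
    """Keep at most per_type records of each transaction type."""
    seen = defaultdict(int)
    sample = []
    for r in records:
        if seen[r["type"]] < per_type:
            seen[r["type"]] += 1
            sample.append(r)
    return sample

# 100 routine sales would otherwise drown the single price-override transaction.
data = [{"type": "SALE"}] * 100 + [{"type": "OVERRIDE"}]
subset = stratified_sample(data, per_type=2)
```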

Testing the Control Change Process

Listed next are three tasks commonly used to control and record changes. If the staff performing the corrections does not have such a process, the testers can give them these subtasks and then request the work papers when complete. Testers should verify completeness using these three tasks as a guide.

Identifying and Controlling Change

An important aspect of changing a system is identifying which parts of the system will be affected by that change. The impact may be in any part of the application system, both manual and computerized, as well as in the supporting software system. Regardless of whether affected areas will require changes, at a minimum there should be an investigation into the extent of the impact.

The types of analytical action helpful in determining the parts affected include the following:

  • Review system documentation.

  • Review program documentation.

  • Review undocumented changes.

  • Interview user personnel regarding procedures.

  • Interview operations personnel regarding procedures.

  • Interview job control coordinator regarding changes.

  • Interview systems support personnel if the implementation may require deviations from standards and/or IT departmental procedures.

This is a very important step in the systems change process, as it controls the change through a change identification number and through change documentation. The time and effort spent executing this step is usually returned in the form of more effective implementation procedures and fewer problems during and after the implementation of the change. A change control form is presented as Work Paper 12-13.

Table 12-13. Change Control Form

Field Requirements

FIELD

INSTRUCTIONS FOR ENTERING DATA

Application System

The name by which the application system is known.

Application Ident. #

The identification number of the application system.

Change Ident. #

The control number for the change.

Description of Change

The solution and general terms for the change, such as issue a new report, add an input data edit, or utilize a new processing routine.

Changes Required

All impacted areas with instructions for the changes to be made or investigations to be undertaken regarding the impact of the proposed solution. The type of items affected include:

  • data elements

  • programs

  • job control language

  • operations manuals

  • user training

  • user manuals

 

For each of the affected items, the following information should be provided:

  • Item affected: the program, data element, job control or other

  • Item identification: the program number or other method of identifying the affected item

Prepared By

The name of the person completing the form.

Date

The date on which the form was completed.

Application System: _________ Application Ident. #: ________ Change Ident. # ________

Description of Change:

   

Change Overview:

   

Changes Required

Item

Item Identification

Comments

   
   
   

Prepared By: _________________________ Date: ______________________________

Documenting Change Needed on Each Data Element

Whereas changes in processing normally affect only a single program or a small number of interrelated programs, changes to data may affect many applications. Thus, changes that affect data may have a more significant effect on the organization than those that affect processing.

Changes can affect data in any of the following ways:

  • Length. The data element may be lengthened or shortened.

  • Value. The value or codes used in data elements may be expanded, modified, or reduced.

  • Consistency. The value contained in data elements may not be the same in various applications or databases; thus, it is necessary to improve consistency.

  • Reliability. The accuracy of the data may be changed.

In addition, changes to a data element may require further documentation. Organizations in a database environment need to expend additional effort to ensure that data documentation is consistent, reliable, and understandable. Much of this effort will be translated into data documentation.
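Because a data element change can ripple across many programs, a simple cross-reference scan helps ensure that every affected program receives a change form. This sketch searches in-memory source strings; a real scan would read the source library, and the program names and COBOL-style statements are illustrative:

```python
# Impact scan for a data element change: find every program whose source
# references the element name. Sources here are in-memory strings standing in
# for members of a source library.

def programs_referencing(element_name, sources):
    """Return program names whose source mentions the data element."""
    return sorted(name for name, text in sources.items() if element_name in text)

sources = {
    "AR0421": "MOVE CUST-CREDIT-LIMIT TO WS-LIMIT.",
    "AR0433": "COMPUTE TOTAL = QTY * PRICE.",
    "BL0110": "IF CUST-CREDIT-LIMIT > 5000 ...",
}
affected = programs_referencing("CUST-CREDIT-LIMIT", sources)
```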

A form for documenting data changes is presented as Work Paper 12-14. This form should be used to provide an overview of the data change. In a database environment, a copy of the data definition form should be attached to the data change form as a control vehicle.

Table 12-14. Data Change Form

Field Requirements

FIELD

INSTRUCTIONS FOR ENTERING DATA

Application System

The name by which the application is known.

Application Ident. #

The number used to identify the application system.

Change Ident. #

The sequential number used to identify the change.

Data Element Name

The name by which the data element is known.

Data Ident. #

The number used to uniquely identify the data element. In a data dictionary system, this should be the data dictionary data element number.

Record Name

The record or records in which the data element is contained.

Record Ident. #

The number that describes the record or records in which the data element is contained.

File Name

The file or files in which the data element is contained.

File Ident. #

The numbers that uniquely describe the file or files in which the data element is contained.

Assigned To

The name of the person, function, or department responsible for making the change to the data element and the associated records and files.

Date Required

The date by which the change should be made (pending user approval).

Data Change

The type of change to be made on the data element.

Description of Change

A detailed narrative description (with examples when applicable) explaining the type of change that must be made to the data element. When a data dictionary is used, the data dictionary form should be attached to the data change form.

Comments

Information helpful in implementing the data change.

Prepared By

The name of the person who completed the form.

Date

The date on which the form was completed.

Application System: ________ Application Ident. #: ________ Change Ident. #: ________

Data Element Name: _________________________ Data Ident. #: _________________

Record Name: ______________________________ Record Ident. #: ______________

File Name: ________________________________ File Ident. #: __________________

Assigned To: ___________________________________ Date Required: ___________

Data Change

□ Add element.

□ Delete element.

□ Modify element attributes.

□ Modify element description.

Description of Change

  
  
  
  
  

Comments

  
  
  
  
  

Prepared By: _________________________________ Date: _____________________

Documenting Changes Needed in Each Program

The implementation of most changes will require some programming alterations. Even a change of data attributes often necessitates program changes. Some of these will be minor in nature, whereas others may be extremely difficult and time-consuming to implement.

The change required for each program should be documented on a separate form. This serves several purposes: First, it provides detailed instructions at the individual program level regarding what is required to change the program; second, it helps ensure that changes will be made and not lost—it is difficult to overlook a change that is formally requested; third, and equally important, it provides a detailed audit trail of changes, in the event problems occur.

Work Paper 12-15 is a form for documenting program changes. It should be completed even though doing so may require more time than the implementation of the change itself. The merits of good change documentation have been repeatedly established.

Table 12-15. Program Change Form

Field Requirements

FIELD

INSTRUCTIONS FOR ENTERING DATA

Application System

The name by which the application to be changed is known.

Application Ident. #

The identifier that uniquely describes the application system.

Change Ident. #

The sequential number used to identify the change.

Program Name

The name by which the program to be changed is known.

Number

The number that uniquely identifies the program.

Version Number

The version number that will be assigned to the altered program.

Date Required

The date on which the change is to be implemented, assuming the user approves the changes.

Assigned To

The name of the person who will make the change in the program.

Description of Change

A narrative description of the change to be made to this specific program. It should provide examples of programs produced before and after the change.

Source Statement Affected

A description of the source statement or statements that should be changed, together with the change to be made. The change may be described in terms of specifications rather than specific source statements.

Comments

Tips and techniques on how best to install the change in the application system.

Prepared By

The name of the person who completed the form.

Date

The date on which the form was completed.

  

Application System:_________ Application Ident. #: _______ Change Ident. #: _______

Program Name: _______________ Number: _______________ Version #: __________

New Version #: __________ Date Required: ___________ Assigned To: ___________

Description of Change

  
  

Source Statement Affected

  
  

Comments

  
  

Prepared By: ___________________________________ Date: __________________

Conducting Testing

Software change testing is normally conducted by both the user and software maintenance test team. The testing is designed to provide the user assurance that the change has been properly implemented. Another role of the software maintenance test team is to aid the user in conducting and evaluating the test.

Testing for software maintenance is normally not extensive. In an online environment, the features would be installed and the user would test them in a regular production environment. In a batch environment, special computer runs must be set up to run the acceptance testing. (Because of the cost, these runs are sometimes eliminated.)

An effective method for conducting software maintenance testing is to prepare a checklist providing both the administrative and technical data needed to conduct the test. This ensures that everything is ready at the time the test is to be conducted. A checklist for conducting a software maintenance acceptance test is illustrated in Work Paper 12-16. This form should be prepared by the software maintenance analyst as an aid in helping the user conduct the test. The information contained on the conduct acceptance test checklist is described on the form’s completion instructions sheet.

Table 12-16. Form Completion Instructions: Acceptance Test Checklist

Field Requirements

FIELD

INSTRUCTIONS FOR ENTERING DATA

Application Name

The name by which the application is known.

Number

The application identifier.

Change Ident. #

The sequence number that uniquely identifies the change.

Administrative Data

The administrative data relates to the management of the test.

Technical Data

The resources needed to conduct the acceptance test and the location of those resources. The information that should be documented about the needed resources includes:

  • Resource needed: the exact resource needed.

  • Location: the physical location of that resource. In many acceptance tests, the resources are marshalled in a common area to await conducting the test.

Application Name: __________ Number: ________ Change Ident. # ________

Administrative Data

Date of test                                    _____________________________________
Location of test                                _____________________________________
Time of test                                    _____________________________________
Information services person in charge of test   _____________________________________
User person in charge of test                   _____________________________________
Computer time available                         _____________________________________

Technical Data

                                               Available
      Resource Needed            Location    Yes   No   N/A
   1. Test transactions          ________    ___   ___   ___
   2. Master files/data base     ________    ___   ___   ___
   3. Operator instructions      ________    ___   ___   ___
   4. Special media/forms        ________    ___   ___   ___
   5. Acceptance criteria        ________    ___   ___   ___
   6. Input support personnel    ________    ___   ___   ___
   7. Output support personnel   ________    ___   ___   ___
   8. Control group              ________    ___   ___   ___
   9. External control proof     ________    ___   ___   ___
  10. Backup/recovery plan       ________    ___   ___   ___
  11. Security plan              ________    ___   ___   ___
  12. Error message actions      ________    ___   ___   ___

Prepared By: ________________________________ Date: _______________

Developing and Updating Training Material

Updating training material for users, and training users, is not an integral part of many software change processes. This task therefore describes a process for updating training material and conducting that training. Where training is not part of software maintenance, the testers can give these materials to the software maintenance analyst to use in developing training materials. If training is an integral part of the software maintenance process, the testers can use the material in this task as a guide for evaluating whether the training materials have been completely updated.

Training is an often-overlooked aspect of software maintenance. Many of the changes are small; this fosters the belief that training is not needed. Also, the fact that many changes originate in the user area leads the software maintenance analyst to the conclusion that the users already know what they want and have trained their staff accordingly. All these assumptions may be wrong.

The software maintenance analyst should evaluate each change for its impact on the procedures performed by people. If the change affects those procedures, then training material should be prepared. However, changes that increase performance and have no impact on users of the system do not require training unless they affect the operation of the system. In that case, computer operations personnel would need training. Training cannot be designed by someone who is unfamiliar with the existing training material: just as the maintenance change is incorporated into the application system, the corresponding training requirements must be incorporated into the existing training material. Therefore, it behooves the application project personnel to maintain an inventory of training material.

Training Material Inventory Form

Most application systems have limited training materials. The more common types of training materials include the following:

  • Orientation to the project narrative

  • User manuals

  • Illustrations of completed forms and instructions for completing them

  • Explanation and action to take on error listings

  • Explanation of reports and how to use them

  • Explanation of input data and how to enter it

A form for inventorying training material is included as Work Paper 12-17. This form should be completed and filed with the software maintenance analyst. Whenever a change is made, the form can be duplicated, and at that point the “needs updating” column can be completed to indicate whether training material must be changed as a result of incorporating the maintenance need. The columns to be completed on the form are explained on the form’s completion instructions sheet.
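The “needs updating” screening the form supports can be sketched as follows; the inventory entries and field names below are hypothetical examples, not prescribed by Work Paper 12-17.

```python
# Illustrative training material inventory with the needs-updating flag set
# for a particular change; entries are invented for the example.
inventory = [
    {"name": "User manual", "description": "How to operate the system", "needs_updating": True},
    {"name": "Error listing guide", "description": "Actions to take on errors", "needs_updating": False},
    {"name": "Report explanations", "description": "How to use each report", "needs_updating": True},
]

def materials_to_update(inventory):
    """Return the training materials flagged 'Yes' in the needs-updating column."""
    return [item["name"] for item in inventory if item["needs_updating"]]

print(materials_to_update(inventory))  # ['User manual', 'Report explanations']
```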

Table 12-17. Form Completion Instructions: Training Material Inventory

Field Requirements

FIELD

INSTRUCTIONS FOR ENTERING DATA

Application Name

The name by which the application is known.

Number

The application identifier.

Change Ident. #

The sequence number that uniquely identifies the change.

Training Material Name

The name or number by which the training material is known.

Training Material Description

A brief narrative description of what is contained in the training material.

Needs Updating

Columns to be completed whenever a change is installed. The columns provide an indication of whether the training material needs updating (Yes column) or does not need updating (No column).

Prepared By

The name of the individual responsible for maintaining the inventory.

Date

The last date on which the inventory was updated.

Application Name: ________ Number: __________ Change Ident. # _________

                                                                  Needs Updating
Training Material Name/Number     Training Material Description     Yes     No
_____________________________     _____________________________     ___     ___
_____________________________     _____________________________     ___     ___
_____________________________     _____________________________     ___     ___
_____________________________     _____________________________     ___     ___
_____________________________     _____________________________     ___     ___

Prepared By: _____________________________ Date: ________________

Training Plan Work Paper

The training plan work paper is a why, who, what, where, when, and how approach to training. The individual developing the plan must answer those questions about each change to determine the scope of training programs. Points to ponder in developing training programs are as follows:

  • Why conduct training? Do the changes incorporated into the application system necessitate training people?

  • Who should be trained? If training is needed, then it must be determined which individuals, categories of people, or departments require that training.

  • What training is required? The training plan must determine the content of the necessary training material.

  • Where should training be given? The location of the training session, or dissemination of the training material, can affect how and when the material is presented.

  • When should training be given? Confusion might ensue if people are trained too far in advance of the implementation of new procedures. For example, even training provided a few days prior to the change may cause confusion because people might be uncertain as to whether to follow the new or the old procedure. In addition, it may be necessary to conduct training both immediately before and immediately after the change to reinforce the new procedures and to answer questions immediately after the new procedures are installed.

  • How should the training material be designed? The objective of training is to provide people with the tools and procedures necessary to do their job. The type of change will frequently determine the type of training material to be developed.

  • What are the expected training results? The developers of the training plan should have in mind the behavior changes or skills to be obtained through the training sessions. They should also determine whether training is effective.

Work Paper 12-18 documents the training plan by providing space to indicate the preceding types of information. In addition, the responsible individual and the dates needed for training can also be documented on the form. The information contained on the training plan work paper is described on the form’s completion instructions sheet.
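One way to picture the work paper’s contents is as a record answering the why, who, what, where, when, and how questions for each group. The field names and sample answers below are invented for illustration and are not part of the form.

```python
# Hypothetical training plan entry for one group needing training.
training_plan_entry = {
    "group": "Data entry clerk",
    "why": "New input screen changes the keying procedure",
    "what": "Walkthrough of the revised entry screens",
    "where": "User department",
    "when": "Shortly before cutover, with a follow-up session after",
    "how": "Supervisor-led demonstration plus a self-study handout",
    "desired_result": "Clerks key the new transaction format without assistance",
}

def unanswered(entry):
    """Flag any of the planning questions left blank on the form."""
    return [question for question, answer in entry.items() if not answer]

print(unanswered(training_plan_entry))  # []
```

A blank answer to any question signals that the plan is incomplete before training dates are committed.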

Table 12-18. Form Completion Instructions: Training Plan

Field Requirements

FIELD

INSTRUCTIONS FOR ENTERING DATA

Application Name

The name by which the application is known.

Number

The application identifier.

Change Ident. #

The sequence number that uniquely identifies the change.

Individual Responsible for Training

The individual with the overall responsibility for ensuring that all the training material is prepared, taught, and evaluated prior to the implementation of the change.

Training Plan

The details of why, who, what, where, when, how, and the results to be derived from the training plan. The remainder of the form deals with this plan.

Group Needing Training

The name of the individual, type of person, or department requiring training. The groups to consider include:

  • Transaction origination staff: the people who originate data into the application system.

  • Data entry clerk: the person who transcribes data to computer media.

  • Control group—information services: the group responsible for ensuring that all input is received and that output is reasonable.

  • Control group—user: the group in the user area responsible for the accuracy, completeness, and authorization of data.

  • Computer operations: the group responsible for running the application on computer hardware.

  • Records retention: the group or groups responsible for saving backup data.

  • Third-party customers: people with unsatisfied needs or people who are the ultimate recipients of reports.

  • User management and staff: the group responsible for the application.

  • Other: any other involved party requiring training.

Training Approach

The why, what, where, when, and how of the training plan.

Desired Results

The expected result, behavior change, or skills to be gained from the training material.

Training Dates

Important dates for implementing the training plan.

Comments

Any material helpful in designing, teaching, or evaluating the training material.

Individual Who Accepts Training as Sufficient

The name of the individual or department who must agree that the training is adequate. This individual should also concur with the training plan.

Date

The date the training plan was developed.

Application Name: ________ Number: __________ Change Ident. # _________

Individual Responsible for Training ____________________________________

Training Plan

      Group Needing Training                 Training Approach    Desired Result
   1. Transaction origination staff          _________________    ______________
   2. Data entry clerk                       _________________    ______________
   3. Control group—information services     _________________    ______________
   4. Control group—user                     _________________    ______________
   5. Computer operations                    _________________    ______________
   6. Records retention                      _________________    ______________
   7. Third-party customers                  _________________    ______________
   8. User management and staff              _________________    ______________
   9. Other: ____________                    _________________    ______________
  10. Other: ____________                    _________________    ______________

Training Dates

Date training material prepared   ____________________
Date training can commence        ____________________
Date training to be completed     ____________________

Comments
_____________________________________________________________
_____________________________________________________________

Individual Who Accepts Training as Sufficient            Date
__________________________________                       ________________________

Preparing Training Material

The tasks required to perform this step are similar to those used in making a change to an application system. In most instances, training material will exist, but will need to be modified because of the change. Changes in the program must be accompanied by changes in the training material. Individuals responsible for modifying training should consider the following tasks:

  • Identifying the impact of the change on people

  • Determining what type of training must be “unlearned” (people must be stopped from doing certain tasks)

  • Determining whether “unlearning” is included in the training material

  • Making plans to delete outmoded training material

  • Determining what new learning is needed (this should come from the training plan)

  • Determining where in the training material that new learning should be inserted

  • Preparing the training material that will teach people the new skills (this should be specified in the training plan)

  • Designing that material

  • Determining the best method of training (this should be documented in the training plan)

  • Developing procedures so that the new training material will be incorporated into the existing training material on the right date, and that other supportive training will occur at the proper time

An inventory should also be maintained of the new/modified training modules, in addition to the hardcopy training material inventory. Because the training modules are designed to support that training material, the module inventory helps determine which modules need to be altered to achieve the behavior changes/new skills required by the change.

Work Paper 12-19 is a training module inventory form. This should be completed by the individual responsible for training. The information contained on the form is described on the form’s completion instructions, and both are found at the end of the chapter.

Table 12-19. Form Completion Instructions: New/Modified Training Modules

Field Requirements

FIELD

INSTRUCTIONS FOR ENTERING DATA

Application Name

The name by which the application is known.

Number

The application identifier.

Change Ident. #

The sequence number that uniquely identifies the change.

Training Module Inventory

The remainder of the information on the form describes the modules.

Training Module Description

A brief narrative of the training module. The location of the training material should be identified so that it can be easily obtained.

Description of Change

As the training module becomes modified, this column should contain a sequential listing of all the changes made. In effect, it is a change history for the training module.

Training Material

The course material included in the training module.

Who Should Be Trained

The individual(s) to whom the training module is directed.

Method of Training

The recommended way in which the training module should be used.

Prepared By

The name of the individual who prepared the module.

Date

The date on which it was last updated.

Application Name: _____________ Number: _____________ Change Ident. # ___________

Training Module Inventory

Training Module   Description     Training     Who Should     Method of
Description       of Change       Material     Be Trained     Training*
_______________   ___________     ________     __________     _________
_______________   ___________     ________     __________     _________
_______________   ___________     ________     __________     _________
_______________   ___________     ________     __________     _________

*Method of Training: Meeting, Classroom, Self-study, New Procedure, Supervisor, or Other.

Prepared By: _____________________________________ Date: _________________

Conducting Training

The training task is primarily one of coordination in that it must ensure that everything needed for training has been prepared. The coordination normally involves these steps:

  1. Schedule training dates.

  2. Notify the people who should attend.

  3. Obtain training facilities.

  4. Obtain instructors.

  5. Reproduce the material in sufficient quantity for all those requiring the material.

  6. Train instructors.

  7. Set up the classroom or meeting room.

Many times, training will be provided through manuals or special material delivered to the involved parties. The type of training should be determined when the training plan is developed and the material is prepared.

A training checklist should be prepared. A sample checklist for conducting training is illustrated in Work Paper 12-20. The individual responsible for training should prepare this checklist for use during the training period to ensure all the needed training is provided. The information included on the conduct training checklist is described on the form’s completion instructions sheet. Both forms are found at the end of the chapter.
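The coordination record behind such a checklist might be sketched as follows; the names, departments, and dates are invented for illustration and are not part of Work Paper 12-20.

```python
# Hypothetical attendance records for the training checklist; a missing
# 'taken' date means the person has not yet received the training.
attendees = [
    {"name": "A. Jones", "department": "Accounts", "scheduled": "June 1", "taken": "June 1"},
    {"name": "B. Smith", "department": "Accounts", "scheduled": "June 1", "taken": None},
]

def not_yet_trained(attendees):
    """People whose scheduled training has no 'taken' date recorded."""
    return [a["name"] for a in attendees if a["taken"] is None]

print(not_yet_trained(attendees))  # ['B. Smith']
```

Tracking individuals by name, as the form recommends, is what makes a follow-up list like this possible.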

Table 12-20. Form Completion Instructions: Conduct Training Checklist

Field Requirements

FIELD

INSTRUCTIONS FOR ENTERING DATA

Application Name

The name by which the application is known.

Number

The application identifier.

Change Ident. #

The sequence number that uniquely identifies the change.

Training Checklist

The remainder of the form contains the checklist information, which is:

  • Name of individual requiring training: whenever possible, actual names should be used, as opposed to groups of people, so records can be maintained as to whether or not the people actually took the training.

  • Department: the department/organization with which the individual is affiliated.

  • Training required: the training modules and/or material to be given the individual.

  • Dates: the dates on which the course is to be given or the training material is to be disseminated to the individual. The scheduled dates should be listed, as well as the date the individual actually took the course or received the material.

  • Location: the location of the course or the location to which the training material should be distributed.

  • Instructor: the name of the responsible individual should be listed.

  • Comments: any other information that would verify that training took place. In classroom situations where examinations are given, the space could be used to record that grade.

Prepared By

The name of the individual preparing the form; this should be the person responsible for ensuring the training is given.

Date

The date on which the form was prepared.

Application Name: ____________ Number: _____________ Change Ident. # ____________

Training Checklist

                                                             Dates
Name of Individual
Requiring Training   Department   Training Required   Scheduled   Taken   Location   Instructor   Comments
__________________   __________   _________________   _________   _____   ________   __________   ________
__________________   __________   _________________   _________   _____   ________   __________   ________
__________________   __________   _________________   _________   _____   ________   __________   ________
__________________   __________   _________________   _________   _____   ________   __________   ________
__________________   __________   _________________   _________   _____   ________   __________   ________
__________________   __________   _________________   _________   _____   ________   __________   ________

Prepared By: __________________________________ Date: __________________

Check Procedures

Work Paper 12-21 is a quality control checklist for Task 1, Work Paper 12-22 is a quality control checklist for Task 2, and Work Paper 12-23 is a quality control checklist for Task 3.
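Any of these YES/NO/N/A checklists can be tallied the same way; the sketch below is illustrative only, and the two sample responses are hypothetical.

```python
# Tally a quality control checklist and collect the NO items for follow-up.
def summarize(responses):
    """Count YES/NO/N/A answers and list the items answered NO."""
    counts = {"YES": 0, "NO": 0, "N/A": 0}
    follow_up = []
    for item, answer in responses:
        counts[answer] += 1
        if answer == "NO":
            follow_up.append(item)
    return counts, follow_up

counts, follow_up = summarize([
    ("Has an acceptance test plan been developed?", "YES"),
    ("Have interim acceptance opinions been issued?", "NO"),
])
print(counts)     # {'YES': 1, 'NO': 1, 'N/A': 0}
print(follow_up)  # ['Have interim acceptance opinions been issued?']
```

The NO items, together with their comments, become the action list for the responsible task owner.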

Table 12-21. Acceptance Testing Quality Control Checklist

For each item, check YES, NO, or N/A and record any comments.

  1. Has acceptance testing been incorporated into the test plan?
  2. Is acceptance testing viewed as a project process, rather than as a single step at the end of testing?
  3. Have the appropriate users of the software or hardware components been selected to develop the acceptance criteria for those components?
  4. Does the group that defines the acceptance criteria represent all uses of the component to be tested?
  5. Do those individuals accept the responsibility of identifying acceptance criteria?
  6. Have the acceptance criteria been identified early enough in the project so that they can influence planning and implementation?
  7. Has an acceptance test plan been developed?
  8. Does that plan include the components of an acceptance test plan as outlined in this chapter?
  9. Is the acceptance test plan consistent with the acceptance criteria?
 10. Have appropriate interim products been reviewed by the acceptance testers before being used for the next implementation task?
 11. Have the appropriate testing techniques been selected for acceptance testing?
 12. Do the acceptance testers have the skill sets necessary to perform acceptance testing?
 13. Have adequate resources for performing acceptance testing been allocated?
 14. Has adequate time to perform acceptance testing been allocated?
 15. Have interim acceptance opinions been issued?
 16. Has the project team reacted positively to the acceptance testers’ concerns?
 17. Has a final acceptance decision been made?
 18. Is that decision consistent with the acceptance criteria that have been met and not met?
 19. Have the critical acceptance criteria been identified?
 20. Are the requirements documented in enough detail that the software interfaces can be determined?
 21. Do both user management and customer management support use case testing?
 22. Has a system boundary diagram been prepared for the software being tested?
 23. Does the system boundary diagram identify all of the interfaces?
 24. Have the individuals responsible for each interface on the new system boundary diagram been identified?
 25. Do the actors agree to participate in developing use cases?
 26. Has a use case been defined for each system boundary?
 27. Do the users of the software agree that the use case definitions are complete?
 28. Have at least two test cases been prepared for each use case?
 29. Have both a successful and an unsuccessful test condition been identified for each use case?
 30. Do the users of the software agree that the test case work paper identifies all of the probable scenarios?

Table 12-22. Pre-Operational Testing Quality Control Checklist

For each item, check YES, NO, or N/A and record any comments.

  1. Is each change reviewed for its impact upon the restart/recovery plan?
  2. If a change impacts recovery, is the newly estimated downtime calculated?
  3. If the change impacts recovery, is the new downtime risk estimated?
  4. Are the changes that need to be made to the recovery process documented?
  5. Is the notification of changes to the production version of an application documented?
  6. Are changes to application systems controlled by an application control change number?
  7. Are there procedures to delete unwanted program versions from the source, test, and object libraries?
  8. Are program deletion requests documented so that production is authorized to delete programs?
  9. Are procedures established to ensure that program versions will go into production on the correct day?
 10. If it affects operating procedures, are operators notified of the date new versions go into production?
 11. Are procedures established to monitor changed application systems?
 12. Do the individuals monitoring the process receive notification that an application system has been changed?
 13. Do the people monitoring changes receive clues regarding the areas impacted and the probable problems?
 14. Do the people monitoring application system changes receive guidance on what actions to take if problems occur?
 15. Are problems that are detected immediately following changes documented on a special form so they can be traced to a particular change?
 16. Are the people documenting problems asked to document the impact of the problem on the organization?
 17. Is software change installation data collected and documented?
 18. Does information services management review and use the feedback data?
 19. Does information services management periodically review the effectiveness of installing the software change?

Table 12-23. Testing and Training Quality Control Checklist

For each item, check YES, NO, or N/A and record any comments.

  1. Are software maintenance analysts required to develop a test plan?
  2. Must each change be reviewed to determine if it has an impact on training?
  3. If a change has an impact on training, do procedures require that a training plan be established?
  4. Is an inventory prepared of training material so that it can be updated?
  5. Does the training plan make one individual responsible for training?
  6. Does the training plan identify the results desired from training?
  7. Does the training plan indicate the who, why, what, where, when, and how of training?
  8. Does the training plan provide a training schedule, including dates?
  9. Is an individual responsible for determining if training is acceptable?
 10. Are all of the training modules inventoried?
 11. Does each training module have a history of the changes made to the module?
 12. Is one individual assigned responsibility for testing?
 13. Does the test plan list each measurable change objective and the method of testing that objective?
 14. Does the test plan list the desired results from testing?
 15. Does the test plan address regression testing?
 16. Does the test plan address intersystem testing?
 17. Is someone responsible for judging whether testing is acceptable?
 18. Is an acceptance testing checklist prepared to determine whether the necessary resources are ready for the test?
 19. Does the acceptance testing checklist include the administrative aspects of the test?
 20. Is a training checklist prepared that indicates which individuals need training?
 21. Is a record kept of whether or not individuals receive training?
 22. Is each test failure documented?
 23. Is each training failure documented?
 24. Are test failures corrected before the change goes into production?
 25. Are training failures corrected before the change goes into production?
 26. If the change is put into production before testing/training failures have been corrected, are alternative measures taken to assure the identified errors will not cause problems?
 27. Is feedback data identified?
 28. Is feedback data collected?
 29. Is feedback data regularly reviewed?
 30. Are control concerns identified?
 31. Does information services management periodically review the training and testing of software changes?

Output

Two outputs are produced from Task 1 at various times, as follows:

  1. Interim product acceptance opinion. An opinion as to whether an interim product is designed to meet the acceptance criteria.

  2. Final acceptance decision. Relates to a specific hardware or software component regarding whether it is acceptable for use in production.

There are both interim and final outputs to Task 2. The interim outputs are the various reports that indicate any problems that arise during installation. Problems may relate to installation, deletion of programs from the libraries, or production. Whoever performs these testing tasks should notify the appropriate organization to make adjustments and/or corrections.

The ongoing monitoring process will also identify problems. These problems may involve the software, the users of the software, or both. For example, problems may occur in the procedures provided to users to interact with the software, or the users may be inadequately trained to use the software. All of these problems need to be reported to the appropriate organization.

The output of Task 3 will answer the questions and/or provide the information in the following subsections.

Is the Automated Application Acceptable?

The automated segment of an application is acceptable if it meets the change specification requirements. If it fails to meet those measurable objectives, the system is unacceptable and should be returned for additional modification. This requires setting measurable objectives, preparing test data, and then evaluating the results of those tests.

The responsibility for determining whether the application is acceptable belongs to the user. In applications with multiple users, one user may be appointed responsible. In other instances, all users may test their own segments or they may act as a committee to verify whether the system is acceptable. The poorest approach is to delegate this responsibility to the information technology department.

Test results can be verified through manual or automated means. The tediousness and effort required for manual verification have caused many information technology professionals to shortcut the testing process. When automated verification is used, the process is not nearly as time-consuming, and tends to be performed more accurately.
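A minimal sketch of such automated verification, assuming expected results are recorded per transaction (the transaction ids and amounts below are invented):

```python
# Compare actual test output against expected results and report differences.
expected = {"TXN-001": 150.00, "TXN-002": 75.25, "TXN-003": 0.00}
actual   = {"TXN-001": 150.00, "TXN-002": 75.20, "TXN-003": 0.00}

def mismatches(expected, actual):
    """Return the transaction ids whose actual result differs from expected."""
    return [txn for txn in expected if actual.get(txn) != expected[txn]]

print(mismatches(expected, actual))  # ['TXN-002']
```

Even a comparison this simple removes the tedious item-by-item checking that tempts testers to shortcut verification.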

A difficult question to answer in terms of acceptability is whether 100 percent correctness is required on the change. For example, if 100 items are checked and 99 prove correct, should the application be rejected because of the one remaining problem? The answer to this question depends on the importance of that one remaining item.

Users should expect that their systems will operate as specified. Even so, the user may decide to install the application and correct an error after implementation. The user has two options when installing a change known to have an error. The first is to ignore the problem and live with the results. For example, if a heading is misplaced or misspelled, the user may decide that this type of error, although annoying, does not affect the use of the output results. The second option is to make the adjustments manually. For example, if necessary, final totals can be manually calculated and added to the reports. In either case, the situation should be temporary.
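The acceptance decision described above weighs the criticality of each failed item rather than the raw pass rate. A sketch of that rule follows; the items and severity flags are hypothetical.

```python
# Accept only if every critical item passed; cosmetic failures may be
# tolerated temporarily and corrected after implementation.
results = [
    {"item": "Heading spelled correctly", "passed": False, "critical": False},
    {"item": "Final totals correct",      "passed": True,  "critical": True},
]

def acceptable(results):
    """99 of 100 items passing is not enough if the one failure is critical."""
    return all(r["passed"] for r in results if r["critical"])

print(acceptable(results))  # True: the only failure is cosmetic
```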

Automated Application Segment Failure Notification

Each failure noted during testing of the automated segment of the application system should be documented. If it is known that the change will not be corrected until after the application is placed into production, a problem identification form should be completed to document the problem. However, if the change is to be corrected during the testing process, then a special form should be used for that purpose.

A form for notifying the software maintenance analyst that a failure has been uncovered in the automated segment of the application is illustrated in Work Paper 12-24. This form is to be used as a correction vehicle within the test phase, and should be prepared by the individual uncovering the failure. It is then sent to the software maintenance analyst in charge of the change for correction. The information contained on the automated application segment test failure notification form is described on the form’s completion instructions sheet.
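The sequential failure numbers on the form are what make a lost or mislaid form detectable. A sketch of that gap check (the sample numbers are invented):

```python
# Detect missing failure numbers in the sequentially assigned series.
def missing_failure_numbers(numbers):
    """Report gaps in the sequence of recorded failure numbers."""
    if not numbers:
        return []
    full = set(range(min(numbers), max(numbers) + 1))
    return sorted(full - set(numbers))

print(missing_failure_numbers([1, 2, 4, 5, 7]))  # [3, 6]
```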

Table 12-24. Form Completion Instructions: Automated Application Segment Test Failure Notification

Field Requirements

 

FIELD

INSTRUCTIONS FOR ENTERING DATA

Application Name

The name by which the application is known.

Number

The application identifier.

Change Ident. #

The sequence number that uniquely identifies the change.

Description of Failure

A brief description of the condition that is believed to be unacceptable. In most instances, the detailed information would be presented orally, as would the documentation supporting the failure. The purpose of the form is to record the problem and control the implementation. The information contained in this section includes:

  • Test Date: the date of the test.

  • Failure #: a sequentially increasing number used to control the identification and implementation of problems. If a form is lost or mislaid, it will be noticed because a failure number will be missing.

  • System Change Objective Failed: the measurable change objective that was not achieved.

  • Description of Failure: a brief description of what is wrong.

Recommended Correction

Corrections suggested by the individual uncovering the failure or the software maintenance analyst after an analysis of the problem. The type of information included in the recommendation is:

  • Programs Affected: all the programs that contributed to the failure.

  • Data Affected: all the data elements, records, or files that contributed or were involved in the failure.

  • Description of Correction: a brief description of the recommended solution.

Correction Assignments

This section is completed by the software maintenance analyst to assign the correction of the failure to a specific individual. At a minimum, this should include:

  • Correction Assigned To: the individual making the correction.

  • Date Correction Needed: the date by which the correction should be made.

  • Comments: suggestions on how to implement the solution.

Prepared By

The name of the individual who uncovered the failure.

Date

The date on which the form was prepared.

Application Name: ________ Number: _________ Change Ident. # _______

Description of Failure

Test Date _______________________________ Failure # _____________
System Change Objective Failed __________________________________
___________________________________________________________
Description of Failure ____________________________________________
___________________________________________________________

Recommended Correction

Programs Affected _____________________________________________
____________________________________________________________
Data Affected _________________________________________________
____________________________________________________________
Description of Correction ________________________________________
____________________________________________________________

Correction Assignments

Correction Assigned To __________________________________________
Date Correction Needed _________________________________________
Comments ____________________________________________________
_____________________________________________________________
_____________________________________________________________

Prepared By: __________________________ Date: ___________________

Is the Manual Segment Acceptable?

Users must make the same acceptability decisions on the manual segments of the application system as they make on the automated segments. Many of the manual segments do not come under the control of the maintenance systems analyst. Even so, the correct processing of the total system remains a concern of the maintenance systems analyst.

The same procedures followed in verifying the automated segment should be followed in verifying the manual segment. The one difference is that there are rarely automated means for verifying manual processing. Verifying manual segments can take as much—if not more—time than verifying the automated segment. The more common techniques to verify the correctness of the manual segment include the following:

  • Observation. The person responsible for verification observes people performing the tasks. That person usually develops a checklist from the procedures and then determines whether the individual performs all of the required steps.

  • Application examination. The people performing the task demonstrate whether they can perform it correctly. For example, in a data entry operation, the data entry operator may be asked to enter information in a controlled mode.

  • Verification. The person responsible for determining that the training is correct examines the results of processing from the trained people to determine whether they comply with the expected processing.
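The observation technique above amounts to checking the steps a person actually performs against a checklist of required steps. A minimal sketch of that comparison follows; the step names and checklist contents are illustrative assumptions, not taken from the text.

```python
# Hypothetical observation checklist for a data entry procedure.
# The step names are assumptions for illustration only.
REQUIRED_STEPS = [
    "log receipt of source document",
    "verify batch control total",
    "enter data in controlled mode",
    "file source document",
]

def verify_observation(performed_steps):
    """Return the required steps the observed person skipped,
    in checklist order."""
    performed = {step.lower() for step in performed_steps}
    return [step for step in REQUIRED_STEPS if step not in performed]

# The observer records what was actually done; the skipped steps
# become the items to note on the checklist.
missed = verify_observation([
    "Log receipt of source document",
    "Enter data in controlled mode",
])
```

In practice the observer would note each skipped step on the checklist and follow up with the individual or the training material, as described in the surrounding text.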

If the training is not acceptable, the user must decide again whether to delay the change. In most instances, the user will not delay the implementation of change if there are only minor problems in training, but instead will attempt to compensate for those problems during processing. On the other hand, if it becomes apparent that the users are ill-equipped to use the application, the change should be delayed until the individuals are better trained.

The methods that users can incorporate to overcome minor training deficiencies include the following:

  • Restrict personnel. The new types of processing are performed only by people who have successfully completed the training. Thus, those who need more skills have time to obtain them before they begin using the new procedures or data.

  • Supervisory review. Supervisors can spend extra time reviewing the work of people to ensure that the tasks are performed correctly.

  • Information technology assistance. The software maintenance analysts/programmers can work with user personnel during an interim period to help them process the information correctly.

  • Overtime. Crash training sessions can be held in the evening or on weekends to bring the people up to the necessary skill level.

Training Failure Notification Form

Training failures should be documented at the same level of detail as failures of the computerized segment; procedural errors can cause problems just as serious as incorrect computer code. Unless these failures are documented, people can easily overlook the problem and assume someone else will correct it.

Each failure uncovered in training should be documented on a training failure notification form. This form should be completed by the individual who uncovers the problem, and then presented to the individual responsible for training for necessary action. A form that can be used to document training failures is illustrated in Work Paper 12-25. The information contained on the training failure notification form is described on the form’s completion instructions sheet.

Table 12-25. Form Completion Instructions: Training Failure Notification

Field Requirements

FIELD

INSTRUCTIONS FOR ENTERING DATA

Application Name

The name by which the application is known.

Number

The application identifier.

Change Ident. #

The sequence number that uniquely identifies the change.

Description of Failure

The details of the training failure need to be described. At a minimum, this would include:

  • Failure #: a sequentially increasing number used to control the failure form.

  • Test Date: the date on which the test occurred.

  • People Not Adequately Trained: the names of the individuals, categories of people, or departments who could not adequately perform their tasks.

  • Failure Caused By Lack of Training: a description of why the training was inadequate.

Recommended Correction

Suggestions for correcting the failure. This section can be completed by the individual uncovering the failure, by the systems analyst, or by both. The type of information helpful in correcting the training failure includes:

  • Training Material Needing Revisions: the specific material that should be modified to correct the problem.

  • New Method of Training Needed: suggestions for varying the training method.

  • People Needing Training: all of the people who may need new training.

  • Description of Correction: a brief explanation of the recommended solution.

Correction Assignments

Assignments made by the individual responsible for training. At a minimum, each assignment would include:

  • Correction Assigned To: name of individual who will make the necessary adjustments to training material.

  • Training Material Needing Correction: the specific training document(s) that need changing.

  • Comments: recommendations on how to change the training material.

Prepared By

The name of the individual who uncovered the failure.

Date

The date on which the failure occurred.

Application Name: __________ Number: _________ Change Ident. # _________

___________________________________________________________________

Description of Failure

Test Date __________________________________ Failure # __________________

People Not Adequately Trained ___________________________________________

____________________________________________________________________

Failure Caused By Lack of Training _________________________________________

____________________________________________________________________

  

Recommended Correction

Training Materials Needing Revisions ________________________________________

_____________________________________________________________________

New Method of Training Needed _____________________________________________________________________

People Needing Training _____________________________________________________________________

______________________________________________________________________

Description of Correction __________________________________________________

______________________________________________________________________

  

Correction Assignments

Correction Assigned To __________________________________________________

Training Material Needing Correction ________________________________________

______________________________________________________________________

Comments _____________________________________________________________

______________________________________________________________________

______________________________________________________________________

  

Prepared By ____________________________________ Date: __________________
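If an organization tracks these notifications electronically rather than on paper, the form above maps naturally onto a simple record. The sketch below is a hypothetical rendering of the form's fields as a data structure, using the field names from the completion instructions; none of the sample values come from the text.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TrainingFailureNotification:
    """One training failure, mirroring the Work Paper 12-25 fields."""
    application_name: str
    application_number: str
    change_ident: str
    failure_number: int               # sequentially increasing control number
    test_date: str
    people_not_adequately_trained: List[str]
    failure_cause: str                # why the training was inadequate
    training_material_needing_revision: str = ""
    description_of_correction: str = ""
    correction_assigned_to: str = ""
    comments: str = ""
    prepared_by: str = ""

# Illustrative example only; the application and names are invented.
notification = TrainingFailureNotification(
    application_name="Payroll",
    application_number="P-01",
    change_ident="C-107",
    failure_number=1,
    test_date="2024-03-15",
    people_not_adequately_trained=["data entry clerks"],
    failure_cause="new rejection codes not covered in training session",
)
```

Keeping the failures in a structured form makes it easier to produce the feedback counts discussed later in this chapter, such as the number of problems encountered per change.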

Guidelines

Acceptance testing is a critical part of testing. Guidelines to make it effective include the following:

  • Incorporate acceptance criteria into the test plan. Although this chapter suggests a separate test plan and acceptance test plan, they can in fact be incorporated, in which case the test plan will use the acceptance criteria as the test plan objectives.

  • Include information systems professionals on the acceptance test team. The acceptance test team needs information system skills as well as business skills for the areas affected by the hardware/software being acceptance tested. Acceptance testers must be able to understand information systems and to effectively communicate with information systems professionals.
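When the test plan and acceptance test plan are combined as the first guideline suggests, each acceptance criterion becomes a test plan objective. The sketch below shows one way that folding-in might look; the criterion identifiers and wording are invented for illustration.

```python
# Hypothetical acceptance criteria for a combined test plan.
# Identifiers and wording are assumptions, not from the chapter.
acceptance_criteria = {
    "AC-1": "A full day's transactions process within the batch window",
    "AC-2": "All rejected transactions appear on the suspense report",
}

# Each acceptance criterion becomes one test plan objective,
# tracked through the test effort.
test_plan = {
    "objectives": [
        {"id": criterion_id, "objective": text, "status": "not run"}
        for criterion_id, text in acceptance_criteria.items()
    ]
}
```

The benefit of this mapping is traceability: when every objective traces to an acceptance criterion, the acceptance decision at the end of testing can be read directly from the objectives' pass/fail status.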

Feedback enables IT management and users to monitor each phase of the software maintenance process. The feedback information relates to the processes and controls operational during each phase. During the installation of the change, management is able to measure the overall success of the software maintenance process. This gathered data is some of the most valuable. The types of feedback information that have proved most valuable include the following:

  • Number of changes installed

  • Number of changes installed by application

  • Number of problems encountered with installed changes

  • Number of old program versions deleted

  • Number of new program versions installed

  • Number of conditions monitored

  • Number of changes not installed on time
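Most of the feedback counts listed above can be tallied from a simple log of change records. The following sketch assumes a minimal record layout (application name, installed flag, on-time flag, problem count); the layout and sample data are illustrative, not prescribed by the text.

```python
from collections import Counter

# Hypothetical change log; the records are invented for illustration.
changes = [
    {"application": "payroll", "installed": True, "on_time": True,  "problems": 0},
    {"application": "payroll", "installed": True, "on_time": False, "problems": 2},
    {"application": "billing", "installed": True, "on_time": True,  "problems": 1},
]

# Number of changes installed
installed = sum(1 for c in changes if c["installed"])

# Number of changes installed, by application
by_application = Counter(c["application"] for c in changes if c["installed"])

# Number of problems encountered with installed changes
problems = sum(c["problems"] for c in changes if c["installed"])

# Number of changes not installed on time
late = sum(1 for c in changes if c["installed"] and not c["on_time"])
```

Counts such as old and new program versions would come from the library management records rather than the change log, but the tallying pattern is the same.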

The following should help in performing Task 3:

  • Making test adjustments. Corrections to problems should be implemented in the application system and then the system should be retested. When a new change is entered to the application system (even a change made during testing), the maintenance software analyst should not assume that previously tested segments will work correctly. It is quite possible that the new change has caused problems to unchanged portions. Unfortunately, it may mean that much of the testing already completed may have to be repeated.

  • Making training adjustments. Identified training adjustments can be made in numerous ways. The methods selected will obviously depend on the type of failure uncovered. In some instances, a single individual may have been overlooked and the training can be presented to that person individually. In other cases, new training material may have to be prepared and taught.

The procedures described in this section for developing training materials apply equally to correcting training materials. In addition, if people have been improperly instructed, steps may have to be taken to inform them of the erroneous training and then to provide them with the proper training.

Summary

The IT department, including both developers and testers, has processes in place to build the specified system. Developers and/or testers might challenge whether those specifications are accurate and complete; however, in many organizations, developers implement the specifications, and testers test to determine whether those specifications have been implemented as specified.

At the conclusion of the IT development and test processes, software can be placed into an operational state. This step addresses testing after the IT developers and testers have completed their work processes. This testing may involve the team that developed and tested the software, or it may be done independently of the software developers and testers.

The acceptance and operational testing included in this step involves acceptance testing by the customer/users of the software and pre-operational testing, which ensures that when the software system is moved from a test environment to a production environment it performs correctly, and that when the software system is changed, both the changed and unchanged portions still perform as specified.

At the conclusion of acceptance and operational testing, a decision is made as to whether the software should be placed into a production state. At that point, testing of that software system is complete. The remaining step (Step 7) is a post analysis by the testers to evaluate the effectiveness and efficiency of testing the software system and to identify areas in which testing could be improved in future projects.

 
