Chapter 18. Testing COTS and Contracted Software

(Note: Much of the material in this chapter derives from a forthcoming book on testing and supporting COTS applications by William E. Perry, Randall Rice, William Bender, and Christina Laiacona.)

Increasingly, organizations are buying software from stores. This software is sometimes referred to as “shrink-wrap” software or commercial off-the-shelf (COTS) software. The fact that it is commercially available does not mean that it is defect free, or that it will meet the needs of the user. COTS software must be tested.

Contracted software, or outsourced software, is a variation of COTS. The commonality between the two is that an organization other than the one using the software builds and tests the software. In contrast to COTS software, however, software development that is outsourced entails a closer relationship between the organization using the software and the organization building the software. Often, that closer relationship allows the contracting organization access to developers and/or the internal documentation regarding the software under development.

Over time, more organizations will rely on COTS software than software developed in-house. Although organizations will therefore need fewer or no software developers, in-house testing will still need to be performed. This chapter explains the role of testers when their organization acquires COTS software.

Overview

COTS software must be made to look attractive if it is to be sold. Thus, developers of COTS software emphasize its benefits. Unfortunately, there is often a difference between what the user believes the software can accomplish and what it actually does accomplish. Therefore, this chapter recommends both static and dynamic testing. Static testing concentrates on the user manual and other documentation; dynamic testing examines the software in operation. Note that normally you will have to purchase the software to perform these tests (unless the software developer provides a trial version for testing purposes). However, the cost to purchase is usually insignificant compared to the problems that can be caused by software that does not meet an organization’s needs. The cost of testing is always less than the cost of improper processing.

The testing process in this chapter is designed for COTS software, and for contracted or outsourced software for which a close relationship with the developer does not exist. However, the testing process presented in this chapter can be modified to test contracted software developed according to an organization’s specified requirements. This chapter first explains the differences between COTS and contracted software. The chapter then discusses the changes that may need to be made to the testing process when software is custom developed under contract with outside sources.

COTS Software Advantages, Disadvantages, and Risks

This section distinguishes between COTS and contracted software, highlighting not only the differences but also the inherent advantages and disadvantages of COTS software and the testing challenges organizations face with COTS software. The final subsection details the risks organizations face when implementing COTS software.

COTS Versus Contracted Software

The two major differences between COTS and contracted software are as follows:

  • Who writes the requirements. With COTS software, the developer writes the requirements. With contracted software, the contracting organization writes the requirements. Testers then perform verification testing on the requirements specified in the contract. This type of testing cannot be performed with COTS software.

  • Ability to influence and test during development. During development, the only limits imposed on testers of contracted software are the contract provisions. In contrast, an organization seeking software cannot usually test COTS software during development at all (unless as part of a beta testing scheme).

Therefore, the main difference between the testing of COTS software and contracted software is that when the development of software is contracted, testing can occur prior to the delivery of the software.

COTS Advantages

Organizations gain multiple potential advantages when deploying COTS products, including the following:

  • They reduce the risks that come with internal software development. Software projects are inherently risky. With COTS products, the vendor assumes the risks.

  • They reduce the costs of internal software development. Software development is costly. With COTS products, the vendor spreads the costs over a population of customers.

  • They increase the speed and reliability of delivering applications. You don’t have to wait months or years for a system to become reality. COTS products are available immediately. The time-consuming part is the acquisition, integration, and testing of the products to deliver the right solution.

  • They increase the possible sources of software. In-house software typically has one or a few sources: in-house developers, contracted developers, and perhaps outsourced development. In contrast, a variety of vendors might offer COTS products to meet a need.

  • They tend to be higher-quality software. Although COTS products have defects, they tend to have fewer overall defects than software developed in-house. According to Capers Jones, in his book Software Assessments, Benchmarks, and Best Practices (Addison-Wesley Professional, 2000), management information systems (MIS) applications have an average defect removal efficiency of about 85 percent, compared to 91 percent for commercial (vendor) software. This metric is derived by dividing the defects found by the producer by the total defects found during the life span of the application (a small calculation sketch follows this list). The metric does not take into account defect severity.

  • They enable organizations to leverage newer technology. To stay competitive and thus in business, COTS product vendors are motivated to stay current with technology. As operating systems and other applications progress in technology, COTS products must also evolve to maintain and support their customer base.

  • They enable easier integration with other applications. Although there are integration issues with COTS products in general, applications developed in-house may have even more integration issues because private, nonstandard interfaces may be developed and used by in-house developers.
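The defect removal efficiency figure cited in the list above is straightforward to compute once defect counts are available. The sketch below shows the arithmetic with purely hypothetical counts; it illustrates the metric and is not data from the Jones study.

```python
def defect_removal_efficiency(found_by_producer: int, total_found_over_life: int) -> float:
    """Defects found by the producer divided by all defects found over the
    application's life span (producer-found plus field-reported)."""
    if total_found_over_life == 0:
        return 1.0  # nothing found anywhere; treat removal as fully efficient
    return found_by_producer / total_found_over_life

# Hypothetical counts for illustration only.
producer_defects = 850   # found by the developer/vendor before release
field_defects = 150      # reported by users after release
dre = defect_removal_efficiency(producer_defects, producer_defects + field_defects)
print(f"Defect removal efficiency: {dre:.0%}")  # prints 85%
```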

COTS Disadvantages

COTS products are not without their disadvantages (challenges), including the following:

  • Selecting the right package. There may be many alternatives in the marketplace, but getting the best fit may require time to evaluate several alternatives.

  • Finding the right product. After searching for and evaluating multiple products, you may realize that there are no acceptable products to meet your requirements.

  • Product changes driven by vendors and other users. When it comes to COTS products, you are one voice among many in the marketplace. Although you might like to see certain product features implemented, you have little control over a product’s direction. The U.S. federal government used to have a lot of control over certain product directions; in the PC era, however, private-sector demand has greatly increased, and as a result, government influence has decreased.

  • Dealing with vendor licensing and support issues. A frequent complaint among COTS product customers is that vendors often change their licensing practices with little or no notice to the customers. Many times, the changes favor the vendor.

  • Integrating with existing and future applications. Many times, COTS products are used with other products. The integration with some products may be rather seamless, whereas other products may require extensive effort to develop interfaces.

  • Testing from an external perspective. As compared to applications developed in-house, COTS products do not allow the tester to have an internal perspective regarding the product for test-case development. Black-box testing (explained in the next section) is almost always the testing approach.

  • Continual testing for future releases. COTS testing is never done. The product evolves, the environment evolves, and so do the interfaces and the applications connected through them.

  • Lack of control over the product’s direction or quality. Perhaps one of the most troubling issues with COTS products is that the customer has no control over certain things, such as the future direction of the product. Sometimes products are sold to other vendors, who then abandon support for a product because it competes with other products they own. Obviously, such practices frustrate organizations when they find a COTS product that meets their needs (but that might not meet their needs in the future).

Implementation Risks

As COTS products are deployed, some of the risks are as follows:

  • Functional problems. Some of the product features will not work at all, and some will work in a way that is less than desired. This risk can be mitigated by defining feature requirements before evaluating products.

  • Security issues. Currently, we seem to be in a “fix on failure” mode when it comes to COTS security. About the only line of defense is for customers and users to be constantly aware of new security vulnerabilities and stay current with patches.

  • Compatibility issues. There is also the risk that the product may work well in some environments but not in others. This risk can be mitigated by evaluating the product in all applicable environments.

  • Integration and interoperability issues. The product implemented may require extensive efforts to integrate it with other applications. This risk can be mitigated by performing a “proof of concept” in the evaluation phase, as well as talking with other customers who have successfully integrated the same product into their operations.

  • Vendor issues. There is always a risk that a vendor will go out of business, sell a product to a competitor, or drop support for a product. This risk can be mitigated to a small degree by making vendor stability and support points of evaluation criteria.

  • Procurement and licensing issues. These are typically contracting concerns, but can also be mitigated to some extent by including such issues as evaluation criteria.

  • Testing issues. Project sponsors and management often assume that the COTS vendor does most of the testing. Because of this misperception, testing is minimized, and major problems are missed until after deployment. You can minimize this risk with a full understanding of testing as a shared task between vendors and customers, and an understanding that testing is an ongoing job.

It is generally advisable to maintain a repository for user-reported problems and to appoint a manager for each COTS software package. All problems are reported to that individual, who determines what action needs to be taken and then notifies all users of the software.

Testing COTS Software

The testing of COTS products presents a number of challenges, including the following:

  • Unknown structure. Structure in this case means more than just code—it can extend to interfaces, data stores, add-ins, APIs, and other elements of the architecture.

  • Unknown functional requirements. COTS products are a good example of testing something without benefit of documented requirements. You may have access to a user manual or training guide, but those are not the same as documented requirements.

  • Requires an external, black-box approach. Testing is almost always a black-box effort. This means there will be some functional tests that are unnecessary and some structural tests that are missed.

  • Testing integration “glue” with other applications. Integration glue is what holds one COTS application to other applications (and perhaps to the operating system, too). This glue may be developed by vendors or by in-house developers. The challenge is to understand these points of integration, where they are used, and how to validate them.

  • Compatibility across platforms. Many organizations have multiple platforms to span when using a particular COTS product. Ideally, the product will be compatible on all the platforms. However, even in operating systems from the same vendor, a product often behaves differently on various platforms. Some COTS products will not work at all on some related operating systems. For this reason, a degree of compatibility testing is often required.

  • Release schedules. Just when you think you have one test of a COTS product finished, a new version may be released that totally changes the look and feel of the product. These kinds of changes also affect the test cases, test scripts, and test data you have in place. Most COTS product release schedules are spaced months apart, but there can also be service packs and subreleases to fix problems.

  • Continual regression testing. When you consider the number of elements that work together in the COTS environment, nearly everything is in flux. That’s why regression testing is needed. You do not have to test everything continuously, but you do need a workable set of test cases you can run as changes are seen in the operational environment.

  • Technology issues. The COTS product itself is only one element of the application to be tested. Because technology changes rapidly, the overall technical environment changes rapidly, too. These changes can include standards as well as software and hardware components. This fact reinforces the idea that testing is never really finished for a COTS product.

  • Test-tool issues. The test automation issues in COTS are huge. Ideally, you want automated test scripts that do not require extensive maintenance. There are two problems here, at least:

    • The COTS products under test will change (probably often) to keep up with new technology.

    • Test tools will change to keep up with technology about 6 to 12 months after the technology is introduced.

This means that there is often a window of time when you will not be able to automate the tests you would like to automate, or have been able to automate in the past. This situation is unlikely to improve.

Testing Contracted Software

The major differences between testing COTS software and testing contracted software are as follows:

  • Importance of vendor reputation. COTS testing focuses on the application itself: the organization is buying a specific application, and its concern is the applicability and quality of that application for achieving a specific organizational objective. With contracted software, the application does not yet exist, so the reputation and ability of the vendor that will build and maintain it carry more weight. In both cases, testers should be involved in formulating the criteria used to select the vendor and its software.

  • Access to software developers. Rarely does an organization acquiring COTS software have access to the software developers. Normally, they will work with a vendor’s marketing group and help desk for answers to questions regarding a COTS application. With contracted software, the contract can indicate the type of access that the acquiring organization wants with the software developers.

  • Ability to impact development. With COTS software, the acquiring organization rarely has the ability to influence the development of the application. They may influence changes to the software but rarely the initial release of the software. With contracted software, the contract can indicate that the acquiring organization can meet with developers and propose alternative development methods for building and documenting the software.

  • Requirements definition. With COTS software, the acquiring organization does not write the requirements for the software package. With contracted software, the acquiring organization writes the requirements. Therefore, with contracted software, if testers are going to be involved prior to acceptance testing they need to focus on the requirements to ensure that they are testable. In addition, testers may want to use verification-testing methods such as participating in reviews throughout the development cycle.

  • The number of vendors involved in software development. With COTS software, generally only a single vendor develops the software. With contracted software, there may be multiple vendors. For example, some organizations use one vendor to select another vendor that will build the software. Some organizations contract with yet another vendor to test the software. Testing contracted software may therefore involve working with multiple vendors and coordinating among them.

Objective

The objective of a COTS testing process is to provide the highest possible assurance of correct processing with minimal effort. However, this testing process should be used only for noncritical COTS software. If the software is critical to the ongoing operations of the organization, it should be subjected to full-scale system testing, as described in Part Three of this book. The testing in this process might be called 80-20 testing because it attempts, with 20 percent of the testing effort, to catch 80 percent of the problems. That 80 percent should include almost all significant problems (if any exist). Later in this chapter, you also learn how to test contracted software.

Concerns

Users of COTS software should also be concerned about the following:

  • Tasks/items missing. A variance between what is advertised or described in the manual and what is actually in the software.

  • Software fails to perform. The software does not correctly perform the tasks/items it was designed to perform.

  • Extra features. Features not specified in the instruction manual may be included in the software. This poses two problems. First, the extra tasks may cause problems during processing; and second, if you discover the extra task and rely on it, it may not be included in future versions.

  • Does not meet business needs. The software does not fit with the user’s business needs.

  • Does not meet operational needs. The system does not operate in the manner, or on the hardware configuration, expected by the user.

  • Does not meet people needs. The software does not fit with the skill sets of the users.

Workbench

A workbench for testing COTS software is illustrated in Figure 18-1. The workbench shows three static tasks: test business fit, test system fit, and test people fit. A fourth task is the dynamic test when the software is in an executable mode and the processing is validated. As stated earlier, the tests are designed to identify the significant problems, because a tester cannot know all the ways in which COTS software might be used (particularly if the software is disseminated to many users within an organization).


Figure 18-1. Workbench for testing COTS software.

Input

This testing process requires two inputs: the manuals (installation and operation) that accompany the COTS software, and the software itself. The manuals describe what the software is designed to accomplish and how to perform the tasks necessary to accomplish the software functions. Note that in some instances the user instructions are contained within the software. In such cases, the first few screens of the software may explain how to use the software.

Do Procedures

The execution of this process involves four tasks plus the check procedures. The process assumes that those conducting the test know how the software will be used in the organization. If the tester does not know how the software will be used, an additional step is required for the tester to identify the software functionality that users need. The following subsections describe the four tasks.

Task 1: Test Business Fit

The objective of this task is to determine whether the software meets your needs. The task involves carefully defining your business needs and then verifying whether the software in question will accomplish them. The first step of this task is defining business functions in a manner that can be used to evaluate software capabilities. The second step of this task is to match software capabilities against business needs. At the end of this task, you will know whether a specific software package is fit for your business.

Step 1: Testing Needs Specification

This test determines whether you have adequately defined your needs, which should be defined in terms of the following two categories:

  • Products/reports output. Products/reports output refers to specific documents that you want produced by the computer system. In many instances, the style and format of the output products are important. Consider, for instance, a check/invoice accounting system. The exact layout of the check does not have to be defined, just the categories of information to be included on it. Computer-produced reports may also be important for tax information (e.g., employee withholding forms), financial statements where specific statements are wanted (e.g., balance sheets or statements of income and expense), or customer invoice and billing forms (which you might want preprinted to include your logo and conditions of payment).

  • Management information. This category defines the information needed for decision-making purposes. In the output product/report category, you were looking for a document; in this case, you are looking for information, and how that information is provided is unimportant. Therefore, the structure, size, frequency, and volume of the documents that carry it are not significant. All you need is the information.

No form is provided for documenting these needs; the method of documentation is unimportant. Writing them on a yellow pad is sufficient. However, it is important to define, document, and have those needs available when you begin your software-selection process.

After documenting your needs, evaluate them using the 10-factor test of completeness of business requirements illustrated in Work Paper 18-1. This evaluation consists of a cause-effect test that attempts to identify the potential causes of poor needs definition. This test indicates the probability that you have completely documented your needs. To complete this evaluation, follow these steps:

  1. Familiarize yourself with the documented business needs.

  2. Consider each of the ten items in Work Paper 18-1 one at a time as they relate to the documented business needs. This review challenges the adequacy of your business needs based on your personal knowledge of the business. Thus, this test must be done by someone knowledgeable in the business. (Note: It can be done by two or more people, if appropriate. In such cases, a consensus can be arrived at by either averaging the assessments or negotiating a common assessment.)

  3. Indicate your agreement or disagreement with the statement based on your understanding of each item. Consider, for example, the first item in Work Paper 18-1 (i.e., that the system will experience very few changes over time). For each item assessed with regard to that statement, indicate whether you:

    • SA. Strongly agree with the statement.

    • A. Agree with the statement.

    • N. Neither agree nor disagree with the statement (i.e., are basically neutral and are not sure whether the statement is applicable or inapplicable).

    • D. Disagree with the statement.

    • SD. Strongly disagree with the statement.

    Check the appropriate assessment column for each of the ten statements.

  4. Calculate the assessment score as follows: For each item checked SA, score 5 points; for each A, score 4 points; for each N, score 3 points; for each D, score 2 points; for each SD, score 1 point. Sum the points for all ten statements; your final score will range between 10 and 50. (A minimal scoring sketch follows these steps.)

    The score can be assessed as follows:

    • 10–25 points: Poorly defined requirements. You are not ready to consider buying a software package; do some additional thinking and discussion about this need.

    • 26–37 points: The needs are barely acceptable, particularly at the low end of the range. Although you have a good start, you may want to do some clarification of the reports or decision-making information.

    • 38–50 points: Good requirements. In this range, you are ready to continue the software testing process.
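The scoring rule in step 4 is simple enough to capture in a few lines. The following is a hypothetical helper for tallying one completed Work Paper 18-1, not part of the work paper itself.

```python
# Point values and scoring ranges from the step above.
POINTS = {"SA": 5, "A": 4, "N": 3, "D": 2, "SD": 1}

def assess_requirements(responses):
    """responses: one rating (SA, A, N, D, or SD) for each of the 10 statements."""
    if len(responses) != 10:
        raise ValueError("Work Paper 18-1 has exactly ten statements")
    score = sum(POINTS[r] for r in responses)
    if score <= 25:
        verdict = "poorly defined requirements -- not ready to consider buying a package"
    elif score <= 37:
        verdict = "barely acceptable -- clarify the reports and decision-making information"
    else:
        verdict = "good requirements -- ready to continue the software testing process"
    return score, verdict

score, verdict = assess_requirements(["SA", "A", "A", "N", "A", "SA", "A", "D", "A", "N"])
print(f"Assessment score {score}: {verdict}")  # score 38: good requirements
```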

Work Paper 18-1. Test of Completeness of Business Requirements

Legend: SA = Strongly agree (5 points); A = Agree (4 points); N = Neither agree nor disagree (3 points); D = Disagree (2 points); SD = Strongly disagree (1 point)

For each statement, check one assessment column (SA, A, N, D, or SD) and use the Comments column to explain your rating. The Assessment Score is the sum of the point values for the checked assessments.

  1. The system will experience few changes over time.

  2. All involved parties agree the needs are well defined.

  3. The use of the results of the application will require very little judgment on the part of the users of the computer outputs.

  4. The input to the system is well defined.

  5. The outputs from the system and the decision material are well defined.

  6. The users of the system are anxious to have the area automated.

  7. The users want to participate in the selection and implementation of the software.

  8. The users understand data processing principles.

  9. The application does not involve any novel business approach (i.e., an approach that is not currently being used in your business).

  10. The users do not expect to find other good business ideas in the selected software.

At the conclusion of this test, you will either go on to the next test or clarify your needs. My experience indicates that it is a mistake to pass this point without well-defined needs.

Step 2: Testing CSFs

This test tells whether the software package will successfully meet your business needs, or critical success factors (CSFs). CSFs are those criteria or factors that must be present in the acquired software for it to be successful. You might ask whether the needs are the same as the CSFs. In essence they are, but the needs are not defined in a manner that makes them testable, and they may be incomplete. Often the needs do not take into account some of the intangible criteria that make the difference between success and failure. In other words, the needs define what we are looking for, and the critical success factors tell us how we will evaluate that product after we get it. They are closely related and complementary, but different in scope and purpose.

The following example lists the needs/requirements for an automobile, followed by the CSFs on which the automobile will be evaluated:

  • Automobile requirements/needs:

    • Seats six people

    • Four doors

    • Five-year guarantee on motor

    • Gets 20 miles or more per gallon

    • Costs less than $12,000

  • Critical success factors:

    • Operates at 20.5 cents or less per mile

    • Experiences no more than one failure per year

    • Maintains its appearance without showing signs of wear for two years

Some of the more common CSFs for COTS applications are as follows:

  • Ease of use. The software is understandable and usable by the average person.

  • Expandability. The vendor plans to add additional features in the future.

  • Maintainability. The vendor will provide support/assistance to help utilize the package in the event of problems.

  • Cost-effectiveness. The software package makes money for your business by reducing costs and so on.

  • Transferability. If you change your computer equipment, the vendor indicates that they will support new models or hardware.

  • Reliability. Software is reliable when it performs its intended function with required precision.

  • Security. The system has adequate safeguards to protect the data against damage (for example, power failures, operator errors, or other goofs that could cause you to lose your data).

The CSFs should be listed for each business application under consideration. Work Paper 18-2, which is a test of fit, provides space to list those factors. Note that in most applications there are eight or fewer CSFs. Therefore, this test is not as time-consuming as it might appear.

Work Paper 18-2. Test of Fit

Field Requirements

  • Business Application. Name of the business application being tested.

  • Number. A number that sequentially identifies a CSF.

  • Critical Success Factors (CSF). A factor that must be met in order for the business application to be successful.

  • Meets CSF. An assessment as to whether a specific CSF has been met (Yes or No), with a Comments column to explain how the assessment was determined.

The body of the work paper identifies the business application being tested and provides one row per CSF, with columns for Number, Critical Success Factors, Meets CSF (Yes/No), and Comments.

Once the CSFs have been listed on Work Paper 18-2, the work paper can be used to test the applicability of the software package under evaluation. (Work Paper 18-2 provides space to identify the software package being tested.) When making the evaluation, consider the following factors:

  • Thorough understanding of the business application

  • Knowledge of the features of the software package

  • Ability to conceptualize how the software package will function on a day-to-day basis

  • Use of CSFs to indicate whether you believe one of the following:

    • There is a high probability that the software package will meet the CSF. (Mark an X in the Yes column.)

    • The software package does not have a high probability of meeting the CSF. (Mark an X in the No column.)

    • There is roughly a 50–50 probability of the software package’s success. (Mark an X in the appropriate column and then clarify your assessment in the Comments column.)

At the conclusion of this test, you will have matched your business needs against the software capabilities and assessed the probability of the software’s success. If the probability of success is low (i.e., there are several No responses or highly qualified Yes responses), you should probably not adopt this software package. Clearly, additional study and analysis is warranted before you move forward and expend the resources to implement a potentially unsuccessful system.
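If you want to keep the Work Paper 18-2 results in machine-readable form, a simple record per CSF is enough. The structure and decision rule below are a hypothetical sketch of the evaluation just described, not part of the work paper itself.

```python
from dataclasses import dataclass

@dataclass
class CSFAssessment:
    number: int
    factor: str
    meets_csf: bool   # True = X in the Yes column, False = X in the No column
    comments: str = ""

def low_probability_of_success(assessments, acceptable_no_count=0):
    """Per the guidance above: several No responses (or heavily qualified Yes
    responses recorded in the comments) suggest the package should not be adopted."""
    no_count = sum(1 for a in assessments if not a.meets_csf)
    return no_count > acceptable_no_count

results = [
    CSFAssessment(1, "Ease of use by the average user", True),
    CSFAssessment(2, "Transferability to new hardware", False, "vendor roadmap unclear"),
    CSFAssessment(3, "Maintainability (vendor support available)", True, "support contract required"),
]
print("Do not adopt:", low_probability_of_success(results))
```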

Task 2: Test Operational Fit

The objective of this task is to determine whether the software will work in your business. Within your business, several constraints must be satisfied before you acquire the software, including the following:

  • Computer hardware constraints

  • Data preparation constraints

  • Data entry constraints

  • Other automated-processing constraints (e.g., if data from this software package must be fed into another software package, or receive data from another software package, those interface requirements must be defined)

At the end of this task, you will know whether the software fits into the way you do business and whether it will operate on your computer hardware.

This task involves three steps to ensure an appropriate fit between the software being evaluated and your in-house systems.

Step 1: Test Compatibility

This is not a complex test. It involves a simple matching between your processing capabilities and limitations and what the vendor of the software says is necessary to run the software package. The most difficult part of this evaluation is ensuring that the multiple software packages can properly interface.

This test is best performed by preparing a checklist that defines your compatibility needs. Software vendors are generally good about identifying hardware requirements and operating system compatibility. They are generally not good at identifying compatibility with other software packages.

In addition to the hardware on which the software runs, and the operating system with which it must interact, there are two other important compatibilities: compatibility with other software packages and compatibility with available data. If you have no other software packages that you want to have interact with this one, or no data on computer-readable media, you need not worry about these aspects of compatibility. However, as you do more with your computer, these aspects of compatibility will become more important (while the hardware and operating compatibility will become routine and easy to verify).

Finding someone who can tell you whether you have program and/or data compatibility is difficult. That someone must understand data formats, know what data format programs use, and know that those programs or data will work when they are interconnected. In many instances, trial and error is the only method of determination. However, the fact that one program cannot read data created by another program does not mean that the original data cannot be reused. For example, some utility programs can convert data from one format to another.

To prepare a compatibility list for the purpose of testing, use the information listed here (a minimal machine-readable sketch of such a checklist follows the list):

  • Hardware compatibility. List the following characteristics for your computer hardware:

    • Vendor

    • Amount of main storage

    • Disk storage unit identifier

    • Disk storage unit capacity

    • Type of printer

    • Number of print columns

    • Type of terminal

    • Maximum terminal display size

    • Keyboard restrictions

  • Operating systems compatibility. List the following for the operating system used by your computer hardware:

    • Name of operating system (e.g., UNIX or Windows)

    • Version of operating system in use

  • Program compatibility. List all the programs that you expect or would like this specific application to interact with. Be sure that you have the name of the vendor and, if applicable, the version of the program. Note that this linkage may be verifiable only by actually having two or more systems interact using common data.

  • Data compatibility. In many cases, program compatibility will answer the questions on data compatibility. However, if you created special files, you may need descriptions of the individual data elements and files. Again, as with program compatibility, you may have to actually verify through trial and error whether the data can be read and used by other programs. Note that in Step 3 (demonstration) you will have the opportunity to try to use your own data or programs to see whether you can utilize common data and pass parameters from program to program.
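A compatibility checklist of this kind can also be kept as a simple machine-readable structure so that unverified interfaces stand out. The names and values below are hypothetical placeholders, not vendor requirements.

```python
# Hypothetical compatibility checklist; fill in your own hardware, operating
# system, program, and data entries.
compatibility_checklist = {
    "hardware": {
        "vendor": "ExampleCorp",
        "main_storage_mb": 512,
        "disk_capacity_gb": 40,
        "printer_type": "laser",
        "print_columns": 132,
        "terminal_display": "80 x 25",
    },
    "operating_system": {"name": "Windows", "version": "XP"},
    "program_compatibility": [
        {"program": "GeneralLedger", "vendor": "ExampleCorp", "version": "3.1", "verified": False},
    ],
    "data_compatibility": [
        {"file": "customers.dat", "format": "fixed-length records", "readable": None},
    ],
}

def unverified_interfaces(checklist):
    """Interfaces that still must be proven by trial (e.g., during the demonstration)."""
    pending = [p["program"] for p in checklist["program_compatibility"] if not p["verified"]]
    pending += [d["file"] for d in checklist["data_compatibility"] if d["readable"] is None]
    return pending

print("Verify during the demonstration:", unverified_interfaces(compatibility_checklist))
```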

Step 2: Integrate the Software into Existing Work Flows

Each computer business system makes certain assumptions. Unfortunately, these assumptions are rarely stated in the vendor literature. The drawback is that you often must do some manual processing functions that you may not want to do in order to utilize the system. In such cases, you can search for COTS software to automate the manual processes.

The objective of this test is to determine whether you can plug the COTS software into your existing system without disrupting your entire operation. Remember the following:

  • Your current system is based on a certain set of assumptions.

  • Your current system uses existing forms, existing data, and existing procedures.

  • The COTS software is based on a set of assumptions.

  • The COTS software uses a predetermined set of forms and procedures.

  • Your current system and the new COTS software may be incompatible.

  • If they are incompatible, the current business system and the COTS software are not going to change—you will have to.

  • You may not want to change—then what?

Testing the fit of the COTS software into your existing system requires you to prepare a document flow diagram or narrative description. A document flow diagram is a pictorial or narrative description of how your process is performed. Using it, you plug the COTS software into your existing system and then determine whether you like what you see. If you do, the COTS software has passed this test. If not, you either have to change your existing method of doing work or search for other software.

The document flow diagram is really more than a test. At the same time that it tests whether you can integrate the COTS software into your existing system, it shows you how to do it. It is both a system test and a system design methodology incorporated into a single process. To prepare the document flow diagram or narrative description, these three steps must be performed:

  1. Prepare a document flow of your existing system. Through personal experience or inquiry, quickly put down in document flow format the steps required to complete the process as it is now performed. Because there will be 15 or fewer steps in most instances, this should take only a few minutes.

  2. Add the COTS software’s responsibilities to the document flow diagram. Use a colored pencil to cross out each of the tasks now being performed manually that will be performed by the computer. Indicate the tasks you will continue to perform manually in a different pencil color. If the computer is going to perform tasks that were not performed before, indicate those with a third color. At the end of this exercise, you will have a clearly marked list of which manual tasks will be replaced by the computer, which manual tasks will remain, and which new tasks have been added.

  3. Modify the manual tasks as necessary. Some of the manual tasks can stay as is; others will need to be added or modified. Again, do this in a different color. Different pencil colors enable you to highlight and illustrate these changes.

The objective of this process is to illustrate the type and frequency of work-flow changes that will occur. You can see graphically what will happen when the computer system is brought into your organization. For example, there might be tasks performed now that weren’t performed before, tasks that were previously performed but are no longer necessary, or tasks that had been performed by people but will now be performed by the computer. Having the computer perform those tasks might mean that the oversight people had been providing will no longer be available.

At the end of this test, you must decide whether you are pleased with the revised work flow. If you believe the changes can be effectively integrated into your work flow, the potential COTS software integration has passed the test. If you think work-flow changes will be disruptive, you may want to fail the software in this test and either look for other software or continue manual processing.

If the testing is to continue, prepare a clean document flow diagram indicating what actions need to be taken to integrate the computer system into your organization’s work flow. This new document flow diagram becomes your installation plan of action. It will tell you what changes need to be made, who is involved in them, what training might be necessary, and where potential work-flow problems lie.
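If you want the marked-up document flow in a form you can update as decisions change, a tagged list of steps serves the same purpose as the colored pencils. The steps below are hypothetical examples, not a prescribed work flow.

```python
from collections import Counter

# Each step of the existing work flow, tagged with its disposition once the
# COTS software is plugged in (the tags correspond to the pencil colors).
document_flow = [
    ("Receive customer order form", "stays manual"),
    ("Check customer credit limit", "now automated"),
    ("Enter order into the package", "new task"),
    ("File paper copy of order", "stays manual"),
    ("Price order and extend totals", "now automated"),
]

for tag, count in Counter(tag for _, tag in document_flow).items():
    print(f"{tag}: {count} step(s)")
```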

Step 3: Demonstrate the Software in Action

This test analyzes the many facets of software. Software developers are always excited when their program goes to what they call “end of job.” This means that, among other things, it executes and concludes without abnormally terminating (i.e., stops after doing all the desired tasks). Observing the functioning of software is like taking an automobile for a test drive. The more rigorous the test, the greater the assurance you are getting what you expect.

Demonstrations can be performed in either of the following ways:

  • Computer store, controlled demonstration. In this mode, the demonstration is conducted at the computer store, by computer store personnel, using their data. The objective is to show you various aspects of the computer software, but not to let you get too involved in the process. This is done primarily to limit the time involved in the demonstration.

  • Customer-site demonstration. In this mode, the demonstration takes place at your site, under your control, by your personnel, using your information. It is by far the most desirable of all demonstrations, but many computer stores may not permit it unless you first purchase the COTS software.

These aspects of computer software should be observed during the demonstration:

  • Understandability. As you watch and listen to the demonstration, you need to evaluate the ease with which the operating process can be learned. If the commands and processes appear more like magic than logical steps, you should be concerned about implementation in your organization. If you have trouble figuring out how to do something, think about how difficult it may be for some of your clerical personnel who understand neither the business application nor the computer.

  • Clarity of communication. Much of the computer process is communication between man and machine. That is, you must learn the language of the computer software programs in order to communicate with the computer. Communication occurs through a series of questions and responses. If you do not understand the communications, you will have difficulty using the routine.

  • Ease of use of instruction manual. While monitoring the use of the equipment, the tasks being demonstrated should be cross-referenced to the instruction manual. Can you identify the steps performed during the demonstration with the same steps in the manual? In other words, does the operator have to know more than is included in the manual, or are the steps to use the process laid out so clearly in the manual that they appear easy to follow?

  • Functionality of the software. Ask to observe the more common functions included in the software: Are these functions described in the manual? Are these the functions that the salesperson described to you? Are they the functions that you expected? Concentrate extensively on the applicability of those functions to your business problem.

  • Knowledge to execute. An earlier test has already determined the extent of the salesperson’s knowledge. During the demonstration, evaluate whether a lesser-skilled person could as easily operate the system with some minimal training. Probe the demonstrator about how frequently he runs the demonstration and how knowledgeable he is about the software.

  • Effectiveness of help routines. Help routines are designed to get you out of trouble. For example, if you are not sure how something works, you can type the word “help” or an equivalent, and the screen should provide you additional information. Even without typing “help,” it should be easy to work through the routines from the information displayed onscreen. Examine the instructions and evaluate whether you believe you could have operated the system based on the normal instructions. Then ask the operator periodically to call the help routines to determine their clarity.

  • Evaluate program compatibility. If you have programs you need to interact with, attempt to have that interaction demonstrated. If you purchased other software from the same store where you are now getting the demonstration, they should be able to show you how data is passed between the programs.

  • Data compatibility. Take one of your data files with you. Ask the demonstrator to use your file as part of the software demonstration. This will determine the ease with which existing business data can be used with the new software.

  • Smell test. While watching the demonstration, let part of your mind be a casual overseer of the entire process. Attempt to get a feel for what is happening and how that might affect your business. You want to have a sense of whether you feel good about the software. If you have concerns, attempt to articulate them to the demonstrator as well as possible to determine how the demonstrator responds and addresses those concerns.

To determine whether an individual has the appropriate skill level to use the COTS software, involve one or more typical potential users of the COTS software in the demonstrations (i.e., Task 3) and in the validation of the software processing (i.e., Task 4). If the selected users can perform those dynamic tests with minimal support, it is reasonable to assume that the average user will possess the skills necessary to master the use of the COTS software. On the other hand, if the selected user appears unable to operate the software in a dynamic mode, it is logical to assume that significant training and/or support will be required to use this COTS software.

Task 3: Test People Fit

The objective of this task is to determine whether your employees can use the software. This testing consists of ensuring that your employees have or can be taught the necessary skills.

This test evaluates whether people possess the skills necessary to effectively use computers in their day-to-day work. The evaluation can be of current skills or the program that will be put into place to teach individuals the necessary skills. Note that this includes the owner-president of the organization as well as the lowest-level employee.

First you select a representative sample of the people who will use the software. The sample need not be large. Then this group is given training, which might involve simply handing someone the manuals and software. The users then attempt to use the software for the purpose for which it is intended. The results of this test will show one of the following:

  1. The software can be used as is.

  2. Additional training/support is necessary.

  3. The software is not usable with the skill sets of the proposed users.

Task 4: Acceptance-Test the Software Process

The objective of this task is to validate that the COTS software will, in fact, meet the functional and structural needs of users.

We have divided testing into functional and structural testing, which also could be called correctness and reliability testing. “Correctness” means that the functions produce the desired results. “Reliability” means that the correct results will be produced under actual business conditions.

Step 1: Create Functional Test Conditions

It is important to understand the difference between correctness and reliability because such an understanding affects both testing and operation. Let’s look at a test example to verify whether gross pay was properly calculated. This could be done by entering a test condition showing 30 hours of work at $6 per hour. If the program works correctly, it produces $180 gross pay. If this happens, we can say that the program is functionally correct. These are the types of tests that should be prepared under this category.
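Expressed as an automated check, the gross-pay condition looks like the sketch below. The calculate_gross_pay function is a hypothetical stand-in; with a COTS package you would enter the transaction through the package itself and compare its displayed or printed result to the expected value.

```python
def calculate_gross_pay(hours: float, rate: float) -> float:
    """Hypothetical stand-in for the payroll calculation performed by the package."""
    return round(hours * rate, 2)

def test_gross_pay_regular_hours():
    # Functional (correctness) condition: 30 hours at $6.00/hour should yield $180.00.
    assert calculate_gross_pay(hours=30, rate=6.00) == 180.00

test_gross_pay_regular_hours()
print("Gross pay condition passed")
```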

The types of test conditions that are needed to verify the functional accuracy and completeness of computer processing include the following:

  • All transaction types to ensure they are properly processed

  • Verification of all totals

  • Assurance that all outputs are produced

  • Assurance that all processing is complete

  • Assurance that controls work (e.g., input can be balanced to an independent control total)

  • Reports that are printed on the proper paper, and in the proper number of copies

  • Correct field editing (e.g., decimal points are in the appropriate places)

  • Logic paths in the system that direct the inputs to the appropriate processing routines

  • Employees who can input properly

  • Employees who understand the meaning and makeup of the computer outputs they generate

The functional test conditions should be those defined in the test plan. However, because some of the test methods and business functions may be general in nature, interpreting them and creating specific test conditions may significantly increase the number of test conditions. To help in this effort, a checklist of typical functional test conditions is provided as Work Paper 18-3.

Work Paper 18-3. Functional Test Condition Checklist

For each item, record Yes, No, or N/A, with comments as needed. Have tests for the following conditions been prepared?

  1. Test conditions for each input transaction

  2. Variations of each input transaction for each special processing case

  3. Test conditions that will flow through each logical processing path

  4. Each internal mathematical computation

  5. Each total on an output verified

  6. Each functional control (e.g., reconciliation of computer controls to independent control totals)

  7. All the different computer codes

  8. The production of each expected output

  9. Each report/screen heading and column heading

  10. All control breaks

  11. All mathematical punctuation and other editing

  12. Each user’s preparation of input

  13. Completeness of prepared input

  14. User’s use of output, including the understanding and purpose for each output

  15. A parallel test run to verify computer results against those which were produced manually

  16. Matching of two records

  17. Nonmatching of two records

The objective of this checklist is to help ensure that sufficient functional test conditions are used. As test conditions of the types listed on Work Paper 18-3 are completed, place a check mark next to that line. When the test conditions are complete, evaluate any types that remain unchecked to determine whether they are needed.

Step 2: Create Structural Test Conditions

Structural, or reliability, test conditions are challenging to create and execute. Novices to the computer field should not expect to do extensive structural testing. They should limit their structural testing to conditions closely related to functional testing. However, structural testing is easier to perform as computer proficiency increases. This type of testing is quite valuable.

Some of the easier-to-perform structural testing relates to erroneous input. In some definitions of testing, this reliability testing is included in functional testing. It is treated here as structural because, when the input is correct, the system performs in a functionally correct way; handling incorrect input is therefore not a purely functional problem.

Most of the problems that are encountered with computer systems are directly associated with inaccurate or incomplete data. This does not necessarily mean that the data is invalid for the computer system. Consider the following example.

A photographic wholesaler sells film only by the gross. The manufacturer shrink-wrapped film in lots of 144, and the wholesaler limited sales to those quantities. If a store wanted less film, it would have to go to a jobber and buy it at a higher price. A small chain of photo-processing stores ordered film from this wholesaler. Unfortunately, the clerks did not really understand the ordering process; they knew only that they would get 144 rolls of film when they ordered. On the order form submitted to the wholesaler, the clerks indicated a quantity of 144. This resulted in 144 gross of film being loaded onto a truck and shipped to the store. The small photo shop could not store that much film, and 143 gross were returned to the wholesaler. The net result was lost money for the wholesaler. In this case, 144 was a valid quantity, but incorrect for the desired order. This is a structural problem that needs to be addressed in the same manner as entering 144 on the computer when you meant to enter 14.

The second part of structural testing deals with the architecture of the system. Architecture is a data processing term that describes how the system is put together. It is used in the same context that an architect designs a building. Architectural problems that could affect computer processing include the following:

  • Internal limits on the number of events that can occur in a transaction (e.g., number of products that can be included on an invoice)

  • Maximum size of fields (e.g., quantity only 2 positions in length, making it impossible to enter an order for more than 99 items)

  • Disk storage limitations (e.g., you are permitted to have only X customers)

  • Performance limitations (e.g., the time to process transactions jumps significantly when you enter more than X transactions)

These are but a few of the potential architectural limitations placed on computer software. You must remember that each software system is finite and has built-in limitations. Sometimes the vendor tells you what these limitations are, sometimes you can find them by searching through the documentation, and occasionally you won’t know them until they occur. However, all limits can be determined through structural testing. The questions at hand are these: Do you feel competent to do it? Is it worth doing? The answers to these questions depend on the critical nature of the software and what would happen if your business were unable to continue computer processing because you reached a program limitation.
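A structural test condition for one of these limits might look like the following sketch, which probes a hypothetical two-digit quantity field just inside and just beyond its boundary.

```python
def enter_order_quantity(quantity: int) -> int:
    """Hypothetical stand-in for the package's order-entry edit. A two-digit
    quantity field should reject anything above 99 rather than silently truncate it."""
    if not 1 <= quantity <= 99:
        raise ValueError("quantity outside the field limit")
    return quantity

# Limit conditions: on the boundary and one past it.
for qty in (99, 100):
    try:
        enter_order_quantity(qty)
        print(f"{qty}: accepted")
    except ValueError:
        print(f"{qty}: rejected at the field limit")
```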

A final category of potential structural problems relates to file handling. Although these may not appear to be likely trouble spots, they are frequently found in computer software. Typical problems include incorrect processing when the last record on a file is updated, or when a record is added that becomes the first record on a file. These types of problems have haunted computer programmers for years. In the PC software market, there are literally hundreds of thousands of people writing software. Some have good ideas but are not experienced programmers; thus, they fall into the age-old traps of file-manipulation problems.
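The first-record/last-record conditions are easy to exercise once you can feed the package a small test file. The sketch below uses a trivial in-memory stand-in for a keyed file purely to illustrate these conditions.

```python
# Trivial in-memory stand-in for a keyed file (the real test would use the
# package's own file or database).
records = [("A001", "first"), ("M500", "middle"), ("Z999", "last")]

def add_record(file, key, value):
    file.append((key, value))
    file.sort()  # keyed files keep records in key order

def delete_record(file, key):
    file[:] = [(k, v) for k, v in file if k != key]

# Add a record that becomes the new first record, then delete the last record.
add_record(records, "0001", "new first")
delete_record(records, "Z999")
assert records[0] == ("0001", "new first")
assert records[-1] == ("M500", "middle")
print("First/last record conditions handled:", records)
```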

As an aid in developing structural test conditions, Work Paper 18-4 lists the more common structural problem areas. You can use this checklist either to determine which types of structural test conditions you want to prepare or to check the completeness of the structural conditions included in your test matrix. Either way, it may spark you to add some additional test conditions to verify that the structure of your software performs correctly.

Work Paper 18-4. Structural Test Condition Checklist

For each item, record Yes, No, or N/A, with comments as needed. Have test conditions for each of these conditions been prepared?

  1. Addition of a record before the first record on a file

  2. Addition of a record after the last record on a file

  3. Deletion of the first record on a file

  4. Deletion of the last record on a file

  5. Change information on the first record on a file

  6. Change information on the last record on a file

  7. Cause the program to terminate by predetermined conditions

  8. Accumulate a field larger than the mathematical accumulators can hold

  9. Verify that page counters work

  10. Verify that page spacing works

  11. Enter invalid transaction types

  12. Enter invalid values in fields (e.g., put alphabetic characters in a numeric field)

  13. Process unusual conditions (of all types)

  14. Test principal error conditions

  15. Test for out-of-control conditions (e.g., the value of records in the batch does not equal the entered batch total)

  16. Simulate a hardware failure forcing recovery procedures to be used

  17. Demonstrate recovery procedures

  18. Enter more records than disk storage can hold

  19. Enter more values than internal tables can hold

  20. Enter incorrect codes and transaction types

  21. Enter unreasonable values for transaction processing

  22. Violate software rules not violated by above structural test conditions

Modifying the Testing Process for Contracted Software

The four tasks used to test COTS software equally apply to the testing of contracted software. However, a new task is required, and Task 1 must be modified. The changes to the COTS testing process are made the same way as discussed previously in this book with regard to the seven-step testing process.

A new task must be included, and that task becomes the first task in the test process. The objective of this task is to determine the best vendor to build the contracted software. Testers therefore must evaluate the ability of various vendors to adequately test the software. The testers may actually want to include testing requirements in any requests for a proposal from a vendor. The testers may also want to consider being a part of the vendor’s testing to ensure that the interests of the acquiring organization are appropriately represented.

Organizations that build software under contract focus their testing efforts on whether the software meets the system specifications. As mentioned many times in this book, there is often a difference between what is specified and what is needed. For example, the specifications may not clearly articulate that ease of use for end users is required. By having the acquiring organization’s testers involved in the testing of the software at the vendor location, you can help ensure that these important “unspecified or inadequately specified” requirements are addressed during development. The change to Task 1 (test business fit) focuses on the completeness of the requirements. Testers may want to take two actions to ensure that the requirements as included in the proposal are adequate to meet the true needs of the customers/users of the application. These two subtasks are as follows:

  1. Organize and participate in a requirements review. The testers can follow the requirements verification process included in Step 3 of the seven-step testing process to ensure the accuracy and completeness of the requirements.

  2. Certify the requirements as testable. The testers can evaluate each requirement and assess whether a test could demonstrate that the requirement has, or has not, been correctly implemented.
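
As a minimal sketch of the second subtask, the fragment below records a testability verdict for each requirement. The requirement wording, field names, and the rule applied (a requirement is certified only if it carries measurable acceptance criteria) are illustrative assumptions, not part of the contracted-software process itself.

    # Illustrative sketch only: certifying requirements as testable.
    # The sample requirements and the testability rule are invented for
    # illustration; real criteria would come out of the requirements review.

    requirements = [
        {"id": "R-01",
         "text": "Post an invoice within 5 seconds",
         "acceptance_criteria": "95% of invoices post in 5 seconds or less"},
        {"id": "R-02",
         "text": "The system shall be easy to use",
         "acceptance_criteria": ""},   # nothing measurable to test against
    ]

    def certify_testable(requirement):
        """Certify a requirement only if it has measurable acceptance criteria."""
        return bool(requirement["acceptance_criteria"].strip())

    for req in requirements:
        verdict = ("certified testable" if certify_testable(req)
                   else "NOT testable - return to the requirements author")
        print(f"{req['id']}: {verdict}")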

Check Procedures

At the conclusion of this testing process, the tester should verify that the COTS software test procedure has been conducted effectively. The quality-control checklist for conducting the COTS software review is included as Work Paper 18-5. It is designed for Yes/No responses. Yes indicates a positive response; No responses should be investigated. The Comments column is provided to clarify No responses. The N/A column is provided for items that are not applicable to a specific COTS software test process.

Work Paper 18-5. Off-the-Shelf Software Testing Quality Control Checklist

(Record a YES, NO, or N/A response for each item; use the COMMENTS column to explain any NO response.)
 

Task 1: Test Business Fit

  1. Have the business needs been adequately defined?
  2. Does the selected software package meet those needs?
  3. Have the critical success factors for the business application been defined?
  4. Is there a high probability that the software package under consideration will satisfy the critical success factors?
  5. Is the software being evaluated designed to meet this specific business need?
  6. Does the software under consideration push the critical success factors to their limit?
  7. Do you personally believe the software under consideration is the right software for you?
  8. Do you believe this software package will provide your business with one of the four benefits attributable to software (i.e., perform work cheaper, perform work faster, perform work more reliably, or perform tasks not currently being performed)?
  9. Do the business approach and the software package fit into your business's long-range business plan?
  10. Is the business system being considered for computerization relatively stable in terms of requirements?

Task 2: Testing System Fit

  1. Will the selected software package operate on your computer hardware?
  2. Will the selected software package operate under your equipment's operating system?
  3. Is the proposed software package compatible with your other computer programs (applicable programs only)?
  4. Can the proposed software package utilize applicable existing data files?
  5. Is the way the software operates consistent with your business cycle?
  6. Are you and your personnel willing to perform the business steps needed to make the software function correctly?
  7. Is the computer work flow for this area consistent with the general work flow in your business?
  8. Were the software demonstrations satisfactory?
  9. Do you believe that the software has staying power (i.e., the vendor will continue to support it as technological and business conditions change)?
  10. Are you pleased with the fit of this software package into your computer and systems environment?

Task 3: Testing People Fit

  1. Were the workers exposed to, or involved in, the decision to acquire a computer, and specifically the applications that affect their day-to-day job responsibilities?
  2. Have your job and your staff's jobs been adequately restructured after the introduction of the computer?
  3. Have the people involved with the computer been trained (or will they be trained) in the skills needed to perform their new job functions?
  4. Has each worker been involved in establishing the procedures that he or she will use in performing day-to-day job tasks?
  5. Have the workers been charged with the responsibility for identifying defects in computer processing?
  6. Does each worker have appropriate feedback channels to all of the people involved with his or her work tasks?
  7. Are your people enthusiastic about the prospect of involving a computer in their work?
  8. Have supervisors been properly instructed in how to supervise computer staff?
  9. Have adequate controls been included within computer processing?
  10. Do you believe your people have a positive attitude about the computer and will work diligently to make it successful?

Task 4: Validate Acceptance Test Software Process

  1. Were test conditions created for all of the test methods included in the test matrix?
  2. Were both static and dynamic tests used as test methods?
  3. Have functional test conditions been prepared that are consistent with the functional requirements and critical success factors?
  4. Have you prepared structural test conditions that address the more common computer architectural problems and incorrect data entry?
  5. Has the sequence in which test conditions will be executed been determined?
  6. Are the test conditions prepared using the most economical source of data?
  7. Have the test conditions been prepared by the appropriate "stakeholder"?
  8. Have the test conditions been prepared in an easy-to-use format?
  9. Has the validity of the test process been adequately challenged?
  10. Do you believe that the test conditions, when executed, will adequately verify the functioning of the software?
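
Work Paper 18-5 is designed for manual Yes/No/N/A responses, but the check-procedure convention (every No response must be investigated and explained in the Comments column) can also be tracked in a simple record. The sketch below is illustrative only; the sample responses and field names are invented, and the reporting rule is an assumption about how an acquiring organization might manage follow-up.

    # Illustrative sketch only: recording Work Paper 18-5 responses and
    # flagging every "No" answer for investigation, per the check procedures.

    responses = [
        {"task": "Task 1",
         "item": "Have the business needs been adequately defined?",
         "answer": "Yes", "comment": ""},
        {"task": "Task 2",
         "item": "Were the software demonstrations satisfactory?",
         "answer": "No", "comment": "Vendor demo skipped month-end processing."},
        {"task": "Task 3",
         "item": "Have supervisors been properly instructed?",
         "answer": "N/A", "comment": "No supervisory changes in this rollout."},
    ]

    # Yes is a positive response; every No response is listed for follow-up,
    # and a missing comment is itself flagged for clarification.
    for r in responses:
        if r["answer"] == "No":
            note = r["comment"] or "MISSING COMMENT - obtain clarification"
            print(f"INVESTIGATE ({r['task']}): {r['item']} -> {note}")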

    

Output

There are three potential outputs as a result of executing the COTS software test process:

  • Fully acceptable. The software meets the full needs of the organization and is acceptable for use.

  • Unacceptable. The software has deficiencies significant enough that it is not acceptable for use.

  • Acceptable with conditions. The software does not fully meet the organization's needs, but either lowering expectations or incorporating alternative procedures to compensate for the deficiencies makes the package usable, and thus it will be disseminated for use.

Guidelines

The following guidelines will assist you in testing COTS software:

  • Spend one day of your time learning and evaluating software, and you will gain problem-free use of that software.

  • Acquire computer software only after you have established the need for that software and can demonstrate how it will be used in your day-to-day work.

  • Let your instincts about a package's strengths and weaknesses help guide your selection of software.

  • Testing is not done to complicate your life, but rather to simplify it. After testing, you will operate your software from a position of strength. You will know what works, how it works, what doesn’t work, and why. After testing, you will not be intimidated by the unknown.

  • The cost of throwing away bad software is significantly less than the cost of keeping it. In addition to saving you time and money, testing saves frustration.

  • The best testing is that done by individuals who have a stake in the correct functioning of the software. These stakeholders should both prepare the test and evaluate the results.

  • If your users can run the acceptance tests successfully from the outset (including the associated procedures and training courses), they will be able to run the software successfully in conjunction with their business function.

Summary

The process outlined in this chapter is designed for testing COTS software and contracted software. It assumes that the testers will not have access to the program code; therefore, the test emphasizes usability. The test is similar in approach to acceptance testing.

 
