Chapter 15. Testing Client/Server Systems

The success of a client/server program depends heavily on both the readiness of an organization to use the technology effectively and its ability to provide clients the information and capabilities that meet their needs. If an organization is not ready to move to client/server technology, it is far better to work on making the organization ready than on installing the technology. Preparing the organization for client/server technology is an important component of a successful program, whether the effort is organization-wide or just a small program. If the organization is ready, the client/server approach should be evaluated prior to testing the client systems.

Overview

Figure 15-1 shows a simplified client/server architecture. There are many possible variations of the client/server architecture, but for illustration purposes, this is representative.


Figure 15-1. Client/server architecture.

In this example, application software resides on the client workstations. The application server handles processing requests. The back-end processor (typically a mainframe or super-minicomputer) handles work such as batch transactions that are accumulated and processed together at one time on a regular basis. The important distinction to note is that application software resides on the client workstation.

Figure 15-1 shows the key distinction between workstations connected to the mainframe and workstations that contain the software used for client processing. This distinction represents a major change in processing control. For this reason, client/server testing must first evaluate the organization’s readiness to make this control change, and then evaluate the key components of the client/server system prior to conducting tests. This chapter will provide the material on assessing readiness and key components. The actual testing of client/server systems will be achieved using the seven-step testing process.

Concerns

The concerns about client/server systems reside in the area of control. The testers need to determine that adequate controls are in place to ensure accurate, complete, timely, and secure processing of client/server software systems. The testers must address the following five concerns:

  1. Organizational readiness. The concern is whether the culture is adequately prepared to process data using client/server technology. Readiness must be evaluated in the areas of management, client installation, and server support.

  2. Client installation. The concern is that the appropriate hardware and software will be in place to enable processing that will meet client needs.

  3. Security. There is a need for protection of both the hardware, including residence software, and the data that is processed using that hardware and software. Security must address threats from employees, outsiders, and acts of nature.

  4. Client data. Controls must be in place to ensure that data is not lost, incorrectly processed, or processed differently on a client workstation than in other areas of the organization.

  5. Client/server standards. Standards must exist to ensure that all client workstations operate under the same set of rules.

Workbench

Figure 15-2 provides a workbench for testing client/server systems. This workbench can be used in steps as the client/server system is developed or concurrently after the client/server system has been developed. The workbench shows four steps, as well as the quality control procedures necessary to ensure that those four steps are performed correctly. The output will be any identified weaknesses uncovered during testing.


Figure 15-2. Workbench for testing client/server systems.

Input

The input to this test process will be the client/server system. This will include the server technology and capabilities, the communication network, and the client workstations that will be incorporated into the test. Because both the client and the server components will include software capabilities, the materials should provide a description of the client software, and any test results on that client software should be input to this test process.

Do Procedures

Testing client/server software involves the following three tasks:

  • Assess readiness

  • Assess key components

  • Assess client needs

Task 1: Assess Readiness

Client/server programs should have sponsors. Ideally, these are the directors of information technology and the impacted user management. It is the responsibility of the sponsors to ensure that the organization is ready for client/server technology. However, those charged with installing the new technology should provide the sponsor with a readiness assessment. That assessment is the objective of this chapter.

The readiness assessment proposed in this chapter is a modification of the readiness approach pioneered by Dr. Howard Rubin of Rubin and Associates. There are eight dimensions to the readiness assessment, as follows:

  1. Motivation. The level of commitment by the organization to using client/server technology to drive improvements in quality, productivity, and customer satisfaction.

  2. Investment. The amount of monies approved/budgeted for expenditures in the client/server program.

  3. Client/server skills. The ability of the client/server installation team to incorporate the client/server technology concepts and principles into the users’ programs.

  4. User education. Awareness of client/server principles and concepts by the individuals involved in any aspect of the client/server program. These people need to understand how the technology is used in the affected business processes.

  5. Culture. The willingness of the organization to innovate. In other words, is the organization willing to try new concepts and new approaches, or is it more comfortable using existing approaches and technology?

  6. Client/server support staff. The adequacy of resources to support the client/server program.

  7. Client/server aids/tools. The availability of client/server aids and tools to perform and support the client/server program.

  8. Software development process maturity. The ability of a software development process to produce high-quality (defect-free) software on a consistent basis.

The following section addresses how to measure process maturity. The other dimensions are more organization-dependent and require the judgment of a team of knowledgeable people in the organization.

Software Development Process Maturity Levels

Figure 15-3 illustrates the five software development process maturity levels, which have the following general characteristics:

  1. Ad hoc. The software development process is loosely defined, and the project leader can deviate from the process whenever he or she chooses.

  2. Repeatable. The organization has achieved a stable process with a repeatable level of quality by initiating rigorous project management of requirements, cost, schedules, and changes.

  3. Consistent. The organization has defined the process as a basis for consistent implementation. Developers can depend on the quality of the deliverables.

  4. Measured. The organization has initiated comprehensive process measurements and analysis. This is when the most significant quality improvements begin.

  5. Optimized. The organization now has a foundation for continuing improvement and optimization of the process.


Figure 15-3. Software development process maturity levels.

These levels have been selected because they:

  • Reasonably represent the actual historical phases of evolutionary improvement of real software organizations.

  • Represent a measure of improvement that is reasonable to achieve from the prior level.

  • Suggest interim improvement goals and progress measurements.

  • Make obvious a set of immediate improvement priorities once an organization’s status in this framework is known.

Although there are many other elements to these maturity level transitions, the primary objective is to achieve a controlled and measured process as the foundation for continuing improvement.

This software development process maturity structure is intended for use with an assessment methodology and a management system. Assessment helps an organization identify its specific maturity status, and the management system establishes a structure for implementing the priority improvement actions. Once its position in this maturity structure is defined, the organization can concentrate on those items that will help it advance to the next level. When, for example, a software organization does not have an effective project-planning system, it may be difficult or even impossible to introduce advanced methods and technology. Poor project planning generally leads to unrealistic schedules, inadequate resources, and frequent crises. In such circumstances, new methods are usually ignored and priority is given to coding and testing.

The Ad Hoc Process (Level 1)

The ad hoc process level is unpredictable and often very chaotic. At this stage, the organization typically operates without formalized procedures, cost estimates, and project plans. Tools are neither well integrated with the process nor uniformly applied. Change control is lax, and there is little senior management exposure or understanding of the problems and issues. Because many problems are deferred or even forgotten, software installation and maintenance often present serious problems.

Although organizations at this level may have formal procedures for planning and tracking their work, there is no management mechanism to ensure that they are used. The best test is to observe how such an organization behaves in a crisis. If it abandons established procedures and essentially reverts to coding and testing, it is likely to be at the ad hoc process level. After all, if the techniques and methods are appropriate, then they should be used in a crisis; if they are not appropriate in a crisis, they should not be used at all.

One key reason why organizations behave in this fashion is that they have not experienced the benefits of a mature process and, thus, do not understand the consequences of their chaotic behavior. Because many effective software actions (such as design and code inspections or test data analysis) do not appear to directly support shipping the product, they seem expendable.

Driving an automobile is an appropriate analogy. Few drivers with any experience will continue driving for very long when the engine warning light comes on, regardless of their rush. Similarly, most drivers starting on a new journey will, regardless of their hurry, pause to consult a map. They have learned the difference between speed and progress. Without a sound plan and a thoughtful analysis of the problems, management may be unaware of ineffective software development.

Organizations at the ad hoc process level can improve their performance by instituting basic project controls. The most important are project management, management oversight, quality assurance, and change control. The fundamental role of the project management system is to ensure effective control of commitments. This requires adequate preparation, clear responsibility, a public declaration, and a dedication to performance. For software, project management starts with an understanding of the job’s magnitude. In any but the simplest projects, a plan must then be developed to determine the best schedule and the anticipated resources required. In the absence of such an orderly plan, no commitment can be better than an educated guess.

A suitably disciplined software development organization must have senior management oversight. This includes review and approval of all major development plans prior to their official commitment. Also, a quarterly review should be conducted of facility-wide process compliance, installed quality performance, schedule tracking, cost trends, computing service, and quality and productivity goals by project. The lack of such reviews typically results in uneven and generally inadequate implementation of the process as well as frequent over-commitments and cost surprises.

A quality assurance group is charged with assuring management that software work is done the way it is supposed to be done. To be effective, the assurance organization must have an independent reporting line to senior management and sufficient resources to monitor performance of all key planning, implementation, and verification activities. This generally requires an organization of about 3 percent to 6 percent the size of the software organization.

Change control for software is fundamental to business and financial control as well as to technical stability. To develop quality software on a predictable schedule, requirements must be established and maintained with reasonable stability throughout the development cycle. While requirements changes are often needed, historical evidence demonstrates that many can be deferred and incorporated later. Design and code changes must be made to correct problems found in development and testing, but these must be carefully introduced. If changes are not controlled, then orderly design, implementation, and testing are impossible and no quality plan can be effective.

The Repeatable Process (Level 2)

The repeatable process has one important strength that the ad hoc process does not: It provides control over the way the organization establishes its plans and commitments. This control provides such an improvement over the ad hoc process level that the people in the organization tend to believe they have mastered the software problem. They have achieved a degree of statistical control through learning to make and meet their estimates and plans. This strength stems from using work processes that, when followed, produce consistent results. Organizations at the repeatable process level thus face major risks when they are presented with new challenges. The following are some examples of the changes that represent the highest risk at this level:

  • Unless they are introduced with great care, new tools and methods can negatively affect the testing process.

  • When the organization must develop a new kind of product, it is entering new territory. For example, a software group that has experience developing compilers will likely have design, scheduling, and estimating problems when assigned to write a real-time control program. Similarly, a group that has developed small, self-contained programs will not understand the interface and integration issues involved in large-scale projects. These changes may eliminate the lessons learned through experience.

  • Major organizational changes can also be highly disruptive. At the repeatable process level, a new manager has no orderly basis for understanding the organization’s operation, and new team members must learn the ropes through word of mouth.

The key actions required to advance from the repeatable process to the next stage, the consistent process, are to establish a process group, establish a development process architecture, and introduce a family of software engineering methods and technologies.

A software development process architecture, or development life cycle, describes the technical and management activities required for proper execution of the development process. It must be attuned to the specific needs of the organization and will vary depending on the size and importance of the project as well as the technical nature of the work itself. The architecture is a structural decomposition of the development cycle into tasks, each of which has a defined set of prerequisites, functional descriptions, verification procedures, and task completion specifications. The decomposition continues until each defined task is performed by an individual or single management unit.
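The decomposition described above can be sketched as a simple data structure. This is an illustrative sketch only; the field names, example tasks, and the `fully_decomposed` check are assumptions for illustration, not part of the chapter's method.

```python
# Sketch of a process-architecture decomposition: each task carries the
# four elements the text lists, and decomposition bottoms out when a
# task is owned by one individual or single management unit.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    prerequisites: list
    functional_description: str
    verification_procedure: str
    completion_spec: str
    owner: str = ""                          # set when decomposition stops
    subtasks: list = field(default_factory=list)

    def fully_decomposed(self) -> bool:
        """True when every leaf task has a single responsible owner."""
        if self.subtasks:
            return all(t.fully_decomposed() for t in self.subtasks)
        return bool(self.owner)

# Hypothetical example: a design task broken into one owned leaf task.
leaf = Task("module design", ["requirements approved"],
            "design one module", "design inspection",
            "inspection sign-off", owner="design unit A")
phase = Task("high-level design", [], "decompose system",
             "architecture review", "review sign-off", subtasks=[leaf])
print(phase.fully_decomposed())  # True: every leaf has an owner
```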

If they are not already in place, introduce a family of software engineering methods and technologies. These include design and code inspections, formal design methods, library control systems, and comprehensive testing methods. Prototyping should also be considered, together with the adoption of modern implementation languages.

The Consistent Process (Level 3)

With the consistent process, the organization has achieved the foundation for major and continuing progress. For example, the software teams, when faced with a crisis, will likely continue to use the process that has been defined. The foundation has now been established for examining the process and deciding how to improve it. As powerful as the process is, it is still only qualitative; there is little data to indicate how much was accomplished or how effective the process is. There is considerable debate about the value of software process measurements and the best ones to use. This uncertainty generally stems from a lack of process definition and the consequent confusion about the specific items to be measured. With a consistent process, measurements can be focused on specific tasks. The process architecture is thus an essential prerequisite to effective measurement.

The following key steps are required to advance from the consistent process level to the measured process level:

  1. Establish a minimum basic set of process measurements to identify the quality and cost parameters of each process step. The objective is to quantify the relative costs and benefits of each major process activity.

  2. Establish a process database and the resources to manage and maintain it. Cost and productivity data should be maintained centrally to guard against loss, to make it available for all projects, and to facilitate process quality and productivity analysis.

  3. Provide sufficient process resources to gather and maintain this process data and to advise project members on its use. Assign skilled professionals to monitor the quality of the data before entry in the database and to provide guidance on analysis methods and interpretation.

  4. Assess the relative quality of each product and inform management where quality targets are not being met. An independent quality assurance group should assess the quality actions of each project and track its progress against its quality plan. When this progress is compared with the historical experience on similar projects, an informed assessment can generally be made.
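A minimal sketch of the central process database that step 2 calls for. The record fields, project name, and numbers are illustrative assumptions; the point is that centrally held cost and quality data per process step supports the analysis the steps describe.

```python
# Minimal sketch of a central process database: each record captures the
# cost (effort) and quality signal (defects found) of one process step,
# so relative costs and benefits of activities can be compared.
from dataclasses import dataclass

@dataclass
class ProcessRecord:
    project: str
    process_step: str      # e.g. "design inspection", "system test"
    effort_hours: float    # cost of the activity
    defects_found: int     # quality signal from the activity

db = [
    ProcessRecord("billing", "design inspection", 40.0, 12),
    ProcessRecord("billing", "system test", 160.0, 9),
]

def cost_per_defect(records, step):
    """Relative cost of finding one defect in a given process step."""
    hours = sum(r.effort_hours for r in records if r.process_step == step)
    defects = sum(r.defects_found for r in records if r.process_step == step)
    return hours / defects if defects else None

print(cost_per_defect(db, "design inspection"))  # ~3.3 hours per defect
print(cost_per_defect(db, "system test"))        # ~17.8 hours per defect
```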

The Measured Process (Level 4)

In advancing from the ad hoc process through the repeatable and consistent processes to the measured process, software organizations should expect to make substantial quality improvements. The greatest potential problem with the measured process is the cost of gathering data. There are an enormous number of potentially valuable measures of the software process, but such data is expensive to gather and to maintain.

Approach data gathering with care, therefore, and precisely define each piece of data in advance. Productivity data is essentially meaningless unless explicitly defined. For example, the simple measure of lines of source code per expended development month can vary by 100 times or more, depending on the interpretation of the parameters. Lines of code need to be defined to get consistent counts. For example, if one line brings in a routine with one hundred lines of code, should that be counted as one line of code or one hundred lines of code?
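The counting ambiguity in the example above can be made concrete with a toy counter. The miniature source file, the `include` syntax, and the `expand_includes` option are all hypothetical; the sketch only shows how two defensible definitions yield counts that differ by a factor of more than thirty.

```python
# Two defensible "lines of code" definitions applied to the same file:
# count an include statement as one line, or as the full size of the
# routine it brings in. All names and syntax here are hypothetical.
SOURCE = """\
# setup
include util   # util contains 100 lines of code
x = 1

y = 2
"""

LIBRARY_SIZES = {"util": 100}  # assumed size of the included routine

def count_loc(source, expand_includes=False, skip_blank_and_comments=True):
    total = 0
    for line in source.splitlines():
        stripped = line.strip()
        if skip_blank_and_comments and (not stripped or stripped.startswith("#")):
            continue
        if stripped.startswith("include "):
            name = stripped.split()[1]
            # One line of code, or the whole routine it pulls in?
            total += LIBRARY_SIZES.get(name, 1) if expand_includes else 1
        else:
            total += 1
    return total

print(count_loc(SOURCE))                        # → 3
print(count_loc(SOURCE, expand_includes=True))  # → 102
```

Without an agreed definition, two groups measuring the same file would report 3 and 102 lines, which is exactly why productivity figures are meaningless unless the counting rules are explicitly defined.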

When different groups gather data but do not use identical definitions, the results are not comparable, even when it would otherwise make sense to compare them. The tendency with such data is to use it to compare several groups and to criticize those with the lowest ranking. This is an unfortunate misapplication of process data. It is rare that two projects are comparable by any simple measures. The variations in task complexity caused by different product types can exceed five to one. Similarly, the cost per line of code of small modifications is often two to three times that for new programs. The degree of requirements change can make an enormous difference, as can the design status of the base program in the case of enhancements.

Process data must not be used to compare projects or individuals. Its purpose is to illuminate the product being developed and to provide an informed basis for improving the process. When such data is used by management to evaluate individuals or teams, the reliability of the data itself will deteriorate.

The two fundamental requirements for advancing from the measured process to the next level are:

  1. Support automatic gathering of process data. All data is subject to error and omission, some data cannot be gathered by hand, and the accuracy of manually gathered data is often poor.

  2. Use process data both to analyze and to modify the process to prevent problems and improve efficiency.

The Optimized Process (Level 5)

In varying degrees, process optimization goes on at all levels of process maturity. With the step from the measured to the optimized process, however, there is a paradigm shift. Up to this point, software development managers have largely focused on their products and will typically gather and analyze only data that directly relates to product improvement. In the optimized process, the data is available to tune the process itself. With a little experience, management will soon see that process optimization can produce major quality and productivity benefits.

For example, many types of errors can be identified and fixed far more economically by design or code inspections than by testing. Unfortunately, there is only limited published data available on the costs of finding and fixing defects. However, from experience, I have developed a useful guideline: It takes about 1 to 4 working hours to find and fix a bug through inspections, and about 15 to 20 working hours to find and fix a bug in function or system testing. To the extent that organizations find that these numbers apply to their situations, they should consider placing less reliance on testing as their primary way to find and fix bugs.
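The guideline translates into a simple cost comparison. The hour ranges come from the text above; the defect count is a made-up input for illustration.

```python
# Rough cost comparison based on the chapter's guideline:
# 1-4 working hours to find and fix a bug through inspections,
# 15-20 working hours through function or system testing.
INSPECTION_HOURS = (1, 4)
TESTING_HOURS = (15, 20)

def fix_cost_range(defects, hours_range):
    """Total effort range (low, high) to find and fix the given defects."""
    low, high = hours_range
    return defects * low, defects * high

defects = 50  # hypothetical number of defects in a release
insp_low, insp_high = fix_cost_range(defects, INSPECTION_HOURS)
test_low, test_high = fix_cost_range(defects, TESTING_HOURS)

print(f"Inspections: {insp_low}-{insp_high} hours")  # 50-200 hours
print(f"Testing:     {test_low}-{test_high} hours")  # 750-1000 hours
```

Even at the extremes of both ranges, relying on testing alone costs several times the effort of finding the same defects through inspection, which is the argument for shifting defect removal earlier.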

However, some kinds of errors are either uneconomical to detect or almost impossible to find except by machine. Examples are errors involving spelling and syntax, interfaces, performance, human factors, and error recovery. It would be unwise to eliminate testing completely because it provides a useful check against human frailties.

The data that is available with the optimized process provides a new perspective on testing. For most projects, a little analysis shows that two distinct activities are involved: removing defects and assessing program quality. To reduce the cost of removing defects, testing techniques such as inspections, desk debugging, and code analyzers should be emphasized. The role of functional and system testing should then be changed to one of gathering quality data on the programs. This involves studying each bug to see if it is an isolated problem or if it indicates design problems that require more comprehensive analysis.

With the optimized process, the organization has the means to identify the weakest elements of the process and to fix them. At this point in process improvement, data is available to justify the application of technology to various critical tasks, and numerical evidence is available on the effectiveness with which the process has been applied to any given product. An organization should then no longer need reams of paper to describe what is happening because simple yield curves and statistical plots can provide clear and concise indicators. It would then be possible to assure the process and hence have confidence in the quality of the resulting products.

Conducting the Client/Server Readiness Assessment

To perform the client/server readiness assessment, you need to evaluate your organization in the eight readiness dimensions, as described in Task 1. You may want to assemble a representative group of individuals from your organization to develop the assessment and use Work Paper 15-1 to assist them in performing the assessment.

Table 15-1. Client/Server Readiness Assessment

(For each item, respond Yes, No, or N/A, and record comments as needed.)

Installing Client System

  1. Has a personal computer installation package been developed? (If this item has a No response, the remaining items in the checklist can be skipped.)

  2. Is the installation procedure available to any personal computer user in the organization?

  3. Does the personal computer installation program provide for locating the personal computer?

  4. Does the program provide for surge protection for power supplies?

  5. Does the installation program provide for necessary physical protection?

  6. Does the installation program identify needed supplies and accessories?

  7. Does the installation program provide for acquiring needed computer media?

  8. Does the installation program address storing computer media?

  9. Does the installation program address storage area for printer supplies, books, and so on?

  10. Does the installation program address noise from printers, including providing mats and acoustical covers?

  11. Does the installation program address converting data from paper to computer media?

  12. Does the installation program arrange for an off-site storage area?

  13. Does the installation program arrange for personal computer servicing?

  14. Does the installation program arrange for a backup processing facility?

  15. Does the installation program arrange for consulting services if needed?

  16. Are users taught how to install personal computers through classes or step-by-step procedures?

  17. Do installation procedures take into account specific organizational requirements, such as accounting for computer usage?

  18. Is the installation process customized depending on the phase of maturity of personal computer usage?

  19. Has a means been established to measure the success of the installation process?

  20. Have potential installation impediments been identified and counterstrategies adopted where appropriate?

  21. Has the organization determined its strategy in the event that the installation of a standard personal computer is unsatisfactory to the user?

  22. Has the needed client software been supplied?

  23. Has the needed client software been tested?

Client/Server Security

  1. Has the organization issued a security policy for personal computers?

  2. Have standards and procedures been developed to ensure effective compliance with that policy?

  3. Are procedures established to record personal computer violations?

  4. Have the risks associated with personal computers been identified?

  5. Has the magnitude of each risk been identified?

  6. Has the personal computer security group identified the type of available countermeasures for the personal computer security threats?

  7. Has an awareness program been developed to encourage support of security in a personal computer environment?

  8. Have training programs been developed for personal computer users in security procedures and methods?

  9. Does the audit function conduct regular audits to evaluate personal computer security and identify potential vulnerabilities in that security?

  10. Does senior management take an active role in supporting the personal computer security program?

  11. Have security procedures been developed for operators of personal computers?

  12. Are the security programs at the central computer site and the personal computer sites coordinated?

  13. Has one individual at the central site been appointed responsible for overseeing security of the personal computer program?

  14. Have operating managers/personal computer users been made responsible for security over their personal computer facilities?

  15. Is the effectiveness of the total personal computer security program regularly evaluated?

  16. Has one individual been appointed responsible for the security of personal computers for the organization?

Client Data

  1. Has a policy been established on sharing data with users?

  2. Is data recognized as a corporate resource as opposed to the property of a single department or individual?

  3. Have the requirements for sharing been defined?

  4. Have profiles been established indicating which user wants which data?

  5. Have the individuals responsible for that data approved use by the proposed users of the data?

  6. Has a usage profile been developed that identifies whether data is to be uploaded and downloaded?

  7. Has the user usage profile been defined to the appropriate levels to provide the needed security?

  8. Have security standards been established for protecting data at personal computer sites?

  9. Has the personal computer user been made accountable and responsible for the security of the data at the personal computer site?

  10. Does the user’s manager share this security responsibility?

  11. Have adequate safeguards at the central site been established to prevent unauthorized access to data?

  12. Have adequate safeguards at the central site been established to prevent unauthorized modification of data?

  13. Are logs maintained that keep records of what data is transferred to and from personal computer sites?

  14. Do the communication programs provide for error handling?

  15. Are the remote users trained in accessing and protecting corporate data?

  16. Have the appropriate facilities been developed to reformat files?

  17. Are appropriate safeguards taken to protect diskettes containing corporate data at remote sites?

  18. Is the security protection required for data at the remote site known to the personal computer user?

  19. Are violations of data security/control procedures recorded?

  20. Is someone in the organization accountable for ensuring that data is made available to those users who need it? (In many organizations this individual is referred to as the data administrator.)

Client/Server Standards

  1. Are standards based on a hierarchy of policies, standards, procedures, and guidelines?

  2. Has the organization issued a personal computer policy?

  3. Have the standards been issued to evaluate compliance with the organization’s personal computer policy?

  4. Have policies been developed for users of personal computers that are supportive of the organization’s overall personal computer policy?

  5. Have personal computer policies been developed for the following areas:

     1. Continuity of processing

     2. Reconstruction

     3. Accuracy

     4. Security

     5. Compliance

     6. File integrity

     7. Data

  6. Are all standards tied directly to personal computer policies?

  7. Has the concept of ownership been employed in the development of standards?

  8. Can the benefit of each standard be demonstrated to the users of the standards?

  9. Are the standards written in playscript?

  10. Have quality control self-assessment tools been issued to personal computer users to help them comply with the standards?

  11. Has a standards notebook been prepared?

  12. Is the standards notebook divided by area of responsibility?

  13. Are the standards explained to users in the form of a training class or users-group meeting?

  14. Does a representative group of users have an opportunity to review and comment on standards before they are issued?

  15. Are guidelines issued where appropriate?

  16. Is the standards program consistent with the objectives of the phase of maturity of the personal computer in the organization?

You should rate each readiness dimension in one of the following four categories:

  1. High. The readiness assessment team is satisfied that the readiness in this dimension will not inhibit the successful implementation of client/server technology.

  2. Medium. The readiness assessment team believes that the readiness in this dimension will not be a significant factor in causing the client/server technology to fail. While additional readiness would be desirable, it is not considered an inhibitor to installing client/server technology.

  3. Low. While there is some readiness for client/server technology, there are serious reservations that the readiness in this dimension will have a negative impact on the implementation of client/server technology.

  4. None. No readiness at all in this area. Without at least low readiness in all eight dimensions, the probability of client/server technology being successful is extremely low.

Work Paper 15-2 can be used to record the results of the client/server technology readiness assessment.
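The gating rule above (at least low readiness in all eight dimensions) can be sketched as a simple check. This is an illustrative sketch only: the dimension names below are hypothetical placeholders, not the book's actual readiness dimensions, and the numeric ordering of the ratings is an assumption.

```python
# Ordering assumed for the four readiness ratings (an illustration,
# not part of the book's method).
RATING_ORDER = {"none": 0, "low": 1, "medium": 2, "high": 3}

def ready_for_client_server(ratings):
    """Apply the rule above: every dimension needs at least 'low'
    readiness for client/server technology to have a reasonable
    chance of success."""
    return all(RATING_ORDER[r.lower()] >= RATING_ORDER["low"]
               for r in ratings.values())

# Hypothetical dimension names and example ratings.
ratings = {
    "Motivation": "medium", "Investment": "high", "Planning": "low",
    "Management": "medium", "Support": "low", "Training": "medium",
    "Standards": "low", "Aptitude": "none",
}
print(ready_for_client_server(ratings))  # → False: one dimension rated none
```

With every dimension raised to at least "low", the same check returns True.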

Work Paper 15-2. Client/Server Readiness Results

| # | READINESS DIMENSION NAME | DESCRIPTION | HIGH | MEDIUM | LOW | NONE |
|---|--------------------------|-------------|------|--------|-----|------|
|   |                          |             |      |        |     |      |
|   |                          |             |      |        |     |      |
|   |                          |             |      |        |     |      |
|   |                          |             |      |        |     |      |
|   |                          |             |      |        |     |      |
|   |                          |             |      |        |     |      |
|   |                          |             |      |        |     |      |
|   |                          |             |      |        |     |      |
Preparing a Client/Server Readiness Footprint Chart

A footprint chart is a means of graphically illustrating readiness. The end result will be a footprint that indicates the degree of readiness. The chart is completed by performing the following two steps:

  1. Record the point on the dimension line that corresponds to the readiness rating provided on Work Paper 15-2. For example, if the motivation dimension was scored medium, place a dot on the medium circle where it intersects with the motivation dimension line.

  2. Connect all of the points, and shade the area enclosed by the lines connecting the eight readiness points.

The shaded area of the footprint chart is the client/server readiness footprint. It will graphically show whether your organization is ready for client/server technology. Use Work Paper 15-3 for your client/server readiness footprint chart.
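The two steps above can be sketched numerically: each rating becomes a radius on one of eight evenly spaced spokes, and the shoelace formula gives the area of the shaded footprint (a larger area indicates greater overall readiness). The number of dimensions, the rating-to-radius scale, and the example ratings below are assumptions for illustration, not part of the book's work paper.

```python
import math

# Assumed mapping of the four readiness ratings to radii on the chart.
RATING_RADIUS = {"none": 0, "low": 1, "medium": 2, "high": 3}

def footprint_points(ratings):
    """Place one point per dimension on its spoke of the footprint chart."""
    n = len(ratings)
    points = []
    for i, rating in enumerate(ratings):
        angle = 2 * math.pi * i / n           # spokes evenly spaced
        r = RATING_RADIUS[rating.lower()]     # rating sets the radius
        points.append((r * math.cos(angle), r * math.sin(angle)))
    return points

def footprint_area(points):
    """Shoelace formula: area enclosed by connecting the points in order."""
    total = 0.0
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]          # wrap around to close the shape
        total += x1 * y2 - x2 * y1
    return abs(total) / 2.0

# Hypothetical example: ratings for the eight dimensions, in chart order.
ratings = ["medium", "high", "low", "medium", "medium", "high", "low", "medium"]
print(round(footprint_area(footprint_points(ratings)), 2))  # → 10.61
```

An all-high footprint (a regular octagon of radius 3) encloses the maximum area, so the printed value can be read as a fraction of full readiness.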

Client/Server Readiness Footprint Chart

Figure 15-3. Client/Server Readiness Footprint Chart

Task 2: Assess Key Components

Experience shows that if the key or driving components of technology are in place and working, they will provide most of the assurance necessary for effective processing. Four key components are identified for client/server technology:

  1. Client installations are done correctly.

  2. Adequate security is provided for the client/server system.

  3. Client data is adequately protected.

  4. Client/server standards are in place and working.

These four key components need to be assessed prior to conducting the detailed testing. Experience has shown that if these key components are not in place and working, the correctness and accuracy of ongoing processing may be degraded even though the software works effectively.

A detailed checklist is provided to assist testers in evaluating these four components. The checklists are used most effectively if they are answered after an assessment of the four key areas is completed. The questions are designed to be answered by the testers and not to be asked of the people developing the key component areas.

Task 3: Assess Client Needs

Assessing client needs in a client/server system is a challenge because of the number of clients. In some organizations, clients will be homogenous, whereas in other organizations, they will be diverse and not in direct communication with one another. The tester’s challenge is that a client/server system that meets the needs of some clients might not meet the needs of all clients. Thus, testers need some assurance that the client/server system incorporates the needs of all clients.

The tester has two major challenges in evaluating the needs of the clients:

  • Can the system do what the client needs?

  • Can the client produce results consistent with other clients and other systems?

The tester has two options in validating whether the client/server system will meet the processing needs of the clients. The first is to examine how the developers of the system documented the clients’ needs. Two distinct development methods might have been used. In the first, the client/server system is developed for the clients: the clients were not the driving force in developing the system; rather, management determined the type of processing the clients needed to perform their jobs effectively and efficiently. In the second, the client/server system is built based on the requests of the clients.

If the clients have specified their needs, it would be beneficial to conduct a review of the documented requirements for accuracy and completeness. The methods for conducting reviews were described in Chapter 9. The review process should be conducted by clients serving on the review committee.

If the system was developed for the clients, then the testers might want to validate that the system will meet the true needs of the clients. In many client/server systems, the clients are clerical in job function but most likely knowledgeable in what is needed to meet the needs of their customers/users. Visiting a representative number of clients should prove beneficial to testers in determining whether or not the system will, in fact, meet the true needs of the clients.

Ensuring that the deliverables produced by the clients will be consistent with deliverables from other clients and other systems is a more complex task. However, it is an extremely important task. For example, in one organization, one or more clients produced accounting data that was inconsistent with the data produced by the accounting department. This was because the client did not understand the cut-offs used by the accounting department. The cut-off might mean that an order placed in November but completed in December should not be considered in November sales.

The following are some of the client/server characteristics that testers should evaluate to determine whether they meet the client needs:

  • Data formats. The format in which the client receives data must be usable by the client. This is particularly important when the client will be using other software systems; testers should ensure that the data formats are usable by those other systems as well.

  • Completeness of data. Clients may need additional data to correctly perform the desired processing. For example, in the accounting cutoff discussion above, it is important that the client know which accounting period the data belongs in. In addition, there may be data needed by the user that the client/server system does not provide.

  • Understandable documentation. The client needs to have documentation, both written and onscreen, that is readily understandable. For example, abbreviations should be clearly defined in the accompanying written documentation.

  • Written for the competency of the client’s users. Many software developers develop software for people at higher competency levels than the typical users of their systems. This can occur if the system is complex to use, or assumes knowledge and background information not typical of the clients that will use the system.

  • Easy to use. Surveys have shown that only a small portion of most software systems is used, because of the difficulty of learning how to use all components of the system. If a processing component is not easy to use, there is a high probability that the client will not use that part of the system correctly.

  • Sufficient help routines. Periodically, clients will be involved in a processing situation for which they do not know how to respond. If the client/server system has “help” routines available, the probability that the client can work through those difficult situations is increased.

Check Procedures

Work Paper 15-4 is a quality control checklist for this client/server test process. It is designed so that Yes responses indicate good test practices, and No responses warrant additional investigation. A Comments column is provided to explain No responses and to record results of investigation. The N/A column is used when the checklist item is not applicable to the test situation.
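The scoring convention above (Yes indicates good practice; No warrants investigation, with a comment explaining it; N/A is skipped) can be sketched as follows. The abbreviated item texts and the sample responses are hypothetical illustrations, not the actual work paper entries.

```python
# Abbreviated, hypothetical versions of the checklist items.
CHECKLIST = [
    "Does the test team understand client/server technology?",
    "Has the team acquired knowledge of the system under test?",
    "Has organizational readiness been evaluated?",
]

def items_to_investigate(responses):
    """Return (item, comment) pairs for every No response.
    Yes responses indicate good practice; N/A items are skipped."""
    flagged = []
    for item, (answer, comment) in zip(CHECKLIST, responses):
        if answer.lower() == "no":
            flagged.append((item, comment))
    return flagged

# Hypothetical example responses, one (answer, comment) pair per item.
responses = [
    ("Yes", ""),
    ("No", "Team has not yet reviewed the system documentation."),
    ("N/A", ""),
]
for item, comment in items_to_investigate(responses):
    print(f"Investigate: {item} -- {comment}")
```

Only the second item is flagged here, along with the comment recorded to explain the No response.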

Work Paper 15-4. Client/Server Systems Quality Control Checklist

| # | ITEM | YES | NO | N/A | COMMENTS |
|---|------|-----|----|-----|----------|
| 1 | Does the test team, in total, have members who understand client/server technology? | | | | |
| 2 | Have the test team members acquired knowledge of the client/server system to be tested? | | | | |
| 3 | Has the readiness of the organization installing client/server technology been evaluated? | | | | |
| 4 | If the organization is not deemed ready to install client/server technology, have the appropriate steps been taken to achieve a readiness status prior to installing the client/server system? | | | | |
| 5 | Has an adequate plan been developed and implemented to ensure proper installation of client technology? | | | | |
| 6 | Are the communication lines adequate to enable efficient client/server processing? | | | | |
| 7 | Has the server component of the system been developed adequately so that it can support client processing? | | | | |
| 8 | Are security procedures adequate to protect client hardware and software? | | | | |
| 9 | Are security procedures adequate to prevent processing compromise by employees, external personnel, and acts of nature? | | | | |
| 10 | Are procedures in place to adequately protect client data? | | | | |
| 11 | Are procedures in place to ensure that clients can access only data for which they have been authorized? | | | | |
| 12 | Are standards in place for managing client/server systems? | | | | |
| 13 | Does management support and enforce those standards? | | | | |

Output

The only output from this test process is the test report indicating what works and what does not work. The report should also contain the test team’s recommendations for improvements, where appropriate.

Guidelines

The testing described in this chapter is best performed in two phases. The first phase—Tasks 1, 2, and 3—is best executed during the development of the client/server system. Task 4 can then be used after the client/server system has been developed and is ready for operational testing.

Summary

This chapter provided a process for testing client/server systems. The materials contained in this chapter are designed to supplement the seven-step process described in Part Three of this book. Readiness assessment and key component assessment (Tasks 1 and 2) supplement the seven-step process (specifically Step 2, test planning). The next chapter discusses a specialized test process for systems developed using the rapid application development method.

 
