Chapter 22. Testing Web-Based Systems

This chapter focuses on the unique characteristics of web-based testing. Testing web-based systems can use the same seven-step process described in Part Three of this book; this chapter concentrates on determining whether web-based risks should be included in the test plan, which types of web-based testing should be used, and how to select the appropriate web-based test tools for the test execution phase.

Web-based systems are those systems that use the Internet, intranets, and extranets. The Internet is a worldwide collection of interconnected networks. An intranet is a private network that uses web-based applications, but only within an organization. An extranet is a private network that gives customers and suppliers external access to an organization's web-based applications.

Overview

Web-based architecture is an extension of client/server architecture. The following paragraphs describe the difference between client/server architecture and web-based architecture.

In a client/server architecture, as discussed in Chapter 15, the application software resides on the client workstations, and the application server handles processing requests from those clients. Back-end processing (typically on a mainframe or super-minicomputer) handles work such as batch transactions, which are accumulated and processed together at one time on a regular basis. The important distinction to note is that the application software resides on the client workstation.

For web-based systems, the browsers reside on client workstations. These client workstations are networked to a web server, either through a remote connection or through a network such as a local area network (LAN) or wide area network (WAN).

As the web server receives and processes requests from the client workstation, requests may be sent to the application server to perform actions such as data queries, electronic commerce transactions, and so forth.

The back-end processing works in the background to perform batch processing and handle high-volume transactions. The back-end processing can also interface with transactions to other systems in the organization. For example, when an online banking transaction is processed over the Internet, the transaction is eventually updated to the customer’s account and shown on a statement in a back-end process.

Concerns

Testers should have the following concerns when conducting web-based testing:

  • Browser compatibility. Testers should validate consistent application behavior and appearance across a variety of browser types and configurations.

  • Functional correctness. Testers should validate that the application functions correctly. This includes validating links, calculations, displays of information, and navigation.

  • Integration. Testers should validate the integration between browsers and servers, applications and data, and hardware and software.

  • Usability. Testers should validate the overall usability of a web page or a web application, including appearance, clarity, and navigation.

  • Security. Testers should validate the adequacy and correctness of security controls, including access control and authorizations.

  • Performance. Testers should validate the performance of the web application under load.

  • Verification of code. Testers should validate that the code used in building the web application (HTML, Java, and so on) has been used in a correct manner. For example, no nonstandard coding practices should be used that would cause an application to function incorrectly in some environments.

Workbench

Figure 22-1 illustrates the workbench for web-based testing. The input to the workbench is the hardware and software that will be incorporated in the web-based system to be tested. The first three tasks of the workbench are primarily involved in web-based test planning. The fourth task is traditional software testing. The output from the workbench is to report what works and what does not work, as well as any concerns over the use of web technology.

Figure 22-1. Workbench for web-based testing.

Input

The input to this test process is the description of the web-based technology used in the system being tested. The following list shows how web-based systems differ from other technologies. The description of the web-based system under test should address these differences:

  • Uncontrolled user interfaces (browsers). Because of the variety of web browsers available, a web page must be functional on those browsers that you expect to be used in accessing your web applications. Furthermore, as new releases of browsers emerge, your web applications will need to keep up with compatibility issues.

  • Complex distributed systems. In addition to being complex and distributed, web-based applications are also remotely accessed, which adds even more concerns to the testing effort. While some applications may be less complex than others, it is safe to say that the trend in web applications is to become more complex rather than less.

  • Security issues. Protection is needed from unauthorized access that can corrupt applications and/or data. Another security risk is that of access to confidential information.

  • Multiple layers in architecture. These layers of architecture include application servers, web servers, back-end processing, data warehouses, and secure servers for electronic commerce.

  • New terminology and skill sets. Just as in making the transition to client/server, new skills are needed to develop, test, and use web-based technology effectively.

  • Object-oriented. Object-oriented languages such as Java are the mainstay of web development.

Do Procedures

This section discusses the four tasks testers should perform when testing a web-based system.

Task 1: Select Web-Based Risks to Include in the Test Plan

Risks are important to understand because they reveal what to test. Each risk points to an entire area of potential tests. In addition, the degree of testing should be based on risk. The risks are briefly listed here, followed by a more detailed description of the concerns associated with each risk:

  • Security. As we have already seen, one of the major risks of Internet applications is security. It is very important to validate that the application and data are protected from outside intrusion or unauthorized access.

  • Performance. An Internet application with poor performance will be judged hard to use. Web sites that are slow in response will not retain the visitors they attract and will be frustrating to the people who try to use them.

  • Correctness. Obviously, correctness is a very important area of risk. It is essential that the functionality and information obtained from web-based applications are correct.

  • Compatibility (configuration). A web-based application must be able to work correctly on a wide variety of system configurations including browsers, operating systems, and hardware systems. All of these are out of the control of the developer of the application.

  • Reliability. An Internet application must have a high level of availability and the information provided from the application must be consistent and reliable to the user.

  • Data integrity. The data entered into an Internet application must be validated to ensure its correctness. In addition, measures must be taken to ensure the data stays correct after it is entered into the application.

  • Usability. The application must be easy to use. This includes things like navigation, clarity, and understandability of the information provided by the application.

  • Recoverability. In the event of an outage, the system must be recoverable.

    This includes recovering lost transactions, recovering from loss of communications, and ensuring that proper backups are made as a part of regular systems maintenance.

Security Concerns

The following are some of the detailed security risks that need to be addressed in an Internet application test plan:

  • External intrusion. Perhaps the most obvious security concern is that of protecting the system from external intrusion. This includes people trying to gain access to sensitive information and people trying to intentionally sabotage information or applications.

  • Protection of secured transactions. Another major area of concern is that of protecting transactions over the Internet. This is especially true in dealing with electronic commerce transactions. Many consumers are reluctant to give credit card information over the Internet for fear that information will be intercepted and used for fraudulent purposes.

  • Viruses. The Internet has become a vehicle for propagating tens of thousands of new viruses. These viruses are carried in files downloaded from web sites or received through e-mail.

  • Access control. Access control means that only authorized users have security access to a particular application or portion of an application. This access is typically granted with a user ID and password.

  • Authorization levels. Authorization levels refer to the ability of the application to restrict certain transactions only to those users who have a certain level of authorization.
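To illustrate how access control translates into a concrete test, the following is a minimal sketch in Java, using JUnit 4 and the standard HttpURLConnection class, that checks that a protected page refuses a request carrying no credentials. The URL and the expected status code are assumptions made for the example; some applications redirect anonymous users to a login page instead of returning an error.

    import static org.junit.Assert.assertEquals;

    import java.net.HttpURLConnection;
    import java.net.URL;
    import org.junit.Test;

    // Illustrative security test: a protected page should refuse a request
    // that carries no credentials. The URL and expected status are assumptions.
    public class AccessControlTest {

        @Test
        public void protectedPageRejectsAnonymousUsers() throws Exception {
            URL url = new URL("https://example.com/account/statement");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("GET");

            int status = conn.getResponseCode();
            conn.disconnect();

            // Without a user ID and password the server should deny access
            // (401 here; some sites redirect to a login page instead).
            assertEquals(HttpURLConnection.HTTP_UNAUTHORIZED, status);
        }
    }

A similar test, run with a low-privilege account against an administrative transaction, can probe authorization levels.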

Performance Concerns

System performance can make or break an Internet application. Several types of performance testing can be performed to validate an application’s performance levels. Performance testing is a very precise kind of testing and requires the use of automated tools for testing to be accomplished with any level of accuracy and efficiency. Unfortunately, manual approaches to performance testing fall short of the accuracy needed to correctly gauge an application’s performance and may lead to a false level of confidence in the test.

Typically, the most common kind of performance testing for Internet applications is load testing. Load testing seeks to determine how the application performs under expected and greater-than-expected levels of activity. Application load can be assessed in a variety of ways:

  • Concurrency. Concurrency testing seeks to validate the performance of an application with a given number of concurrent interactive users.

  • Stress. Stress testing seeks to validate the performance of an application when certain aspects of the application are stretched to their maximum limits. This can include maximum number of users, and can also include maximizing table values and data values.

  • Throughput. Throughput testing seeks to validate the number of transactions to be processed by an application during a given period of time. For example, one type of throughput test might be to attempt to process 100,000 transactions in one hour.
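As a concrete illustration of the throughput example above, the following Java sketch submits a fixed number of transactions from a pool of concurrent threads and reports how many complete per second. The target URL, user count, and transaction count are assumptions; in practice this kind of measurement is normally done with a dedicated load-testing tool rather than hand-written code.

    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicInteger;

    // Minimal throughput-test sketch: fire a fixed number of requests from a
    // pool of concurrent "users" and report how many complete per second.
    public class ThroughputTest {

        public static void main(String[] args) throws Exception {
            final String target = "https://example.com/order";   // assumed endpoint
            final int concurrentUsers = 50;
            final int totalTransactions = 10_000;
            final AtomicInteger successes = new AtomicInteger();

            ExecutorService pool = Executors.newFixedThreadPool(concurrentUsers);
            long start = System.currentTimeMillis();

            for (int i = 0; i < totalTransactions; i++) {
                pool.submit(() -> {
                    try {
                        HttpURLConnection conn =
                                (HttpURLConnection) new URL(target).openConnection();
                        if (conn.getResponseCode() == HttpURLConnection.HTTP_OK) {
                            successes.incrementAndGet();
                        }
                        conn.disconnect();
                    } catch (Exception e) {
                        // A failed transaction simply does not count toward throughput.
                    }
                });
            }

            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.HOURS);

            double seconds = (System.currentTimeMillis() - start) / 1000.0;
            System.out.printf("%d of %d transactions completed in %.1f s (%.1f tx/s)%n",
                    successes.get(), totalTransactions, seconds, successes.get() / seconds);
        }
    }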

Correctness Concerns

Of course, one of the most important areas of concern is that the application functions correctly. This can include not only the functionality of buttons and “behind the scenes” instructions but also calculations and navigation of the application.

  • Functionality. Functional correctness means that the application performs its intended tasks as defined by a stated set of specifications. The specifications of an application are the benchmark of what the application should do. Functional correctness is determined by performing a functional test. A functional test is performed in a cause-effect manner. In other words, if a particular action is taken, a particular result should be seen.

  • Calculations. Many web-based applications include calculations. These calculations must be tested to ensure correctness and to find defects. (A sample cause-effect calculation test appears after this list.)

  • Navigation. Navigation correctness can include testing links, buttons, and general navigation through a web site or web-based application.
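The following is a minimal cause-effect test sketch in Java using JUnit 4. LoanCalculator is a hypothetical application class used only for illustration, and the expected value is taken from an assumed specification, which is the benchmark for functional correctness.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    // Cause-effect functional test sketch. LoanCalculator is a hypothetical
    // server-side class; the expected value comes from the specification.
    public class LoanCalculatorTest {

        @Test
        public void monthlyPaymentMatchesSpecification() {
            // Cause: a $10,000 loan at 6 percent annual interest over 12 months.
            LoanCalculator calculator = new LoanCalculator();
            double payment = calculator.monthlyPayment(10_000.00, 0.06, 12);

            // Effect: the specification says the payment should be $860.66,
            // so any other result is a defect.
            assertEquals(860.66, payment, 0.01);
        }
    }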

Compatibility Concerns

Compatibility is the capability of the application to perform correctly in a variety of expected environments. Two of the major variables that affect web-based applications are operating systems and browsers.

Currently, operating systems (or platforms) and how they support the browser of your choice will affect the appearance and functionality of a web application. This requires that you test your web-based applications as accessed on a variety of common platforms and browsers. You should be able to define the most commonly used platforms by reviewing the access statistics of your web site.

Browser Configuration

Each browser has configuration options that affect how it displays information. These options vary from browser to browser and are too diverse to address in this text. The most reasonable testing strategy is to define optimal configurations on the most standard kinds of browsers and test based on those configurations.

Some of the main things to consider from a hardware compatibility standpoint are the following:

  • Monitors, video cards, and video RAM. If you have a web site that requires a high standard of video capability, some users will not be able to view your site, or will not have a positive experience at your site.

  • Audio, video, and multimedia support. Once again, you need to verify that a web application is designed to provide a level of multimedia support that a typical end-user will need to be able to access your site. If software plug-ins are required, you should provide links on your page to facilitate the user in downloading the plug-in.

  • Memory (RAM) and hard drive space. RAM is very important for increasing the performance of a browser on a particular platform. Browsers also make heavy use of caching, which is how a browser stores graphics and other information on a user’s hard drive. This helps speed the display of web pages the next time the user visits a web site.

  • Bandwidth access. Many corporate users have high-speed Internet access based on T-1 or T-3 networks or ISDN lines, but other visitors may connect over much slower dial-up links. Pages heavy with graphics or multimedia should remain usable at the connection speeds you expect your users to have.

Browser differences can make a web application appear differently to different people. These differences may appear in any of the following areas (this is not intended to be an exhaustive list; these are merely the more common areas of browser differences):

  • Print handling. To make printing faster and easier, some pages add a link or button to print a browser-friendly version of the page being viewed.

  • Reload. Some browser configurations will not automatically display updated pages if a version of the page still exists in the cache. Some pages indicate if the user should reload the page.

  • Navigation. Browsers vary in the ease of navigation, especially when it comes to visiting pages previously visited during a session. A web application developer may need to add navigational aids to the web pages to facilitate ease of navigation.

  • Graphics filters. Browsers may handle images differently, depending on the graphic filters supported by the browser. In fact, some browsers may not show an image at all. By standardizing on JPG and GIF images you should be able to eliminate this concern.

  • Caching. How the cache is configured (size, and so on) affects how quickly a browser can display information.

  • Dynamic page generation. This includes how a user receives information from pages that change based on input. Examples of dynamic page generation include:

    • Shopping cart applications

    • Data search applications

    • Calculation forms

  • File downloads. Movement of data from remote data storage for user processing.

  • E-mail functions. Because e-mail activities can consume excessive processing time, guidelines should be developed for how the application uses them.

Each browser has its own interface and functionality for e-mail. Many people use a separate e-mail application outside the browser, but for those who rely on the browser's built-in e-mail handling, this is another compatibility concern.

Reliability Concerns

Because of the continuous uptime requirements for most Internet applications, reliability is a key concern. However, reliability involves more than system availability; it can also be expressed in terms of the reliability of the information obtained from the application:

  • Consistently correct results

  • Server and system availability

Data Integrity Concerns

Not only must the data be validated when it is entered into the web application, but it must also be safeguarded to ensure the data stays correct:

  • Ensuring only correct data is accepted. This can be achieved by validating the data at the page level when it is entered by a user. (A small validation sketch follows this list.)

  • Ensuring data stays in a correct state. This can be achieved by procedures to back up data and ensure that controlled methods are used to update data.
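A minimal sketch of such a validation rule, written here as server-side Java (the same rule could also be applied in the page itself), is shown below. The field (a payment amount), the upper limit, and the class name are assumptions made for the example.

    import java.math.BigDecimal;

    // Illustrative input validation: accept a payment amount only if it is
    // numeric, positive, within an assumed limit, and expressed in whole cents.
    public final class PaymentValidator {

        private static final BigDecimal MAX_PAYMENT = new BigDecimal("10000.00");

        public static boolean isValidAmount(String input) {
            if (input == null || input.trim().isEmpty()) {
                return false;                        // required field
            }
            try {
                BigDecimal amount = new BigDecimal(input.trim());
                return amount.signum() > 0                     // must be positive
                        && amount.compareTo(MAX_PAYMENT) <= 0  // within limit
                        && amount.scale() <= 2;                // whole cents only
            } catch (NumberFormatException e) {
                return false;                        // not a number at all
            }
        }
    }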

Usability Concerns

If users or customers find an Internet application hard to use, they will likely go to a competitor's site. Usability can be validated through testing and usually involves the following:

  • Ensuring the application is easy to use and understand

  • Ensuring that users know how to interpret and use the information delivered from the application

  • Ensuring that navigation is clear and correct

Recoverability Concerns

Internet applications are more prone to outages than systems that are more centralized or located on reliable, controlled networks. The remote accessibility of Internet applications makes the following recoverability concerns important:

  • Lost connections

    • Timeouts

    • Dropped lines

  • Client system crashes

  • Server system crashes or other application problems

Work Paper 22-1 is designed to determine which web-based risks need to be addressed in the test plan, and how those risks will be included in the test plan. The use of this work paper should be associated with a “brainstorming session” by the web-based test team. The work paper should be completed once the web-based test team has reached consensus regarding inclusion of risks in the test plan.

Work Paper 22-1. Web-Based Risks to Include in the Test Plan

Field Requirements

  • Web-based Risks. This column lists the eight web-based risks described in this chapter. Each description implies that the risk is a "lack of" that attribute.

  • Include in Test. The web-based test team should determine whether any or all of the eight identified web-based risks need to be addressed in the test plan. A check in the Yes column indicates that the risk should be included in the plan; a check in the No column indicates that it is not needed.

  • How Risk Will Be Included in the Web-Based Test Plan. This column is designed to be used in two ways. If a risk is not to be included in the test plan, the justification for excluding it can be recorded here. Otherwise, the test team records its preliminary thoughts on how the risk will be addressed: the types of tests, the types of tools, and/or the approach to be used in testing.

WEB-BASED RISKS (LACK OF)        INCLUDE IN TEST (YES/NO)    HOW RISK WILL BE INCLUDED IN WEB-BASED TEST PLAN

Security
Performance
Correctness
Compatibility (Configuration)
Reliability
Data Integrity
Usability
Recoverability

Task 2: Select Web-Based Tests

Now that you have selected the risks to be addressed in the test plan, you must examine the types and phases of testing needed to validate the web-based application against them.

Unit or Component

This includes testing at the object, component, page, or applet level. Unit testing is the lowest level of testing in terms of detail. During unit testing, the structure of languages, such as HTML and Java, can be verified. Edits and calculations can also be tested at the unit level.
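For example, a unit-level test of a single edit might exercise the illustrative PaymentValidator sketched earlier in this chapter. The inputs below are assumptions; the point is that each edit and calculation can be verified in isolation, using JUnit 4, before integration begins.

    import static org.junit.Assert.assertFalse;
    import static org.junit.Assert.assertTrue;
    import org.junit.Test;

    // Unit-level tests of one edit, using the illustrative PaymentValidator
    // from the data-integrity discussion earlier in this chapter.
    public class PaymentValidatorTest {

        @Test
        public void acceptsWellFormedAmounts() {
            assertTrue(PaymentValidator.isValidAmount("250.00"));
        }

        @Test
        public void rejectsBadAmounts() {
            assertFalse(PaymentValidator.isValidAmount(""));         // missing
            assertFalse(PaymentValidator.isValidAmount("-5.00"));    // negative
            assertFalse(PaymentValidator.isValidAmount("abc"));      // non-numeric
            assertFalse(PaymentValidator.isValidAmount("99999.99")); // over limit
        }
    }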

Integration

Integration is the passing of data and/or control between units or components, which includes testing navigation (i.e., the paths the test data will follow). In web-based applications, this includes testing links, data exchanges, and flow of control in an application.
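The following rough sketch in Java shows the flavor of a link-level integration check: it fetches one page, extracts absolute links with a simple regular expression, and verifies that each target responds. The page URL is an assumption, and a real site validation tool would be far more thorough (relative links, frames, forms, and so on).

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Rough link-check sketch: read one page, find absolute links, and report
    // the HTTP status returned by each target.
    public class LinkCheck {

        private static final Pattern HREF =
                Pattern.compile("href=\"(https?://[^\"]+)\"");

        public static void main(String[] args) throws Exception {
            URL page = new URL("https://example.com/index.html");   // assumed page
            StringBuilder html = new StringBuilder();
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(page.openStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    html.append(line);
                }
            }

            Matcher m = HREF.matcher(html);
            while (m.find()) {
                String link = m.group(1);
                HttpURLConnection conn =
                        (HttpURLConnection) new URL(link).openConnection();
                conn.setRequestMethod("HEAD");       // headers only, no body
                System.out.println(conn.getResponseCode() + "  " + link);
                conn.disconnect();
            }
        }
    }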

System

System testing examines the web application as a whole and with other systems. The classic definition of system testing is that it validates that a computing system functions according to written requirements and specifications. This is also true in web-based applications. The differences apply in how the system is defined. System testing typically includes hardware, software, data, procedures, and people.

In corporate web-based applications, a system might interface with Internet web pages, data warehouses, back-end processing systems, and reporting systems.

User Acceptance

This includes testing that the web application supports business needs and processes. The main idea in user acceptance testing (or business process validation) is to ensure that the end product supports the users’ needs. For business applications, this means testing that the system allows the user to conduct business correctly and efficiently. For personal applications, this means that users are able to get the information or service they need from a web site efficiently.

In a corporate web page, the end-user testers may be from end-user groups, management, or an independent test team that takes the role of end users. In public web applications, the end-user testers may be beta testers, who receive a prototype or early release of the new web application, or independent testers who take the role of public web users.

Performance

This includes testing that the system will perform as specified at predetermined levels, including wait times, static processes, dynamic processes, and transaction processes. Performance is also tested at the client/browser and server levels.

Load/Stress

This type of testing checks to see that the server performs as specified at peak concurrent loads or transaction throughput. It includes stressing servers, networks, and databases.

Regression

Regression testing checks that unchanged parts of the application work correctly after a change has been made. Many people mistakenly believe that regression testing means testing everything you ever tested in an application every time you perform a test. However, depending upon the relative risk of the application you are testing, regression testing may not need to be that intense. The main idea is to test a set of specified critical test cases each time you perform the test. Regression testing is an ideal candidate for test automation because of its repetitive nature.
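One way to keep a fixed set of critical test cases repeatable is to collect them into a suite. The sketch below uses the JUnit 4 suite runner with the illustrative test classes from earlier in this chapter; your own suite would list whichever tests cover the application's highest-risk functions.

    import org.junit.runner.RunWith;
    import org.junit.runners.Suite;

    // Regression suite sketch: a fixed set of critical test cases run after
    // every change. The member classes are the illustrative tests from this
    // chapter and stand in for the application's real critical tests.
    @RunWith(Suite.class)
    @Suite.SuiteClasses({
            LoanCalculatorTest.class,
            PaymentValidatorTest.class,
            AccessControlTest.class
    })
    public class CriticalRegressionSuite {
        // No body needed; the annotations define which tests run.
    }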

Usability

This type of testing assesses the ease of use of an application. Usability testing may be accomplished in a variety of ways, including direct observation of people using web applications, usability surveys, and beta tests. The main objective of usability testing is to ensure that an application is easy to understand and navigate.

Compatibility

Compatibility testing ensures that the application functions correctly on multiple browsers and system configurations. Compatibility testing may be performed in a test lab that contains a variety of platforms, or may be performed by beta testers. The downside with beta testing is the increased risk of bad publicity, the lack of control, and the lack of good data coming back from the beta testers.
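As one possible automation approach (not prescribed by this chapter), the following Java sketch uses Selenium WebDriver to run the same simple check against two browsers. The URL and the expected page title are assumptions, and a real compatibility effort would cover many more browser, version, and platform combinations.

    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;

    // Compatibility-check sketch: run the same page check in two browsers.
    // Requires the matching browser driver binaries to be installed.
    public class BrowserCompatibilityCheck {

        public static void main(String[] args) {
            WebDriver[] browsers = { new ChromeDriver(), new FirefoxDriver() };

            for (WebDriver driver : browsers) {
                try {
                    driver.get("https://example.com/login");        // assumed page
                    boolean titleOk = driver.getTitle().contains("Login");
                    System.out.println(driver.getClass().getSimpleName()
                            + " title check: " + (titleOk ? "pass" : "FAIL"));
                } finally {
                    driver.quit();                                  // always close
                }
            }
        }
    }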

Work Paper 22-2 is designed to assist testers in selecting testing types. The type of testing to be performed should be focused on the web-based risks addressed by the test plan. The test team should determine how the various types of web-based testing selected should be used to assess the various risks. This work paper, like Work Paper 22-1, should be developed through brainstorming and consensus by the web-based test team.

Work Paper 22-2. Types of Web-Based Testing to Perform

Field Requirements

  • Types of Web-based Testing. This column contains the more common types of web-based testing. The names may need to be modified for your culture, and additional types of testing performed by your test group may need to be added.

  • Perform. The web-based test team uses this field to indicate which types of testing will be used during web-based testing. A check mark in the Yes column indicates that the type of testing will be performed; a check mark in the No column indicates that it will not.

  • Risk Focus. The web-based test team should indicate the risk(s) that each test type will be used to address. The risks to be incorporated into the test plan were identified on Work Paper 22-1. This column can also be used to record the justification for not using a particular type of web-based testing, if appropriate.

  • How to Be Used. The web-based test team should write a brief narrative description of how it plans to use each test type to address the risks that will be incorporated into the test plan.

TYPES OF WEB-BASED TESTING       PERFORM (YES/NO)    RISK FOCUS    HOW TO BE USED

Unit/Component
Integration
System
User Acceptance
Performance
Load/Stress
Regression
Usability
Compatibility

Task 3: Select Web-Based Test Tools

Effective web-based testing necessitates the use of web-based testing tools. A brief description of categories of the more common web-based test tools follows:

  • HTML tools. Although many web development packages include an HTML checker, standalone tools can verify your HTML if your package lacks such a feature. (A crude example of this kind of check appears after this list.)

  • Site validation tools. Site validation tools check your web applications to identify inconsistencies and errors, such as moved or orphaned pages and broken links.

  • Load/stress testing tools. Load/stress tools evaluate web-based systems when subjected to large volumes of data or transactions.

  • Test case generators. Test case generators create transactions for use in testing. These tools can tell you what to test, as well as create test cases that can be used in other test tools.
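As a crude illustration of the structural checking an HTML tool automates, the following Java sketch verifies that a handful of container tags in a local file are properly nested and closed. Real HTML tools check far more (attributes, standards conformance, accessibility); the file name and the tag list here are assumptions.

    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Crude stand-in for an HTML checker: confirm that common container tags
    // are properly nested and closed in a local HTML file.
    public class SimpleHtmlCheck {

        private static final Pattern TAG =
                Pattern.compile("</?(table|tr|td|div|form|ul|ol|li)\\b[^>]*>",
                        Pattern.CASE_INSENSITIVE);

        public static void main(String[] args) throws Exception {
            String html = new String(Files.readAllBytes(Paths.get("index.html")));
            Deque<String> open = new ArrayDeque<>();

            Matcher m = TAG.matcher(html);
            while (m.find()) {
                String tag = m.group(1).toLowerCase();
                if (m.group().startsWith("</")) {
                    if (open.isEmpty() || !open.pop().equals(tag)) {
                        System.out.println("Mismatched closing tag: </" + tag + ">");
                        return;
                    }
                } else {
                    open.push(tag);
                }
            }
            System.out.println(open.isEmpty()
                    ? "Checked tags are balanced."
                    : "Unclosed tags remain: " + open);
        }
    }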

Work Paper 22-3 is designed to document the web-based test tools selected by the test team, as well as how those tools will be used. The work paper should contain all of the specific test tools available to the web-based testing team.

Work Paper 22-3. Select Web-Based Test Tools

Field Requirements

  • Web-based Test Tool. All of the test tools available to your web-based test team should be listed in this column. The column contains generic types of test tools; replace them with the specific test tools you have.

  • Perform. The web-based test team should identify which web-based test tools will be used during testing. A check in the Yes column indicates that the tool is to be used; a check in the No column indicates that it is not.

  • Test Type Focus. The test team should indicate in this column which types of testing will be performed using each test tool. All of the test types with a Yes check mark on Work Paper 22-2 should be addressed in this column. Note that a single test tool may be used for multiple test types.

  • How to Be Used. The web-based test team should indicate in this column how it plans to use a specific test tool during web-based testing. Testers should be as specific as possible in completing this column.

WEB-BASED TEST TOOLS         PERFORM (YES/NO)    TEST TYPE FOCUS    HOW TO BE USED

HTML test tool
Site validation test tool
Java test tool
Load/stress test tool
Test case generator
Other (list tools)

Task 4: Test Web-Based Systems

The tests to be performed for web-based testing are the types of testing described in the seven-step testing process presented in Part Three of this book. The seven-step process may have to be modified based on the risks associated with web-based testing.

Check Procedures

The web-based test team should use Work Paper 22-4 to verify that the web-based test planning has been conducted effectively. The Comments column is provided to clarify No responses. The N/A column is provided for items that are not applicable to this specific web-based test plan.

Work Paper 22-4. Web-Based Testing Quality Control Checklist

(Columns: Yes / No / N/A / Comments)

 1. Has a web-based test team been organized?
 2. Does the web-based test team understand the differences between client/server and web-based technology?
 3. Does the web-based test team understand web terminology?
 4. Does the web-based test team understand the risks associated with web technology?
 5. Has the web-based test team reached consensus on which risks are applicable to this specific web-based system?
 6. Has a determination been made as to how the identified risks will be incorporated into the test plan?
 7. Is there a consensus that the web-based risks not included in the test plan are of minimal concern to this web-based system?
 8. Has the web-based test team identified the types of testing required for this system?
 9. If so, have those testing types been correlated to the web-based risks?
10. Has the web-based test team reached consensus on how the web-based types of testing will be used for test purposes?
11. Is there a portfolio of web-based test tools available in the organization?
12. Are the available test tools adequate for the web-based system being tested?
13. Is each type of testing that will be included in the test plan supported by a specific web-based test tool?
14. Has the test team reached consensus on how the test tools will be used during testing?
15. Have all of the web-based testing decisions made by the test team been incorporated into the test plan?

Output

The only output from this test process is a report on the web-based system. At a minimum, this report should contain the following:

  • A brief description of the web-based system

  • The risks addressed and not addressed by the web-based test team

  • The types of testing performed, and types of testing not performed

  • The tools used

  • The web-based functionality and structure tested that performed correctly

  • The web-based structure and functionality tested that did not perform correctly

  • The test team’s opinion regarding the adequacy of the web-based system to be placed into a production status

Guidelines

Successful web-based testing necessitates a portfolio of web-based testing tools. It is important that these test tools are used effectively. These are some common critical success factors for buying, integrating, and using test tools:

  • Get senior management support for buying and integrating test tools. Top-down support is critical. Management must understand the need for test tools and the risks of not using test tools.

  • Know your requirements. This will help you avoid costly mistakes. You may not be able to meet all your requirements, but you should be able to find the best fit.

  • Be reasonable in your expectations—start small and grow. Your first project using any kind of tool is your learning project. Expect to make some mistakes. You can hedge your risk by applying the test tool(s) to simple tasks with high payback.

  • Have a strong testing process that includes tools. Until this is in place, test tool usage will be seen as optional and the tool may die because of lack of interest. In addition, people need to know how to define what to test.

  • Don’t cut the training corner. People must understand how to use the test tool. Most people will naturally use about 20 to 25 percent of the tool’s functionality. If training is not provided and the tool is not effective, don’t blame the tool.

Summary

This chapter provides guidelines on how to properly plan for web-based testing. Like other aspects of testing, web-based testing should be risk oriented. The chapter describes the risks, presents the types of testing that can be used to address those risks, and provides guidance on using web-based test tools. The approach for testing web-based systems should be incorporated into a test plan, and that plan should be followed during test execution. This chapter does not address the test execution part of web-based testing; testers should follow the execution components of the seven-step testing process described in Part Three of this book.

 
