Chapter 6. Analyzing and Performance Tuning the Web Tier

We use the term Web tier to refer to the front-end Web server used to satisfy HTTP requests from remote browser clients, such as Internet Explorer. Many different Web servers are available for various platforms. Microsoft’s implementation of a Web server is called Internet Information Server (IIS), and it is essentially a file and application server for Internet and private intranet Web sites or applications. IIS is a very powerful Web server capable of hosting both static and dynamic content. Static content typically consists of images or simple HTML text files that rarely change. Dynamic content is usually generated by an additional component or scripting language that the Web server processes in response to a remote request. To the end user, dynamic content may appear to be static because the server-side scripting executes on the server and only the response is returned to the client. Dynamic content usually provides a far richer user experience than static content and can be highly customized for each request. The types of dynamic applications discussed in this chapter are ASP.NET and traditional ASP Web applications. This chapter refers to the sample storefront IBuySpy ASP.NET Web application coded in the VB.NET language.

Getting Started

This chapter describes a process of identifying application bottlenecks that can occur on the Web tier. Rather than list every possible bottleneck, an impossible task, we will show you how to analyze your Web application. By sharing our experiences and method of profiling ASP.NET Web applications, we hope to help you quickly identify some common Web tier bottlenecks that may cause scalability issues with your Web application. After you have identified a bottleneck in your Web application, it is much easier to research the problem or seek help.

Although we began this chapter with a brief introduction to what a Web server is, we assume you have some knowledge and experience with IIS and Web-based applications. It is beyond the scope of this chapter to go into detail about Web server administration and configuration, but here is a list of resources for in-depth information on each topic.

  • IIS

    Microsoft Internet Information Services 5.0 documentation

  • ASP.NET

    Microsoft ASP.NET Step by Step by G. Andrew Duthie (Microsoft Press, 2002); Web Database Development Step by Step .NET Edition by Jim Buyens (Microsoft Press, 2002); Professional ASP.NET (Wrox Press, 2001).

  • ADO.NET

    Programming Microsoft Visual Basic .NET by Francesco Balena (Microsoft Press, 2002); Web Database Development Step by Step .NET Edition by Jim Buyens (Microsoft Press, 2002); Professional ADO.NET (Wrox Press, 2001).

Understanding Configuration and Performance

Before you begin performance testing, it is very important that you become familiar with several performance-related configuration aspects of your Web application. Configuration aspects such as the method of authentication and other global application settings give you a quick understanding of how your Web application works.

ASP.NET and ASP Web applications, while very different, can coexist on the same Web server because their file extensions are mapped to different DLLs within IIS. One major difference between ASP.NET and ASP applications is how they are configured. ASP.NET Web applications are configured by XML-based text files, whereas traditional ASP Web applications have many configurable parameters located in the metabase and the Registry. Storing configuration information in XML-based files makes it much easier to maintain the data in a readable format and to update it on the fly without restarting the Web server.

ASP.NET File Extensions

When you first look at an ASP.NET Web application like the IBuySpy sample site you will notice many different file extensions. Some of the new file extensions that you should be familiar with are as follows:

  • ASPX

    This extension is used for Web form pages and is very similar to the traditional ASP pages.

  • ASCX

    These files hold Web Forms user controls, which provide one of the ways that ASP.NET enables code reuse.

  • ASMX

    Files with this extension implement XML Web services.

  • VB

    These files are for Visual Basic .NET code-behind modules. When you create a Web application using Visual Basic .NET, you will have a Visual Basic file associated with each Web form. These files allow for a separation of user interface elements and application logic.

  • CS

    This extension is similar to the VB extension except that the code is written in the new C# language. Code-behind modules written in C# have the same name as the Web form, with a CS extension.

  • Global.asax

    This file is used to define application- and session-level variables and to run procedures when the Web application starts up or receives a request from a new user. A minimal sketch appears below.
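
For reference, here is a minimal Global.asax sketch; the variable names are hypothetical, and a real application would typically do more work in these event handlers:

<%@ Application Language="VB" %>
<script runat="server">
    Sub Application_Start(ByVal sender As Object, ByVal e As EventArgs)
        'Runs once, when the first request for the application arrives.
        Application("SiteName") = "IBuySpy"   'hypothetical application-level variable
    End Sub

    Sub Session_Start(ByVal sender As Object, ByVal e As EventArgs)
        'Runs when a new user makes his or her first request.
        Session("Visits") = 0                 'hypothetical session-level variable
    End Sub
</script>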

Authentication in ASP.NET

The three types of authentication available to ASP.NET Web applications are Windows, Passport, and form-based. ASP.NET does not do all the authenticating itself; there are two distinct layers of authentication: the IIS level and the ASP.NET application level. ASP.NET uses the <authentication> tag in the Web.config file to set the mode (more information on this in the next section).

Windows-based Authentication

The first authentication mode is for Microsoft Windows-based machines, where ASP.NET relies on IIS to authenticate incoming requests. This form of authentication is used primarily for intranet applications. The three methods available for this configuration are Basic, Digest, and Integrated Windows authentication.

  • Basic Authentication

    This method works with most browsers, but it sends all passwords in clear text. For Internet sites, this method is tolerated as long as you have enabled SSL encryption, but it is not recommended.

  • Digest Authentication

    This method requires a Windows 2000 domain controller and HTTP 1.1 (so it may not be supported by all browsers). The password is not sent in clear text; a hashed value is sent instead, making it a little more secure. However, the domain controller has to store a clear-text copy of the password so it can validate the hash, so the domain controller must be kept safe from outside attacks.

  • Integrated Windows (NTLM) Authentication

    This method is available only with Internet Explorer and is the most secure because it never sends the username and password over the network. It requires all users to have a Windows NT account on the Web server or the domain controller.

Passport Authentication

The second authentication mode is Passport. Passport is a centralized authentication service provided by Microsoft that allows you to log in to any Passport-enabled site or Web application with a single username and password (that is, single sign-in, or SSI).

Form-based Authentication

The last form of authentication is called form-based authentication, which allows developers to create their own authentication within their Web applications. However, because credentials are sent in clear text, make sure you add an SSL layer to protect the password. You simply create a login page and link it to ASP.NET in the Web.config file, where you can set security restrictions, and then verify the username and password against a database or Windows 2000 Active Directory. A sketch of the relevant Web.config settings follows.
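
As an illustration, a minimal Web.config fragment for form-based authentication might look like the following sketch; the cookie name and login page name are assumptions for this example:

<configuration>
    <system.web>
        <authentication mode="Forms">
            <forms name=".STOREAUTH" loginUrl="login.aspx" timeout="30" />
        </authentication>
        <authorization>
            <!-- Deny anonymous users; they are redirected to login.aspx. -->
            <deny users="?" />
        </authorization>
    </system.web>
</configuration>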

Configuration Files

ASP.NET uses a series of XML-based files to configure the Web application. The highest level configuration file is the machine.config file, which by default is located in [Your system folder]\Microsoft.NET\Framework\[version x.x.x]\CONFIG. This file contains the default settings for all ASP.NET applications on your server.

Note

You must exercise great caution when editing this file because it affects all ASP.NET Web applications on the server.

There is another configuration file, named Web.config, that is specific to each application you create. Every Web application you create using Visual Studio .NET automatically includes this file. Do not worry if you are not using Visual Studio .NET to create your application; if there is no Web.config file, the application simply inherits the default values from the machine.config file. We will take a quick run through some of the values in these files to give you a better understanding of their power.

Now let’s dive into the other tags that you will find within the configuration files. If you want more information about the attributes of each of these elements, refer to your .NET Framework documentation.

Table 6-1. Configuration File Tags

  • <trace>

    This element can help when you are trying to get more information about how your Web application is performing. It enables you to gather information about requests received by the Web server (viewable at http://<servername>/<appname>/trace.axd). Be sure to set its enabled attribute to false when you deploy your Web application.

  • <globalization>

    Specifies how Web requests and local searches are handled; for example, the language in which requests are handled.

  • <httpRuntime>

    Controls parts of the ASP.NET HTTP runtime engine, including attributes for the number of queued requests allowed before the server returns a 503 error, the maximum size of incoming files, and the minimum number of threads kept free for processing new requests.

  • <compilation>

    One of the most extensive elements; it includes settings that determine how your code is compiled, such as the debug attribute, which embeds debug information within the compiled assemblies. The debug attribute should be set to false when you deploy your Web application.

  • <pages>

    Provides ways to configure session state, ViewState, and other page-level settings that enable you to get more out of your Web application.

  • <customErrors>

    Allows you to customize how your Web application responds to errors in terms of what the user sees.

  • <authentication>

    Allows you to choose the authentication mode you want to use.

  • <identity>

    Allows your Web application to use impersonation.

  • <authorization>

    Specifies the accounts that are authorized to access resources.

  • <machineKey>

    Specifies keys for encryption and decryption of cookie data. It cannot be used at the subdirectory level.

  • <securityPolicy>

    Allows the choice of several named security policies.

  • <trust>

    Implements the security policy named in the securityPolicy element.

  • <sessionState>

    Configures the session state HttpModule; mainly, the state management mode to be used.

  • <httpHandlers>

    Maps requests for certain resource types to handler classes. This can be used to limit HTTP access to certain file types.

  • <processModel>

    Controls how the ASP.NET worker process runs and provides features, such as automatic restarts and an allowed memory size, that can help improve performance.

  • <webControls>

    Allows the use of client-side implementations of ASP.NET server controls by specifying the location of script files.

  • <clientTarget>

    Allows you to define aliases for specific browser (user agent) configurations.

  • <browserCaps>

    Allows the application to gather information about the capabilities of the user’s browser.
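
To put several of these tags in context, the following sketch shows a small Web.config that sets a few of the elements from Table 6-1; the values shown are illustrative, not recommendations:

<configuration>
    <system.web>
        <trace enabled="false" />
        <compilation debug="false" />
        <customErrors mode="RemoteOnly" defaultRedirect="error.aspx" />
        <sessionState mode="InProc" timeout="20" />
        <httpRuntime executionTimeout="90" maxRequestLength="4096" />
    </system.web>
</configuration>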

Understanding Your Web Application

Some of the configuration settings listed above can have adverse effects on your Web application or even generate problems when you’re creating test scripts for it. For example, many people find the <customErrors> element in the Web.config file useful because you can set up a custom error page to redirect to when an error occurs. However, when you build a test script in ACT, by default you do not get a visual indication of what is displayed on the page. If an error occurs while you are recording your test script, you could be redirected to the custom error page, which receives a 200 (success) status code according to the IIS log file; the page that actually had the error would show only a 302 (redirect) instead of the true error. So you must be careful and understand your application; otherwise you could waste a lot of time trying to solve the wrong problem.

Note

If you see a large percentage of page views occurring on one page, check the Web.config file to make sure that page is not the custom error page for your ASP.NET Web application.

Profiling a .NET Web Application

There are several tools readily available to help you monitor and identify performance problems that occur on the Web tier. The profiling tasks discussed in this section include analyzing IIS log files, using the new tracing feature in ASP.NET, and viewing performance data with the infamous System Monitor (which you should be very familiar with by now).

IIS Log Files

IIS log files serve many purposes, including analyzing user behavior or traffic patterns, monitoring activity for security exploits, and aiding in troubleshooting or identifying problems with your Web applications. The purpose of the following IIS log file discussion is to first give you a quick overview and then to demonstrate how to quickly identify performance problems at a high level (the page level) on the Web tier. After you have identified the poorly performing pages within your Web application, you can drill deeper to identify the specific code that is causing the problem and fix it. Let us begin by becoming more familiar with the log files generated by IIS in response to client activity.

Log File Formats

There are various logging modules and formats available with IIS. With the exception of the ODBC logging module, which writes to a database, all of the log formats produce ASCII text files. The NCSA Common Log File Format and the Microsoft IIS Log Format are both fixed ASCII formats that are not customizable. We always use the W3C Extended Log File Format for our testing efforts because you can customize it by selecting the fields you want to monitor. From an administrative point of view this is very useful: because you log less data, you conserve disk space and keep your log files more readable without sacrificing functionality. All of the ASCII text-based modules discussed above can be set to create new logs when the file reaches a certain size or age (hourly, daily, weekly, monthly). We will not go into great detail about every available log file format in IIS, but we do discuss the W3C Extended Log File Format further because it is the format we use to identify problems in the Web applications we encounter.

By default the W3C Extended Log File Format uses Greenwich Mean Time (GMT) for the time listed with each request, whereas all of the other formats use local time. Keep in mind that the times listed in the log files are recorded by the server after it processes a request and do not reflect network travel time to the client or client processing time.

Tip

If you are building a test script or debugging your Web application, we recommend selecting every field. Otherwise, select only the relevant fields required to profile your Web application. This will conserve disk space and make parsing or navigating around the log file much quicker and easier.

Below is a sample from one of our log files in the W3C Extended Log File Format. It is worth mentioning that we did not select every available field to be logged, but only the relevant fields for this Web application.

#Software: Microsoft Internet Information Services 5.0
#Version: 1.0
#Date: 2002-05-24 17:25:01
#Fields: date time c-ip cs-method cs-uri-stem cs-uri-query 
sc-status sc-bytes cs-bytes time-taken 
2002-05-24 17:25:01 181.39.207.242 GET /storevbvs/default.aspx - 
200 12893 373 2516

The first four lines of a properly formatted W3C Extended log, which begin with a pound sign (#), contain directives or header information such as the version of the log file format, the date and time the file was created, and field identifiers for the information logged with each entry. The field-identifier prefixes are listed in Table 6-2.

Table 6-2. W3C Extended Log File Field Identifiers

Prefix   Meaning
s-       Server actions
c-       Client actions
cs-      Client-to-server actions
sc-      Server-to-client actions

Table 6-3 contains a complete list of available properties, definitions, and reference information for each field in the W3C Extended Log File Format.

Table 6-3. W3C Extended Log File Format Reference Table

Date (date): The date on which the activity occurred.

Time (time): The time the activity occurred.

Client IP Address (c-ip): The IP address of the client that accessed your server.

User Name (c-username): The name of the authenticated user who accessed your server. This does not include anonymous users, who are represented by a hyphen.

Service Name and Instance Number (s-sitename): The Internet service name and instance number that was serving the request.

Server Name (s-computername): The name of the server on which the log entry was generated.

Server IP (s-ip): The IP address of the server on which the log entry was generated.

Method (cs-method): The action the client was trying to perform (for example, a GET method).

URI Stem (cs-uri-stem): The resource accessed; for example, Default.htm.

URI Query (cs-uri-query): The query, if any, the client was trying to perform.

HTTP Status (sc-status): The status of the action, in HTTP terms.

Win32 Status (sc-win32-status): The status of the action, in terms used by Windows.

Bytes Sent (sc-bytes): The number of bytes sent by the server.

Bytes Received (cs-bytes): The number of bytes received by the server.

Server Port (s-port): The port number the client is connected to.

Time Taken (time-taken): The length of time the action took, in milliseconds.

Protocol Version (cs-protocol): The protocol (HTTP, FTP) version used by the client. For HTTP this will be either HTTP 1.0 or HTTP 1.1.

User Agent (cs(User-Agent)): The browser used on the client.

Cookie (cs(Cookie)): The content of the cookie sent or received, if any.

Referrer (cs(Referer)): The previous site visited by the user, which provided a link to the current site.

Logging is enabled within IIS by default and can be disabled at the site, directory, or file level by right-clicking the element and clearing the Log Visits checkbox in the IIS MMC snap-in Properties dialog box, as shown in Figure 6-1.

Clearing the Log Visits checkbox
Figure 6-1. Clearing the Log Visits checkbox

Disabling logging on certain directories that contain static or rarely changing files is another useful method of reducing your Web server log file size and saving valuable disk space. For example, just by browsing the IBuySpy Web application home page (http://localhost/storevbvs/Default.aspx), you write 16 different entries in the IIS log file for that one request: fourteen images, one style sheet, and the actual Default.aspx page are referenced in the code. Keep in mind that as far as the user is concerned it is only one URL request, even though it results in 16 HTTP requests to the Web server (one for Default.aspx, one for the style sheet, and 14 for the images). You can imagine how large the log file can grow and how much disk space can be consumed by a stress test script that requests multiple pages using several browser connections for an extended period of time.

There are several ways to verify how many items are referenced by a page and the size of each file. One method useful for checking a single page is to first clear your browser cache and then request the page from your browser; you should see all of the file elements referenced by the page from that one request. Add up the total number of file elements and their sizes by viewing the file properties. This is a tedious method when you have many files to investigate. An alternative that makes more sense when you have several files is to use a log parser and view the results in a report format. Several commercial log parsers are available today, and a simple one is easy to write, as sketched below.
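
If a commercial parser is not available, a few lines of code can extract the fields you care about. The following VB.NET console sketch assumes the log uses the fields selected earlier in this chapter, and the log path and threshold shown are assumptions to adjust for your server; it prints each request whose time-taken value exceeds the threshold:

Imports System
Imports System.IO

Module LogScan
    Sub Main()
        'Both of these values are assumptions; adjust them for your server.
        Dim logPath As String = "C:\WINNT\system32\LogFiles\W3SVC1\ex020530.log"
        Dim thresholdMs As Integer = 1000

        Dim reader As New StreamReader(logPath)
        Dim line As String = reader.ReadLine()
        Do While Not line Is Nothing
            'Skip the directive lines, which begin with a pound sign (#).
            If Not line.StartsWith("#") Then
                'Fields: date time c-ip cs-method cs-uri-stem cs-uri-query
                '        sc-status sc-bytes cs-bytes time-taken
                Dim fields() As String = line.Split(" "c)
                If fields.Length >= 10 AndAlso _
                   Integer.Parse(fields(9)) > thresholdMs Then
                    Console.WriteLine("{0} took {1} ms", fields(4), fields(9))
                End If
            End If
            line = reader.ReadLine()
        Loop
        reader.Close()
    End Sub
End Module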

Identifying Problem Pages from a Log File

Now that we have presented some background information on what to look for in the log file, let’s look at a somewhat real-world example of how to use the IIS log file to quickly identify errors occurring on the Web tier. The next four steps demonstrate this example.

  1. The code in the IBuySpy sample site is very efficient, so we must first introduce a problem into the code of the ProductList.aspx page to simulate a page delay. We call this a somewhat real-world example because this code produces similar results on many different machines; if we instead created the demo using poorly implemented string concatenation or looping logic that returns many rows from a database, the performance and ASPX execution time of the ProductList.aspx page would vary with the hardware configuration. On the ProductList.aspx page, comment out line four as shown below.

    <%'@ OutputCache Duration="6000" VaryByParam="CategoryID" %>

    Then insert the code below between the <% and %> on line five of the ProductList.aspx page.

    <%
    '/////////////////////////////////////////////////////////////
    ' TODO: -Comment out line 4 on ProductList.aspx.  This will
    '       -disable the OutputCache so we can introduce a delay.
        System.Threading.Thread.Sleep(7000)       '7 second delay
    '/////////////////////////////////////////////////////////////
    %>
  2. Next, we disabled logging on the IBuySpy.css file, the Images subdirectory, and the ProductImages subdirectory in order to focus on tuning the code, which occurs on the ASPX file type.

    Note

    It is still important to optimize or reduce the size of your images and stylesheets to ensure that this does not become the bottleneck on your Web application.

  3. Verify that you have installed the IBuySpy sample site correctly. Then, within IIS, verify that you have selected W3C Extended Log File Format and selected the following fields: date, time, c-ip, cs-method, cs-uri-stem, cs-uri-query, sc-status, sc-bytes, cs-bytes, time-taken.

  4. Finally, we ran the Browse test script using Microsoft ACT for one iteration. The Browse test script is included on this book’s companion CD and is discussed in more detail in Chapter 3. This is not really considered a stress test but more of an automated walkthrough of our user scenario because we only ran through one iteration with one browser connection.

The results of the script playback from the IIS log file are as follows:

#Software: Microsoft Internet Information Services 5.0
#Version: 1.0
#Date: 2002-05-30 18:36:11
#Fields: date time c-ip cs-method cs-uri-stem cs-uri-query 
sc-status sc-bytes cs-bytes time-taken 
2002-05-30 18:36:11 181.39.207.242 GET 
/storevbvs/Default.aspx - 200 0 346 15
2002-05-30 18:36:18 181.39.207.242 GET 
/StoreVBVS/productslist.aspx CategoryID=20&selection=2 200 0 377 7016
2002-05-30 18:36:18 181.39.207.242 GET 
/storevbvs/Default.aspx test=count 200 0 357 16

Note that there are three requests in the log above. The first thing to do is verify that the requests were successful and that no errors were generated. This is accomplished by looking at the field labeled sc-status, also known as the status code. Table 6-4 is a useful reference for HTTP status codes.

Table 6-4. HTTP Status Codes

Status   Description
2xx      Success.
200      OK: The request has succeeded.
201      Created: The request has been fulfilled and resulted in a new resource being created.
202      Accepted: The request has been accepted for processing, but the processing has not been completed.
203      Non-authoritative Information.
204      No Content: The request was received, but there is no information to send back.
3xx      Redirection.
301      Moved: The data requested has a new location, and the change is permanent.
302      Found: The data requested temporarily has a different URL.
303      See Other: The response to the request can be found at a different URL.
304      Not Modified: The document has not been modified since the date specified in the request.
4xx      The error seems to be in the client.
400      Bad Request: The request has a syntax problem or could not be satisfied.
401      Unauthorized: The client is not authorized to access the data.
402      Payment Required: Indicates a charging scheme is in effect.
403      Forbidden: Access is denied, and authorization will not help.
404      Not Found: The server could not find the given resource.
5xx      The error seems to be in the server.
500      Internal Error: The server could not fulfill the request because of an unexpected condition.
501      Not Implemented: The server does not support the facility requested.
502      Bad Gateway: The server, acting as a gateway, received an invalid response from an upstream server.
503      Service Unavailable: The server is temporarily overloaded or too busy to handle the request.

All three of the log entries are GET requests (indicated by the method field), and they appear to be successful with a 200 status code. Also, very little data was transferred (indicated by the bytes sent and received fields). The ASPX execution time, or time-taken field, for the Default.aspx page was quick, but ProductList.aspx took over seven seconds (7016 milliseconds, to be exact). Voilà, we have identified a problem. The IIS logs are very useful in helping you identify pages that execute slowly, transfer a lot of data, or generate errors in your Web application. Now that we have successfully identified a problem at the page level, we will discuss a new feature available in ASP.NET that will help us trace the problem down to the line of code causing the delay.

Tracing Problems to the Code Level

Tracing is a useful new feature in ASP.NET for debugging or profiling problems that occur at the application, page, and code level of a Web application. You can print statements during code execution to help identify exactly what is happening at a certain point within your code. With traditional ASP pages, debugging or troubleshooting code is accomplished by inserting text or logic as placeholders, using multiple Response.Write statements at different points within the code. To help you fully appreciate the new tracing feature in ASP.NET, we offer a brief discussion of our method of isolating slowly executing code within traditional ASP pages.

Tracing in Traditional ASP Pages

After the files with high execution times are identified from the IIS logs, we typically add several timers throughout the page to pinpoint the slowly executing code. When the page is requested, the timer values are written to a text file on the local file system of the Web server. Finally, you simply open the text file to view the timer information written between each block of code. Below is an example written in VBScript to illustrate this point.

<% Dim t1, t2
   'Timer 1 - start timer for section 1
   t1 = Timer
%>
INSERT CODE BLOCK HERE
<% 'Timer 2 - end timer for section 1
   t2 = Timer

   'The following code can be placed at the end of the ASP file.
   Dim fso, filename, fileref
   'A new file is created each time the page is executed.
   filename = "C:\temp\" & CStr(Timer) & ".txt"
   Set fso = CreateObject("Scripting.FileSystemObject")
   Set fileref = fso.CreateTextFile(filename)

   'Write the timer values to the file. Timer returns the number of
   'seconds (with a fractional part) elapsed since midnight.
   fileref.WriteLine("1," & CStr(t1) & "," & CStr(t2) & "," & CStr(t2 - t1))
   'Close the file.
   fileref.Close
%>

Another method for finding slowly executing code within traditional ASP pages is to first make a note of how long it takes the page to execute without modifying the code. Then place the Response.End method at different places within your code and request the page again noting the execution time. The Response.End method stops the execution of the code so you can compare the time against the time taken from the original request. This method often takes several tries to identify the culprit and might end up generating errors, because you are not executing the code in its entirety.

Tracing in ASP.NET

Tracing in ASP.NET can be performed at either the page or application level. Page-level tracing is implemented by adding Trace="True" to the @ Page directive at the top of an ASPX file. The complete syntax is as follows:

<%@ Page Trace="True" %>

This will append an HTML table to the browser output once the original content has rendered. The HTML table contains detailed information on the request itself: timing information, the server control tree (with rendered ViewState size), headers, cookies, querystring and form parameters, server variables, and of course any custom messages you add. The syntax for adding custom trace information is Trace.Write() or Trace.Warn(); both methods create the same output, but Trace.Warn() writes its output in red text.

Tip

We want to caution you to enable tracing only when you need to debug your Web application. When trace was enabled on the ProductList.aspx page, the server-to-client bytes transferred for this page grew from 13628 to 32695 according to the IIS logs. This is nearly three times the amount of data in the original request, which can easily skew a stress test.

Enabling tracing at the application level is accomplished by adding or modifying the following statements in the Web.config file.

<configuration>
    <system.web>
         <trace enabled="true"
            requestLimit="15"
            pageOutput="true"
            traceMode="SortByTime"
            localOnly="true"/>
    </system.web>
</configuration>

After trace is enabled in the Web.config file, you can view the results of various requests by browsing to a special HttpHandler (Trace.axd). The requestLimit attribute controls the number of requests for which trace information will be collected. You should also be aware that page-level tracing overrides application-level tracing.

Identifying Problem Code in ASPX Pages

Let’s take the above example, in which we used the IIS log file to identify a slowly executing page, a step further and use the new tracing feature discussed above to isolate the code causing the delay in our ProductList.aspx page.

  1. Make sure you have enabled tracing at the application level by adding or modifying the following syntax in the Web.config file located in the IBuySpy Web application root directory.

    <configuration>
        <system.web>
            <trace enabled="true"
                requestLimit="15"
                pageOutput="true"
                traceMode="SortByTime"
                localOnly="true"/>
        </system.web>
    </configuration>
  2. Modify the code block that introduces the page delay by adding the two statements beginning with Trace.Warn, as shown below. These statements write our custom messages around the suspected problem code.

    <%
    '/////////////////////////////////////////////////////////////
    ' TODO: -Comment out line # 4 on ProductList.aspx.  This will
    '       -disable the OutputCache so we can introduce a delay.
      Trace.Warn("Find Delay", "Timer 1: Begin")
        System.Threading.Thread.Sleep(7000)       '7 second delay
      Trace.Warn("Find Delay", "Timer 1: End")
    '/////////////////////////////////////////////////////////////
    %>
  3. Run the same Browse test script discussed above using Microsoft ACT for one iteration. This will simulate someone walking through our user scenario.

  4. Finally, on the Web server that contains the IBuySpy sample site, type in the following URL from within your browser: http://localhost/StoreVBVS/trace.axd. The three requests that our script made should be displayed as in Figure 6-2 below.

    Browse test script results
    Figure 6-2. Browse test script results

You should notice that the information displayed by the HttpHandler (Trace.axd) is similar to what we saw previously in the IIS log file. We have already identified ProductList.aspx as the problematic page because it takes more than seven seconds to load. Now click the View Details hyperlink for the ProductList.aspx page.

Because we used Trace.Warn, the messages we added should immediately stand out when you view the page because they appear in red text. In Figure 6-3 below you can identify the code we added under the Trace Information heading and the Find Delay category. From the Timer 1: Begin message to Timer 1: End, it took around seven seconds to execute the code between the two statements we added. Voilà! Once again we successfully pinpointed the problem causing the delay. You can verify this by simply commenting out the suspected line of code and browsing directly to the ProductList.aspx page; the execution time should drop by seven seconds, confirming that this was indeed the cause of the delay.

The problem code is in red text.
Figure 6-3. The problem code is in red text.

System Monitor Counters

System Monitor is an essential tool for monitoring and analyzing ASP.NET Web application performance. During performance testing, performance data can be analyzed in real time or collected for processing at a later time using System Monitor. This data is used to locate possible performance issues, such as excessive processor or memory usage, and any other factors that prevent the application from meeting its performance goals on the Web tier.

Performance Counters for IIS

In the following sections, we discuss the IIS counters and ASP.NET performance counters that our team uses in performance testing.

  • Internet Information Services Global: File Cache Flushes and File Cache Hits

    These counters can be compared to see the ratio of hits to cache cleanup. A flush occurs when a file is removed from the cache. These global counters provide some indication of the rate at which objects are being flushed from the cache. Memory is wasted when flushes occur too slowly.

  • Internet Information Services Global: File Cache Hits %

    Displays the ratio of cache hits to total cache requests. This should stay around 80 percent on Web sites that have mostly static content.

  • Web Service: Bytes Total/sec

    Shows the total number of bytes sent and received by the Web server. A low number indicates IIS is transferring data at a low rate.

  • Web Service: Connection Refused

    Lower is better. High numbers indicate network adapter card or processor bottlenecks.

  • Web Service: Not Found Errors

    Shows the number of requests that could not be satisfied by the service because the requested document could not be found (HTTP status code 404).

Performance Counters for ASP.NET

There are two sets of performance counters in ASP.NET that can be used in diagnosing and monitoring Web application performance. They reside under the ASP.NET and ASP.NET Applications performance objects. If you have multiple versions of ASP.NET installed, there may be multiple instances of these counters, each with a version stamp. The names without versions always give you performance data for the highest version installed on the machine.

ASP.NET System Performance Counters

We will not discuss all the performance objects and counters in the .NET Framework; information on all ASP.NET performance counters can be found on the Microsoft MSDN Web site and in the .NET Framework help file. In this chapter, we discuss in some detail the system performance counters and application performance counters that our team uses in monitoring and analyzing the performance of a .NET Web application.

  • Application Restarts

    Indicates the number of times and how often a Web application has been restarted. An application restart can occur because of changes in configuration, changes to bin assemblies, or too many page changes. This value is reset to 0 each time the IIS host or the w3svc service restarts.

  • Requests Queued

    The number of requests waiting for service from the queue. When the number of requests queued starts to increment linearly with respect to client load, this is an indication of reaching the limit of concurrent requests processed on the Web server.

  • Requests Rejected

    Shows the total number of requests not executed because there were insufficient server resources to process them. This counter represents the number of requests that returned a 503 HTTP status code ("Server is too busy"). The value of this counter should ideally be 0.

  • Request Wait Time

    The number of milliseconds that the most recent request waited for processing in the queue. The average request should ideally spend very little time waiting to be processed.

ASP.NET Application Performance Counters

ASP.NET supports application performance counters that can be used to monitor the performance of a single instance of an ASP.NET application. A unique instance of these counters, named __Total__, aggregates the counters for all applications on a Web server. The __Total__ instance is always available; its counters display zero when no applications are present on the server.

  • Cache Total Turnover Rate

    The number of additions to and removals from the cache per second. A large turnover rate indicates the cache is not being used efficiently.

  • Errors Total

    The total number of parser, compilation, or runtime errors that occur during the execution of HTTP requests. A well-functioning Web server should not be generating errors.

  • Request Execution Time

    The number of milliseconds taken to execute the last request. The value of this counter should be stable.

  • Requests Failed

    The total number of requests that have timed out, requests that are unauthorized (HTTP status code 401), requests that are not found (HTTP status code 404 or 414), or that resulted in a server error (HTTP status code 500).

  • Requests Not Found

    The number of requests that have failed due to resources not being found (HTTP status code 404, 414).

  • Requests Not Authorized

    The number of requests that have failed due to unauthorized access (HTTP status code 401).

  • Requests Timed Out

    The number of requests that have timed out.

  • Requests/Sec

    The number of requests executed per second. Under constant load, the number of requests/sec should remain within a certain range.

The above section covered the IIS and ASP.NET performance counters that our team regularly uses in monitoring and analyzing ASP.NET Web applications.

Performance Tuning Tips

Performance tuning involves fixing bottlenecks and tweaking code to achieve your desired throughput rate or response time criteria while maintaining scalability. Using new features in ASP.NET, such as caching and the new data access methods, can help you realize greater performance gains and scalability. Disabling certain default features, like session state and ViewState, whenever they are not used can also have a positive effect on the performance of your Web application.

Application and Session State

Maintaining state without creating performance and scalability problems in a Web application distributed among multiple Web servers proved to be challenging in the past. There are more options available for ASP.NET Web applications compared to traditional ASP Web applications, but you still must be aware of the performance versus scalability tradeoffs for each option.

Application State

Traditionally, application variables were used to store information like connection strings, or as a caching mechanism for storing variables and recordsets across multiple users’ requests. They still exist in ASP.NET, but many of the functions they served in traditional ASP have been replaced with newer, more effective methods. Use the Web.config file to store and retrieve database connection strings, or use a trusted connection with SQL Server, and use the new caching engine discussed below to store frequently accessed data. Application state still has the limitation that it cannot be shared across multiple Web servers. A sketch of the Web.config approach follows.
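
For example, the connection string can be stored once in the <appSettings> section of Web.config (the key name here is an assumption):

<configuration>
    <appSettings>
        <add key="ConnectionString"
             value="Data Source=.;Integrated Security=SSPI;Initial Catalog=Store" />
    </appSettings>
</configuration>

Any page can then read the value at run time through the System.Configuration namespace:

Dim strConn As String = _
    System.Configuration.ConfigurationSettings.AppSettings("ConnectionString")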

Session State

Session information is data stored in the memory of the Web server for each user making requests. In the past, many problems have been associated with enabling and using session state within a Web application. The underlying protocol used in making each request (HTTP) is stateless, so to overcome this in traditional ASP Web applications, an HTTP cookie was assigned to the client and passed back to the server on subsequent requests within a certain time frame. For a Web application located on multiple machines in a Web farm, however, the user may be redirected to a different machine between requests, and the session data would be lost because in-process session state cannot be shared among machines.

ASP.NET addresses some of the scalability issues previously associated with using session data in Web farms by offering the option to store it out-of-process in a Windows service or in a SQL Server database. Keep in mind that scalability is gained, but there is a performance hit associated with running the session out-of-process. Session state is enabled by default in the machine.config file and is set to run in-process. Running session state in-process has the same limitations for ASP.NET Web applications as discussed above for traditional Web applications; however, it is the fastest, most efficient method to use if session state is required. Our recommendation is that if you do not absolutely have to use session state, disable it in the Web.config file or at the page level. You can disable it at the page level by using <%@ Page EnableSessionState="False" %>.
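
The storage mode itself is chosen with the <sessionState> element in Web.config. As a sketch, the following fragment moves session state to the out-of-process state service; the endpoint shown is the state service’s default local address:

<sessionState mode="StateServer"
              stateConnectionString="tcpip=127.0.0.1:42424"
              cookieless="false"
              timeout="20" />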

Caching in ASP.NET

Caching has been greatly improved in ASP.NET and, when used properly, can boost application performance significantly. With traditional ASP, caching was implemented by storing data in session variables or application variables, or by using a custom caching solution. These methods are still available in ASP.NET, but ASP.NET offers the developer even more options. With the new caching mechanisms, the output of entire pages can be cached via a simple directive. Additionally, there is an advanced caching engine and a caching API that can be used to store any arbitrary piece of information that will be reused often.

Note

We recommend output caching frequently accessed pages in your ASP.NET Web application whenever possible, but you should always follow up your tuning efforts with testing. Be careful not to go overboard, because caching too much data can use valuable memory resources. To ensure that your caching implementation is effective, you can monitor the performance counter ASP.NET Applications\Output Cache Turnover Rate (the __Total__ instance aggregates all applications). This counter should remain low, or commensurate with the expiration or invalidation rate of the cached pages.

Output Caching

The ASP.NET output cache can use memory on the server to store the output of processed and rendered pages. If output caching is enabled, the output of a page is saved after the first request. Subsequent requests for the same page are then retrieved from the cache, if the output is still available, and returned to the user bypassing all the overhead of parsing, compiling and processing. This greatly improves the response time and reduces utilization of the server’s resources.

This feature can easily be enabled for pages by including the OutputCache directive within the page. For example, to save the output of a processed page for a maximum of 60 seconds, using the most basic syntax, you can include the following directive in the page:

    <%@ OutputCache Duration="60" VaryByParam="None"%>

The Duration and VaryByParam attributes are required.

Note

It is recommended that pages which are output cached have a Duration of at least 60 seconds, or the turnover rate of the page may hinder rather than benefit performance.

For pages that are short-lived but have potentially expensive-to-obtain data, it may be better to utilize the Cache object to cache and update the data as needed (see the section "Caching API" below). The VaryByParam attribute allows you to save multiple versions of a page. For example, pages can be designed to produce varying output based on the values of the parameters sent. Specifying a value of None for the VaryByParam attribute saves the output of the page when it is accessed without any parameters. To save versions of the page for all combinations of parameters, you can pass a value of *. Be aware, however, that caching multiple versions of a page consumes additional memory. To cache output based on a specific querystring parameter or form field within the page, you can specify the name of the parameter. Multiple parameters can be included by separating them with a semicolon. For example, if a page has a form with ProductCategory and Product fields, you can cache the output based on the values supplied for these parameters with the following syntax:

<%@ OutputCache Duration="10" VaryByParam="ProductCategory;Product"%>

Besides the two required attributes, the OutputCache directive supports three optional attributes: Location, VaryByCustom, and VaryByHeader. The Location attribute controls where the data is cached (for example, on the server or the client). VaryByHeader caches based on specific headers sent with the request, and VaryByCustom can be used to cache based on browser type (when specified with a value of Browser) or to implement custom logic (when supplied with any other value), as in the example below.
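
For instance, to keep a separate cached copy of a page for each browser type, you could combine the two required attributes with VaryByCustom as follows:

<%@ OutputCache Duration="60" VaryByParam="None" VaryByCustom="Browser" %>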

Fragment Caching

Fragment caching is similar to output caching in the sense that the directive is the same. This level of caching is used to cache portions of a page that are implemented as user controls and is also referred to as partial page caching or user controls caching. Fragment caching should be considered whenever there is a lot of information to cache and caching at the page level is prohibitive in terms of server memory and cache utilization. Again, as with output caching, fragment caching is best used to cache output that does not vary tremendously or for output that is resource intensive.

The OutputCache directive that implements fragment caching has to be included in the file implementing the control. The Duration and VaryByParam attributes are required and behave exactly as in output caching. Additionally, there is the VaryByControl attribute, which is specific to fragment caching and can be included only in a user control file.

    <%@ OutputCache Duration="60" VaryByParam="None"%>

Caching API

The caching API lets you save any piece of information in server memory that you want to reuse. For example, let us say that you need to display the product categories on a page in addition to other information. Rather than retrieving this information from the database with every request to the page, you can save the categories via the caching API. The most basic syntax to cache something is:

Cache("mydata"}() = "some data"

You can store entire data sets, not just strings and numeric data. Retrieving the cached data is just as simple:

X = Cache("mydata")

Other useful methods to be aware of are the Remove method, used to remove an item from the cache, and the Insert method, used to add items to the cache. The syntax for the Remove method is:

Cache.Remove("mydata")

The Insert method is an overloaded method of the Cache object and has several versions. For example, the following version of the Insert method can be used to add an item to the cache with no external dependency and with an absolute expiration time of 120 minutes from the first time the page is cached:

Cache.Insert("mydata", mydata, nothing, _
            DateTime.Now.AddMinutes(120), TimeSpan.Zero)

The last parameter in the previous example is known as the sliding window and can be used to set an expiration for a cached item relative to the time the item was first placed in or last retrieved from the cache. The sliding value parameter can be thought of as the maximum length of time allowed between successive requests for a cached item before it is removed from the cache. For example, to keep an item in the cache for a maximum of 10 minutes between successive retrievals, you can use the following syntax of the Cache object’s Insert method:

Cache.Insert("mydata", mydata, nothing, DateTime.MaxValue, _
            TimeSpan.FromMinutes(10))
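
Putting these pieces together, a common pattern is to check the cache first and fall back to the data source only on a miss. The following VB.NET sketch assumes a hypothetical GetCategoriesFromDatabase helper that returns a DataSet:

Dim categories As DataSet = CType(Cache("Categories"), DataSet)
If categories Is Nothing Then
    'Cache miss: query the database once, then cache the result with
    'a 10-minute sliding expiration.
    categories = GetCategoriesFromDatabase()    'hypothetical data-access helper
    Cache.Insert("Categories", categories, Nothing, _
        DateTime.MaxValue, TimeSpan.FromMinutes(10))
End If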

Disabling ViewState

ViewState saves the properties from one page (usually from a form) to the next by saving and encoding the data for each server control in a hidden form field rendered to the browser. The size and contents of the ViewState data can be determined by using the Trace directive at the page or application level, as discussed in the previous section, "Profiling a .NET Web Application." If many server controls are used, the size of the ViewState data can become quite large and hinder the performance of your Web application. As a best practice, it is advisable to disable ViewState unless you absolutely need it. You can disable ViewState by setting the property EnableViewState="false" at the page or control level.

ADO.NET Tips

Most Web applications are built with a back-end database management system. Connecting to this data tier and manipulating the data is critical to application performance, along with other factors such as the amount of data being transferred and the database design. This is where an understanding of the ADO.NET object model becomes important. Using the correct object and the right method can make a difference to application performance, especially under load. This section highlights some recommended practices for retrieving and affecting data at the data source but is by no means exhaustive. The references listed at the beginning of the chapter are suggested for more detailed information on ADO.NET.

The .NET Framework ships with two .NET data providers: The OLE DB .NET Data Provider and the SQL Server .NET Data Provider. The OLE DB .NET Data Provider can be used to connect to any data source for which there exists an OLE DB Provider, for example Microsoft SQL Server or an Oracle database, but is primarily intended for non-SQL Server databases. For applications that use Microsoft SQL Server versions 7.0 or higher, the SQL Server .NET Data Provider is the better choice. This provider has been optimized specifically for SQL Server and implements some SQL Server–specific functionality.

Note

Use the SQL Server .NET Data Provider with SQL Server versions 7.0 and higher.

SqlConnection Object

The first step in communicating with the data tier is to establish a connection with the database server. The SQL Server .NET Data Provider gives us the SqlConnection object for this purpose. Creating a connection is fairly straightforward; the following VB.NET sample code demonstrates opening a connection to the SQL Server instance on the local machine and connecting to the Pubs database:

Dim strCnStr As String = "Data Source=.;" _
             & "Integrated Security=SSPI;" _
             & "Initial Catalog=Pubs"
Dim objCn As New SqlConnection(strCnStr)
objCn.Open()

By default, this data provider takes advantage of connection pooling. This helps reduce the overhead of establishing a connection each time one is requested, because all of the work is done up front when the first connection is established. It is important to understand when this feature is used. For a pooled connection to be utilized, the connection string of each new connection has to match that of the existing pooled connections exactly; even an extra space in the string will cause the .NET runtime to create a separate pool. In fact, the .NET runtime creates a separate connection pool for every distinct connection string. This implies that connections using different usernames and passwords cannot take advantage of each other’s pooled connections. One simple way to guarantee an exact match is shown in the sketch below.
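
As a sketch, the connection string can be defined once and reused everywhere rather than rebuilt in each page; the module and constant names here are hypothetical:

'Defined once, so every page pools against exactly the same string.
Public Module AppData
    Public Const ConnString As String = _
        "Data Source=.;Integrated Security=SSPI;Initial Catalog=Pubs"
End Module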

Note

It is recommended that applications either use integrated security whenever possible or implement a common application username/password that is shared by all users in order to improve the efficiency of the connection pooling usage.

The other factor that determines whether a pooled connection is utilized is the transaction context. A second connection will use a pooled connection as long as its transaction context is the same as that of the initial connection or it does not have one at all.

The size of the connection pool is controlled by specifying the Min Pool Size and Max Pool Size properties in the connection string. This is important if you need to control the amount of memory utilized at the Web tier. If all pooled connections are active, any extra connection request is blocked until a connection is relinquished or the connection timeout has expired (the default is 15 seconds). The following code demonstrates setting these properties as part of the connection string:

Dim strCnStr As String = "Data Source =.;" _ 
             & "Integrated Security=SSPI;" _
             & "Initial Catalog = Pubs;" _
             & "Min Pool Size=10;" _
             & "Max Pool Size =100"

Another property that can have an effect on performance is the packet size. For applications that transfer large blob or image fields, increasing the packet size can be beneficial. In cases where the amount of data transferred is small, a smaller value for the packet size may be more efficient. The following code demonstrates setting this property as part of the connection string:

Dim strCnStr As String = "Data Source =.;" _ 
             & "Integrated Security=SSPI;" _
             & "Initial Catalog = Pubs;" _
             & "Packet Size=32768"

SqlCommand Object

A common scenario for Web applications is the retrieval or modification of data from the data source. The SQL Server .NET provider implements the SqlCommand/DataReader and the SqlDataAdapter/DataSet classes that allow you to retrieve and modify data. We only briefly discuss the SqlCommand/DataReader in this book; information on the SqlDataAdapter/DataSet classes can be obtained from other sources that deal specifically with ADO.NET.

The SqlCommand/DataReader is connection oriented and provides certain methods that can be leveraged to improve application performance: ExecuteNonQuery, ExecuteScalar, and ExecuteReader. Additionally, the SqlCommand class implements the ExecuteXmlReader method for data returned in XML format. A description of these four methods, along with a VB.NET console application example of each, follows. The examples connect to the Northwind database, which is installed by default with Microsoft SQL Server.

ExecuteNonQuery Method

This method is typically used with Insert, Update, and Delete operations. In these cases the only piece of information that is useful, and that is returned to the client, is the number of rows affected. This method also works with stored procedures that contain output or return parameters, which can be returned to the client. The following Visual Basic .NET code demonstrates this method by calling a stored procedure that returns a count of the number of customers in the Customers table in the Northwind database as a return value:

Imports System
Imports System.Data
Imports System.Data.SqlClient

Module ExecuteNonQuery

    Sub Main()
        Dim strConnString As String = "Data Source=.;" _
                & "Initial Catalog=Northwind;" _
                & "Integrated Security=SSPI"
        Dim strSQL As String = "GetNumberOfCustomers"
        Dim sqlConn As New SqlConnection(strConnString)

        Dim sqlComd As New SqlCommand(strSQL, sqlConn)
        sqlComd.CommandType = CommandType.StoredProcedure
        sqlComd.Parameters.Add(New _
            SqlParameter("@i", SqlDbType.Int))
        sqlComd.Parameters(0).Direction = _
            ParameterDirection.ReturnValue

        sqlConn.Open()
        sqlComd.ExecuteNonQuery()
        sqlConn.Close()

        Console.WriteLine("Number of customers = {0}", _
            CType(sqlComd.Parameters(0).Value, Integer))
    End Sub
End Module

ExecuteScalar Method

This method should be used whenever you need to retrieve a single value from the data tier; for example, if you need a count of customers or the customer ID of a single customer. To demonstrate this method, the following Visual Basic .NET code retrieves the count of customers from the Customers table in the Northwind database:

Imports System
Imports System.Data
Imports System.Data.SqlClient

Module ExecuteScalar

    Sub Main()
        Dim strConnString As String = "Data Source=.;" _
                & "Initial Catalog=Northwind;" _
                & "Integrated Security=SSPI"
        Dim strSQL As String = "select count(*) from customers"
        Dim sqlConn As New SqlConnection(strConnString)
        Dim sqlComd As New SqlCommand(strSQL, sqlConn)

        sqlConn.Open()
        Dim o As Object = sqlComd.ExecuteScalar()
        sqlConn.Close()

        Console.WriteLine("Number of customers = {0}", _
            CType(o, Integer))
    End Sub
End Module

ExecuteReader Method

Use this method any time you need to return a single data row or multiple data rows containing many columns. It is appropriate for a single forward-only pass over the returned data. To demonstrate, the following Visual Basic .NET code retrieves the customer ID, contact name, and phone number of each customer in the Customers table in the Northwind database.

Imports System
Imports System.Data
Imports System.Data.SqlClient
Imports Microsoft.VisualBasic

Module ExecuteReader

    Sub Main()
        Dim strConnString As String = "Data Source=.;" _
                & "Initial Catalog=Northwind;" _
                & "Integrated Security=SSPI"
        Dim strSQL As String = _
            "select customerid,contactname,phone from customers"
        Dim sqlConn As New SqlConnection(strConnString)
        Dim sqlComd As New SqlCommand(strSQL, sqlConn)

        sqlConn.Open()

        Dim sqlDR As SqlDataReader = _
            sqlComd.ExecuteReader(CommandBehavior.CloseConnection)

        ' Columns are read here both by name and by ordinal; see the
        ' note on type-specific methods following this example.
        Do While sqlDR.Read()
            Console.WriteLine(sqlDR("customerid").ToString() _
                & ControlChars.Tab _
                & sqlDR.GetSqlString(1).ToString() _
                & ControlChars.Tab _
                & sqlDR.GetSqlString(2).ToString())
        Loop

        sqlDR.Close()
    End Sub

End Module 

In cases where you are sure only one row will be returned to the client, you can call this method with the SingleRow value of the CommandBehavior enumeration as a parameter. The syntax with this optional parameter is:

Dim sqlDR As SqlDataReader = _
    sqlComd.ExecuteReader(CommandBehavior.SingleRow)

Data values in each row can be referenced either by name or by ordinal position, as illustrated in the example above. In general, using the ordinal position of a data item achieves slightly better performance. Additionally, if you know the data types being returned, a further gain can be achieved by using the SQL Server .NET provider's type-specific methods, such as GetSqlString and GetInt32, to return data values.

Tip

Try to use the type-specific methods of the SQL Server .NET Provider whenever possible.
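
For example, ordinals can be resolved once with GetOrdinal before entering the read loop. The following sketch reworks the loop from the ExecuteReader example above:

Dim idxId As Integer = sqlDR.GetOrdinal("customerid")
Dim idxName As Integer = sqlDR.GetOrdinal("contactname")
Dim idxPhone As Integer = sqlDR.GetOrdinal("phone")

' Typed getters avoid per-row name lookups and conversions.
Do While sqlDR.Read()
    Console.WriteLine(sqlDR.GetSqlString(idxId).ToString() _
        & ControlChars.Tab _
        & sqlDR.GetSqlString(idxName).ToString() _
        & ControlChars.Tab _
        & sqlDR.GetSqlString(idxPhone).ToString())
Loop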

ExecuteXmlReader Method

This method is useful when data must be returned from SQL Server in XML format. To do so, the SQL statement can include a FOR XML clause, which instructs SQL Server to return the results as XML. The following example demonstrates the method:

Imports System
Imports System.Data
Imports System.Data.SqlClient
Imports System.Xml

Module ExecuteXmlReader

    Sub Main()
        Dim strConnString As String = "Data Source=.;" _
                & "Initial Catalog=Northwind;" _
                & "Integrated Security=SSPI"
        Dim strSQL As String = "SELECT customerid," _
                     & "contactname," _
                     & "phone " _
                     & "From customers " _
                     & "FOR XML AUTO"
        Dim sqlConn As New SqlConnection(strConnString)
        Dim sqlComd As New SqlCommand(strSQL, sqlConn)

        sqlConn.Open()

        Dim xmlR As XmlReader = sqlComd.ExecuteXmlReader()

        ' ReadOuterXml advances the reader past each element on its
        ' own; calling Read on every pass as well would skip rows,
        ' so test the read state instead.
        xmlR.Read()
        Do While xmlR.ReadState <> ReadState.EndOfFile
            Console.WriteLine(xmlR.ReadOuterXml())
        Loop

        xmlR.Close()
        sqlConn.Close()
    End Sub

End Module

Common Web Tier Bottlenecks

Web tier bottlenecks can occur for many reasons, such as configuration problems, a lack of hardware resources, or inefficient design or code. It is always useful to rule out configuration issues by keeping your build documentation and build scripts up to date and by verifying your configuration, especially when major code changes occur.

Effective stress testing can help you determine whether your Web application will scale up or out by adding hardware. Assuming your Web application can scale, one approach is simply to throw more hardware at it; the downside is that this often requires more support hours, because there is more hardware to manage. (There is a more detailed discussion of scalability at the end of this chapter.) The best method of meeting or exceeding your performance goals is to identify the bottlenecks and fix or tune the code. This is a cyclical process and requires performance testing and tuning throughout the software development life cycle. In this section we discuss some general best practices and share our experiences with some of the newer, more effective coding techniques currently available.

Limiting Page Size

One of the most common Web tier bottlenecks we encounter is the dreaded never-ending page. Passing too much data per page can cause performance issues on both the IIS server and the network tier. This may seem obvious, but many Web applications we analyze suffer from slow response times simply because their pages are too large. Do not be afraid to divide up the content when necessary; this may cost your users an additional click to reach the data they are looking for, but your content will load much more quickly. Here are some other tips that can help provide your end users with a better experience and quicker response times:

  • If your Web application returns huge record sets, look into paging the results, as shown in the sketch following this list.

  • Remove white space and comments from your code or HTML. This sends less data over the wire.

  • Remove unused styles from your stylesheets.
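
ASP.NET's DataGrid control, for example, supports automatic paging. The following code-behind is only a minimal sketch; GetCustomers is a hypothetical helper that returns the results as a DataSet, and the grid is assumed to be declared in the page as shown in the comment:

' Assumes the .aspx page declares:
'   <asp:DataGrid id="grdCustomers" runat="server"
'       AllowPaging="True" PageSize="25"
'       OnPageIndexChanged="grdCustomers_PageIndexChanged" />

Private Sub BindGrid()
    ' GetCustomers is a hypothetical data-access helper.
    grdCustomers.DataSource = GetCustomers()
    grdCustomers.DataBind()
End Sub

Protected Sub grdCustomers_PageIndexChanged(ByVal source As Object, _
        ByVal e As DataGridPageChangedEventArgs)
    grdCustomers.CurrentPageIndex = e.NewPageIndex
    BindGrid()  ' Only the requested page is rendered to the client.
End Sub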

Limiting Images

Optimize all images, and use them sparingly or only when they provide real value to your Web application. Reusing the same image results in fewer network round trips, because most browsers cache the image on the client rather than requesting it from the Web server each time. It is also more efficient to use one larger image than multiple smaller ones. Quite often images are used for advertisements and are loaded from another site outside of your control. Be aware that this creates a dependency on a site you cannot control and, in extreme cases, can cause your page to time out if the resource becomes unavailable or extremely slow to load. If your site is very dynamic and graphically intensive, consider splitting your dynamic content and your images onto separate Web servers and tuning each accordingly.

Using Naming Conventions

Come up with a naming convention that makes sense and is readable, but keep the directory structure as flat as possible. Keep file, directory, and variable names short, and abbreviate whenever possible. By doing this you will pass less data in each request, which can really add up because HTML commonly references many file types (images, style sheets, client-side scripts, and so on). Avoid a directory structure like the following URL, which contains 68 characters:

http://yoursite/goodoldunitedstatesofamerica/northcarolina/pictures/

Just by abbreviating the directory structure, you can eliminate 41 characters from a single request:

http://yoursite/us/nc/pics/

Disabling SSL

Use SSL only when necessary or required within your Web application. Test your pages with and without SSL enabled to determine the cost of encrypting your data; generally speaking, we observe a 20 to 30 percent decrease in performance with SSL enabled. If you organize your content effectively, you can create distinct folders for content so you can enable or disable SSL per directory. In most cases there is no need to enable SSL for images, style sheets, or other file types such as client-side scripts.

Trying New Features

As new programming methods and features become available, do not be afraid to test them. Many new coding features are designed not only to provide additional functionality but also to outperform the previous methods used to solve the same problems. For example, Server.Transfer has been available for some time now, but we still frequently see people using Response.Redirect within their ASP/ASPX code; Response.Redirect costs your users an additional network round trip for the same operation. We have also seen many .NET Web applications suffer major performance problems stemming from inefficient string manipulation or concatenation within loops. The .NET StringBuilder class often resolves this problem and can yield huge performance gains when concatenating many strings in a loop.
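
A minimal sketch of the difference follows; the loop count is arbitrary:

Dim i As Integer

' Concatenation allocates a brand-new string on every iteration.
Dim s As String = ""
For i = 1 To 10000
    s &= "line " & CStr(i)
Next

' StringBuilder appends into a reusable internal buffer instead.
Dim sb As New System.Text.StringBuilder()
For i = 1 To 10000
    sb.Append("line ")
    sb.Append(i)
Next
Dim strResult As String = sb.ToString()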

Scaling the Web Tier

Scalability is the ability to add resources to a system and thereby increase its performance (decrease response times, increase throughput, or both). From the performance testing perspective, this means adding more hardware or redesigning your Web application so that more users can access it efficiently. In covering scalability we focus on the Web tier and a methodology for knowing when and how to scale it.

Scale Out, Scale Up, or Performance Tune?

Scalability is typically achieved through two distinct yet related methods: scaling up and scaling out. These methods should be used only after your Web application has been performance tested and tuned. Performance testing helps you identify the bottlenecks and limitations of your current Web application; through performance tuning you increase throughput, decrease response times, or both. The following sections define scaling out, scaling up, and performance tuning, and list the pros and cons of each.

Scaling Out

Scaling out your Web tier means adding extra Web servers to your application to overcome a bottleneck or limitation caused by this tier. The benefit is an increase in throughput, provided there are no network, SQL, or other bottlenecks external to your Web servers. The downside is that it can be expensive in terms of hardware cost, software cost, and production support cost (power, rack space, cooling, and so on), and it puts more of a burden on support staff and deployment.

Scaling Up

Scaling up your Web tier means adding hardware, such as memory and CPU capacity, to your existing servers to overcome a bottleneck or limitation caused by this tier. This method is less expensive than scaling out because memory and CPUs are relatively inexpensive compared to whole machines. However, this approach may not yield linear gains in performance; to justify scaling up, performance testing is required both before and after the upgrade to determine its overall impact.

Performance Tuning

Performance tuning is simply fixing bottlenecks in your application to achieve your desired throughput rate or response time criteria; in other words, fixing the code instead of scaling the hardware. This method can be the most expensive if it is not performed throughout the software development life cycle, because it involves high labor costs for software developers, test engineers, and support engineers.

When to Scale Your Web Tier?

A common mistake when building and deploying a Web application is using an unnecessary amount of hardware to solve issues and overcome bottlenecks. Performance tuning your application saves time and money because it gives you a better idea of when additional hardware, whether scaling up or scaling out, is actually required. Using the steps detailed in Chapter 2, you can identify the business requirements for the approximate number of customers who will access your Web application; you can then run performance tests and tune your Web application to meet and exceed these estimates. You can also perform a transaction cost analysis, as outlined in Chapter 9, to determine the maximum number of users your Web application can handle and to help with capacity planning.

You should scale your Web tier only after all other performance constraints have been identified and resolved. For example, if your SQL tier has performance issues that limit the number of users to a quantity a single IIS server can handle, fix the SQL issues first. There is no point in scaling out your Web tier by adding more IIS servers, or scaling up by adding more and faster CPUs and memory, if your SQL tier is already at peak capacity.

How to Scale Out Your Web Tier?

For fault tolerance and redundancy, every Web application should have a minimum of two Web servers. You can build redundancy into a single machine, but having only one Web server leaves your application with a single point of failure. The simplest way to scale out your Web tier is to add Web servers and use a hardware- or software-based load-balancing solution.

Software-based Load Balancing

Microsoft’s implementation of software-based load balancing is called Network Load Balancing (NLB). NLB is typically the least expensive load-balancing method because it uses services bundled with Windows 2000 Advanced Server, Windows 2000 Datacenter Server, and Windows .NET Server. This method is good for most Web applications and allows you to scale out several nodes in your Web tier. For detailed instructions and implementation practices, visit http://www.microsoft.com.

Hardware-based Load Balancing

A hardware-based load-balancing solution is optimal because it is a layer separate from your Web application’s code and does not consume resources on your Web servers. Many companies, such as Cisco, F5 Labs, and Extreme Networks, provide hardware-based load-balancing products and solutions. For detailed information on configuring and installing a load-balancing solution, visit these vendors’ Web sites.

Conclusion

After you understand your Web application’s configuration, you can begin to identify bottlenecks at the Web tier. You can profile your .NET Web application by monitoring the IIS log files and System Monitor data, and pinpoint delays and other bottlenecks within the code by using the new trace feature in ASP.NET. You can achieve large performance gains by tuning your code and by using new methods and features in ASP.NET, such as output caching, whenever possible and appropriate.
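
For example, output caching for a page whose content changes infrequently can be enabled with a single directive at the top of the .aspx file; the 60-second duration shown here is an arbitrary choice:

<%@ OutputCache Duration="60" VaryByParam="None" %>

Subsequent requests that arrive within the cache window are served from the cached response without re-executing the page.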
