IN THIS CHAPTER
Some interesting statistics exist about how customer behavior relates to a website’s performance and responsiveness. For example, more than 40 percent of visitors will leave a website if a page takes more than three seconds to render, and, in most cases, visitors expect a page to render in two seconds or less. Furthermore, 65 percent of shoppers dissatisfied with the performance of a website will never return and will go someplace else next time. Performance is one of the most important reasons visitors remain loyal to a website (see Figure 3-1).
A typical human being can notice an event that takes approximately 500 ms, or one-half second, and will not notice anything that happens faster than that. Nevertheless, according to Amazon.com, an increase in its system’s response time of as little as 100 ms results in a 1 percent loss in sales, which in dollar terms is in the millions. Yahoo! likewise found that a 400 ms increase in load time resulted in a 5 to 9 percent drop in its traffic.
Customers and visitors will not use your site if the performance is poor. It’s that simple. This chapter discusses some ways you can optimize your system for performance, and keep your customers or users happy and returning. Some points discussed in this chapter:
How do you know if your site has a performance problem? When someone mentions your website performs slowly, how do you determine if it really is? If you do not capture performance metrics, you’ll spend a lot of time in meetings and on the phone explaining that the experienced performance is within the normal range. Without documented trends and metrics, both you and your customer will debate performance based on the perceived response times.
A number of tools are available to help you manually measure the performance of your system. You can use one or all of these tools to capture, store, and report the performance metrics of your system:
Although tools and solutions exist that gather performance metrics automatically, they themselves have an impact on performance. Before you implement an automated solution, make sure you know how much impact it has on the system.
A recently published website that provides some useful tips for ASP.NET optimization can be found at http://webdevchecklist.com/asp.net/. The website provides links to tools categorized into topics such as Security, Code Quality, Mobile, Performance, and so on. The checklist has a good set of tasks to ensure your website is ready to go into a live environment.
The Performance category for this site contains a number of actions for optimizing an ASP.NET website for performance, for example:
In Chapter 4 you use Google PageSpeed to identify non-optimal configurations on the sample ASP.NET website. The report that Google PageSpeed generates includes suggestions for enabling browser caching, using compression, scaling images, and implementing bundling and minification. When you implement these suggestions, you’ll see a reduction in the size of the page request and in the time required to completely render the homepage.
You can choose any or all of the other online website analysis tools (such as YSlow and Modern.IE), correct the issues, and implement their suggestions. You might also consider some third-party solutions, such as ANTS Performance Profiler or the Performance Analyzer in Visual Studio. Both of these tools enable you to analyze ASP.NET performance, for example. Figure 3-2 illustrates the results of running Visual Studio Performance Analyzer on the ASP.NET website.
The report identifies the methods in the ASP.NET website that take the longest to execute. This is useful information, and you should use it to find the places in your system that are not performing well and, therefore, should be optimized.
Bundling and minification are features available in ASP.NET 4.5, introduced through the System.Web.Optimization namespace. They provide a mechanism for reducing the content size and the number of round trips required to completely render a web page. The file types that typically benefit from bundling or minification are JavaScript and CSS files, but the features are not limited to these types; any file type that has a significant amount of white space or long variable names can benefit from this capability.
Using the F12 Developer Tool Suite in Internet Explorer, you can see the total number of requests required to render a page, as well as the download speed and file sizes. Figure 3-3 shows three occurrences of text/css that have a total size of 553 bytes and take 1.26 seconds to download. Likewise, two occurrences of application/javascript file types are requested to render a single page, with a total size of 3.15 KB, taking approximately 500 ms to download. As shown, each one of the files constitutes a separate GET request, and because a browser is limited to six concurrent connections per hostname, request number seven must wait until one of the other requests completes. The wait time is represented by the empty sections of the timing bar.
If a web page on your website executes a lot of GET HTTP commands to render, you should certainly consider implementing bundling, if not for the reason mentioned in the previous paragraph, then for the following one: requests traveling over a network should be compacted as much as possible. Every network packet carries framing and serialization overhead, so if you can reduce the number of frames sent, you realize an additional performance gain. Figure 3-4 illustrates a chatty request (top figure) versus a chunky request (bottom figure). A chatty request consists of a large number of frequent, small packets, whereas a chunky request consists of a small number of large packets.
In summary, you can implement bundling with few lines of code, as discussed in Chapter 4. This reduces the number of GET requests that the client makes to the server by merging similar file types together into a single file. This also reduces any delay caused by connection limits and network packet framing.
The code shown in Listing 3-1 demonstrates how to bundle a group of JavaScript files together. First create an instance of the Bundle class, which is part of the System.Web.Optimization namespace. Then identify the files by their relative paths, include them in the bundle, and add the bundle to the BundleTable.Bundles collection.
LISTING 3-1: Bundling JavaScript Files
void Application_Start(object sender, EventArgs e)
{
    Bundle JSBundle = new Bundle("~/JSBundle");
    JSBundle.Include("~/syntax/scripts/shCore.js");
    JSBundle.Include("~/syntax/scripts/shBrushCSharp.js");
    BundleTable.Bundles.Add(JSBundle);
}
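Once registered, the bundle is addressable by its virtual path. As a rough sketch (assuming the "~/JSBundle" name from Listing 3-1 and a Web Forms page), you can emit the combined script reference with the Scripts helper from the same System.Web.Optimization namespace:

```aspx
<%-- Emits a single <script> tag pointing at the bundled,
     versioned URL when optimizations are enabled. --%>
<%: System.Web.Optimization.Scripts.Render("~/JSBundle") %>
```

When BundleTable.EnableOptimizations is false (the default while debugging), Scripts.Render instead emits one script tag per included file, which is convenient for troubleshooting.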
Four constructors for the Bundle class are available:
If you review Figure 3-3, you’ll notice the size of each individual file and the combined size per file type. Likewise, take note of the params IBundleTransform[] transforms parameter of the Bundle class. Passing a class that implements the IBundleTransform interface results in the included files being minified. Therefore, implementing minification is as simple as passing an additional parameter to the Bundle class. Listing 3-2 is an example of minifying the JavaScript files.
LISTING 3-2: Minifying JavaScript Files
void Application_Start(object sender, EventArgs e)
{
    Bundle JSBundle = new Bundle("~/JSBundle", new JsMinify());
    JSBundle.Include("~/syntax/scripts/shCore.js");
    JSBundle.Include("~/syntax/scripts/shBrushCSharp.js");
    BundleTable.Bundles.Add(JSBundle);
}
Notice the addition of new JsMinify(), a default class found in the System.Web.Optimization namespace that implements the IBundleTransform interface as required. When the minified CSS and JavaScript bundles are referenced from the requested pages, the requests are fewer and the response times faster, as shown in Figure 3-5.
This is a nice feature that is easy to implement, and for which gains in performance are quickly realized.
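As a convenience, the System.Web.Optimization namespace also provides the ScriptBundle and StyleBundle subclasses of Bundle, which pass the JsMinify and CssMinify transforms for you. The following sketch registers the same JavaScript files as the listings, plus a CSS bundle; note that the "~/CSSBundle" name and the CSS file path are hypothetical:

```csharp
void Application_Start(object sender, EventArgs e)
{
    // ScriptBundle is a Bundle preconfigured with JsMinify.
    ScriptBundle scripts = new ScriptBundle("~/JSBundle");
    scripts.Include("~/syntax/scripts/shCore.js");
    scripts.Include("~/syntax/scripts/shBrushCSharp.js");
    BundleTable.Bundles.Add(scripts);

    // StyleBundle is a Bundle preconfigured with CssMinify.
    StyleBundle styles = new StyleBundle("~/CSSBundle");
    styles.Include("~/syntax/styles/shCore.css"); // hypothetical path
    BundleTable.Bundles.Add(styles);
}
```

Using these subclasses keeps the registration code identical for minified and unminified builds, because the transforms travel with the bundle type rather than with each call site.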
Before the cloud and virtual machines, scaling a system took a lot of time. Scaling meant ordering a new server, installing the operating system, connecting the server to the network, installing the application on it, and, if all went well, adding the server to the web farm and directing traffic to it. This process could take weeks, if not months, from start to finish. Metrics and processes were required to forecast and anticipate growth and usage of the system well in advance so that you could scale out the current environment in time.
The creation of virtual machines changed that a lot. Tools, such as VMware and Hyper-V, speed things up considerably. To scale, you can take a snapshot of an existing virtual machine and use that snapshot to build another virtual instance of your system. It still takes time to get it added to the network and to load user traffic onto it, but nothing like it was before virtual machines.
Windows Azure scaling has taken it to the next level. While monitoring your system, if you notice that the CPU or memory usage increases, you can click the Scale link of the Cloud Service or Web Site and increase the number of instances, as shown in Figure 3-6.
In this case, the number of virtual machines running the http://mvc-4.cloudapp.net Web Role is increased to four. After you set and save the scaling of the Web Role, four virtual machines are created, the website is published to the virtual machines, and traffic begins flowing to the new instances when they are needed. During the deployment of the virtual machines, you can track the status by clicking the Instances link and then the environment you want to scale out (for example, either Production or Staging). Figure 3-7 illustrates the transition status.
With three clicks, the website scales from one server to four. No manual configuration is required — it is all done for you automatically. When scaling Windows Azure Web Sites, you also need to take the Web Site Mode into account, as shown in Figure 3-8.
The different website modes:
When you click the “?” in the Web Site Mode section, the following information is rendered:
In the Free and Shared modes, all websites run in a multi-tenant environment and have quotas for usage of CPU, memory, and network resources. You can decide which sites you want to run in Free mode and which sites you want to run in Shared mode. Shared mode employs less stringent resource usage quotas than Free mode. The maximum number of sites you can run in Free mode may vary with your plan. When you choose Reserved mode, all your web sites run in Reserved mode on dedicated virtual machines that correspond to standard Windows Azure compute resources.
This means that when your website is in Free or Shared mode, your site runs on a server with other websites. This is the common web hosting approach and is what the product Antares delivers to web-hosting companies. When you run your website in Reserved mode, it runs on a dedicated virtual machine similar to how a Web Role works.
Scaling is no longer a difficult or costly endeavor. For example, you can scale out the website or Web Role while you are running a marketing campaign and then scale back when the traffic subsides. This scaling-back technique was complicated in older scaling models because after you paid for the hardware, built it out, and directed traffic to it, it generally stayed in place, as did the maintenance and fixed costs. Scaling back is a new concept that Windows Azure has made practical, and because of it, the costs of unnecessary extra capacity can be significantly reduced.
Using online tools, bundling, minification, and scaling can all help your application perform faster and better under load. However, these tools and techniques may not resolve all the performance problems you may encounter in an ASP.NET website. With that in mind, you can implement the following 15 tips into your ASP.NET website to improve performance results:
// The variables sql and count, and the ConnectionManager helper,
// are declared elsewhere in the application.
using (SqlCommand command =
    new SqlCommand(sql,
        ConnectionManager.GetConnection()))
{
    command.CommandType = CommandType.Text;
    // ExecuteScalar returns the first column of the first row,
    // avoiding the overhead of a full data reader.
    count = (int)command.ExecuteScalar();
}
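If you prefer to manage the connection lifetime in the same scope rather than through a helper such as ConnectionManager, the pattern can be sketched as follows; the connection string variable and the query text here are hypothetical placeholders:

```csharp
// Both the connection and the command are disposed when the
// using blocks exit; the query and connection string are
// placeholders for your own.
using (SqlConnection connection = new SqlConnection(connectionString))
using (SqlCommand command = new SqlCommand(
    "SELECT COUNT(*) FROM Customers", connection))
{
    connection.Open();
    command.CommandType = CommandType.Text;
    int count = (int)command.ExecuteScalar();
}
```

Wrapping both objects in using blocks guarantees that the connection is returned to the pool even when an exception occurs mid-query.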
The following links can further help you optimize your system:
In this chapter, you learned the importance of performance in a website. Customers and visitors expect a website to be responsive and fast. If it does not meet their expectation, they will not return.
A number of tools can aid you in setting a baseline so that you can compare that baseline with performance measurements you take during a slow period. In addition, you can use tools such as ANTS and Visual Studio to gain deeper insight into which methods perform the slowest and, therefore, need more analysis and improvement.
You can implement features such as bundling, minification, and scaling to increase the performance and stability of your website in a short time. These features have a large positive impact on the website.
The next chapter has detailed exercises for implementing many of the concepts discussed in this chapter.