Chapter 12

Launching Your App

At this point, you're just about ready to launch your app. However, the development process may have introduced performance issues. Now that the majority of the code has been written, it's an excellent time to step back and assess the overall performance of the app.

In this chapter, you discover performance testing techniques that complement the Test Driven Development (TDD) approach you established in Chapter 1. These tests help you identify problem areas and create a solid optimization plan.

Next, you learn the differences between perceived and actual performance, as well as how to compress files properly for the smallest file sizes. Then you hear about techniques for optimizing animation, including CSS3 animation techniques and HTML5's cutting-edge requestAnimationFrame API. Afterward, you discover some more general optimization approaches, such as reducing tasks and minimizing browser reflows. Finally, you explore different deployment options, such as Content Delivery Networks (CDNs) and scalable server clouds.

Performance Checklist

One of the most important features of any app is how quickly it runs. In the past, optimization efforts were focused primarily on the server: improving latency issues and increasing capabilities to handle more concurrent requests. However, in recent years, developers have begun to focus much more on client-side performance, which comes primarily down to JavaScript. Although you can throw money at server-side performance (by purchasing better servers), you really can't do anything about client-side performance, other than optimize the code. That's because you have no control over the devices used to access your app. You may think that this becomes less of an issue because computers are constantly improving, but as more users move to browsing the web on mobile devices, your app needs to be compatible with much slower processors. Additionally, even on desktop computers, users have higher expectations for the level of interactivity an app should provide.

Where to Focus

That said, it doesn't make much sense to optimize all the code in your app. Sure, you should make efforts to optimize code as you write it, but it's not really worthwhile to go in afterward and refactor every line. Instead, you should test the code and determine where performance bottlenecks arise.

Testing Performance

If you've set up unit tests with QUnit, you should already be getting data about the test execution time. But there's another way to test performance when you want to compare two different approaches: the benchmark testing at http://jsperf.com.

jsPerf makes it easy to compare the performance of short scripts and to compare them across different browsers, as you can see in Figure 12-1.

Figure 12-1: jsPerf saves the results from every browser you use to run the benchmarks and provides them for easy comparison.
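
If you just want a rough local comparison before setting up a jsPerf test case, even a crude timing harness can reveal large differences. This is only a sketch, and the two approaches it compares are placeholders; jsPerf's statistical approach is far more reliable for small differences:

// Run a function many times and report the elapsed time.
function timeIt(label, fn, iterations) {
  var start = Date.now();
  for (var i = 0; i < iterations; i++) {
    fn();
  }
  console.log(label + ': ' + (Date.now() - start) + 'ms');
}

timeIt('string concatenation', function() {
  var s = '';
  for (var i = 0; i < 100; i++) { s += i; }
}, 10000);

timeIt('array join', function() {
  var parts = [];
  for (var i = 0; i < 100; i++) { parts.push(i); }
  parts.join('');
}, 10000);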

Although jsPerf is great for testing individual micro-optimizations, it's not very useful for testing the speed of an entire app. For that, use either QUnit or the profiler tool in Chrome or Firefox.

Another useful tool for performance testing is the profiler in the WebKit developer tools. You can use the JavaScript profiler to gather data about which parts of your code execute most often and which demand the most resources, as you can see in Figure 12-2.

Repeated Code

Beyond testing, it's also a good idea to focus on optimizing code that is executed repeatedly. The more often a function runs, the larger the gains will be when optimizing. So take a look at your loops and any reused functions first.

You can also do yourself a favor ahead of time and follow the practice referred to as Don't Repeat Yourself (DRY) by abstracting the commonly used parts into their own functions so that they can be shared throughout your app. The benefit of doing this is that your code is easier to maintain and test in addition to reducing the total filesize.
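
For example, suppose several parts of your app need to display formatted prices. Rather than repeating the logic inline, pull it into a single helper (the function name and format here are purely illustrative):

// Shared helper: one place to maintain, test, and optimize.
function formatPrice(amount) {
  return '$' + amount.toFixed(2);
}

// Reused anywhere a price needs to be displayed, instead of
// duplicating the formatting logic inline each time.
console.log(formatPrice(19.5)); // "$19.50"
console.log(formatPrice(3));    // "$3.00"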

Figure 12-2: The JavaScript profiler from WebKit's built-in developer tools.

Actual Versus Perceived Performance

Finally, note that there are two different types of performance: actual and perceived. It ultimately doesn't matter how fast a benchmark test thinks your app is running; what matters is the user's perception.

The classic case for perceived performance is offloading certain tasks until later, so that the user sees something happen on the page as soon as possible. For instance, say you have a JavaScript chart that is really resource-intensive. It doesn't make sense to block the rest of the page while the chart loads. Instead, load the rest of the page first, show a “loading” GIF where the chart will go, and then load the chart at the end.

Even though the script takes the same amount of time to load (or even slightly longer), users will perceive better performance because they will be able to interact with the other parts of the page while the slower component finishes loading.

The perceived performance improvements in this example don't have to stop with the chart—you can set up the page to load the bare minimum you need to display the smallest amount of relevant data, then load the rest on demand.
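
As a sketch of that approach, you might show a placeholder immediately and defer the heavy work until the rest of the page has rendered. Here, renderChart() and the selectors are hypothetical stand-ins for your own chart code:

// Show the lightweight content and a loading indicator right away.
$('#chart').html('<img src="loading.gif" alt="Loading chart...">');

// Defer the expensive work until the rest of the page has loaded.
$(window).on('load', function() {
  setTimeout(function() {
    renderChart('#chart'); // hypothetical resource-intensive chart call
  }, 0);
});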

Asset Management

When it comes to optimization, you can see the largest gains in the way you're serving assets like .js files. You can use a number of techniques to reduce the size of these files, and thereby reduce the bandwidth requirements of your site or app.

Sure, users have more bandwidth and download files faster these days. But faster speeds also lead to increased expectations, so you had better do everything you can to reduce the size of files. After all, many mobile networks are still comparatively slow, and you can never be sure how a user is connecting to your app.

In this section, I cover a three-pronged approach: minification, gzip compression, and hosting. These techniques can reduce the size of files dramatically; it's not uncommon to see reductions of 90 percent or more when using these methods. Chances are that you're already doing some of this, but making sure you're following all of these guidelines will ensure that your JavaScript will load as quickly as possible.

Minification

This one is pretty much a no-brainer. Before you go to production, make sure you run your code through a minifier or packer to strip out whitespace and comments and to shorten local variable names. Depending on how many characters can be removed, you will see substantial gains when minifying: a 100K file might shrink to 30K, for instance.
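
To give you a rough idea of what a minifier does, compare a small function before and after. The minified line shown in the comment is only representative; the exact output varies from tool to tool:

// Before minification: whitespace, comments, and descriptive local names.
function calculateTotal(priceList) {
  // Sum up every price in the list
  var runningTotal = 0;
  for (var index = 0; index < priceList.length; index++) {
    runningTotal += priceList[index];
  }
  return runningTotal;
}

// After minification, something like (exact output varies by tool):
// function calculateTotal(a){var b=0;for(var c=0;c<a.length;c++){b+=a[c]}return b}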

But keep in mind that not all minifiers are the same. They handle things a bit differently and produce different sized files. My personal favorite is the Google Closure Compiler (http://code.google.com/p/closure-compiler), because it consistently provides some of the smallest minified files. For example, see this comparison: http://coderjournal.com/2010/01/yahoo-yui-compressor-vs-microsoft-ajax-minifier-vs-google-closure-compiler/.

You can download the compiler and run it in your computer's terminal, or use the online version here: http://closure-compiler.appspot.com. If you're running Node, take a look at the Uber-Compiler module (which also compiles LESS): https://github.com/kennberg/node-uber-compiler.

You should also minify your CSS and consider minifying markup as well.

Gzip Compression

In addition to minifying, you should also gzip files. Gzipping will further reduce the size of files, with substantial gains. Gzipped files are typically a third of the size of their non-compressed counterparts. For instance, take a look at jQuery 2.0. The uncompressed, non-minified core is 240K. That gets minified down to 83K, which is further compressed with gzip to under 30K. That's an 88 percent difference from the original!

Best of all, gzipping can be done dynamically on the server. If you're using Node, simply install one of these compression modules: https://github.com/joyent/node/wiki/modules#wiki-compression.
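
To illustrate the idea with nothing but Node's core modules, here's a bare-bones sketch of serving a gzipped script when the client advertises support for it (the filename is hypothetical, and the modules linked above wrap this pattern up far more conveniently):

var http = require('http');
var zlib = require('zlib');
var fs = require('fs');

http.createServer(function(req, res) {
  var raw = fs.createReadStream('app.min.js'); // hypothetical minified file
  var acceptEncoding = req.headers['accept-encoding'] || '';

  if (acceptEncoding.indexOf('gzip') !== -1) {
    // The client accepts gzip, so compress the stream on the fly.
    res.writeHead(200, {
      'Content-Type': 'application/javascript',
      'Content-Encoding': 'gzip'
    });
    raw.pipe(zlib.createGzip()).pipe(res);
  } else {
    // Fall back to the uncompressed file.
    res.writeHead(200, { 'Content-Type': 'application/javascript' });
    raw.pipe(res);
  }
}).listen(3000);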

Or if you're on an Apache server, gzipping can be done from the .htaccess file:

# compress text, html, javascript, css, xml:
AddOutputFilterByType DEFLATE text/plain
AddOutputFilterByType DEFLATE text/html
AddOutputFilterByType DEFLATE text/xml
AddOutputFilterByType DEFLATE text/css
AddOutputFilterByType DEFLATE application/xml
AddOutputFilterByType DEFLATE application/xhtml+xml
AddOutputFilterByType DEFLATE application/rss+xml
AddOutputFilterByType DEFLATE application/javascript
AddOutputFilterByType DEFLATE application/x-javascript

Also gzip .css and .html files (or output gzipped markup from a dynamic page).

Hosting

After you've prepared the .js file by minifying and gzipping, it's time to host it. But rather than host it on your main server, it's best to upload it to a CDN, for a number of reasons. I discuss these advantages later in this chapter in the Deployment section.

Animation Optimization

Another place you can see substantial gains in performance is within your app's animations. Animation is one of the heaviest processes for the browser because it has to draw and redraw the page repeatedly for each frame of the animation.

Additionally, performance is particularly noticeable in animation. That's because the better an animation performs, the more frames the browser can render per second. Poorly optimized animations look choppy because the browser has to drop too many frames, whereas high performance animations appear smooth and seamless to viewers.

Before you start optimizing an animation, decide whether you need it at all. Sure, animation can provide a better user experience, but bad performance will outweigh any of that. So pick your battles when it comes to animation and animate only important areas. Remember: Too much animation can be a bad experience because users have to wait for each animation to complete.

CSS3 Animation

CSS3 introduces transition and keyframe animations, both of which can be used as an alternative to JavaScript. Because these animations can be optimized by the browser and take advantage of hardware acceleration, it's a good idea to use them wherever possible. On capable devices, the graphics processing unit (GPU), rather than the central processing unit (CPU), renders the animation. The GPU is not only better suited for rendering, but it also frees up the CPU to focus on other tasks.

Combining CSS3 Animation with JavaScript

Using CSS3 animation doesn't mean you have to handle everything in the stylesheet. You can still trigger the animation with JavaScript: define the transition in your CSS, create a class that changes the property you want to animate, and then apply that class with JS.

First, start with the CSS:

.my-element {
  position: absolute;
  top: 0;
  left: 0;
  -webkit-transition: left 500ms ease;
     -moz-transition: left 500ms ease;
          transition: left 500ms ease;
}

.my-element.slide-left {
  left: 200px;
}

Next, attach the slide-left class to your element with jQuery:

$('.my-element').addClass('slide-left');

When this class is applied, the element's left offset transitions to 200px over the course of 500 milliseconds.

Downsides of CSS3 Animation

Unfortunately, CSS3 animation can be more difficult to work with than the JavaScript techniques you're already used to. For example, jQuery animations have an intuitive callback function that is called whenever the animation completes. Although you can accomplish the same thing with the CSS3 transitionend event, that event fires whenever any transition completes, and it can be difficult to discern which transition triggered it. To make matters worse, it fires multiple times if you are transitioning multiple properties (for example, both height and width on the same transition).
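
One way to cope is to check the event's propertyName and react only to the transition you actually care about. Here's a minimal sketch using jQuery, with the vendor-prefixed event name that some browsers still required at the time:

$('.my-element').on('transitionend webkitTransitionEnd', function(e) {
  // The native event reports which CSS property finished animating.
  if (e.originalEvent.propertyName === 'left') {
    // Roughly the equivalent of a jQuery animation callback for this property.
    console.log('Slide finished');
  }
});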

More importantly, CSS3 animation creates an organizational issue because animation is often more of a scripting task than a styling one, so it doesn't make sense to have that code in the stylesheet. Sometimes it's not a big deal; for instance, if you want to transition the color of a link on hover, that animation is just a styling thing. But once you start using JavaScript to apply CSS3 animation and then try to handle the transitionend event in the JS, you'll have pieces of the same script running from completely different locations. That doesn't mean you should avoid this practice altogether. Sometimes you need to use CSS3 animation; for example, on underpowered mobile devices, it can be a lifesaver. But you also need to understand the drawbacks to this development pattern.

Finally, keep in mind that browser support is still lacking for CSS3 animation. Fortunately, as of IE10, the latest A-list browsers all support transition and keyframe animation (with the exception of Opera Mini). However, earlier versions of IE don't support it, and that still represents a considerable market share at the time of this writing. With transitions, it's typically not a major issue, since they simply fall back to immediate, non-animated style changes. On the other hand, keyframe animation is a different story. Of course if any of these animations are mission-critical, you can always write a JavaScript fallback or use a polyfill.

Request Animation Frame

Alternatively, if you'd like to avoid the rat's nest of interfacing with CSS3 transitions, you can get a best-of-both-worlds solution using the HTML5 requestAnimationFrame API. This API provides a more performance-friendly way for the browser to render each frame of an animation.

Why You Should Use Request Animation Frame

Typically when animating in JavaScript, you use a timer loop like setTimeout to make CSS changes every few milliseconds. But that isn't necessarily the best approach because it's impossible to figure out the best frame rate to use on each user's machine. Make changes too slowly, and your animation appears choppier than it has to. Make changes too quickly, and the browser chokes on extra rendering tasks that it ultimately won't perform (because the timer will have moved on to the next frame of the animation).
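
For reference, the kind of timer-driven loop being described looks something like this minimal sketch (it assumes an absolutely positioned element with the id my-element and hard-codes a guess at the frame interval):

var box = document.getElementById('my-element');
var position = 0;

// Guess at a frame interval (roughly 30fps here) and hope the browser keeps up.
function step() {
  position += 5;
  box.style.left = position + 'px';
  if (position < 500) {
    setTimeout(step, 33);
  }
}

setTimeout(step, 33);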

Additionally, even if the CPU can handle the extra frames, they have to be timed to match the refresh rate of the user's screen—the rate at which the screen displays a new frame. Different devices have different refresh rates. The standard is 60 Hz, but a laptop in power-saving mode might use a refresh rate of 50 Hz, and some desktop monitors are 70 Hz.

That's where requestAnimationFrame comes in. It provides a better way to animate in the browser: instead of scheduling frames on an arbitrary timer, you ask the browser to render each frame when the device is ready for it. The browser can then render frames as quickly as it is able, potentially dropping that rate to better align with the screen's refresh rate.

You might think that it doesn't make much difference, but our eyes are finely tuned to motion and notice even the slightest change in frame rate. Thus it's better to be consistently lower in your frame rate than to drop frames at random. The former appears smooth, and the latter appears choppy.

Request Animation Frame Polyfill

Before you get started with requestAnimationFrame, make sure you're using this polyfill from Paul Irish:

(function() {
  var lastTime = 0;
  var vendors = ['ms', 'moz', 'webkit', 'o'];
  for(var x = 0; x < vendors.length && !window.requestAnimationFrame; ++x) {
    window.requestAnimationFrame = window[vendors[x]+'RequestAnimationFrame'];
    window.cancelAnimationFrame =
      window[vendors[x]+'CancelAnimationFrame'] || window[vendors[x]+'CancelRequestAnimationFrame'];
  }

  if (!window.requestAnimationFrame)
    window.requestAnimationFrame = function(callback, element) {
      var currTime = new Date().getTime();
      var timeToCall = Math.max(0, 16 - (currTime - lastTime));
      var id = window.setTimeout(function() { callback(currTime + timeToCall); },
        timeToCall);
      lastTime = currTime + timeToCall;
      return id;
    };

  if (!window.cancelAnimationFrame)
    window.cancelAnimationFrame = function(id) {
      clearTimeout(id);
    };
}());

This polyfill aligns the different browser extensions and also provides a fallback for non-supportive browsers using timeouts. You can read more about it here: http://paulirish.com/2011/requestanimationframe-for-smart-animating.

How to Use Request Animation Frame

After you include the polyfill, you're ready to get started. First, create some markup to work with:

<div id="my-element">Click to start</div>

And add some basic styling:

#my-element {
  position: absolute;
  left: 0;
  width: 200px;
  height: 200px;
  padding: 1em;
  background: tomato;
  color: #FFF;
  font-size: 2em;
  text-align: center;
}

Now here's the JavaScript:

var elem = document.getElementById('my-element'),
    startTime = null,
    endPos = 500, // in pixels
    duration = 2000; // in milliseconds

function render(time) {
  // When no timestamp is passed in, fall back to the system clock.
  if (time === undefined) {
    time = new Date().getTime();
  }

  if (startTime === null) {
    startTime = time;
  }

  // Position the element according to how far into the animation we are.
  elem.style.left = ((time - startTime) / duration * endPos % endPos) + 'px';
}

elem.onclick = function() {
  (function animationLoop(){
    render();
    requestAnimationFrame(animationLoop, elem);
  })();
};

This script adds a click handler to the element, which will animate it 500 pixels to the right over the course of two seconds and then loop around to the beginning.

First, the render() function compares the current time against the start time to figure out the right position to render in the animation, similar to the way you might handle it with a standard JavaScript timer.

Next, the animationLoop() function renders the current frame of the animation. It then calls requestAnimationFrame(), passing itself and the element you want to animate, which sets up a loop that renders a new frame in the element's animation whenever the browser is ready to do so.

Running this script in your browser results in a decidedly smooth animation, even if you increase the speed substantially.
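
The polyfill also wires up cancelAnimationFrame, so if you ever need to stop a loop like this one manually, keep the id that requestAnimationFrame returns. Here's a minimal sketch building on the code above (the variable name animationId is just illustrative):

var animationId;

function animationLoop() {
  render();
  animationId = requestAnimationFrame(animationLoop, elem);
}

// Start the loop...
animationLoop();

// ...and later stop it, for example when the element is removed or hidden.
cancelAnimationFrame(animationId);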

Another advantage of requestAnimationFrame is that it pauses the animation when the current tab loses focus, which conserves resources such as CPU cycles and battery power.

Doing Less

You can further optimize other parts of your app by following the mantra of “doing less.” In this sense, optimization is really very simple. The more the browser has to do, the slower that work will be completed. Therefore, if you reduce the number of tasks, you improve speed. One easy way to reduce tasks is by caching anything you're reusing.

This technique is especially important when it comes to jQuery DOM references, since CSS selectors can be particularly resource-intensive even with the super-fast Sizzle engine. For example, you should avoid selecting DOM elements within a loop, unless the loop somehow changes what those selectors reference.
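
For example, compare an uncached selector inside a loop with one that's cached up front (the h1 selector simply mirrors the jsPerf test referenced next):

// Uncached: the selector runs on every single iteration.
for (var i = 0; i < 100; i++) {
  $('h1').css('top', i + 'px');
}

// Cached: the DOM is queried once and the reference is reused.
var $heading = $('h1');
for (var j = 0; j < 100; j++) {
  $heading.css('top', j + 'px');
}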

Take a look at the jsPerf test here: http://jsperf.com/caching-jquery-dom-refs/2 (shown in Figure 12-3). As you can see, the cached selector is considerably faster than the selector that is called repeatedly in the loop.

Figure 12-3: The cached reference to $('h1') is much faster in Chrome.

Avoiding Reflows

Additionally, you can optimize client-side JavaScript by minimizing the number of reflows on the page. A reflow occurs whenever the layout of an element changes. For instance, if you make a floated element wider, it can push other floated elements down to the next line, which in turn pushes the following elements downward.

To get an idea of how reflows work, take a look at this visualization of reflows in action on a Gecko browser: http://youtu.be/ZTnIxIA5KGw.

The best way to avoid reflows is to avoid unnecessary manipulation of the DOM. Whenever possible, batch any DOM changes and insert them at once.
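
For example, build up the new markup first and append it with a single call, so the DOM is touched once instead of once per item (the container selector and image paths are placeholders):

var images = [];

for (var i = 0; i < 10; i++) {
  images.push('<img src="nyan-' + i + '.gif" alt="">');
}

// One insertion instead of ten separate appends.
$('#gallery').append(images.join(''));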

See this jsPerf test, which appends images to the page either one at a time or all at once: http://jsperf.com/individual-vs-batch-jquery-insertion/6. As shown in Figure 12-4, the batched changes run considerably faster.

Figure 12-4: Batching ten Nyan cat images is faster than appending them one at a time.

Deployment

After you've tested your app thoroughly and are happy with both the functionality and performance, it's time to deploy.

Deploying Static Assets on a CDN

There are a variety of performance advantages to deploying your app's static scripts to a CDN such as Amazon Simple Storage Service (S3).

Advantages of CDNs

First, CDNs distribute your files throughout the world, which reduces latency and increases availability because the files live closer to wherever users are requesting them. Remember, the Internet isn't magic; files still have to come from and go to somewhere—a trip that is limited by the speed of the electrical signal.

Second, CDNs are optimized to deliver static assets. For instance, on your site, you're probably using cookies in one way or another, if only for analytics. Every time a user requests a file from your server—be it HTML, JS, CSS, or an image—this cookie data is passed as part of the header.

Although cookies are relatively small (usually less than 1K), this extra data can really add up when you consider that it's being sent along with every single request. For that reason, it's a good idea to host all static assets (JS, CSS, and images) on a CDN, because CDNs don't pass any cookies along with the header.

Third, CDNs provide aggressive HTTP caching capabilities, which reduce latency and increase throughput, allowing the CDN to handle massive traffic spikes. That, combined with a high level of redundancy, contributes to a level of availability that is difficult to match on your own server.

Disadvantages of CDNs

The only downside to using a CDN is that it introduces a new point of failure: your site won't work if either your server or the CDN goes down. But you shouldn't worry about this issue too much. Most CDNs provide a high degree of redundancy, which ensures extremely high uptime. I'm not saying CDNs will never go down, but that risk is far outweighed by the performance benefits they provide.

Just bear in mind that it's not unheard of for CDNs to go down, even popular CDNs that are considered extremely stable. For example, Amazon Web Services (AWS) has gone down in the past, which is always a highly publicized event because of the number of sites that depend on it. In October 2012, AWS went down, bringing down major sites like Reddit and Netflix with it.

Deploying Node Server on EC2

In addition to deploying the static front-end resources to a CDN, you can also deploy your Node implementation to a server cloud such as Amazon's Elastic Compute Cloud (EC2). EC2 servers are great because they are scalable, meaning that you can ramp up resources when you need them and avoid paying for them when you don't. Thus you get a best-of-both-worlds solution: one that is cheap during slow periods but can handle just about any amount of load you throw at it. Of course, the more resources you use, the more you spend on hosting. But don't worry; you can set up limits that you're comfortable with.

To learn more about how to deploy your Node server to EC2, visit http://rsms.me/2011/03/23/ec2-wep-app-template.html.

The Launch

After all is said and done, you're finally ready to launch. So pat yourself on the back, take a break, grab a drink. However, don't get too comfortable, because issues always occur. Fires have to be put out, extreme edge cases crop up, and users won't understand certain features. But if you've done your due diligence pre-launch, you'll be able to limit the issues that come up afterward. No launch is perfect, but you'd better try to get close.

Hopefully, you haven't made the common release mistake of including too many features. From a business perspective, features are easier to add than they are to eliminate. You can always add features as users request them. But if you start with too many, you'll end up with a two-fold problem. First, your app will be exponentially more complicated and therefore more susceptible to bugs. Second, you'll end up supporting underused features forever.

I hope you discovered some new techniques in this book, but more importantly, I hope the book got you excited about JavaScript development. There's so much fun stuff going on in JavaScript, and it's only the beginning of this ever-evolving language.

There are always new libraries to streamline aspects of your development, as well as new DOM APIs that expand what you can achieve with JavaScript. Moreover, if you can't find a browser-level API for a feature you want, you can bet there will be one at some point; after all, you can suggest one yourself.

So keep learning and keep coding!

Additional Resources

Performance Testing

jsPerf. “JavaScript Performance Playground”: http://jsperf.com

WebKit's JavaScript Profiler Explained: http://fuelyourcoding.com/webkits-javascript-profiler-explained/

File Compression

Online Closure Compiler Service: http://closure-compiler.appspot.com

Github. Kennberg: Node Uber-Compiler: https://github.com/kennberg/node-uber-compiler

Other Node Compression Modules: https://github.com/joyent/node/wiki/modules#wiki-compression

Compression Comparison: http://coderjournal.com/2010/01/yahoo-yui-compressor-vs-microsoft-ajax-minifier-vs-google-closure-compiler/

Animation Performance

Jank Busting for Better Rendering Performance: www.html5rocks.com/en/tutorials/speed/rendering

requestAnimationFrame for Smart Animating: http://paulirish.com/2011/requestanimationframe-for-smart-animating

CSS-Tricks. “A Tale of Animation Performance”: http://css-tricks.com/tale-of-animation-performance

General Performance

Zakas, Nicholas C. High Performance JavaScript (O'Reilly Media, Inc., 2010)

Smashing Magazine. “Writing Fast, Memory-Efficient JavaScript”: http://coding.smashingmagazine.com/2012/11/05/writing-fast-memory-efficient-javascript

HTML5 Rocks. Chris Wilson. “Performance Tips for JavaScript in V8”: www.html5rocks.com/en/tutorials/speed/v8

SlideShare Inc. Jake Archibald. “JavaScript—Optimising Where It Hurts”: www.slideshare.net/jaffathecake/optimising-where-it-hurts-jake-archibald

SlideShare Inc. Steve Souders. “JavaScript Performance (at SFJS)”: www.slideshare.net/souders/javascript-performance-at-sfjs

YouTube. “Gecko Reflow Visualization”: http://youtu.be/ZTnIxIA5KGw

Deployment

Amazon S3: http://aws.amazon.com/s3

Amazon EC2: http://aws.amazon.com/ec2

Amazon AWS Goes Down Again, Takes Reddit with It: www.forbes.com/sites/kellyclay/2012/10/22/amazon-aws-goes-down-again-takes-reddit-with-it

A Template for Setting Up Node.js-Backed Web Apps on EC2: http://rsms.me/2011/03/23/ec2-wep-app-template.html
