Chapter 2. Progressive Enhancement

Progressive enhancement is a term that often incites intense debate. For many, progressive enhancement can be summed up as “make your site work without JavaScript.” While developing a site that works without JavaScript often does fall under the umbrella of progressive enhancement, it can define a much more nuanced experience.

In Aaron Gustafson’s seminal A List Apart article “Understanding Progressive Enhancement”, he describes progressive enhancement as a peanut M&M: the peanut is the core experience, which is essential to the user. The chocolate represents the features and design that take us beyond the naked peanut experience and add some much-loved flavor. Finally, the candy shell, though not strictly needed, provides added features, such as not melting in your hand. Often this example maps HTML to the peanut, CSS to the chocolate, and JavaScript to the candy shell.

In today’s web application landscape it may be an oversimplification to consider progressive enhancement as simply “works without JavaScript.” In fact, many of the rich interactions and immersive experiences that have come to define the modern Web certainly require JavaScript. For progressive enhancement to be considered an ethical issue in web development, we must tie it back to user needs. Progressive enhancement is about defining what users need to get from your website and ensuring that it is always delivered to them, in a way that will work regardless of network conditions, device, or browser.

I prefer Jeremy Keith’s view of progressive enhancement as a “process” rather than a specific technique or set of technologies. By Keith’s definition, this process looks like:

  1. Identify the core functionality
  2. Make that functionality available using the simplest technology
  3. Enhance!

As developers, it is our job to define the core functionality of our applications and establish what enhancements entail. This allows us to develop a baseline to build from—but the baseline for any given project may be different.

In his 2012 article “Stumbling on the Escalator”, Christian Heilmann appropriated a Mitch Hedberg comedy bit about escalators for progressive enhancement:

An escalator can never break—it can only become stairs. You would never see an “Escalator Temporarily Out Of Order” sign, just “Escalator Temporarily Stairs. Sorry for the convenience. We apologize for the fact that you can still get up there.”

As a person who has spent a lot of time in Washington DC’s Metro system, I can really appreciate this analogy. Fortunately, when an escalator is out I am not trapped underground, but instead can huff up the now-stairs to the street.

Often, when beginning a project, I am presented with a set of business requirements or a beautiful design. From these, it can be easy to see the end goal, but skip the baseline experience. If, in the case of the escalator, my requirement was to “build a transportation system that will allow Metro riders to travel from the terminal to the street,” my initial reaction may be to create only an elevator. You can imagine how this might become problematic.

Developing web apps works in much the same way. If we only consider the end goal, we run the risk of leaving our users stranded. By focusing on and providing a solid baseline for our users, we set ourselves up for success in many other aspects of ethical web development, such as accessibility and performance.

Defining Core Functionality

If progressive enhancement is the process of defining a core functionality and building up from there, how do we define that initial baseline? The goal is to consider the bare minimum that a user requires to use our application. Once we have defined this, we can layer on additional style and functionality. For some applications, this may be a completely JavaScript-free version of the experience, while for others it may be a less fully featured version; for still others it may be providing some server-rendered content on the initial page load only.

The key is to think of progressive enhancement not as a binary option, but instead as a range, determining what is the best decision for users. In this way, progressive enhancement is a gradient rather than an either/or option (see Figure 2-1). Our responsibility is to decide where on this gradient our particular application falls.

Figure 2-1. Progressive enhancement is a gradient of choices

I’d encourage you to take a few minutes and consider what the core functionality might look like for a few different types of websites and applications, including the following:

  • News website
  • Social network (write text posts and read your newsfeed)
  • Image sharing website
  • Web chat application
  • Video chat application

Identify the primary goal of each type of site and determine the minimum amount of technology needed to implement it. To take it a step further, write some markup or pseudocode explaining how you might implement those baselines and features.
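As one possible sketch of such a baseline (the element names and URL here are illustrative, not prescribed), the core of a social network’s “write a text post” feature needs nothing more than a standard HTML form that submits to the server:

```html
<!-- Minimal baseline: posting works with HTML alone, no JavaScript
     required. JavaScript can later enhance this form (for example,
     submitting asynchronously) without changing the markup. -->
<form action="/posts" method="post">
  <label for="post-text">Write a post</label>
  <textarea id="post-text" name="text" required></textarea>
  <button type="submit">Post</button>
</form>
```

Because the form works on its own, any enhancement layered on top of it is purely additive.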

When working on your own applications, try to perform the same exercise. First, determine the core functionality for your users and build the application from there. This pragmatic approach also pairs well with the Agile approach to software development, where the goal is to deliver working software at the end of each development sprint. If we first deliver a core experience, we can iteratively build upon that experience while continuing to deliver value.

Progressive Enhancement Is Still Relevant

Some may question how relevant progressive enhancement is today, when a small percentage of users browse the Web with JavaScript disabled.1 This places the focus too heavily on progressive enhancement as a JavaScript-free version of a site. In fact, some types of applications, such as video chat, absolutely require some form of JavaScript to work in the browser. The goal of progressive enhancement is to provide the absolute minimum for a working product and ensure that it is delivered to each user’s browser.

Ideally, this working minimum product is simply HTML without any external resources such as images, video, CSS, or JavaScript. When users’ browsers request our site, we can be certain that they will receive HTML (or nothing at all). By creating a working version of our application, even with a minimal experience, using the smallest number of assets, we can be sure that the user is able to access our content in some form.

The Government Digital Service team (GDS) at GOV.UK provides a number of scenarios where asset requests may fail:

  • Temporary network errors
  • DNS lookup failures
  • An overloaded or downed server failing to respond in time, or at all
  • Corporate firewalls at large institutions (e.g., banks, financial institutions, and some government departments) that block, remove, or alter content from the Internet
  • Mobile network providers resampling images and altering content to speed up load times and reduce bandwidth consumption
  • Antivirus and personal firewall software that alters and/or blocks content

Additionally, in the blog post “How Many People Are Missing Out on JavaScript Enhancement?”, the GDS added the following:

  • Existing JavaScript errors in the browser (i.e., from browser add-ons, toolbars, etc.)
  • Page being left between requesting the base image and the script/noscript image
  • Browsers that preload pages they incorrectly predict you will visit

While those of us developing for the Web often have nice hardware and speedy web connections, that may not be true for many of our potential users. Those in developing or rural areas may have limited or outdated devices and slow connections. In 2015, the Facebook development team initiated a program called 2G Tuesdays, which allows them to experience their applications as though they are being served over these slower networks. I would encourage you to do the same.

Today’s browser development tools allow us to mimic network conditions, experiencing what it is like for these users to access our sites (see Figure 2-2). We will explore the topic of web performance in greater detail in an upcoming title in this series.

Figure 2-2. The Google Chrome browser includes network connectivity simulation tools

Though you may have never used, tested an application with, or even heard of it, the Opera Mini browser currently has over 300 million users worldwide.2 The browser is designed to greatly decrease mobile bandwidth usage by routing pages through Opera’s servers and optimizing them. To do this, Opera Mini only supports a subset of modern web standards. Here are a few of the things that are unsupported by Opera Mini:

  • Web fonts (which also means no icon fonts)
  • HTML5 structural elements and form features
  • Text-decoration styling
  • Video and audio elements

The site Opera Mini Tips collects the full set of modern web standards that are not supported in the browser. As you can imagine, without a progressive enhancement strategy, our sites may be completely inaccessible for all 300+ million Opera Mini users.

When developing an application exclusively for users who are likely in an urban area with strong Internet speeds and nice hardware, we may feel as if we are exempt from concerning ourselves with connection issues. Recently, developer Jake Archibald coined the term Lie-Fi: a connection where our mobile device appears to be connected to WiFi, but sites are slow to load as they feebly connect over a struggling signal.

In addition to the conditions just described, there may be external factors at play. In 2014, the UK’s Sky broadband accidentally blocked the jQuery CDN for a brief amount of time, presumably leaving many users perplexed with broken websites. More recently, the ability to compose and publish a tweet became unavailable in the Twitter web client. This was caused by a regular expression that was being served by a CDN without the proper character encoding. Though the JavaScript likely worked in all local and quality assurance testing, once it was available on the Web it disabled one of the site’s most critical features.

Run Your Own Experiment

GDS was curious to see how many users were missing out on JavaScript resources when accessing its site. To test this, the team ran an experiment by adding three images to a page:

  • An image that all browsers should request
  • An image that would only be requested via JavaScript
  • An image that only browsers with JavaScript disabled would request

The results of this experiment are fascinating. Though only a fraction of a percent of users requested the JavaScript-disabled image, the proportion of users who failed to load the image requested via JavaScript was significantly higher.

If possible, I’d encourage you and your teams to conduct a similar experiment. This allows us to base the decision to support (or not support) JavaScript-disabled users on data, rather than assumptions or world averages.

To run this experiment on your own site, first create three empty GIF files named base-js.gif, with-js.gif, and without-js.gif. Then you can use the following snippet (adapted from GOV.UK’s experiment) in your HTML:

<img src="base-js.gif" alt="" role="presentation"
  style="position: absolute; left: -9999em; height: 0px; 
         width: 0px;">
<script type="text/javascript">
  (function () {
    var a = document.createElement("img");
    a.src = "with-js.gif";
    a.alt = "";
    // setAttribute is needed here; assigning to the .role property
    // does not reflect to the role attribute in most browsers
    a.setAttribute("role", "presentation");
    a.style.position = "absolute";
    a.style.left = "-9999em";
    a.style.height = "0px";
    a.style.width = "0px";
    // GOV.UK's original appended to a #wrapper element;
    // document.body works on any page
    document.body.appendChild(a);
  })();
</script>
<noscript>
   <img src="without-js.gif" alt="" role="presentation"
    style="position: absolute; left: -9999em; height: 0px; 
           width: 0px;">
</noscript>

Web Beacons

This approach, used by GOV.UK, relies on web beacons, which may have privacy implications. Web beacons are typically hidden pieces of content used to check whether a user has accessed a piece of content, and are common in email marketing campaigns. Privacy issues such as this will be discussed in an upcoming title in the series.

How Can We Approach Progressive Enhancement Today?

Recently, I was talking with my friend and colleague Scott Cranfill about a progressive enhancement strategy for a project he was working on. This project was mostly static content, but also included an auto loan calculator. When discussing how he might approach this from a progressive enhancement angle, he mentioned that he thought the default markup should simply include the formula that the calculator uses. Once the page’s assets load, a fully functional dynamic calculator will display. This means that nearly every user will only see and interact with the calculator, but should something go wrong, a user will still be presented with something that is potentially useful. I loved this pragmatic approach. It wasn’t about “making it work without JavaScript,” but instead about making it work for everyone.

In the world of modern, JavaScript-driven web applications, there are still several practical approaches we can take to build progressively enhanced sites. These approaches can be simple or leverage exciting web technology buzzwords such as isomorphic JavaScript or progressive web applications. Because progressive enhancement is not a one-size-fits-all approach, you may want to evaluate these and choose the one that best works for your project.

Let’s take a look at a few of these options and how they may be used to build the best possible experience for a user.

Perhaps the simplest and most progressive approach is to completely avoid a JavaScript-dependent first-page render. By rendering all of the necessary content on the server, we can ensure that users receive a usable page, even if only our HTML makes it to their browser. The key here is to focus on what is necessary. There may be additional JavaScript-required functionality, but if it isn’t necessary we can allow it to quietly fail in the background or present the user with different content.
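One way to let nonessential JavaScript fail quietly is to feature-detect before enhancing. The sketch below is an illustration, not a prescribed pattern; the function and element names are hypothetical, and the document is passed as a parameter so the check stays explicit and testable. If the optional widget’s markup never arrives, the function simply returns and the baseline page stands:

```javascript
// Hypothetical enhancement: only runs when the optional markup exists.
// If the element (or this script) never arrives, the core page works.
function enhanceShareButton(doc) {
  var btn = doc.getElementById("share-button"); // hypothetical id
  if (!btn || typeof btn.addEventListener !== "function") {
    return false; // quietly fail in the background
  }
  btn.addEventListener("click", function () {
    // JavaScript-dependent sharing behavior would go here
  });
  return true;
}

// Export for testing outside the browser (e.g., in Node.js)
if (typeof module !== "undefined") {
  module.exports = { enhanceShareButton };
}
```

In the browser, this would be called as `enhanceShareButton(document)` once the page has loaded.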

If you choose to serve a library from a CDN, you should provide a local fallback as well, as recommended by HTML5 Boilerplate. This allows us to leverage the benefits of the CDN while ensuring that the user has the opportunity to download the scripts should there be an issue with the CDN, such as unexpected downtime or being blocked by an ISP or third-party browser add-on. Here’s the code you’ll need:

<script src="https://code.jquery.com/jquery-1.12.0.min.js">
</script>
<script>
  // The escaped "<\/script>" prevents the HTML parser from closing
  // the surrounding script block at the string literal
  window.jQuery || document.write(
    '<script src="js/vendor/jquery.min.js"><\/script>'
  );
</script>

Another option, one that may be paired with the previous, is to sniff out outdated browsers and avoid serving JavaScript to them. We can continue to serve our core content and functionality to those browsers (it was progressively enhanced, after all!), but offer a significantly simpler experience.

To sniff out older browsers, we can use a technique demonstrated by Jake Archibald in his 2015 Nordic.js talk. This checks for the availability of the Page Visibility JavaScript API, which is only available in modern browsers. If Page Visibility is unavailable, the function returns early without executing the enhancement code. You can use the following code to check for the Page Visibility API:

(function() {
  if (!('visibilityState' in document)) {
    return false;
  }

  // rest of your code
}());

IIFE

The preceding example is wrapped in an immediately invoked function expression (IIFE), which may look a bit odd if you’re not a JavaScript developer. This ensures that the code executes immediately while avoiding the pollution of the global scope. If you are interested in learning more, see Ben Alman’s detailed post about IIFEs.

For JavaScript-dependent applications, we could render the landing page as HTML on the server while prefetching the JavaScript for the rest of the application:

<link rel="prefetch" href="app.js">

This approach gives our users the opportunity to download and cache the application’s JavaScript without adding a performance cost or a JavaScript requirement to a mostly static page. Soon browsers will begin implementing the Preload specification, which is similar to prefetch but enables additional browser features.

In action, preload looks similar to prefetch:

<link rel="preload" href="app.js" as="script"> 


You may be thinking, “But I want to build modern JavaScript web applications.” Certainly these techniques feel out of sync with the approaches of some of the popular JavaScript frameworks, but recently we’ve seen the most popular web application approaches trend back toward a progressive enhancement model.

Isomorphic or universal JavaScript is a technique that allows a developer to pair server-side and client-side JavaScript in a “write once, run everywhere” approach. With this technique, the initial application renders on the server, using Node.js, and then runs in the browser. When building a progressively enhanced isomorphic application, we can start by building the server-rendered version of the application and layer on the isomorphic approach.
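A minimal sketch of the idea (the names here are illustrative, not from any particular framework): a single render function, written once, is used by Node.js to produce the initial HTML response and by the browser to update the page after client-side navigation:

```javascript
// A shared template function: written once, run on server and client.
function renderPost(post) {
  return (
    "<article><h2>" + escapeHtml(post.title) + "</h2>" +
    "<p>" + escapeHtml(post.body) + "</p></article>"
  );
}

// Escape user-provided text so it is safe to embed in HTML
function escapeHtml(text) {
  return String(text)
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;");
}

// On the server (Node.js), renderPost() produces the initial,
// JavaScript-free HTML; in the browser, the same function
// re-renders content after the page is enhanced.
if (typeof module !== "undefined") {
  module.exports = { renderPost };
}
```

Because the server renders the first response with the same function, users whose JavaScript never arrives still receive complete, readable HTML.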

A similar approach was taken by the team behind the recent Google+ redesign:

With server-side rendering we make sure that the user can begin reading as soon as the HTML is loaded, and no JavaScript needs to run in order to update the contents of the page. Once the page is loaded and the user clicks on a link, we do not want to perform a full round-trip to render everything again. This is where client-side rendering becomes important—we just need to fetch the data and the templates, and render the new page on the client. This involves lots of tradeoffs; so we used a framework that makes server-side and client-side rendering easy without the downside of having to implement everything twice—on the server and on the client.

Isomorphic JavaScript

Though my description may be oversimplified, isomorphic JavaScript is an exciting approach for developers and teams who are using server-side JavaScript.

If a fully isomorphic JavaScript approach is overkill for an application, Henrik Joreteg has coined the term lazymorphic applications. A lazymorphic application is simply one where the developer pre-renders as much of the application as possible as static files at build time. Using this approach, we can choose what we render, making something useful for the user while withholding JavaScript-dependent features.
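As a rough sketch of the lazymorphic idea (the file names are hypothetical), a build-time script can render the static shell of a page once, leaving only the JavaScript-dependent features to be layered on in the browser:

```javascript
// Build-time pre-rendering sketch: everything that can be static is
// rendered here, once, rather than in every user's browser.
function buildPage(title, staticBody) {
  return (
    "<!DOCTYPE html><html><head><title>" + title + "</title></head>" +
    "<body>" + staticBody +
    // enhance.js (hypothetical) layers on JavaScript-dependent features
    '<script src="enhance.js"><\/script></body></html>'
  );
}

// In a real build script you would write the result to disk, e.g.:
// require("fs").writeFileSync("index.html",
//   buildPage("Home", "<h1>Welcome</h1>"));

if (typeof module !== "undefined") {
  module.exports = { buildPage };
}
```

The static output is useful on its own; the enhancement script is free to fail without taking the page with it.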

Lastly, the term progressive web apps has recently taken hold. Rather than specific technology, this term has come to encompass several interrelated techniques and approaches to web development. This is an approach that pairs nicely with all of those listed earlier.

In his article “Progressive Web Apps: Escaping Tabs Without Losing Our Soul”, Alex Russell described progressive web applications in this way:

  • Responsive
  • Connectivity independent
  • App-like interactions
  • Fresh
  • Safe
  • Discoverable
  • Re-engageable
  • Installable
  • Linkable

The progressive web application approach aligns well with an ethical web application experience by focusing on delivering an experience that works for every user.
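For example, the service worker technology behind the “connectivity independent” quality is itself applied as an enhancement: browsers that support it gain offline capability, while others simply ignore it. A minimal sketch, assuming a worker script at /sw.js (a hypothetical path):

```javascript
// Detect support before enhancing; taking the navigator object as a
// parameter keeps the check testable outside the browser.
function supportsServiceWorker(nav) {
  return Boolean(nav) && "serviceWorker" in nav;
}

// In the browser, register the worker only when supported; unsupported
// browsers fall back to ordinary network behavior.
if (typeof navigator !== "undefined" && supportsServiceWorker(navigator)) {
  navigator.serviceWorker.register("/sw.js").catch(function (err) {
    // Registration failure should never break the page
    console.warn("Service worker registration failed:", err);
  });
}

if (typeof module !== "undefined") {
  module.exports = { supportsServiceWorker };
}
```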

Progressive Web Applications

Though rooted in several technologies, the overarching concept of progressive web applications is just starting to take hold.

In Summary

There are a variety of techniques and approaches that allow us to build progressively enhanced modern websites and applications; this chapter has outlined a few of them. By beginning with the core functionality, we ensure that our application works for the maximum number of people, giving us a baseline from which to deliver working software for all users in a range of situations.

From an ethical standpoint, progressive enhancement provides several benefits to our users. By following a progressive enhancement process, we can be sure that we are building our applications in a way that allows them to be available for as many users as possible, regardless of device, connection, or browser.

1 In 2010, Yahoo conducted what is considered the definitive study of JavaScript usage, finding that the percentage of users with JavaScript disabled ranged from 0.26% to 2.06%, depending on the country of origin. Sadly, these statistics are long out of date. In 2013, GOV.UK’s GDS team did a similar study and found that 1.1% of its users were not receiving JavaScript. The German site darwe.de analyzes JavaScript enablement in real time and shows a much larger percentage of users with JavaScript disabled visiting its site.

2 Owen Williams, “The Unknown Browser with 300 Million Users That’s Breaking Your Site,” TNW, http://thenextweb.com/dd/2015/12/24/the-unknown-browser-with-300-million-users-thats-breaking-your-site/.
