CHAPTER 1

Ajax Defined

Chapter Objectives

• Give a brief history of the Internet

• Describe basic web architecture, including URLs and HTTP

• Discuss how user interaction on the Web has evolved

• Discuss what Ajax is and how it is important

You may not yet know exactly what Ajax is or the technologies involved, but you have probably already used websites that are built on Ajax. Many of the most popular sites on the Internet use Ajax, including Google Maps (http://maps.google.com), Yahoo! (http://www.yahoo.com), Facebook (http://www.facebook.com), Flickr (http://flickr.com), and Amazon.com's A9 search engine (http://a9.com). The Internet has undergone tremendous change from its beginnings as a means for scientists to exchange research documents to a platform for dynamic, distributed applications. The latest evolution has brought the user experience of desktop applications to the Web—made possible by Ajax. This book teaches you the basic skills you need to develop dynamic web applications that provide the user a desktop application-like experience. But first, we will cover a little history.

1.1 History Lesson

 

The Internet and the World Wide Web (WWW), sometimes collectively referred to as the Web, have revolutionized the way that companies conduct business and even the way that humans communicate. Today, you can buy nearly anything on the Web, you can manage all your financial accounts on the Web, you can watch TV programs and movies on the Web, companies readily conduct critical business meetings over the Web, greater portions of the population get their daily news and information from the Web, and many humans would rather communicate via email or instant messaging than talk on the phone.

The Internet is a global network of computer networks that join together millions of government, university, and private computers. This network provides a mechanism for communication where any type of data (text, images, video, etc.) can be exchanged between linked computers. These computers can be physically located on opposite ends of the globe, yet the data can be exchanged in a matter of seconds. Although often used interchangeably, the terms “Internet” and “WWW” are different. The Internet is the worldwide network of computers (and other devices such as cell phones), but the WWW refers to all the information sources that a web browser can access, which includes all the global publicly available websites plus FTP (File Transfer Protocol) sites, USENET newsgroups, etc. Email is not considered to be part of the WWW but is a technology that is made possible by the Internet.

The Web had its beginnings in the early 1960s when some visionaries saw great potential value in allowing computers to share information on research and development in scientific and military fields. In 1962, Joseph Carl Robnett Licklider at the Massachusetts Institute of Technology (MIT) first proposed a global network of computers. Later that year he started working at the Defense Advanced Research Projects Agency (DARPA), then called the Advanced Research Projects Agency (ARPA), to develop his idea. From 1961 through 1964, Leonard Kleinrock, while working on a Ph.D. thesis at MIT, and later while working at the University of California at Los Angeles (UCLA), developed the concept of packet switching, which is the basis for Internet communications today. In 1965 while at MIT, Lawrence Roberts and Thomas Merrill used Kleinrock's packet switching theory to successfully connect a computer in Massachusetts with a computer in California over dial-up telephone lines—the first Wide-Area Network (WAN).

In 1966, Roberts started working at DARPA on plans for the first large-scale computer network, called ARPANET, at which time he became aware of work done by Donald Davies and Roger Scantlebury of National Physical Laboratory (NPL) and Paul Baran of RAND Corporation that coincided with the packet switching concept developed by Kleinrock at MIT. By coincidence, the early work of the three groups (MIT, NPL, and RAND) had proceeded in parallel without any knowledge of each other. The word “packet” was actually adopted for the ARPANET proposal from the work at NPL. DARPA awarded the contract for bringing ARPANET online to BBN Technologies of Massachusetts. Bob Kahn headed the work at BBN, which, in 1969, brought ARPANET (later called the Internet, in 1974) online at 50 kilobits per second (Kbps), connecting four major computers at universities in the southwestern United States—UCLA, the Stanford Research Institute, the University of California at Santa Barbara, and the University of Utah.

ARPANET quickly grew as more sites were connected. In 1970, the first host-to-host protocol for ARPANET was developed, called Network Control Protocol (NCP). In 1972, Ray Tomlinson of BBN developed email for ARPANET. In 1973, Vinton Cerf of Stanford and Bob Kahn of DARPA began to develop a replacement for NCP, which was later called Transmission Control Protocol/Internet Protocol (TCP/IP). ARPANET was transitioned to using TCP/IP by 1983. TCP/IP is still used today as the Internet's underlying protocol for connecting computers and transmitting data between them over the network.

The original Internet was not very user-friendly, so only researchers and scientists used it at that time. In 1991, the University of Minnesota developed the first user-friendly interface to the Internet, called Gopher. Gopher became popular because it allowed non-computer scientist types to easily use the Internet. Earlier, in 1989, Tim Berners-Lee and others at the European Laboratory for Particle Physics (CERN) in Switzerland proposed a new protocol for information distribution on the Internet, which was based on hypertext, a system of embedding links in text to link to other text. This system was invented before Gopher but took longer to develop. Berners-Lee eventually created the Hypertext Transfer Protocol (HTTP)1 and the Hypertext Markup Language (HTML),2 coined the term “World Wide Web,” developed the first web browser and web server, and went on to help found the World Wide Web Consortium (W3C),3 which is a large umbrella organization that currently manages the development of HTTP, HTML, and other web technologies.

1.2 Basic Web Architecture

 

Most traffic on the Internet today is the transmission of HTTP messages. Most Internet users have applications on their computers called web browsers (typically Microsoft Internet Explorer, Firefox, Opera, or Safari). The web browser is a user interface that knows how to send HTTP messages to, and receive HTTP messages from, a remote web server. The web browser establishes a TCP/IP connection with the web server and sends it an HTTP request message. The web server knows how to handle HTTP request messages to get data (text, images, movies, etc.) from the server and send it back to the web browser, or process data that is submitted to the web server from the web browser (e.g., a username and password required for login). Internet users typically use web browsers to simply get web pages from the web server in the form of HTML documents (see Figure 1.2.1). The web browser knows how to process the HTML document that it receives from the web server and display the results to the user via a graphical interface. Once the web browser receives the HTTP response message from the web server, the TCP/IP connection between the web browser and web server is closed.


Figure 1.2.1

Typical Interaction Between Web Browser and Web Server

1.2.1 Uniform Resource Identifier (URI)/Uniform Resource Locator (URL)

Web browsers always initiate TCP/IP connections with the web server, never vice versa. The web browser identifies which web server to make a connection with and what is being requested of the web server with a Uniform Resource Locator (URL). A URL is a classification of Uniform Resource Identifier (URI) that identifies a resource by its location. A URI is a more general term that encompasses all types of web identifier schemes. The terms “URL” and “URI” are often used interchangeably, but the term “URL” is meant to specify a type of URI that identifies the location of a resource, as opposed to, say, identifying a resource by name, independent of location, as is done with a Uniform Resource Name (URN).

A URI is simply the address that you type into the address field of your browser, such as http://www.w3c.org. URIs are composed of several parts—scheme, authority, path, query, and fragment. Each part is described below, followed by an example that shows how the parts fit together.


• scheme—Identifies the application-level protocol. Examples are http, ftp, news, mailto, file, and telnet. The :// after the scheme separates the scheme from the authority.

• authority—The host name or IP address of the web server and an optional port number. The standard port for HTTP is 80, which most computers already know, so it can typically be omitted. However, if the web server is listening for connections on a different port, such as 8080, then that port needs to be specified.

• path—A directory path to the resource. The concept of directory used here is the same as that used with file systems. The ? (question mark) after the path separates the query from the rest of the URI and is not needed if there is no query.

• query—The optional query is information that is to be interpreted by the web server. It is used to provide additional information that is not included in the path or to submit text data to the web server. The query can contain multiple name=value pairs separated by an & (ampersand). Each name is separated from its associated value by an = (equal sign).

• fragment—The optional fragment is used to identify a location within a document. This part is actually used by the web browser, not the web server, to bring you to a specific location in a document. The # (pound) separates the fragment from the rest of the URI.
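For example, a hypothetical URI that uses all five parts might look like the following (the host name and the query values are made up for illustration):

    http://www.example.com:8080/catalog/prices?item=widget&currency=usd#summary

Here http is the scheme, www.example.com:8080 is the authority (a host name plus the port 8080), /catalog/prices is the path, item=widget&currency=usd is the query (two name=value pairs separated by an &), and summary is the fragment.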

1.2.2 Hypertext Transfer Protocol (HTTP)

Hypertext Transfer Protocol (HTTP) is a stateless protocol that supports requests followed by responses (request-response message exchange pattern). Previously we described the use of HTTP between a web browser and a web server; however, HTTP messages are also commonly exchanged between web servers or other applications that do not require human interaction. HTTP does not require the use of a web browser; it simply describes how data can be exchanged over a network that uses TCP/IP (e.g., the Internet). By default, HTTP uses TCP/IP connections on port 80 of a computer, but other ports can be, and often are, used. An HTTP transaction begins with a request from the client and ends with a response from the server.

An HTTP request message consists of three parts: (1) a line defining the HTTP method, the URI requested, and HTTP version used; (2) a list of HTTP request headers; and (3) the entity body. An example HTTP request message follows.

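In the request below, the method is GET, the URI is /catalog/prices, and the protocol version is HTTP/1.1; the host name and header values are illustrative.

    GET /catalog/prices HTTP/1.1[CRLF]
    Host: www.example.com[CRLF]
    User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)[CRLF]
    Accept-Language: en-us[CRLF]
    [CRLF]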

The [CRLF] tags in the preceding message represent the carriage return/linefeed (CRLF) characters. You normally would not see them; however, they are significant in an HTTP message, so they are displayed here. CRLF characters are used to separate each line of the header and the header from the entity body. The message header includes every line before the first blank line (the line in the example with only a CRLF). The first blank line defines where the message header ends and the entity body begins.

Each line of the HTTP request message that occurs after the first line and before the blank line is called an HTTP request header. HTTP request headers contain useful information about the client environment and the entity body, such as the type of web browser used, languages that the browser is configured for, and length of the entity body. The first line of the HTTP request message contains the HTTP method (GET), the URI (/catalog/prices), and the protocol/version (HTTP/1.1). The HTTP method tells the web server something about how the message is structured and what the client expects the web server to do. The latest version of HTTP is version 1.1. The HTTP 1.1 specification defines the methods in Table 1.2.1. The GET and POST methods are the most widely used.

An HTTP response message also contains three parts, like the request message: (1) a line defining the version of the protocol used, a status code to identify if the request was successful, and a description; (2) a list of HTTP response headers; and (3) the entity body. An example HTTP response message follows.

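In the response below, the status line reports the 200 status code with the description OK; the server name, content length, and HTML entity body are illustrative.

    HTTP/1.1 200 OK[CRLF]
    Server: Apache/2.0.46[CRLF]
    Content-Type: text/html[CRLF]
    Content-Length: 49[CRLF]
    Cache-Control: max-age=3600[CRLF]
    [CRLF]
    <html><body><h1>Catalog Prices</h1></body></html>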

Table 1.2.1 HTTP 1.1 Request Methods

  Method   Description
  GET      Simply retrieves the data identified by the URL.
  HEAD     Like GET, but retrieves only the HTTP headers.
  POST     Used to submit data to the web server in the entity body. Sometimes data is also submitted to a web server by adding a query string to the URL; however, this is not how the GET method was intended to work. A query string added to a URL for a GET is only supposed to help identify the data to be retrieved from the web server and sent in an HTTP response message back to the client. The POST method is typically used with HTML forms.
  OPTIONS  Used to query a web server about the capabilities it provides.
  PUT      Stores the entity body at the location specified by the URL.
  DELETE   Deletes a document from the web server that is identified by the URL.
  TRACE    Used to trace the path of a request through firewalls and proxy servers for debugging network problems.

As with the HTTP request message example, this example shows the CRLF characters, even though they would normally not be visible. The message header is separated from the entity body by a blank line. Every line after the first line and before the first blank line is called an HTTP response header. The HTTP response headers contain useful information such as the length and type of data in the entity body, the type of server that processed the request, and information that can be used by the web browser to determine how long it can cache the data. The entity body of the message may contain both text and binary data. In this case, it contains HTML code, which will be processed by the web browser and displayed to the user. The status code in this example is 200 and the description is OK. This output indicates to the client that the request was successful. The HTTP success codes are in the 200s, HTTP redirect codes are in the 300s, and HTTP error codes are in the 400s and 500s. An HTTP redirect occurs when the web server responds with an indication that the web browser should take some action, typically to request a different URL. Some common HTTP error codes and descriptions that you may have seen displayed by your web browser when surfing the web are 404 Not Found (the resource requested was not found) and 500 Internal Server Error (the web server encountered an error).

1.3 Evolution of the Web

 

The Web was first implemented as a way for scientists to easily exchange documents and link references to other documents. The web pages were static and plain text—no fancy graphics or fonts, nothing moved, nothing flashed, no rich user interaction. As computers and networks became more powerful and the web user community grew, businesses saw the potential in the ability to distribute information about products and services to the world. As a result, the HTML specification was improved to meet the demands for richer content, such as images and animations. Web browsers, in particular Netscape Navigator and later Microsoft Internet Explorer, drove much of the HTML evolution by implementing new features before they became adopted as standards.

The first popular web browser, called Mosaic, was developed by the National Center for Supercomputing Applications (NCSA) in late 1992. Mosaic was a significant step forward because it improved the user interface to the Web and included support for images. Next, in 1994, Netscape Communications Corporation released Netscape 1, which was based on Mosaic but was much improved, with support for multiple TCP/IP connections, cookies, and the now-deprecated <center> tag for centering content. Netscape became the new market leader and remained so for several years.

In late 1994, Sun changed the view of the Web with a Java technology-based Mosaic clone called WebRunner. WebRunner did something that had never been done before: it brought to life animated, moving objects and dynamic executable content inside the web browser. People no longer thought of the Web as being limited to static text content. In 1995, Netscape agreed to incorporate Java support into its next browser, which was Netscape 2 released in 1996. The Java support allowed developers to create small Java programs that were embedded in a web page and executed in the browser. Java Applets, as they are called, are still used today to provide 3D graphics and animation not natively supported by browsers (see Figure 1.3.1). The widespread adoption of Java applets has been hindered mostly because Java must be installed on the client computer in addition to the browser.


Figure 1.3.1

Example Java Applet for Viewing and Rotating a Molecule in 3D

Along with support for the Java programming language, Netscape 2 also included an interpreter for a scripting language called JavaScript4 (originally called Mocha, then LiveWire, then LiveScript, and finally JavaScript). The first version of JavaScript allowed the developer only to modify the contents of HTML forms; however, that was a huge step forward. Finally, developers could do some native processing in the browser, such as validation of form input, instead of having to make the user wait while the data was sent to the server and the response loaded in the browser. At that time most users were connected to the Internet via 28.8 Kbps modems—much slower than today's high-speed connections.
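For instance, a simple validation script of that era might check a form field in the browser before the form is ever submitted to the server; the form and field names below are hypothetical.

    // Runs in the browser when the form is submitted; returning false cancels
    // the submission, so no round trip to the server is needed for bad input.
    function validateOrder(form) {
        if (form.elements["quantity"].value == "") {
            alert("Please enter a quantity.");
            return false;
        }
        return true;
    }
    // Attached to the form as: <form onsubmit="return validateOrder(this);">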

Another popular feature that Netscape 2 introduced was frames. The <frameset> tag allows the browser window to be divided into subwindows (multiple <frame> tags) that can each load its own web page. Developers typically used frames to reduce the amount of data that had to be downloaded from the server as a user surfed through a website. One or more small frames would be loaded with the parts of a view that did not change from page to page, such as the menu, and a main frame would contain the content that did change.

Soon, developers also realized that they could hide or minimize the size of a frame, and the hidden frame technique for client-server communication was born. The hidden frame is loaded with a web page that contains a form, and JavaScript is used to dynamically fill out the form and submit it to the server. This back-channel communication became popular, especially when the <iframe> tag was standardized in HTML 4.0. The IFrame allowed developers to embed a hidden frame in a typical web page that did not use a <frameset>.
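As a minimal sketch of the technique, the JavaScript below fills out and submits a form that lives inside a hidden frame; the frame id, page name, form name, and field name are all hypothetical.

    // Assumes the page contains <iframe id="hiddenFrame" src="lookup.html"
    // style="display:none">, where lookup.html holds a form named "lookup"
    // with a text field named "user".
    function sendInBackground(username) {
        var frame = document.getElementById("hiddenFrame");
        var form = frame.contentWindow.document.forms["lookup"];
        form.elements["user"].value = username;  // fill out the form dynamically
        form.submit();  // the response loads invisibly in the hidden frame
    }

The server's response replaces the document inside the hidden frame, where another script can read it and update the visible page.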

The next major step in the evolution of web page user interaction came when Dynamic HTML (DHTML) was introduced with Netscape 4 and Internet Explorer 4 in 1997. Until that time, developers could not alter the content of the web page. Netscape 3 made the source (src) of images modifiable, which allowed developers to change an image when the user moused over it, but almost none of a web page could be dynamically modified until the advent of DHTML. DHTML gave the developer the ability to alter most parts of a page by using JavaScript. Developers quickly learned to combine the hidden frame technique with DHTML so that any part of the page could be refreshed with content from the server, and a new age of web page user interaction was ushered in.

1.4 The Age of Ajax

 

Internet Explorer did a better job of implementing DHTML than did Netscape. That, combined with the fact that Internet Explorer was free and shipped with the Microsoft Windows operating system, had Internet Explorer crushing the competition by the time version 5 was released in 1999. During this time, the W3C greatly expanded and standardized the features introduced with DHTML, calling their specification the Document Object Model (DOM). The DOM and its partner specification, Cascading Style Sheets (CSS), were developed in hopes that the various browser vendors would implement them to make the work of a developer easier. Without standards, developers had to add confusing conditional logic to their scripts to support the various proprietary implementations of browsers that people used. Internet Explorer 5, and later version 6, had better support for the W3C standards, but that support was still far from ideal.

Because of the crushing competition and the lack of revenue, Netscape decided to open-source its code and called on developers around the world to help create a better browser. As a result, the Mozilla Project was formed. Mozilla decided to rewrite the browser from scratch to have the best support for the W3C standards that were now in place. It took the Mozilla Project nearly four years to create the first full release of its new browser, Mozilla 1.0, in 2002. With a lack of browser competition, not much advancement was made in web page user interaction during this period—that is, until Google decided to get into the online map business.

On February 18, 2005, Jesse James Garrett of Adaptive Path5 published an online article entitled “Ajax: A New Approach to Web Applications.”6 He coined the term as an acronym for Asynchronous JavaScript And XML. In the article, Garrett discussed how the user experience of web applications was approaching that of desktop applications thanks to a new combination of technologies. Although the combination of the technologies was new, the technologies themselves had been available for several years. The technologies are HTML, CSS, DOM, JavaScript, eXtensible Markup Language (XML), and a JavaScript object called XMLHttpRequest. The key technology in this stack is XMLHttpRequest, which was originally introduced in Internet Explorer 5, long before Garrett's article. By the time Garrett wrote his article, XMLHttpRequest was supported by all the major browsers, but until then it had not been widely used and consequently received little attention. So why is XMLHttpRequest so special? Because it allows a background asynchronous request to be made from JavaScript. The request is made without affecting the page that the user is viewing and without locking the user interface. Figure 1.4.1 illustrates the difference between applications that use Ajax and those that do not.
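As a minimal sketch of how the object is obtained in script, the snippet below covers the browsers of the time; Internet Explorer 5 and 6 expose XMLHttpRequest as an ActiveX control rather than as a native window property.

    // Create an XMLHttpRequest object, falling back to the ActiveX version
    // used by Internet Explorer 5 and 6.
    var xhr;
    if (window.XMLHttpRequest) {
        xhr = new XMLHttpRequest();                    // Mozilla, Safari, Opera, IE 7+
    } else if (window.ActiveXObject) {
        xhr = new ActiveXObject("Microsoft.XMLHTTP");  // IE 5 and 6
    }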

In a traditional web application, the interaction that happens after a user clicks a link or a button is as follows: (1) the browser makes an HTTP request to the web server, (2) the web server typically queries data from a database, (3) the web server performs some calculations and possibly communicates with another system, and (4) the web server responds to the browser with an entirely new HTML page. This round trip is time consuming, and the loading of a full new page in the browser is usually unnecessary because typically only a portion of the page needs to be updated. In addition, because of the design of web browsers, the user must wait the entire time while the new page is being requested from the server, the server is doing its processing, and the browser is loading the new page. This design was just fine when all that people used the Web for was reading text documents and linking to other documents to read. But when it came to using the Web as a platform for applications, this design was clearly lacking.


Figure 1.4.1

Traditional vs. Ajax Web Applications

In contrast, the interaction that happens in an Ajax Web application after the user clicks a link or button is as follows: (1) a JavaScript call is made to an “Ajax engine,” which is simply JavaScript code that handles asynchronous communication with the server; (2) the Ajax engine makes a background asynchronous request to the web server; (3) the web server typically queries data from a database; (4) the web server performs some calculations and possibly communicates with another system; (5) instead of responding to the browser with a full new page, the web server sends back only the data that is needed; and (6) the JavaScript code updates the user interface with the new data. In the traditional web application model, the web server must respond with an entire HTML page, but in the Ajax model it can respond with just the necessary data: HTML snippets, XML, plain text—whatever. The Ajax engine processes this data and uses it to update various pieces of the page. Also, in the traditional model the user must wait while the request is being processed, but in the Ajax model the request is handled asynchronously in the background so that the user can continue using the page. No more click and wait.
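A tiny “Ajax engine” along these lines might look like the following sketch; the URL /catalog/prices and the element id prices are illustrative, and error handling is omitted.

    // Fetch new data in the background and update one part of the page.
    function refreshPrices() {
        var xhr = window.XMLHttpRequest ? new XMLHttpRequest()
                                        : new ActiveXObject("Microsoft.XMLHTTP");
        xhr.onreadystatechange = function () {
            if (xhr.readyState == 4 && xhr.status == 200) {  // response has arrived
                // (6) Update only the portion of the page that changed.
                document.getElementById("prices").innerHTML = xhr.responseText;
            }
        };
        xhr.open("GET", "/catalog/prices", true);  // (2) true = asynchronous
        xhr.send(null);                            // the user keeps working meanwhile
    }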


Figure 1.4.2

Google Suggest and Google Maps

So what does Google have to do with this? Well, in Garrett's article he mentioned Google Maps,7 Google Suggest,8 and Gmail9 as examples of this new technique. Gmail is an online email application that uses Ajax to do things such as automatically save a draft of a letter that you are writing before you send it. Google Suggest adds a simple feature to Google search that automatically lists suggested terms as you type, almost instantly. Google Maps is the most complex of these Ajax examples. It allows you to find points on a map, and pan and zoom the map by using your mouse, all very fluidly, and all from the same original page (see Figure 1.4.2). Google was not the only company that had started using Ajax techniques at the time, but it was one of the most prominent. Plus, Google was not known for flashy websites. Google's fame arose from a website consisting of a plain white page with a text box and a button in the middle. However, Google's use of Ajax proved that the technique was not only feasible but also suitable for high-volume, professional websites. The combination of Garrett's new, catchy name and prominent examples from a major company ignited the fire. Ajax became an overnight sensation.


Figure 1.4.3

Google Docs Online Word Processor

Now Ajax is associated with another catch phrase, Web 2.0. In October 2004, O'Reilly Media held a conference entitled Web 2.0,10 and since then the phrase has caught on. There is much confusion over what Web 2.0 is and, for that matter, what Web 1.0 is. Part of what O'Reilly defines as Web 2.0 is the shift of the Web from a way to bring content to desktop applications to a platform from which to deliver full-scale applications that are fluid enough to supplant desktop applications in many ways. This shift is made possible in large part by Ajax. Google's Ajax applications are, to many people, the line between the Netscape-dominated Web 1.0 and the Google-dominated Web 2.0—the difference between selling packaged software and selling services. A great example of web applications supplanting desktop applications is Google Docs,11 which is an online tool for creating documents, spreadsheets, and presentations. Instead of using Microsoft Word, Excel, and PowerPoint, you can use Google Docs. You get a tool that can not only create these documents but also import existing Microsoft documents and readily share your documents online with others (see Figure 1.4.3).

As time passes, we will probably see more applications transition from the desktop to the Web. It is an exciting time, and the possibilities are seemingly endless.

1.5 Summary

So now you know what Ajax is. It is not a specific technology but rather a technique for using existing technologies to improve the user interaction of web pages. It is essentially a way of communicating with the web server without refreshing the page that the user is viewing. Jesse James Garrett originally described Ajax as a technique that used a specific set of technologies, including XML and XMLHttpRequest. However, with our definition, many techniques could be placed under the Ajax umbrella. To start with, the use of XML with Ajax is not required. The data that you transfer between the browser and server can be just about anything you want. Second, the use of the XMLHttpRequest object to send the background request is not required because you can use an IFrame.

Beyond the standard web technologies, there are others that are often used to provide “Rich Internet Applications” (a term coined by Macromedia). For example, we already mentioned how you can embed a Java applet in a web page. You can use that applet for the entire user interface, or you can use it just to handle the communication with the server in place of XMLHttpRequest. Another popular technology that also requires an installation in addition to the web browser is Adobe Flash.12 Flash can provide a rich user interaction, but the main drawback is that the tools are expensive. Because you can create an Ajax-like experience without using XML or XMLHttpRequest, Garrett modified his original article to drop “AJAX” as an acronym. Now “Ajax” is just considered a term that identifies the technique.

This book is about teaching you how to develop Ajax Web applications by using standard web technologies: HTML, CSS, XML, and JavaScript (for Java applets or Flash you will have to look elsewhere). By the time you finish this book, you should have the knowledge that you need to create your own Google Maps-like application. But before you continue with this book, take some time and surf the Internet to experience the Ajax applications mentioned in this chapter and any others that you can find. Doing so will give you a better understanding of what you are trying to learn.

1.6 Self-Review Questions

1. HTTP protocol works on top of the TCP/IP protocol.

a. True

b. False

2. HTTP is a stateless protocol that supports

a. HTTP requests only

b. HTTP responses only

c. Both of the above

3. A URN is a classification of Uniform Resource Identifier (URI) that identifies a resource by its location.

a. True

b. False

4. A URL is a classification of Uniform Resource Identifier (URI) that identifies a resource by its name.

a. True

b. False

5. The POST method is typically used for an HTTP form request.

a. True

b. False

6. Which two methods are most often used in an HTTP request?

a. GET and POST

b. GET and PUT

c. HEAD and GET

d. HEAD and DELETE

7. HTTP is the only protocol used in any web application.

a. True

b. False

Keys to the Self-Review Questions

1.a 2.c 3.b 4.b 5.a 6.a 7.b

1.7 Exercises

1. What was Ajax originally an acronym for?

2. What is Ajax?

3. What are the technologies typically used in Ajax, and what is each used for?

4. Using standard web technologies, what are two different ways that you can communicate with the server without refreshing the page?

5. What are the steps involved in typical Ajax Web application user interaction? How are they different from a traditional web application?

6. Search the Web and list five other sites that use Ajax that are not mentioned in this chapter.

1.8 References

Asleson, Ryan, and Nathaniel T. Schutta. 2005. Foundations of Ajax. Berkeley, CA: Apress.

Berners-Lee, T., R. Fielding, and L. Masinter. Uniform Resource Identifier (URI): Generic Syntax. Network Working Group, 2005. http://www.gbiv.com/protocols/uri/rfc/rfc3986.html (accessed February 28, 2008).

Boutell.com, Inc. “The New WWW FAQs.” Boutell.com. 2005. http://www.boutell.com/newfaq/ (accessed February 28, 2008).

Byous, Jon. “Java Technology: The Early Years.” Sun Microsystems, Inc. 2008. http://java.sun.com/features/1998/05/birthday.html (accessed February 28, 2008).

Fielding, R., J. Gettys, J. Mogul, H. Frystyk, L. Masinter, P. Leach, and T. Berners-Lee. Hypertext Transfer Protocol—HTTP/1.1. The Internet Society, 1999. http://www.ietf.org/rfc/rfc2616.txt (accessed February 28, 2008).

Garrett, Jesse James. “Ajax: A New Approach to Web Applications.” Adaptive Path, LLC. 2008. https://www.adaptivepath.com/ideas/essays/archives/000385.php (accessed February 28, 2008).

Gehtland, Justin, Ben Galbraith, and Dion Almaer. 2006. Pragmatic Ajax: A Web 2.0 Primer. Raleigh, NC: Pragmatic Bookshelf.

Howe, Walt. “A Brief History of the Internet.” Walt Howe. 2008. http://www.walthowe.com/navnet/history.html (accessed February 28, 2008).

Keith, Jeremy. 2007. Bulletproof Ajax. Berkeley, CA: New Riders.

Koch, Peter-Paul. “A history of browsers.” Peter-Paul Koch. 2008. http://www.quirksmode.org/browsers/history.html (accessed February 28, 2008).

Leiner, Barry M., Vinton G. Cerf, David D. Clark, Robert E. Kahn, Leonard Kleinrock, Daniel C. Lynch, Jon Postel, Larry G. Roberts, and Stephen Wolff. A Brief History of the Internet, version 3.32. Internet Society. http://www.isoc.org/internet/history/brief.shtml (accessed February 28, 2008).

Mahemoff, Michael. 2006. Ajax Design Patterns. Sebastopol, CA: O'Reilly Media.

Zakas, Nicholas C., Jeremy McPeak, and Joe Fawcett. 2007. Professional Ajax, 2nd Edition. Indianapolis, IN: Wiley Publishing.


1 The HTTP specification can be found at http://www.ietf.org/rfc/rfc2616.txt. The HTTP specification is maintained by multiple groups, including the Internet Engineering Task Force (IETF) and the World Wide Web Consortium (W3C). See also http://www.w3.org/Protocols/.

2 The HTML specification can be found at http://www.w3.org/TR/html4/. The World Wide Web Consortium (see footnote 3) is the organization that maintains the HTML specification.

3 W3C is an international consortium of organizations devoted to leading the World Wide Web to its full potential by developing common protocols that promote its evolution and ensure its interoperability. Their website is located at http://www.w3.org.

4 JavaScript is a completely different programming language from Java.

5 http://adaptivepath.com/

6 https://www.adaptivepath.com/ideas/essays/archives/000385.php

7 http://maps.google.com

8 http://www.google.com/webhp?complete=1&hl=en

9 http://gmail.com

10 http://www.oreillynet.com/pub/a/oreilly/tim/news/2005/09/30/what-is-web-20.html

11 http://docs.google.com

12 http://www.adobe.com/products/flash/
