20
JavaScript APIs

WHAT'S IN THIS CHAPTER?

  • Atomics and SharedArrayBuffer
  • Cross-context messaging
  • Encoding API
  • File and Blob API
  • Drag and drop
  • Notifications API
  • Page Visibility API
  • Streams API
  • Timing APIs
  • Web components
  • Web Cryptography API

The increasing versatility of web browsers is accompanied by a dizzying increase in complexity. In many ways, the modern web browser has become a Swiss army knife of different APIs detailed in a broad collection of specifications. This browser specification ecosystem is messy and volatile. Some specifications like HTML5 are a bundle of APIs and browser features that enhance an existing standard. Other specifications define an API for a single feature, such as the Web Cryptography API or the Notifications API. Depending on the browser, adoption of these newer APIs can sometimes be partial or nonexistent.

Ultimately, the decision to use newer APIs involves a tradeoff between supporting more browsers and enabling more modern features. Some APIs can be emulated using a polyfill, but polyfills can often incur a performance hit or bloat your site's JS payloads.

ATOMICS AND SharedArrayBuffer

When a SharedArrayBuffer is accessed by multiple contexts, race conditions can occur when operations on the buffer are performed simultaneously. The Atomics API allows multiple contexts to safely read and write to a single SharedArrayBuffer by forcing buffer operations to occur only one at a time. The Atomics API was defined in the ES2017 specification.

You will notice that the Atomics API in many ways resembles a stripped-down instruction set architecture (ISA)—this is no accident. The nature of atomic operations precludes some optimizations that the operating system or computer hardware would normally perform automatically (such as instruction reordering). Atomic operations also make concurrent memory access impossible, which obviously can slow program execution when improperly applied. Therefore, the Atomics API was designed to enable sophisticated multithreaded JavaScript programs to be architected out of a minimal yet robust collection of atomic behaviors.

SharedArrayBuffer

A SharedArrayBuffer features an identical API to an ArrayBuffer. The primary difference is that, whereas a reference to an ArrayBuffer must be handed off between execution contexts, a reference to a SharedArrayBuffer can be used simultaneously by any number of execution contexts.

Sharing memory between multiple execution contexts means that concurrent thread operations become a possibility. Traditional JavaScript operations offer no protection from race conditions resulting from concurrent memory access. The following example demonstrates a race condition between four dedicated workers accessing the same SharedArrayBuffer:

const workerScript = `
self.onmessage = ({data}) => {
 const view = new Uint32Array(data);

 // Perform 1000000 add operations
 for (let i = 0; i < 1E6; ++i) {
  // Thread-unsafe add operation introduces race condition
  view[0] += 1;
 }

 self.postMessage(null);
};
`;

const workerScriptBlobUrl = URL.createObjectURL(new Blob([workerScript]));

// Create worker pool of size 4
const workers = [];
for (let i = 0; i < 4; ++i) {
 workers.push(new Worker(workerScriptBlobUrl));
}

// Log the final value after the last worker completes
let responseCount = 0;
for (const worker of workers) {
 worker.onmessage = () => {
  if (++responseCount == workers.length) {
   console.log(`Final buffer value: ${view[0]}`);
  }
 };
}

// Initialize the SharedArrayBuffer
const sharedArrayBuffer = new SharedArrayBuffer(4);
const view = new Uint32Array(sharedArrayBuffer);
view[0] = 1;

// Send the SharedArrayBuffer to each worker
for (const worker of workers) {
 worker.postMessage(sharedArrayBuffer);
}

// (Expected result is 4000001. Actual output will be something like:)
// Final buffer value: 2145106

To address this problem, the Atomics API was introduced to allow for thread-safe JavaScript operations on a SharedArrayBuffer.

Atomics Basics

The Atomics object exists on all global contexts, and it exposes a suite of static methods for performing thread-safe operations. Most of these methods take a TypedArray instance (referencing a SharedArrayBuffer) as the first argument and the relevant operands as subsequent arguments.

Atomic Arithmetic and Bitwise Methods

The Atomics API offers a simple suite of methods for performing an in-place modification. In the ECMA specification, these methods are defined as AtomicReadModifyWrite operations. Under the hood, each of these methods is performing a read from a location in the SharedArrayBuffer, an arithmetic or bitwise operation, and a write to the same location. The atomic nature of these operators means that these three operations will be performed in sequence and without interruption by another thread.

All the arithmetic methods are demonstrated here:

// Create buffer of size 1
let sharedArrayBuffer = new SharedArrayBuffer(1);

// Create Uint8Array from buffer
let typedArray = new Uint8Array(sharedArrayBuffer);

// All ArrayBuffers are initialized to 0
console.log(typedArray); // Uint8Array[0]

const index = 0;
const increment = 5;

// Atomic add 5 to value at index 0
Atomics.add(typedArray, index, increment);

console.log(typedArray); // Uint8Array[5]

// Atomic subtract 5 from value at index 0
Atomics.sub(typedArray, index, increment);

console.log(typedArray); // Uint8Array[0]

All the bitwise methods are demonstrated here:

// Create buffer of size 1
let sharedArrayBuffer = new SharedArrayBuffer(1);

// Create Uint8Array from buffer
let typedArray = new Uint8Array(sharedArrayBuffer);

// All ArrayBuffers are initialized to 0
console.log(typedArray); // Uint8Array[0]

const index = 0;

// Atomic or 0b1111 to value at index 0
Atomics.or(typedArray, index, 0b1111);

console.log(typedArray); // Uint8Array[15]

// Atomic and 0b1100 to value at index 0
Atomics.and(typedArray, index, 0b1100);

console.log(typedArray); // Uint8Array[12]

// Atomic xor 0b1111 to value at index 0
Atomics.xor(typedArray, index, 0b1111);

console.log(typedArray); // Uint8Array[3]

The thread-unsafe example from earlier can be corrected as follows:

const workerScript = `
self.onmessage = ({data}) => {
 const view = new Uint32Array(data);

 // Perform 1000000 add operations
 for (let i = 0; i < 1E6; ++i) {
  // Thread-safe add operation
  Atomics.add(view, 0, 1);
 }

 self.postMessage(null);
};
`;

const workerScriptBlobUrl = URL.createObjectURL(new Blob([workerScript]));

// Create worker pool of size 4
const workers = [];
for (let i = 0; i < 4; ++i) {
 workers.push(new Worker(workerScriptBlobUrl));
}

// Log the final value after the last worker completes
let responseCount = 0;
for (const worker of workers) {
 worker.onmessage = () => {
  if (++responseCount == workers.length) {
   console.log(`Final buffer value: ${view[0]}`);
  }
 };
}

// Initialize the SharedArrayBuffer
const sharedArrayBuffer = new SharedArrayBuffer(4);
const view = new Uint32Array(sharedArrayBuffer);
view[0] = 1;

// Send the SharedArrayBuffer to each worker
for (const worker of workers) {
 worker.postMessage(sharedArrayBuffer);
}

// (Expected result is 4000001)
// Final buffer value: 4000001

Atomic Reads and Writes

Both the browser's JavaScript compiler and the CPU architecture itself are given license to reorder instructions if they detect it will increase the overall throughput of program execution. Normally, the single-threaded nature of JavaScript means this optimization should be welcomed with open arms. However, instruction reordering across multiple threads can yield race conditions that are extremely difficult to debug.

The Atomics API addresses this problem in two primary ways:

  • Atomics instructions are never reordered with respect to one another.
  • Using an Atomic read or Atomic write guarantees that all instructions (both Atomic and non-Atomic) will never be reordered with respect to that Atomic read/write. This means that all instructions before an Atomic read/write will finish before the Atomic read/write occurs, and all instructions after the Atomic read/write will not begin until the Atomic read/write completes.

In addition to reading and writing values to a buffer, Atomics.load() and Atomics.store() behave as “code fences.” The JavaScript engine guarantees that, although non-Atomic instructions may be locally reordered relative to a load() or store(), the reordering will never violate the Atomic read/write boundary. The following code annotates this behavior:

const sharedArrayBuffer = new SharedArrayBuffer(4);
const view = new Uint32Array(sharedArrayBuffer);

// Perform non-Atomic write
view[0] = 1;

// Non-Atomic write is guaranteed to occur before this read,
// so this is guaranteed to read 1
console.log(Atomics.load(view, 0)); // 1

// Perform Atomic write
Atomics.store(view, 0, 2);

// Non-Atomic read is guaranteed to occur after Atomic write,
// so this is guaranteed to read 2
console.log(view[0]); // 2

Atomic Exchanges

The Atomics API offers two types of methods that guarantee a sequential and uninterrupted read-then-write: exchange() and compareExchange(). Atomics.exchange() performs a simple swap, guaranteeing that no other threads will interrupt the value swap:

const sharedArrayBuffer = new SharedArrayBuffer(4);
const view = new Uint32Array(sharedArrayBuffer);

// Write 3 to 0-index
Atomics.store(view, 0, 3);

// Read value out of 0-index and then write 4 to 0-index
console.log(Atomics.exchange(view, 0, 4)); // 3

// Read value at 0-index
console.log(Atomics.load(view, 0));        // 4

One thread in a multithreaded program might want to perform a write to a shared buffer only if another thread has not modified a specific value since it was last read. If the value has not changed, it can safely write the updated value. If the value has changed, performing a write would trample the value calculated by another thread. For this task, the Atomics API features the compareExchange() method. This method performs a write only if the value at the intended index matches an expected value. Consider the following example:

const sharedArrayBuffer = new SharedArrayBuffer(4);
const view = new Uint32Array(sharedArrayBuffer);

// Write 5 to 0-index
Atomics.store(view, 0, 5);
// Read the value out of the buffer
let initial = Atomics.load(view, 0);

// Perform a non-atomic operation on that value
let result = initial ** 2;

// Write that value back into the buffer only if the buffer has not changed
Atomics.compareExchange(view, 0, initial, result);

// Check that the write succeeded
console.log(Atomics.load(view, 0)); // 25

If the value does not match, the compareExchange() call will simply behave as a passthrough:

const sharedArrayBuffer = new SharedArrayBuffer(4);
const view = new Uint32Array(sharedArrayBuffer);

// Write 5 to 0-index
Atomics.store(view, 0, 5);
// Read the value out of the buffer
let initial = Atomics.load(view, 0);

// Perform a non-atomic operation on that value
let result = initial ** 2;

// Write that value back into the buffer only if the buffer has not changed
Atomics.compareExchange(view, 0, -1, result);

// Check that the write failed
console.log(Atomics.load(view, 0)); // 5

Atomics Futex Operations and Locks

Multithreaded programs wouldn't amount to much without some sort of locking construct. To address this need, the Atomics API offers several methods modeled on the Linux futex (a portmanteau of fast user-space mutex). The methods are fairly rudimentary, but they are intended to be used as a fundamental building block for more elaborate locking constructs.

Atomics.wait() and Atomics.notify() are best understood by example. The following rudimentary example spawns four workers to operate on an Int32Array of length 1. The spawned workers will take turns obtaining the lock and performing their add operation:

const workerScript = `
self.onmessage = ({data}) => {
 const view = new Int32Array(data);

 console.log('Waiting to obtain lock');

 // Halt when encountering the initial value, timeout at 100000ms
 Atomics.wait(view, 0, 0, 1E5);
 
 console.log('Obtained lock');

 // Add 1 to data index
 Atomics.add(view, 0, 1);
  
 console.log('Releasing lock');
  
 // Allow exactly one worker to continue execution
 Atomics.notify(view, 0, 1);
  
 self.postMessage(null);
};
`;

const workerScriptBlobUrl = URL.createObjectURL(new Blob([workerScript]));

const workers = [];
for (let i = 0; i < 4; ++i) {
 workers.push(new Worker(workerScriptBlobUrl));
}

// Log the final value after the last worker completes
let responseCount = 0;
for (const worker of workers) {
 worker.onmessage = () => {
  if (++responseCount == workers.length) {
   console.log(`Final buffer value: ${view[0]}`);
  }
 };
}

// Initialize the SharedArrayBuffer
const sharedArrayBuffer = new SharedArrayBuffer(4);
const view = new Int32Array(sharedArrayBuffer);

// Send the SharedArrayBuffer to each worker
for (const worker of workers) {
 worker.postMessage(sharedArrayBuffer);
}

// Release first lock in 1000ms
setTimeout(() => Atomics.notify(view, 0, 1), 1000);

// Waiting to obtain lock
// Waiting to obtain lock
// Waiting to obtain lock
// Waiting to obtain lock
// Obtained lock
// Releasing lock
// Obtained lock
// Releasing lock
// Obtained lock
// Releasing lock
// Obtained lock
// Releasing lock
// Final buffer value: 4

Because the SharedArrayBuffer is initialized with 0s, each worker will arrive at the Atomics.wait() and halt execution. In the halted state, the thread of execution exists inside a wait queue, remaining paused until the specified timeout elapses or until Atomics.notify() is invoked for that index. After 1000 milliseconds, the top-level execution context will call Atomics.notify() to release exactly one of the waiting threads. This thread will finish execution and call Atomics.notify() once again, releasing yet another thread. This continues until all the threads have completed execution and transmitted their final postMessage().
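
As noted earlier, these methods are intended as building blocks for more elaborate locking constructs. The following is a minimal sketch of a mutex-style lock()/unlock() pair built from compareExchange(), wait(), and notify(). The function names, the use of index 0 as the lock word, and the UNLOCKED/LOCKED constants are illustrative assumptions, not part of the Atomics API:

// Minimal mutex sketch (illustrative only). Assumes "view" is an
// Int32Array backed by a SharedArrayBuffer, and that index 0 holds
// the lock state: 0 = unlocked, 1 = locked.
const UNLOCKED = 0;
const LOCKED = 1;

// Must be called inside a worker, because Atomics.wait() blocks
function lock(view, index = 0) {
 // Attempt to swap 0 -> 1; if another thread already holds the lock,
 // sleep until notify() is called and then retry
 while (Atomics.compareExchange(view, index, UNLOCKED, LOCKED) !== UNLOCKED) {
  Atomics.wait(view, index, LOCKED);
 }
}

function unlock(view, index = 0) {
 // Restore the unlocked state and wake exactly one waiting thread
 Atomics.store(view, index, UNLOCKED);
 Atomics.notify(view, index, 1);
}

// A critical section inside a worker might then look like this:
// lock(view);
// view[1] += 1; // safely mutate shared data at another index
// unlock(view);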

The Atomics API also features the Atomics.isLockFree() method. It is almost certain that you will never need to use this method, as it is designed for high-performance algorithms to decide whether or not obtaining a lock is necessary. The specification offers this description:

Atomics.isLockFree() is an optimization primitive. The intuition is that if the atomic step of an atomic primitive (compareExchange, load, store, add, sub, and, or, xor, or exchange) on a datum of size n bytes will be performed without the calling agent acquiring a lock outside the n bytes comprising the datum, then Atomics.isLockFree(n) will return true. High-performance algorithms will use Atomics.isLockFree to determine whether to use locks or atomic operations in critical sections. If an atomic primitive is not lock-free then it is often more efficient for an algorithm to provide its own locking.

Atomics.isLockFree(4) always returns true as that can be supported on all known relevant hardware. Being able to assume this will generally simplify programs.
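
For illustration, a program might consult isLockFree() once and choose a strategy accordingly; this is a minimal sketch, and the fallback branch assumes an application-provided locking scheme exists elsewhere:

// An Int32Array element is 4 bytes
if (Atomics.isLockFree(Int32Array.BYTES_PER_ELEMENT)) {
 // Atomic operations on 4-byte values never acquire an external lock,
 // so plain Atomics calls are the efficient choice
 console.log('Using lock-free atomic operations'); // expected on all known hardware
} else {
 // Fall back to a custom locking scheme
 console.log('Using a custom lock');
}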

CROSS-CONTEXT MESSAGING

Cross-document messaging, sometimes abbreviated as XDM, is the capability to pass information between different execution contexts, such as web workers or pages from different origins. For example, a page on www.wrox.com wants to communicate with a page from p2p.wrox.com that is contained in an iframe. Prior to XDM, achieving this communication in a secure manner took a lot of work. XDM formalizes this functionality in a way that is both secure and easy to use.

At the heart of XDM is the postMessage() method. This method name is used in many parts of HTML5 in addition to XDM and is always used for the same purpose: to pass data into another location.

The postMessage() method accepts three arguments: a message, a string indicating the intended recipient origin, and an optional array of transferable objects (only relevant to web workers). The second argument is very important for security reasons and restricts where the browser will deliver the message. Consider this example:

let iframeWindow = document.getElementById("myframe").contentWindow;
iframeWindow.postMessage("A secret", "http://www.wrox.com");

The last line attempts to send a message into the iframe and specifies that the origin must be "http://www.wrox.com". If the origin matches, then the message will be delivered into the iframe; otherwise, postMessage() silently does nothing. This restriction protects your information should the location of the window change without your knowledge. It is possible to allow posting to any origin by passing in "*" as the second argument to postMessage(), but this is not recommended.

A message event is fired on window when an XDM message is received. This event fires asynchronously, so there may be a delay between the time at which the message was sent and the time at which the message event is fired in the receiving window. The event object that is passed to an onmessage event handler has three important pieces of information:

  • data—The string data that was passed as the first argument to postMessage().
  • origin—The origin of the document that sent the message, for example, "http://www.wrox.com".
  • source—A proxy for the window object of the document that sent the message. This proxy object is used primarily to execute the postMessage() method on the window that sent the last message. If the sending window has the same origin, this may be the actual window object.

It's very important when receiving a message to verify the origin of the sending window. Just as specifying the second argument to postMessage() ensures that data doesn't get passed unintentionally to an unknown page, checking the origin during onmessage ensures that the data being passed is coming from the right place. The basic pattern is as follows:

window.addEventListener("message", (event) => {
 // ensure the sender is expected
 if (event.origin == "http://www.wrox.com") {
  // do something with the data
  processMessage(event.data);
  // optional: send a message back to the original window
  event.source.postMessage("Received!", "http://p2p.wrox.com");
 }
});

Keep in mind that event.source is a proxy for a window in most cases, not the actual window object, so you can't access all of the window information. It's best to just use postMessage(), which is always present and always callable.

There are a few quirks with XDM. First, the first argument of postMessage() was initially implemented as always being a string. The definition of that first argument changed to allow any structured data to be passed in; however, not all browsers have implemented this change. For this reason, it's best to always pass a string using postMessage(). If you need to pass structured data, then the best approach is to call JSON.stringify() on the data, pass the resulting string to postMessage(), and then call JSON.parse() in the onmessage event handler.
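
For example, an object could be serialized with JSON.stringify() before sending and rebuilt with JSON.parse() on receipt. The following is a minimal sketch that reuses the iframe setup shown earlier; the message payload is an arbitrary example:

// Sending window: serialize the object before posting it
let iframeWindow = document.getElementById("myframe").contentWindow;
iframeWindow.postMessage(JSON.stringify({ user: "abc", score: 10 }),
            "http://www.wrox.com");

// Receiving window: parse the string back into an object
window.addEventListener("message", (event) => {
 if (event.origin == "http://www.wrox.com") {
  let data = JSON.parse(event.data);
  console.log(data.score); // 10
 }
});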

XDM is extremely useful when trying to sandbox content using an iframe to a different domain. This approach is frequently used in mashups and social networking applications. The containing page is able to keep itself secure against malicious content by only communicating into an embedded iframe via XDM. XDM can also be used with pages from the same domain.

ENCODING API

The Encoding API allows for converting between strings and typed arrays. The specification introduces four global classes for performing these conversions: TextEncoder, TextEncoderStream, TextDecoder, and TextDecoderStream.

Encoding Text

The Encoding API affords two ways of converting a string into its typed array binary equivalent: a bulk encoding, and a stream encoding. When going from string to typed array, the encoder will always use UTF-8.

Bulk Encoding

The bulk designation means that the JavaScript engine will synchronously encode the entire string. For very long strings, this can be a costly operation. Bulk encoding is accomplished using an instance of a TextEncoder:

const textEncoder = new TextEncoder();

This instance exposes an encode() method, which accepts a string and returns each character's UTF-8 encoding inside a freshly created Uint8Array:

const textEncoder = new TextEncoder();
const decodedText = 'foo';
const encodedText = textEncoder.encode(decodedText);

// f encoded in utf-8 is 0x66 (102 in decimal)
// o encoded in utf-8 is 0x6F (111 in decimal)
console.log(encodedText); // Uint8Array(3) [102, 111, 111]

The encoder is equipped to handle characters that will take up multiple indices in the resulting array, such as emojis:

const textEncoder = new TextEncoder();
const decodedText = '😊';
const encodedText = textEncoder.encode(decodedText);

// 😊 encoded in UTF-8 is 0xF0 0x9F 0x98 0x8A (240, 159, 152, 138 in decimal)
console.log(encodedText); // Uint8Array(4) [240, 159, 152, 138]

The instance also exposes an encodeInto() method, which accepts a string and the destination Uint8Array. This method returns a dictionary containing read and written properties, indicating how many characters were successfully read from the source string and written to the destination array, respectively. If the typed array has insufficient space, the encoding will terminate early and the dictionary will indicate that result:

const textEncoder = new TextEncoder();
const fooArr = new Uint8Array(3); 
const barArr = new Uint8Array(2);
const fooResult = textEncoder.encodeInto('foo', fooArr); 
const barResult = textEncoder.encodeInto('bar', barArr);

console.log(fooArr);   // Uint8Array(3) [102, 111, 111] 
console.log(fooResult); // { read: 3, written: 3 }

console.log(barArr);   // Uint8Array(2) [98, 97] 
console.log(barResult); // { read: 2, written: 2 }

encode() must allocate a new Uint8Array, whereas encodeInto() does not. For performance-sensitive applications, this distinction may have significant implications.
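
For example, a hot code path could reuse a single preallocated buffer with encodeInto() instead of allocating a new array on every call. This is a minimal sketch; the 256-byte buffer size and the encodeReusing() helper are illustrative assumptions:

const textEncoder = new TextEncoder();

// Allocate the destination buffer once, outside the hot path
const scratch = new Uint8Array(256);

function encodeReusing(str) {
 // Reuses "scratch" instead of allocating a new array like encode() would
 const { read, written } = textEncoder.encodeInto(str, scratch);
 // Return a view over only the bytes that were actually written
 return scratch.subarray(0, written);
}

console.log(encodeReusing('foo')); // Uint8Array(3) [102, 111, 111]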

Stream Encoding

A TextEncoderStream is merely a TextEncoder in the form of a TransformStream. Piping a decoded text stream through the stream encoder will yield a stream of encoded text chunks:

async function* chars() {
 const decodedText = 'foo';

 for (let char of decodedText) {
  yield await new Promise((resolve) => setTimeout(resolve, 1000, char));
 }
}

const decodedTextStream = new ReadableStream({
 async start(controller) {
  for await (let chunk of chars()) {
   controller.enqueue(chunk);
  }

  controller.close();
 }
});


const encodedTextStream = decodedTextStream.pipeThrough(new TextEncoderStream());

const readableStreamDefaultReader = encodedTextStream.getReader();

(async function() {
 while(true) {
  const { done, value } = await readableStreamDefaultReader.read();

  if (done) {
   break;
  } else {
   console.log(value);
  }
 }
})();

// Uint8Array[102]
// Uint8Array[111]
// Uint8Array[111] 

Decoding Text

The Encoding API affords two ways of converting a typed array into its string equivalent: a bulk decoding, and a stream decoding. Unlike the encoder classes, when going from typed array to string, the decoder supports a large number of string encodings, listed here: https://encoding.spec.whatwg.org/#names-and-labels

The default character encoding is UTF-8.

Bulk Decoding

The bulk designation means that the JavaScript engine will synchronously decode the entire buffer. For very long buffers, this can be a costly operation. Bulk decoding is accomplished using an instance of a TextDecoder:

const textDecoder = new TextDecoder();

This instance exposes a decode() method, which accepts a typed array and returns the decoded string:

const textDecoder = new TextDecoder();

// f encoded in utf-8 is 0x66 (102 in decimal)
// o encoded in utf-8 is 0x6F (111 in decimal)
const encodedText = Uint8Array.of(102, 111, 111);
const decodedText = textDecoder.decode(encodedText);

console.log(decodedText); // foo

The decoder does not care which typed array it is passed; it will dutifully decode the entire binary representation. In this example, 32-bit values containing only 8-bit character codes are decoded as UTF-8, yielding extra empty characters:

const textDecoder = new TextDecoder();

// f encoded in utf-8 is 0x66 (102 in decimal)
// o encoded in utf-8 is 0x6F (111 in decimal)
const encodedText = Uint32Array.of(102, 111, 111);
const decodedText = textDecoder.decode(encodedText);

console.log(decodedText); // "f  o  o  "

The decoder is equipped to handle characters that span multiple indices in the typed array, such as emojis:

const textDecoder = new TextDecoder();

// 😊 encoded in UTF-8 is 0xF0 0x9F 0x98 0x8A (240, 159, 152, 138 in decimal)
const encodedText = Uint8Array.of(240, 159, 152, 138);
const decodedText = textDecoder.decode(encodedText);

console.log(decodedText); // 😊

Unlike TextEncoder, TextDecoder is compatible with a wide range of character encodings. Consider the following example, which uses UTF-16 encoding instead of the default UTF-8:

const textDecoder = new TextDecoder('utf-16');

// f encoded in utf-16 is 0x0066 (102 in decimal)
// o encoded in utf-16 is 0x006F (111 in decimal)
const encodedText = Uint16Array.of(102, 111, 111);
const decodedText = textDecoder.decode(encodedText);

console.log(decodedText); // foo

Stream Decoding

A TextDecoderStream is merely a TextDecoder in the form of a TransformStream. Piping an encoded text stream through the stream decoder will yield a stream of decoded text chunks:

async function* chars() {
 // Each chunk must exist as a typed array
 const encodedText = [102, 111, 111].map((x) => Uint8Array.of(x));

 for (let char of encodedText) {
  yield await new Promise((resolve) => setTimeout(resolve, 1000, char));
 }
}

const encodedTextStream = new ReadableStream({
 async start(controller) {
  for await (let chunk of chars()) {
   controller.enqueue(chunk);
  }

  controller.close();
 }
});


const decodedTextStream = encodedTextStream.pipeThrough(new TextDecoderStream());

const readableStreamDefaultReader = decodedTextStream.getReader();

(async function() {
 while(true) {
  const { done, value } = await readableStreamDefaultReader.read();

  if (done) {
   break;
  } else {
    console.log(value);
  }
 }
})();

// f
// o
// o 

Text decoder streams implicitly understand that multi-byte characters may be split across chunks. The decoder stream will retain any partial sequences until a complete character is formed. Consider the following example, where the stream decoder will wait for all four chunks to be passed through before the decoded stream emits the single character:

async function* chars() {
 // 😊 encoded in UTF-8 is 0xF0 0x9F 0x98 0x8A (240, 159, 152, 138 in decimal)
 const encodedText = [240, 159, 152, 138].map((x) => Uint8Array.of(x));

 for (let char of encodedText) {
  yield await new Promise((resolve) => setTimeout(resolve, 1000, char));
 }
}

const encodedTextStream = new ReadableStream({
 async start(controller) {
  for await (let chunk of chars()) {
   controller.enqueue(chunk);
  }

  controller.close();
 }
});


const decodedTextStream = encodedTextStream.pipeThrough(new TextDecoderStream());

const readableStreamDefaultReader = decodedTextStream.getReader();

(async function() {
 while(true) {
  const { done, value } = await readableStreamDefaultReader.read();

  if (done) {
    break;
  } else {
    console.log(value);
  }
 }
})();

// 😊

Text decoder streams will be most commonly used in conjunction with fetch(), as the response body can be processed as a ReadableStream. This might take the following form:

const response = await fetch(url);
const decodedStream = response.body.pipeThrough(new TextDecoderStream());

for await (let decodedChunk of decodedStream) {
 console.log(decodedChunk);
}

BLOB AND FILE APIs

One of the major pain points of web applications has been the inability to interact with files on a user's computer. Since before 2000, the only way to deal with files was to place <input type="file"> into a form and leave it at that. The Blob and File APIs are designed to give web developers access to files on the client computer in a secure manner that allows for better interaction with those files.

The File Type

The File API is still based around the file input field of a form but adds the ability to access the file information directly. HTML5 adds a files collection to the DOM for the file input element. When one or more files are selected in the field, the files collection contains a sequence of File objects that represent each file. Each File object has several read-only properties, including:

  • name—The file name on the local system.
  • size—The size of the file in bytes.
  • type—A string containing the MIME type of the file.
  • lastModifiedDate—A string representing the last time the file was modified. This property has been implemented only in Chrome.

For instance, you can retrieve information about each file selected by listening for the change event and then looking at the files collection:

let filesList = document.getElementById("files-list");
filesList.addEventListener("change", (event) => {
 let files = event.target.files,
   i = 0,
   len = files.length;
 
 while (i < len) {
  const f = files[i];
  console.log(`${f.name} (${f.type}, ${f.size} bytes)`);
  i++;
 }
});

This example simply outputs the information about each file to the console. This capability alone is a big step forward for web applications, but the File API goes further by allowing you to actually read data from the files via the FileReader type.

The FileReader Type

The FileReader type represents an asynchronous file-reading mechanism. You can think of FileReader as similar to XMLHttpRequest, only it is used for reading files from the file system as opposed to reading data from the server. The FileReader type offers several methods to read in file data:

  • readAsText(file, encoding)—Reads the file as plain text and stores the text in the result property. The second argument, the encoding type, is optional.
  • readAsDataURL(file)—Reads the file and stores a data URI representing the file's data in the result property.
  • readAsBinaryString(file)—Reads the file and stores a string where each character represents a byte in the result property.
  • readAsArrayBuffer(file)—Reads the file and stores an ArrayBuffer containing the file contents in the result property.

These various ways of reading in a file allow for maximum flexibility in dealing with the file data. For instance, you may wish to read an image as a data URI in order to display it back to the user, or you may wish to read a file as text in order to parse it.

Because the read happens asynchronously, there are several events published by each FileReader. The three most useful events are progress, error, and load, which indicate when more data is available, when an error occurred, and when the file is fully read, respectively.

The progress event fires roughly every 50 milliseconds and has the same information available as the XHR progress event: lengthComputable, loaded, and total. Additionally, the FileReader's result property is readable during the progress event even though it may not contain all of the data yet.

The error event fires if the file cannot be read for some reason. When the error event fires, the error property of the FileReader is filled in. This object has a single property, code, which is an error code of 1 (file not found), 2 (security error), 3 (read was aborted), 4 (file isn't readable), or 5 (encoding error).

The load event fires when the file has been successfully loaded; it will not fire if the error event has fired. Here's an example using all three events:

let filesList = document.getElementById("files-list");
filesList.addEventListener("change", (event) => {
 let info = "",
   output = document.getElementById("output"),
   progress = document.getElementById("progress"),
   files = event.target.files,
   type = "default",
   reader = new FileReader();
  
 if (/image/.test(files[0].type)) {
   reader.readAsDataURL(files[0]);
   type = "image";
 } else {
   reader.readAsText(files[0]);
   type = "text";
 }
  
 reader.onerror = function() {
  output.innerHTML = "Could not read file, error code is " + 
    reader.error.code;
 };
 
 reader.onprogress = function(event) {
  if (event.lengthComputable) {
   progress.innerHTML = `${event.loaded}/${event.total}`;
  }
 };
 
 reader.onload = function() {
  let html = "";
  
  switch(type) {
   case "image":
    html = `<img src="${reader.result}">`;
    break;
   case "text":
    html = reader.result;
    break;
  }
  output.innerHTML = html;
 };
});

This code reads a file from a form field and displays it on the page. If the file has a MIME type indicating it's an image, then a data URI is requested and, upon load, this data URI is inserted as an image into the page. If the file is not an image, then it is read in as a string and output as is into the page. The progress event is used to track and display the bytes of data being read, while the error event watches for any errors.

You can stop a read in progress by calling the abort() method, in which case an abort event is fired. After the firing of load, error, or abort, an event called loadend is fired. The loadend event indicates that all reading has finished for any of the three reasons. The readAsText() and readAsDataURL() methods are supported across all implementing browsers.
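
For example, a long-running read might be abandoned when the user dismisses the relevant UI. The following is a minimal sketch; the file variable and the cancelButton element are assumed to exist in the surrounding code:

let reader = new FileReader();
reader.readAsText(file);

// Cancel the read in response to some user action
cancelButton.addEventListener("click", () => reader.abort());

reader.onabort = () => console.log("Read aborted");

// loadend fires after load, error, or abort, so it is a good place
// for cleanup that must run regardless of how the read finished
reader.onloadend = () => console.log("Read finished (for any reason)");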

The FileReaderSync Type

The FileReaderSync type, as its name suggests, is the synchronous version of FileReader. It features the same methods as FileReader but performs a blocking read of a file, only continuing execution after the entire file has been loaded into memory. FileReaderSync is available only inside web workers, because synchronously reading an entire file would block the top-level execution context for far too long to be practical.

Suppose a worker is sent a File object via postMessage(). The following code directs the worker to synchronously read the entire file into memory and send back the file's data URL:

// worker.js

self.onmessage = (messageEvent) => {
 const syncReader = new FileReaderSync();
 console.log(syncReader); // FileReaderSync {}

 // Blocks worker thread while file is read
 const result = syncReader.readAsDataURL(messageEvent.data);

 // Example response for PDF file
 console.log(result); // data:application/pdf;base64,JVBERi0xLjQK…

 // Send URL back up
 self.postMessage(result);
};

Blobs and Partial Reads

In some cases, you may want to read only parts of a file instead of the whole file. To that end, the File object has a method called slice(). The slice() method accepts two arguments: the starting byte offset and the ending byte offset (exclusive). This method returns an instance of Blob, which is actually the supertype of File.

A “blob,” short for binary large object, is a JavaScript wrapper for immutable binary data. Blobs can be created from an array containing strings, ArrayBuffers, ArrayBufferViews, or even other Blobs. The Blob constructor can optionally be provided a MIME type as part of its options parameter:

console.log(new Blob(['foo'])); 
// Blob {size: 3, type: ""}

console.log(new Blob(['{"a": "b"}'], { type: 'application/json' }));
// {size: 10, type: "application/json"}

console.log(new Blob(['<p>Foo</p>', '<p>Bar</p>'], { type: 'text/html' }));
// {size: 20, type: "text/html"}

A Blob also has size and type properties, as well as the slice() method for further cutting down the data. You can read from a Blob by using a FileReader as well. This example reads just the first 32 bytes from a file:

let filesList = document.getElementById("files-list");
filesList.addEventListener("change", (event) => {
 let info = "",
  output = document.getElementById("output"),
  progress = document.getElementById("progress"),
  files = event.target.files,
  reader = new FileReader(),
  blob = files[0].slice(0, 32);
 if (blob) {
  reader.readAsText(blob);
  
  reader.onerror = function() {
   output.innerHTML = "Could not read file, error code is " + 
         reader.error.code;
  };
  reader.onload = function() {
   output.innerHTML = reader.result;
  };
 } else {
  console.log("Your browser doesn't support slice().");
 }
});

Reading just parts of a file can save time, especially when you're just looking for a specific piece of data, such as a file header.
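
For instance, the first few bytes of a file are often enough to identify its format. The following is a minimal sketch that checks for the PNG signature in the first 8 bytes of a selected file; the files variable is assumed to come from a file input's change event as in the earlier examples:

const PNG_SIGNATURE = [0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A];

let headerBlob = files[0].slice(0, 8);
let reader = new FileReader();

reader.readAsArrayBuffer(headerBlob);
reader.onload = () => {
 const bytes = new Uint8Array(reader.result);
 // Compare each signature byte against the start of the file
 const isPng = PNG_SIGNATURE.every((byte, i) => bytes[i] === byte);
 console.log(isPng ? "Looks like a PNG file" : "Not a PNG file");
};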

Object URLs and Blobs

Object URLs, also sometimes called blob URLs, are URLs that reference data stored in a File or Blob. The advantage of object URLs is that you don't need to read the file contents into JavaScript in order to use them. Instead, you simply provide the object URL in the appropriate place. To create an object URL, use the window.URL.createObjectURL() method and pass in the File or Blob object. The return value of this function is a string that points to a memory address. Because the string is a URL, it can be used in the DOM. For example, the following displays an image file on the page:

let filesList = document.getElementById("files-list");
filesList.addEventListener("change", (event) => {
 let info = "",
  output = document.getElementById("output"),
  progress = document.getElementById("progress"),
  files = event.target.files,
  reader = new FileReader(),
  url = window.URL.createObjectURL(files[0]);
 if (url) {
  if (/image/.test(files[0].type)) {
   output.innerHTML = `<img src="${url}">`;
  } else {
    output.innerHTML = "Not an image.";
  }
 } else {
   output.innerHTML = "Your browser doesn't support object URLs.";
 }
});

By feeding the object URL directly into an <img> tag, there is no need to read the data into JavaScript first. Instead, the <img> tag goes directly to the memory location and reads the data into the page.

Once the data is no longer needed, it's best to free up the memory associated with it. Memory cannot be freed as long as an object URL is in use. You can indicate that the object URL is no longer needed by passing it to window.URL.revokeObjectURL(). All object URLs are freed from memory automatically when the page is unloaded. Still, it is best to free each object URL as it is no longer needed to ensure the memory footprint of the page remains as low as possible.
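
For instance, an object URL created for an image can be revoked as soon as the image has finished loading, because the <img> element no longer needs the URL at that point. This is a minimal sketch that assumes a files collection like the one in the previous example:

let url = window.URL.createObjectURL(files[0]);
let img = new Image();

img.onload = () => {
 // The image data has been read into the element, so the object URL
 // (and the memory backing it) can be released
 window.URL.revokeObjectURL(url);
};

img.src = url;
document.getElementById("output").appendChild(img);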

Drag-and-Drop File Reading

Combining the HTML5 Drag-and-Drop API with the File API allows you to create interesting interfaces for the reading of file information. After creating a custom drop target on a page, you can drag files from the desktop and drop them onto the drop target. This fires the drop event just like dragging and dropping an image or link would. The files being dropped are available on event.dataTransfer.files, which is a list of File objects just like those available on a file input field.

The following example prints out information about files that are dropped on a custom drop target in the page:

let droptarget = document.getElementById("droptarget");
function handleEvent(event) {
 let info = "",
  output = document.getElementById("output"),
  files, i, len;   
 event.preventDefault();
 
 if (event.type == "drop") {
  files = event.dataTransfer.files;
  i = 0;
  len = files.length;
 
  while (i < len) {
   info += `${files[i].name} (${files[i].type}, ${files[i].size} bytes)<br>`;
   i++;
  }
  
  output.innerHTML = info;
 }
}
droptarget.addEventListener("dragenter", handleEvent);
droptarget.addEventListener("dragover", handleEvent);
droptarget.addEventListener("drop", handleEvent);  

As with the drag-and-drop examples later in this chapter, you must cancel the default behavior of dragenter, dragover, and drop. During the drop event, the files become available on event.dataTransfer.files, and you can read their information at that time.

MEDIA ELEMENTS

With the explosive popularity of embedded audio and video on the Web, most content producers have been forced to use Flash for optimal cross-browser compatibility. HTML5 introduces two media-related elements to enable cross-browser audio and video embedding into a browser baseline without any plug-ins: <audio> and <video>.

Both of these elements allow web developers to easily embed media files into a page, as well as provide JavaScript hooks into common functionality, allowing custom controls to be created for the media. The elements are used as follows:

<!-- embed a video -->
<video src="conference.mpg" id="myVideo">Video player not available.</video>
<!-- embed an audio file -->
<audio src="song.mp3" id="myAudio">Audio player not available.</audio>

Each of these elements requires, at a minimum, the src attribute indicating the media file to load. You can also specify width and height attributes to indicate the intended dimensions of the video player and a poster attribute that is an image URI to display while the video content is being loaded. The controls attribute, if present, indicates that the browser should display a UI enabling the user to interact directly with the media. Any content between the opening and the closing tags is considered alternate content to display if the media player is unavailable.

You may optionally specify multiple different media sources because not all browsers support all media formats. To do so, omit the src attribute from the element and instead include one or more <source> elements, as in this example:

<!-- embed a video -->
<video id="myVideo">
 <source src="conference.webm" type="video/webm; codecs='vp8, vorbis'">
 <source src="conference.ogv" type="video/ogg; codecs='theora, vorbis'">
 <source src="conference.mpg">
 Video player not available.
</video>
<!-- embed an audio file -->
<audio id="myAudio">
 <source src="song.ogg" type="audio/ogg">
 <source src="song.mp3" type="audio/mpeg">
 Audio player not available.
</audio>

It's beyond the scope of this book to discuss the various codecs used with video and audio, but suffice it to say that browsers support a varying range of codecs, so multiple source files are typically required.

Properties

The <video> and <audio> elements provide robust JavaScript interfaces. There are numerous properties shared by both elements that can be evaluated to determine the current state of the media, as described in the following table.

PROPERTY NAME DATA TYPE DESCRIPTION
autoplay Boolean Gets or sets the autoplay flag.
buffered TimeRanges An object indicating the buffered time ranges that have already been downloaded.
bufferedBytes ByteRanges An object indicating the buffered byte ranges that have already been downloaded.
bufferingRate Integer The average number of bits per second received from the download.
bufferingThrottled Boolean Indicates if the buffering has been throttled by the browser.
controls Boolean Gets or sets the controls attribute, which displays or hides the browser's built-in controls.
currentLoop Integer The number of loops that the media has played.
currentSrc String The URL for the currently playing media.
currentTime Float The number of seconds that have been played.
defaultPlaybackRate Float Gets or sets the default playback rate. By default, this is 1.0 seconds.
duration Float The total number of seconds for the media.
ended Boolean Indicates if the media has completely played.
loop Boolean Gets or sets whether the media should loop back to the start when finished.
muted Boolean Gets or sets if the media is muted.
networkState Integer Indicates the current state of the network connection for the media: 0 for empty, 1 for loading, 2 for loading meta data, 3 for loaded first frame, and 4 for loaded.
paused Boolean Indicates if the player is paused.
playbackRate Float Gets or sets the current playback rate. This may be affected by the user causing the media to play faster or slower, unlike defaultPlaybackRate, which remains unchanged unless the developer changes it.
played TimeRanges The range of times that have been played thus far.
readyState Integer Indicates if the media is ready to be played. Values are 0 if the data is unavailable, 1 if the current frame can be displayed, 2 if the media can begin playing, and 3 if the media can play from beginning to end.
seekable TimeRanges The ranges of times that are available for seeking.
seeking Boolean Indicates that the player is moving to a new position in the media file.
src String The media file source. This can be rewritten at any time.
start Float Gets or sets the location in the media file, in seconds, where playing should begin.
totalBytes Integer The total number of bytes needed for the resource (if known).
videoHeight Integer Returns the height of the video (not necessarily of the element). Only for <video>.
videoWidth Integer Returns the width of the video (not necessarily of the element). Only for <video>.
volume Float Gets or sets the current volume as a value between 0.0 and 1.0.

Many of these properties can also be specified as attributes on either the <audio> or the <video> elements.

Events

In addition to the numerous properties, there are also numerous events that fire on these media elements. The events monitor all of the different properties that change because of media playback and user interaction with the player. These events are listed in the following table.

EVENT NAME FIRES WHEN
abort Downloading has been aborted.
canplay Playback can begin; readyState is 2.
canplaythrough Playback can proceed and should be uninterrupted; readyState is 3.
canshowcurrentframe The current frame has been downloaded; readyState is 1.
dataunavailable Playback can't happen because there's no data; readyState is 0.
durationchange The duration property value has changed.
emptied The network connection has been closed.
empty An error occurs that prevents the media download.
ended The media has played completely through and is stopped.
error A network error occurred during download.
load All of the media has been loaded. This event is considered deprecated; use canplaythrough instead.
loadeddata The first frame for the media has been loaded.
loadedmetadata The meta data for the media has been loaded.
loadstart Downloading has begun.
pause Playback has been paused.
play The media has been requested to start playing.
playing The media has actually started playing.
progress Downloading is in progress.
ratechange The speed at which the media is playing has changed.
seeked Seeking has ended.
seeking Playback is being moved to a new position.
stalled The browser is trying to download, but no data is being received.
timeupdate The currentTime has been updated, either through normal playback or discontinuously (such as a seek).
volumechange The volume property value or muted property value has changed.
waiting Playback is paused to download more data.

These events are designed to be as specific as possible to enable web developers to create custom audio/video players using little more than HTML and JavaScript (as opposed to creating a new Flash movie).

Custom Media Players

You can manually control the playback of a media file, using the play() and pause() methods that are available on both <audio> and <video>. Combining the properties, events, and these methods makes it easy to create a custom media player, as shown in this example:

<div class="mediaplayer">
 <div class="video">
  <video id="player" src="movie.mov" poster="mymovie.jpg" 
      width="300" height="200">
   Video player not available.
  </video>
 </div>
 <div class="controls">
  <input type="button" value="Play" id="video-btn">
  <span id="curtime">0</span>/<span id="duration">0</span>
 </div>
</div>

This basic HTML can then be brought to life by using JavaScript to create a simple video player, as shown here:

// get references to the elements
let player = document.getElementById("player"),
 btn = document.getElementById("video-btn"),
 curtime = document.getElementById("curtime"),
 duration = document.getElementById("duration");
// update the duration once the video metadata has loaded
player.addEventListener("loadedmetadata", () => {
 duration.innerHTML = player.duration;
});
      
// attach event handler to button
btn.addEventListener( "click", (event) => {
 if (player.paused) {
   player.play();
   btn.value = "Pause";
 } else {
   player.pause();
   btn.value = "Play";
 }
});
      
// update the current time periodically
setInterval(() => {
 curtime.innerHTML = player.currentTime;
}, 250);

The JavaScript code here simply attaches an event handler to the button that either pauses or plays the video, depending on its current state. Then, an event handler is set for the <video> element's loadedmetadata event so that the duration can be displayed. Last, a repeating timer is set to update the current time display. You can extend the behavior of this custom video player by listening for more events and making use of more properties. The exact same code can also be used with the <audio> element to create a custom audio player.
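
For instance, the polling timer could be replaced by listening for the timeupdate event, and the button label could be reset when playback finishes. This is a minimal sketch that reuses the element references from the previous listing:

// Update the time display whenever the playback position changes
player.addEventListener("timeupdate", () => {
 curtime.innerHTML = player.currentTime;
});

// Reset the button once the video has played all the way through
player.addEventListener("ended", () => {
 btn.value = "Play";
});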

Codec Support Detection

As mentioned previously, not all browsers support all codecs for <video> and <audio>, which frequently means you must provide more than one media source. There is also a JavaScript API for determining if a given format and codec is supported by the browser. Both media elements have a method called canPlayType(), which accepts a format/codec string and returns a string value of "probably", "maybe", or "" (empty string). The empty string is a falsy value, which means you can still use canPlayType() in an if statement like this:

if (audio.canPlayType("audio/mpeg")) {
 // do something
}

Both "probably" and "maybe" are truthy values and so they get coerced to true within the context of an if statement.

When just a MIME type is provided to canPlayType(), the most likely return values are "maybe" and the empty string because a file is really just a container for audio or video data; it is the encoding that really determines if the file can be played. When both a MIME type and a codec are specified, you increase the likelihood of getting "probably" as the return value. Some examples:

let audio = document.getElementById("audio-player");
// most likely "maybe"
if (audio.canPlayType("audio/mpeg")) {
 // do something
}
// could be "probably"
if (audio.canPlayType("audio/ogg; codecs="vorbis"")) {
 // do something
}

Note that the codecs list must always be enclosed in quotes to work properly. You can also detect video formats using canPlayType() on any video element.

The Audio Type

The <audio> element also has a native JavaScript constructor called Audio to allow the playing of audio at any point in time. The Audio type is similar to Image in that it is the equivalent of a DOM element but doesn't require insertion into the document to work. Just create a new instance and pass in the audio source file:

let audio = new Audio("sound.mp3");
audio.addEventListener("canplaythrough", (event) => {
 audio.play();
});

Creating a new instance of Audio begins the process of downloading the specified file. Once it's ready, you can call play() to start playing the audio.

Calling the play() method on iOS causes a dialog to pop up asking for the user's permission to play the sound. In order to play one sound after another, you must call play() immediately within the ended event handler.
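
A minimal sketch of chaining two sounds with the ended event might look like this; the file names are placeholders:

let first = new Audio("first.mp3");
let second = new Audio("second.mp3");

// Start the second sound as soon as the first one finishes
first.addEventListener("ended", () => second.play());

// Begin playback once enough of the first sound has downloaded
first.addEventListener("canplaythrough", () => first.play());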

NATIVE DRAG AND DROP

Internet Explorer 4 first introduced JavaScript support for drag-and-drop functionality for web pages. At the time, only two items on a web page could initiate a system drag: an image or some text. When dragging an image, you simply held the mouse button down and then moved it; with text, you first highlighted some text and then you could drag it the same way as you would drag an image. In Internet Explorer 4, the only valid drop target was a text box. In version 5, Internet Explorer extended its drag-and-drop capabilities by adding new events and allowing nearly anything on a web page to become a drop target. Version 5.5 went a little bit further by allowing nearly anything to become draggable. (Internet Explorer 6 supports this functionality as well.) HTML5 uses the Internet Explorer drag-and-drop implementation as the basis for its drag-and-drop specification. All major browsers have implemented native drag and drop according to the HTML5 spec.

Perhaps the most interesting thing about drag-and-drop support is that elements can be dragged across frames, browser windows, and sometimes, other applications. Drag-and-drop support in the browser allows you to tap into that functionality.

Drag-and-Drop Events

The events provided for drag and drop enable you to control nearly every aspect of a drag-and-drop operation. The tricky part is determining where each event is fired: some fire on the dragged item; others fire on the drop target. When an item is dragged, the following events fire (in this order):

  1. dragstart
  2. drag
  3. dragend

At the moment you hold a mouse button down and begin to move the mouse, the dragstart event fires on the item that is being dragged. The cursor changes to the no-drop symbol (a circle with a line through it), indicating that the item cannot be dropped on itself. You can use the ondragstart event handler to run JavaScript code as the dragging begins.

After the dragstart event fires, the drag event fires and continues firing as long as the object is being dragged. This is similar to mousemove, which also fires repeatedly as the mouse is moved. When the dragging stops (because you drop the item onto either a valid or an invalid drop target), the dragend event fires.

The target of all three events is the element that is being dragged. By default, the browser does not change the appearance of the dragged element while a drag is happening, so it's up to you to change the appearance. Most browsers do, however, create a semitransparent clone of the element being dragged that always stays immediately under the cursor.

When an item is dragged over a valid drop target, the following sequence of events occurs:

  1. dragenter
  2. dragover
  3. dragleave or drop

The dragenter event (similar to the mouseover event) fires as soon as the item is dragged over the drop target. Immediately after the dragenter event fires, the dragover event fires and continues to fire as the item is being dragged within the boundaries of the drop target. When the item is dragged outside of the drop target, dragover stops firing and the dragleave event is fired (similar to mouseout). If the dragged item is actually dropped on the target, the drop event fires instead of dragleave. The target of these events is the drop target element.
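
The ordering of these events is easy to observe by attaching listeners that do nothing but log the event type. The following is a minimal sketch; it assumes a draggable <img> with the ID "dragitem" and a <div> with the ID "droptarget", and the drop event will only fire once the target cancels dragenter and dragover as described in the next section:

let dragItem = document.getElementById("dragitem");
let dropTarget = document.getElementById("droptarget");

// Events fired on the dragged element
for (const type of ["dragstart", "drag", "dragend"]) {
 dragItem.addEventListener(type, () => console.log(`dragged item: ${type}`));
}

// Events fired on the drop target
for (const type of ["dragenter", "dragover", "dragleave", "drop"]) {
 dropTarget.addEventListener(type, () => console.log(`drop target: ${type}`));
}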

Custom Drop Targets

When you try to drag something over an invalid drop target, you see a special cursor (a circle with a line through it) indicating that you cannot drop. Even though all elements support the drop target events, the default is to not allow dropping. If you drag an element over something that doesn't allow a drop, the drop event will never fire regardless of the user action. However, you can turn any element into a valid drop target by overriding the default behavior of both the dragenter and the dragover events. For example, if you have a <div> element with an ID of "droptarget", you can use the following code to turn it into a drop target:

let droptarget = document.getElementById("droptarget");
      
droptarget.addEventListener("dragover", (event) => {
 event.preventDefault();
});
      
droptarget.addEventListener("dragenter", (event) => {
 event.preventDefault();
});

After making these changes, you'll note that the cursor now indicates that a drop is allowed over the drop target when dragging an element. Also, the drop event will fire.

In Firefox, the default behavior for a drop event is to navigate to the URL that was dropped on the drop target. That means dropping an image onto the drop target will result in the page navigating to the image file, and text that is dropped on the drop target results in an invalid URL error. For Firefox support, you must also cancel the default behavior of the drop event to prevent this navigation from happening:

droptarget.addEventListener("drop", (event) => {
 event.preventDefault();
});

The dataTransfer Object

Simply dragging and dropping isn't of any use unless data is actually being affected. To aid in the transmission of data via a drag-and-drop operation, Internet Explorer 5 introduced the dataTransfer object, which exists as a property of event and is used to transfer string data from the dragged item to the drop target. Because it is a property of event, the dataTransfer object doesn't exist except within the scope of an event handler for a drag-and-drop event. Within an event handler, you can use the object's properties and methods to work with your drag-and-drop functionality. The dataTransfer object is now part of the working draft of HTML5.

The dataTransfer object has two primary methods: getData() and setData(). As you might expect, getData() is capable of retrieving a value stored by setData(). The first argument for setData(), and the only argument of getData(), is a string indicating the type of data being set: either "text" or "URL", as shown here:

// working with text
event.dataTransfer.setData("text", "some text");
let text = event.dataTransfer.getData("text");
      
// working with a URL
event.dataTransfer.setData("URL", "http://www.wrox.com/");
let url = event.dataTransfer.getData("URL");

Even though Internet Explorer started out by introducing only "text" and "URL" as valid data types, HTML5 extends this to allow any MIME type to be specified. The values "text" and "URL" will be supported by HTML5 for backwards compatibility, but they are mapped to "text/plain" and "text/uri-list".

The dataTransfer object can contain exactly one value of each MIME type, meaning that you can store both text and a URL at the same time without overwriting either. The data stored in the dataTransfer object is available only until the drop event. If you do not retrieve the data in the ondrop event handler, the dataTransfer object is destroyed and the data is lost.

When you drag text from a text box, the browser calls setData() and stores the dragged text in the "text" format. Likewise, when a link or image is dragged, setData() is called and the URL is stored. It is possible to retrieve these values when the data is dropped on a target by using getData(). You can also call setData() manually during the dragstart event to store custom data that you may want to retrieve later.
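
For example, the following sketch stores a custom string during dragstart and reads it back in the drop handler. The element and drop target IDs ("dragitem" and "droptarget") are placeholder names for illustration, and the dragged element is assumed to have draggable="true":

let dragItem = document.getElementById("dragitem");
let dropZone = document.getElementById("droptarget");

dragItem.addEventListener("dragstart", (event) => {
 // Store custom data to be retrieved when the item is dropped
 event.dataTransfer.setData("text/plain", dragItem.id);
});

dropZone.addEventListener("dragover", (event) => event.preventDefault());

dropZone.addEventListener("drop", (event) => {
 event.preventDefault();
 // Fall back to "Text" for older browsers
 let id = event.dataTransfer.getData("text/plain") ||
          event.dataTransfer.getData("Text");
 console.log(`Dropped element id: ${id}`);
});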

There is a difference between data treated as text and data treated as a URL. When you specify data to be stored as text, it gets no special treatment whatsoever. When you specify data to be stored as a URL, however, it is treated just like a link on a web page, meaning that if you drop it onto another browser window, the browser will navigate to that URL.

Firefox through version 5 doesn't properly alias "url" to "text/uri-list" or "text" to "text/plain". It does, however, alias "Text" (uppercase T) to "text/plain". For best cross-browser compatibility of retrieving data from dataTransfer, you'll need to check for two values for URLs and use "Text" for plain text:

let dataTransfer = event.dataTransfer;
// read a URL
let url = dataTransfer.getData("url") || dataTransfer.getData("text/uri-list");
// read text
let text = dataTransfer.getData("Text");

It's important that the shortened data name be tried first because Internet Explorer through version 10 doesn't support the extended names and also throws an error when the data name isn't recognized.

dropEffect and effectAllowed

The dataTransfer object can be used to do more than simply transport data to and fro; it can also be used to determine what type of actions can be done with the dragged item and the drop target. You accomplish this by using two properties: dropEffect and effectAllowed.

The dropEffect property is used to tell the browser which type of drop behaviors are allowed. This property has the following four possible values:

  • "none"—A dragged item cannot be dropped here. This is the default value for everything except text boxes.
  • "move"—The dragged item should be moved to the drop target.
  • "copy"—The dragged item should be copied to the drop target.
  • "link"—Indicates that the drop target will navigate to the dragged item (but only if it is a URL).

Each of these values causes a different cursor to be displayed when an item is dragged over the drop target. It is up to you, however, to actually cause the actions indicated by the cursor. In other words, nothing is automatically moved, copied, or linked without your direct intervention. The only thing you get for free is the cursor change. In order to use the dropEffect property, you must set it in the ondragenter event handler for the drop target.

The dropEffect property is useless unless you also set the effectAllowed property, which indicates which dropEffect values are allowed for the dragged item. The possible values are as follows:

  • "uninitialized"—No action has been set for the dragged item.
  • "none"—No action is allowed on the dragged item.
  • "copy"—Only dropEffect "copy" is allowed.
  • "link"—Only dropEffect "link" is allowed.
  • "move"—Only dropEffect "move" is allowed.
  • "copyLink"dropEffect "copy" and "link" are allowed.
  • "copyMove"dropEffect "copy" and "move" are allowed.
  • "linkMove"dropEffect "link" and "move" are allowed.
  • "all"—All dropEffect values are allowed.

This property must be set inside the ondragstart event handler.

Suppose that you want to allow a user to move text from a text box into a <div>. To accomplish this, you must set both dropEffect and effectAllowed to "move". The text won't automatically move itself because the default behavior for the drop event on a <div> is to do nothing. If you override the default behavior, the text is automatically removed from the text box. It is then up to you to insert it into the <div> to finish the action. If you were to change dropEffect and effectAllowed to "copy", the text in the text box would not automatically be removed.
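
The following is a minimal sketch of that setup, assuming a text box with the ID "textbox" and a <div> with the ID "droptarget"; inserting the dropped text remains your responsibility:

let textbox = document.getElementById("textbox");
let droptarget = document.getElementById("droptarget");

textbox.addEventListener("dragstart", (event) => {
 // The dragged selection only permits the "move" drop effect
 event.dataTransfer.effectAllowed = "move";
});

droptarget.addEventListener("dragenter", (event) => {
 event.preventDefault();
 // Request the "move" behavior (and its cursor) for this drop target
 event.dataTransfer.dropEffect = "move";
});

droptarget.addEventListener("dragover", (event) => event.preventDefault());

droptarget.addEventListener("drop", (event) => {
 event.preventDefault();
 // Finish the move by inserting the dropped text into the <div>
 droptarget.textContent = event.dataTransfer.getData("text");
});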

Draggability

By default, images, links, and text are draggable, meaning that no additional code is necessary to allow a user to start dragging them. Text is draggable only after a section has been highlighted, while images and links may be dragged at any point in time.

It is possible to make other elements draggable. HTML5 specifies a draggable property on all HTML elements indicating if the element can be dragged. Images and links have draggable automatically set to true, whereas everything else has a default value of false. This property can be set in order to allow other elements to be draggable or to ensure that an image or link won't be draggable. For example:

<!-- turn off dragging for this image -->
<img src="smile.gif" draggable="false" alt="Smiley face">
<!-- turn on dragging for this element -->
<div draggable="true">…</div>

Additional Members

The HTML5 specification indicates the following additional methods on the dataTransfer object:

  • addElement(element)—Adds an element to the drag operation. This is purely for data purposes and doesn't affect the appearance of the drag operation. As of the time of this writing, no browsers have implemented this method.
  • clearData(format)—Clears the data being stored with the particular format.
  • setDragImage(element, x, y)—Allows you to specify an image to be displayed under the cursor as the drag takes place. This method accepts three arguments: an HTML element to display and the x- and y-coordinates on the image where the cursor should be positioned. The HTML element may be an image, in which case the image is displayed, or any other element, in which case a rendering of the element is displayed. (A brief example follows this list.)
  • types—A list of data types currently being stored. This collection acts like an array and stores the data types as strings such as "text".
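
As a brief illustration of setDragImage(), the following sketch preloads a hypothetical image file, ghost.png, and uses it as the drag image for an element with the assumed ID "dragitem":

let dragItem = document.getElementById("dragitem");
let ghost = new Image();
ghost.src = "ghost.png";

dragItem.addEventListener("dragstart", (event) => {
 // Display ghost.png under the cursor, offset 10px from its top-left corner
 event.dataTransfer.setDragImage(ghost, 10, 10);
});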

NOTIFICATIONS API

The Notifications API, as its name suggests, is used to display notifications to the user. In many ways, notifications are similar to alert() dialogue boxes: both use a JavaScript API to trigger browser behavior outside of the page itself, and both allow the page to handle the various ways in which users interact with the dialogue boxes or notification tiles. Notifications, however, offer a far greater degree of customizability.

The Notifications API is especially useful in the context of service workers. It allows a progressive web application (PWA) to behave more like a native app by triggering notifications to show even when a browser page is not active.
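
Inside a service worker, notifications are shown through the service worker's registration rather than the Notification constructor. The following is a minimal sketch run from a page, assuming a service worker has already been registered for it and that notification permission has been granted:

navigator.serviceWorker.ready
  .then((registration) => {
   // Show a notification via the service worker registration
   return registration.showNotification('Title text!', {
    body: 'Shown through the service worker registration'
   });
  });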

Notification Permissions

The Notification API has the potential for abuse, so it enforces two security features by default:

  • Notifications can be triggered only by code executing in a secure context.
  • Notifications must be explicitly allowed by the user on a per-origin basis.

The user grants notification permission to an origin inside a browser dialog box. Unless the user dismisses the dialog without explicitly allowing or denying permission, this permission request can only happen a single time per origin: the browser will remember the user's choice, and if permission is denied there is no redress.

The page can ask for notification permission using the Notification global object. This object features a requestPermission() method that returns a promise, which resolves when the user takes action on the permission dialog box.

Notification.requestPermission()
  .then((permission) => {
   console.log('User responded to permission request:', permission);
  });

A value of "granted" means the user explicitly granted permission to show notifications. Any other value indicates that attempts to show a notification will silently fail. If a user denies permission, the value will be "denied". There is no programmatic redress for this, as it is not possible to re-trigger the permission prompt.
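
Because of this, a sensible pattern is to consult the static Notification.permission property before prompting or showing anything. A minimal sketch:

if (Notification.permission === 'granted') {
 // Permission was already granted; show the notification immediately
 new Notification('Title text!');
} else if (Notification.permission !== 'denied') {
 // Permission has not been decided yet; ask the user first
 Notification.requestPermission()
   .then((permission) => {
    if (permission === 'granted') {
     new Notification('Title text!');
    }
   });
}
// If permission is 'denied', do nothing; the prompt cannot be shown again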

Showing and Hiding Notifications

The Notification constructor is used to create and show notifications. The simplest form of a notification is with only a title string that is passed as the first required parameter to the constructor. When the constructor is called in this way, the notification will display immediately:

new Notification('Title text!');

Notifications are highly customizable with the options parameter. Settings such as the notification body, images, and vibration are all controllable via this object:

new Notification('Title text!', {
 body: 'Body text!',
 image: 'path/to/image.png',
 vibrate: true 
});

The Notification object returned from the constructor can be used to close an active notification using a close() method. The following example opens a notification and then closes it after 1000 milliseconds:

const n = new Notification('I will close in 1000ms');
setTimeout(() => n.close(), 1000);

Notification Lifecycle Callbacks

Notifications aren't always just for displaying text strings; they're also designed to be interactive. The Notification API offers four lifecycle hooks for attaching callbacks:

  • onshow is triggered when the notification is displayed.
  • onclick is triggered when the notification is clicked.
  • onclose is triggered when the notification is dismissed or closed via close().
  • onerror is triggered when an error occurs that prevents the notification from being displayed.

The following notification logs a message upon each lifecycle event:

const n = new Notification('foo');

n.onshow = () => console.log('Notification was shown!'); 
n.onclick = () => console.log('Notification was clicked!');
n.onclose = () => console.log('Notification was closed!');
n.onerror = () => console.log('Notification experienced an error!');

PAGE VISIBILITY API

A major pain point for web developers is knowing when users are actually interacting with the page. If a page is minimized or hidden behind another tab, it may not make sense to continue functionality such as polling the server for updates or performing animations. The Page Visibility API aims to give developers information about whether or not the page is visible to the user.

The API itself is very simple, consisting of three parts:

  • document.visibilityState—A value indicating one of four states:
    • The page is in a background tab or the browser is minimized.
    • The page is in the foreground tab.
    • The actual page is hidden, but a preview of the page is visible (such as in Windows 7 when moving the mouse over an icon in the taskbar shows a preview).
    • The page is being prerendered off-screen.
  • visibilitychange event—This event fires when a document changes from hidden to visible, or vice versa.
  • document.hidden—A Boolean value indicating if the page is hidden from view. This may mean the page is in a background tab or that the browser is minimized. This value is supported for backwards compatibility; you should use document.visibilityState to assess whether the page is visible.

To be notified when the page changes from visible to hidden or hidden to visible, you can listen for the visibilitychange event.

document.visibilityState is one of three possible string values:

  • hidden
  • visible
  • prerender
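
For example, the following sketch suspends work while the page is hidden and resumes it when the page becomes visible again. The startPolling() and stopPolling() functions are hypothetical stand-ins for whatever work your page performs:

function startPolling() { console.log('polling started'); }
function stopPolling() { console.log('polling stopped'); }

document.addEventListener('visibilitychange', () => {
 if (document.visibilityState === 'hidden') {
  stopPolling();
 } else if (document.visibilityState === 'visible') {
  startPolling();
 }
});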

STREAMS API

The Streams API is the answer to a simple but fundamental question: How can a web application consume information in sequential chunks rather than in bulk? This capability is massively useful in two main ways:

  • A block of data may not be available all at once. A perfect example of this is a response to a network request. Network payloads are delivered as a sequence of packets, and stream processing can allow an application to use network-delivered data as it becomes available rather than waiting for the full payload to finish loading.
  • A block of data can be processed in small portions. Video processing, data decompression, image decoding, and JSON parsing are all examples of computation that is localized to a portion of a data block and does not require it to be in memory all at once.

The “Network Requests and Remote Resources” chapter covers how the Streams API is involved with fetch(), but the Streams API is totally generalizable. JavaScript libraries that implement Observables share many fundamental concepts with streams.

Introduction to Streams

When thinking about streams, imagining the data as a liquid flowing through pipes is an apt mental framework. JavaScript streams borrow heavily from the plumbing lexicon due to their substantial conceptual overlap. Per the Streams specification, “These APIs have been designed to efficiently map to low-level I/O primitives, including specializations for byte streams where appropriate.” Two common tasks that the Stream API directly addresses are handling network requests and reading/writing to disk.

The Streams API features three types of streams:

  • Readable streams are streams from which chunks can be read via a public interface. Data enters the stream internally from an underlying source and is processed by a consumer.
  • Writable streams are streams to which chunks can be written via a public interface. A producer writes data into the stream, and that data is passed internally to an underlying sink.
  • Transform streams are made up of two streams: a writable stream to accept input data (the writable side), and a readable stream to emit output data (the readable side). In between these two streams is the transformer, which can be used to inspect and modify the stream data as necessary.

Chunks, Internal Queues, and Backpressure

The fundamental unit in streams is the chunk. A chunk can be of any data type, but frequently it will take the form of a typed array. Each chunk is a discrete segment of the stream that can be handled in its entirety. Importantly, chunks do not have a fixed size or arrive at fixed intervals. In an ideal stream, chunks will generally be approximately the same size and arrive at approximately regular intervals, but any good stream implementation should be prepared to handle edge cases.

For all types of streams, there is a shared concept of an entrance and exit to the stream. There will sometimes be a mismatch between the rate of data entering and exiting the stream. This stream balance can take one of three forms:

  • The exit of the stream can process data faster than the data is provided at the entrance. The stream exit will often be idle (which may indicate potential inefficiencies at the stream entrance), but there is little wasted memory or computation so this stream imbalance is acceptable.
  • The stream entrance and exit are in equilibrium. This balance is ideal.
  • The entrance of the stream can provide data faster than the exit can process it. This stream imbalance is inherently problematic. There will necessarily be a backlog of data somewhere, and streams must handle this accordingly.

Stream imbalance is a common problem, but streams are tooled to address it. All streams maintain an internal queue of chunks that have entered the stream but not yet exited. For a stream in equilibrium, the internal queue will consistently have zero or a small number of enqueued chunks because the stream exit is dequeuing chunks approximately as fast as they can be enqueued. The memory footprint of such a stream's internal queue will remain relatively small.

When chunks are enqueued faster than they can be dequeued, the size of the internal queue will grow. The stream cannot allow its internal queue to grow indefinitely, so it uses backpressure to signal the stream entrance to stop sending data until the queue size returns below a predetermined threshold. This threshold is governed by a queueing strategy that defines the high water mark, the maximum memory footprint of the internal queue.
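
The high water mark is specified by passing a queuing strategy as the second argument to a stream constructor, and the controller's desiredSize property reports how much room remains before that mark is reached. The following is a minimal sketch using the built-in CountQueuingStrategy, which simply counts each chunk as one unit:

const readableStream = new ReadableStream(
 {
  start(controller) {
   controller.enqueue('foo');

   // Remaining capacity before the high water mark is reached
   console.log(controller.desiredSize); // 9
  }
 },
 new CountQueuingStrategy({ highWaterMark: 10 })
);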

Readable Streams

Readable streams are a wrapper for an underlying source of data. This underlying source is able to feed its data into the stream and allow that data to be read from the stream's public interface.

Using the ReadableStreamDefaultController

Consider the following generator, which produces an incremented integer every 1000 milliseconds:

async function* ints() {
 // yield an incremented integer every 1000ms
 for (let i = 0; i < 5; ++i) {
  yield await new Promise((resolve) => setTimeout(resolve, 1000, i));
 }
}

These values can be passed into a readable stream via its controller. The simplest way to access a controller is by creating a new instance of a ReadableStream, defining a start() method inside the constructor's underlyingSource parameter, and using the controller parameter passed to that method. By default, the controller parameter is an instance of ReadableStreamDefaultController:

const readableStream = new ReadableStream({
 start(controller) {
  console.log(controller); // ReadableStreamDefaultController {}
 }
});

Use the enqueue() method to pass values into the controller. Once all the values have been passed, the stream is closed using close():

async function* ints() {
 // yield an incremented integer every 1000ms
 for (let i = 0; i < 5; ++i) {
  yield await new Promise((resolve) => setTimeout(resolve, 1000, i));
 }
}

const readableStream = new ReadableStream({
 async start(controller) {
  for await (let chunk of ints()) {
   controller.enqueue(chunk);
  }

  controller.close();
 }
});

Using the ReadableStreamDefaultReader

This example so far successfully enqueues five values in the stream, but there is nothing reading them out of that queue. To do so, a ReadableStreamDefaultReader instance can be acquired from the stream with getReader(). This will obtain a lock on the stream, ensuring only this reader can read values from that stream:

async function* ints() {
 // yield an incremented integer every 1000ms
 for (let i = 0; i < 5; ++i) {
  yield await new Promise((resolve) => setTimeout(resolve, 1000, i));
 }
}

const readableStream = new ReadableStream({
 async start(controller) {
  for await (let chunk of ints()) {
   controller.enqueue(chunk);
  }

  controller.close();
 }
});

console.log(readableStream.locked); // false
const readableStreamDefaultReader = readableStream.getReader();
console.log(readableStream.locked); // true

A consumer can get values from this reader instance using its read() method:

async function* ints() {
 // yield an incremented integer every 1000ms
 for (let i = 0; i < 5; ++i) {
  yield await new Promise((resolve) => setTimeout(resolve, 1000, i));
 }
}

const readableStream = new ReadableStream({
 async start(controller) {
  for await (let chunk of ints()) {
   controller.enqueue(chunk);
  }

  controller.close();
 }
});

console.log(readableStream.locked); // false
const readableStreamDefaultReader = readableStream.getReader();
console.log(readableStream.locked); // true

// Consumer
(async function() {
 while(true) {
  const { done, value } = await readableStreamDefaultReader.read();

  if (done) {
    break;
  } else {
    console.log(value);
  }
 }
})();

// 0
// 1
// 2
// 3
// 4

Writable Streams

Writable streams are a wrapper for an underlying sink for data. This underlying sink handles data from the stream's public interface.

Creating a WritableStream

Consider the following generator, which produces an incremented integer every 1000 milliseconds:

async function* ints() {
 // yield an incremented integer every 1000ms
 for (let i = 0; i < 5; ++i) {
  yield await new Promise((resolve) => setTimeout(resolve, 1000, i));
 }
}

These values can be written to a writable stream via its public interface. When the public write() method is invoked, the write() method defined on the underlyingSink object passed to the constructor is also called:

const writableStream = new WritableStream({
 write(value) {
  console.log(value);
 }
});

Using a WritableStreamDefaultWriter

To write values to this stream, a WritableStreamDefaultWriter instance can be acquired from the stream with getWriter(). This will obtain a lock on the stream, ensuring only this writer can write values to that stream:

async function* ints() {
 // yield an incremented integer every 1000ms
 for (let i = 0; i < 5; ++i) {
  yield await new Promise((resolve) => setTimeout(resolve, 1000, i));
 }
}

const writableStream = new WritableStream({
 write(value) {
  console.log(value);
 }
});

console.log(writableStream.locked); // false
const writableStreamDefaultWriter = writableStream.getWriter();
console.log(writableStream.locked); // true

Before writing values to the stream, the producer must ensure the writer is capable of accepting values. WritableStreamDefaultWriter.ready returns a promise that resolves when the writer is ready for writing values to the stream. Following this, values can be passed with write() until the data stream is complete, at which point the stream can be terminated with close():

async function* ints() {
 // yield an incremented integer every 1000ms
 for (let i = 0; i < 5; ++i) {
  yield await new Promise((resolve) => setTimeout(resolve, 1000, i));
 }
}

const writableStream = new WritableStream({
 write(value) {
  console.log(value);
 }
});

console.log(writableStream.locked); // false
const writableStreamDefaultWriter = writableStream.getWriter();
console.log(writableStream.locked); // true


// Producer
(async function() {
 for await (let chunk of ints()) {
  await writableStreamDefaultWriter.ready;
  writableStreamDefaultWriter.write(chunk);
 }
   
 writableStreamDefaultWriter.close();
})();

Transform Streams

Transform streams combine both a readable stream and a writable stream. In between the two streams is the transform() method, which is the intermediate point at which the chunk transformation occurs.

Consider the following generator, which produces an incremented integer every 1000 milliseconds:

async function* ints() {
 // yield an incremented integer every 1000ms
 for (let i = 0; i < 5; ++i) {
  yield await new Promise((resolve) => setTimeout(resolve, 1000, i));
 }
}

A TransformStream, which doubles the values emitted from this generator, can be defined as follows:

async function* ints() {
 // yield an incremented integer every 1000ms
 for (let i = 0; i < 5; ++i) {
  yield await new Promise((resolve) => setTimeout(resolve, 1000, i));
 }
}

const { writable, readable } = new TransformStream({
 transform(chunk, controller) {
  controller.enqueue(chunk * 2);
 }
});

Passing data into and pulling data out of the transform stream's component streams can be accomplished identically to the previous readable stream and writable stream sections of this chapter:

async function* ints() {
 // yield an incremented integer every 1000ms
 for (let i = 0; i < 5; ++i) {
  yield await new Promise((resolve) => setTimeout(resolve, 1000, i));
 }
}

const { writable, readable } = new TransformStream({
 transform(chunk, controller) {
  controller.enqueue(chunk * 2);
 }
});

const readableStreamDefaultReader = readable.getReader();
const writableStreamDefaultWriter = writable.getWriter();

// Consumer
(async function() {
 while (true) {
  const { done, value } = await readableStreamDefaultReader.read();

  if (done) {
    break;
  } else {
    console.log(value);
  }
 }
})();

// Producer
(async function() {
 for await (let chunk of ints()) {
  await writableStreamDefaultWriter.ready;
  writableStreamDefaultWriter.write(chunk);
 }
   
 writableStreamDefaultWriter.close();
})();

Piping Streams

Streams can be piped into one another to form a chain. One common form of this is to pipe a ReadableStream into a TransformStream using the pipeThrough() method. Internally, the initial ReadableStream passes its values into the TransformStream's internal WritableStream, the stream performs the transformation, and the transformed values emerge from the new ReadableStream endpoint. Consider the following example where a ReadableStream of integers is passed through a TransformStream that doubles each value:

async function* ints() {
 // yield an incremented integer every 1000ms
 for (let i = 0; i < 5; ++i) {
  yield await new Promise((resolve) => setTimeout(resolve, 1000, i));
 }
}

const integerStream = new ReadableStream({
 async start(controller) {
  for await (let chunk of ints()) {
   controller.enqueue(chunk);
  }

  controller.close();
 }
});

const doublingStream = new TransformStream({
 transform(chunk, controller) {
  controller.enqueue(chunk * 2);
 }
});

// Perform stream piping
const pipedStream = integerStream.pipeThrough(doublingStream);

// Acquire reader on output of piped streams
const pipedStreamDefaultReader = pipedStream.getReader();

// Consumer
(async function() {
 while(true) {
  const { done, value } = await pipedStreamDefaultReader.read();

  if (done) {
    break;
  } else {
    console.log(value);
  }
 }
})();

// 0
// 2
// 4
// 6
// 8

It's also possible to pipe a ReadableStream to a WritableStream using the pipeTo() method. This behaves in a similar fashion to pipeThrough():

async function* ints() {
 // yield an incremented integer every 1000ms
 for (let i = 0; i < 5; ++i) {
  yield await new Promise((resolve) => setTimeout(resolve, 1000, i));
 }
}

const integerStream = new ReadableStream({
 async start(controller) {
  for await (let chunk of ints()) {
   controller.enqueue(chunk);
  }

  controller.close();
 }
});

const writableStream = new WritableStream({
 write(value) {
  console.log(value);
 }
});

const pipedStream = integerStream.pipeTo(writableStream);

// 0
// 1
// 2
// 3
// 4

Notice here that the piping operation is implicitly acquiring a reader from the ReadableStream and feeding the produced values into the WritableStream.

TIMING APIs

Page performance is always an area of concern for web developers. The Performance interface addresses this by exposing internal browser metrics through a JavaScript API, allowing developers to directly access this information and do with it as they please. This interface is available through the window.performance object. All metrics related to the page, both those already defined and those defined in the future, exist on this object.

The Performance interface is composed of a number of APIs, most of which exist at two specification levels. The most relevant of these APIs are covered in the sections that follow.

High Resolution Time API

The Date.now() method is only useful for datetime operations that do not require precise timekeeping. In the following example, a timestamp is captured before and after a function foo() is invoked:

const t0 = Date.now();
foo();
const t1 = Date.now();

const duration = t1 - t0;

console.log(duration);

Consider the following scenarios where duration has an unexpected value:

  • duration is 0. Date.now() only has millisecond precision, and both timestamps will capture the same value if foo() executes quickly enough.
  • duration is negative or enormous. If the system clock is adjusted backwards or forwards while foo() executes (such as during daylight saving time), the captured timestamps will not account for this and the difference will incorporate the adjustment.

For these reasons, a different time measurement API must be used to precisely and accurately measure the passage of time. To address these needs, the High Resolution Time API defines window.performance.now(), which returns a floating point number with up to microsecond precision. As a result, it is far less likely that two sequentially captured timestamps will be identical. This method also guarantees monotonically increasing timestamps.

const t0 = performance.now();
const t1 = performance.now();

console.log(t0);    // 1768.625000026077
console.log(t1);    // 1768.6300000059418

const duration = t1 - t0;

console.log(duration); // 0.004999979864805937

The performance.now() timer is a relative measurement. It begins counting at 0 when its execution context is spawned: for example, when a page is opened or when a worker is created. Because the timer initialization will be offset between contexts, direct comparison of performance.now() values across execution contexts is not possible without a shared reference point. The performance.timeOrigin property returns the global system clock's value when the timer initialization occurred.

const relativeTimestamp = performance.now();

const absoluteTimestamp = performance.timeOrigin + relativeTimestamp;

console.log(relativeTimestamp); // 244.43500000052154
console.log(absoluteTimestamp); // 1561926208892.4001

Performance Timeline API

The Performance Timeline API extends the Performance interface with a suite of tools intended to measure client-side latency. Performance measurements will almost always take the form of calculating the difference between an end and a start time. These start and end times are recorded as DOMHighResTimeStamp values, and the objects that wrap these timestamps are PerformanceEntry instances.

The browser automatically records a variety of different PerformanceEntry objects, and it is also possible to record your own with performance.mark(). All entries recorded in an execution context can be accessed using performance.getEntries():

console.log(performance.getEntries());

// [PerformanceNavigationTiming, PerformanceResourceTiming, … ]

This collection represents the browser's performance timeline. Every PerformanceEntry object features a name, entryType, startTime, and duration property:

const entry = performance.getEntries()[0];

console.log(entry.name);  // "https://foo.com"
console.log(entry.entryType); // navigation
console.log(entry.startTime); // 0
console.log(entry.duration); // 182.36500001512468

However, PerformanceEntry is effectively an abstract base class, as recorded entries will always inherit from PerformanceEntry but ultimately exist as one of the following classes:

  • PerformanceMark
  • PerformanceMeasure
  • PerformanceFrameTiming
  • PerformanceNavigationTiming
  • PerformanceResourceTiming
  • PerformancePaintTiming

Each of these types adds a substantial number of properties that describe metadata involving what the entry represents. The name and entryType property for an instance will differ based on its type.

User Timing API

The User Timing API allows you to record and analyze custom performance entries. Recording a custom performance entry is accomplished with performance.mark():

performance.mark('foo');

console.log(performance.getEntriesByType('mark')[0]);
// PerformanceMark {
//  name: "foo", 
//  entryType: "mark", 
//  startTime: 269.8800000362098, 
//  duration: 0
// }

Creating two performance entries on either side of the computation allows you to calculate the time delta. Entries returned from getEntriesByType() are ordered chronologically by startTime, so the mark recorded first appears first in the array:

performance.mark('foo');
for (let i = 0; i < 1E6; ++i) {}
performance.mark('bar');

const [startMark, endMark] = performance.getEntriesByType('mark');
console.log(endMark.startTime - startMark.startTime); // 1.3299999991431832

It's also possible to generate a PerformanceMeasure entry that corresponds to the duration of time between two marks identified via their name. This is accomplished with performance.measure():

performance.mark('foo');
for (let i = 0; i < 1E6; ++i) {}
performance.mark('bar');

performance.measure('baz', 'foo', 'bar');

const [differenceMark] = performance.getEntriesByType('measure');

console.log(differenceMark);
// PerformanceMeasure {
//  name: "baz", 
//  entryType: "measure", 
//  startTime: 298.9800000214018, 
//  duration: 1.349999976810068
// }
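
Custom entries accumulate on the performance timeline for the lifetime of the page. Once a measurement has been consumed, the entries can be removed with performance.clearMarks() and performance.clearMeasures(); both accept an optional name to clear a single entry:

performance.mark('foo');
performance.mark('bar');
performance.measure('baz', 'foo', 'bar');

// Remove only the named entries
performance.clearMarks('foo');
performance.clearMeasures('baz');

// Remove all remaining custom marks
performance.clearMarks();

console.log(performance.getEntriesByType('mark').length);    // 0
console.log(performance.getEntriesByType('measure').length); // 0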

Navigation Timing API

The Navigation Timing API offers high-precision timestamps for metrics covering how quickly the current page loaded. The browser automatically records a PerformanceNavigationTiming entry when a navigation event occurs. This object captures a broad range of timestamps describing how and when the page loaded.

The following example calculates the amount of time between the loadEventStart and loadEventEnd timestamps:

const [performanceNavigationTimingEntry] = performance.getEntriesByType('navigation');

console.log(performanceNavigationTimingEntry);
// PerformanceNavigationTiming {
//  connectEnd: 2.259999979287386
//  connectStart: 2.259999979287386
//  decodedBodySize: 122314
//  domComplete: 631.9899999652989
//  domContentLoadedEventEnd: 300.92499998863786
//  domContentLoadedEventStart: 298.8950000144541
//  domInteractive: 298.88499999651685
//  domainLookupEnd: 2.259999979287386
//  domainLookupStart: 2.259999979287386
//  duration: 632.819999998901
//  encodedBodySize: 21107
//  entryType: "navigation"
//  fetchStart: 2.259999979287386
//  initiatorType: "navigation"
//  loadEventEnd: 632.819999998901
//  loadEventStart: 632.0149999810383
//  name: "https://foo.com"
//  nextHopProtocol: "h2"
//  redirectCount: 0
//  redirectEnd: 0
//  redirectStart: 0
//  requestStart: 7.7099999762140214
//  responseEnd: 130.50999998813495
//  responseStart: 127.16999999247491
//  secureConnectionStart: 0
//  serverTiming: []
//  startTime: 0
//  transferSize: 21806
//  type: "navigate"
//  unloadEventEnd: 132.73999997181818
//  unloadEventStart: 132.41999997990206
//  workerStart: 0
// }


console.log(performanceNavigationTimingEntry.loadEventEnd -
            performanceNavigationTimingEntry.loadEventStart);
// 0.805000017862767

Resource Timing API

The Resource Timing API offers high-precision timestamps for metrics covering how quickly resources requested for the current page loaded. The browser automatically records a PerformanceResourceTiming entry when an asset is loaded. This object captures a broad range of timestamps describing how quickly that resource loaded.

The following example calculates the amount of time it took to load a specific resource:

const performanceResourceTimingEntry = performance.getEntriesByType('resource')[0];

console.log(performanceResourceTimingEntry);
// PerformanceResourceTiming {
//  connectEnd: 138.11499997973442
//  connectStart: 138.11499997973442
//  decodedBodySize: 33808
//  domainLookupEnd: 138.11499997973442
//  domainLookupStart: 138.11499997973442
//  duration: 0
//  encodedBodySize: 33808
//  entryType: "resource"
//  fetchStart: 138.11499997973442
//  initiatorType: "link"
//  name: "https://static.foo.com/bar.png",
//  nextHopProtocol: "h2"
//  redirectEnd: 0
//  redirectStart: 0
//  requestStart: 138.11499997973442
//  responseEnd: 138.11499997973442
//  responseStart: 138.11499997973442
//  secureConnectionStart: 0
//  serverTiming: []
//  startTime: 138.11499997973442
//  transferSize: 0
//  workerStart: 0
// }

console.log(performanceResourceTimingEntry.responseEnd -
            performanceResourceTimingEntry.requestStart);
// 493.9600000507198

Using the difference between various times can give you a good idea about how a page is being loaded into the browser and where the potential bottlenecks are hiding.
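
For example, a few commonly cited metrics can be derived from a single PerformanceNavigationTiming entry. The following sketch computes the DNS lookup duration, the TCP connection duration, and the time to first byte:

const [navEntry] = performance.getEntriesByType('navigation');

// DNS lookup duration
console.log(navEntry.domainLookupEnd - navEntry.domainLookupStart);

// TCP connection duration
console.log(navEntry.connectEnd - navEntry.connectStart);

// Time to first byte, relative to the start of the navigation
console.log(navEntry.responseStart - navEntry.startTime);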

WEB COMPONENTS

The term "web components" refers to a handful of tools designed to enhance DOM behavior: shadow DOM, custom elements, and HTML templates. This collection of browser APIs is particularly messy:

  • There is no single "Web Components" specification: each web component is defined in a different specification.
  • Several web components, such as shadow DOM and custom elements, have undergone backwards-incompatible versioning.
  • Adoption across browser vendors is extremely inconsistent.

Because of these issues, adopting web components often demands a web component library, such as Polymer (https://www.polymer-project.org/), to polyfill and emulate missing web components in the browser.

HTML Templates

Before web components, there wasn't a particularly good way of writing HTML that would allow the browser to build a DOM subtree from parsed HTML but decline to render that subtree until instructed to do so. One workaround was to use innerHTML to convert a markup string into DOM elements, but this strategy has serious security implications. Another workaround was to construct each element using document.createElement() and progressively attach them to an orphaned root node (not attached to the DOM), but doing so is quite laborious and circumvents the use of markup entirely.

Instead, it would be far better to write special markup in the page that the browser automatically parses into a DOM subtree but skips rendering. This is the core idea of HTML templates, which use the <template> tag for precisely this purpose. A simple example of an HTML template is as follows:

<template id="foo">
 <p>I'm inside a template!</p>
</template>

Using a DocumentFragment

When this markup is rendered in a browser, the text inside the <template> does not appear on the page. Because the <template> content is not considered part of the active document, DOM matching methods such as document.querySelector() will be unable to find the <p> tag. This is because the content exists inside a new Node subclass added as part of HTML templates: the DocumentFragment.

The DocumentFragment inside a <template> is visible when inspecting inside the browser:

<template id="foo">
 #document-fragment
  <p>I'm inside a template!</p>
</template>

A reference to this DocumentFragment can be retrieved via the content property of the <template> element:

console.log(document.querySelector('#foo').content); // #document-fragment

The DocumentFragment behaves as a minimal document object for that subtree. For example, DOM matching methods on a DocumentFragment can find nodes in its subtree:

const fragment = document.querySelector('#foo').content;

console.log(document.querySelector('p')); // null 
console.log(fragment.querySelector('p')); // <p>…</p>

A DocumentFragment is also incredibly useful for adding HTML in bulk. Consider a scenario where one wishes to add multiple children to an HTML element as efficiently as possible. Using consecutive appendChild() calls directly on the element is painstaking and can potentially incur multiple reflows. Using a DocumentFragment allows you to batch these child additions, guaranteeing at most a single reflow:

// Start state:
// <div id="foo"></div>
//
// Desired end state:
// <div id="foo">
//  <p></p>
//  <p></p>
//  <p></p>
// </div>

// Also can use document.createDocumentFragment()
const fragment = new DocumentFragment();

const foo = document.querySelector('#foo');

// Adding children to a DocumentFragment incurs no reflow
fragment.appendChild(document.createElement('p'));
fragment.appendChild(document.createElement('p'));
fragment.appendChild(document.createElement('p'));

console.log(fragment.children.length); // 3

foo.appendChild(fragment);

console.log(fragment.children.length); // 0

console.log(document.body.innerHTML);
// <div id="foo">
//  <p></p>
//  <p></p>
//  <p></p>
// </div>

Using <template> tags

Notice in the previous example how the child nodes of the DocumentFragment are effectively transferred onto the foo element, leaving the DocumentFragment empty. This same procedure can be replicated using a <template>:

const fooElement = document.querySelector('#foo');
const barTemplate = document.querySelector('#bar');
const barFragment = barTemplate.content;

console.log(document.body.innerHTML);
// <div id="foo">
// </div>
// <template id="bar">
//  <p></p>
//  <p></p>
//  <p></p>
// </template>

fooElement.appendChild(barFragment);

console.log(document.body.innerHTML);
// <div id="foo">
//  <p></p>
//  <p></p>
//  <p></p>
// </div>
// <template id="bar"></template>

If you wish to instead copy the template, a simple importNode() can be used to clone the DocumentFragment:

const fooElement = document.querySelector('#foo');
const barTemplate = document.querySelector('#bar');
const barFragment = barTemplate.content;

console.log(document.body.innerHTML);
// <div id="foo">
// </div>
// <template id="bar">
//  <p></p>
//  <p></p>
//  <p></p>
// </template>

fooElement.appendChild(document.importNode(barFragment, true));

console.log(document.body.innerHTML);
// <div id="foo">
//  <p></p>
//  <p></p>
//  <p></p>
// </div>
// <template id="bar">
//  <p></p>
//  <p></p>
//  <p></p>
// </template>
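
Alternatively, assuming the same #foo and #bar markup, the fragment can be cloned directly with cloneNode(), whose Boolean argument requests a deep copy. This is the same pattern used with custom element templates later in this chapter:

const fooElement = document.querySelector('#foo');
const barFragment = document.querySelector('#bar').content;

// Deep-clone the fragment so the template content is preserved
fooElement.appendChild(barFragment.cloneNode(true));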

Template Scripts

If a <template> contains a <script> element, script execution is deferred until the DocumentFragment is added to the real DOM tree. This is demonstrated here:

// Page HTML:
//
// <div id="foo"></div>
// <template id="bar">
//  <script>console.log('Template script executed');</script>
// </template>

const fooElement = document.querySelector('#foo');
const barTemplate = document.querySelector('#bar');
const barFragment = barTemplate.content;

console.log('About to add template');
fooElement.appendChild(barFragment);
console.log('Added template');

// About to add template
// Template script executed
// Added template

This is useful in situations where addition of the new elements requires some initialization.

Shadow DOM

Conceptually, the shadow DOM web component is fairly straightforward: It allows you to attach an entirely separate DOM tree as a node on a parent DOM tree. This allows for DOM encapsulation, meaning that things like CSS styling and CSS selectors can be restricted to a shadow DOM subtree instead of the entire top-level DOM tree.

Shadow DOM is similar to HTML templates in that both are a document-like structure that enables a degree of separation from the top-level DOM. However, shadow DOM is distinct from HTML templates in that shadow DOM content is actually rendered on the page, whereas HTML template content is not.

Introduction to Shadow DOM

Consider a scenario where you have multiple similarly structured DOM subtrees:

<div>
 <p>Make me red!</p>
</div>
<div>
 <p>Make me blue!</p>
</div>
<div>
 <p>Make me green!</p>
</div>

As you've likely surmised from the text nodes, each of these three DOM subtrees should be assigned different colors. Normally, in order to apply a style uniquely to each of these without resorting to the style attribute, you'd likely apply a unique class name to each subtree and define the styling inside a corresponding selector:

<div class="red-text">
 <p>Make me red!</p>
</div>
<div class="green-text">
 <p>Make me green!</p>
</div>
<div class="blue-text">
 <p>Make me blue!</p>
</div>

<style>
.red-text {
 color: red;
}
.green-text {
 color: green;
}
.blue-text {
 color: blue;
}
</style>

Of course, this is a less than ideal solution. This isn't much different than defining variables in a global namespace; this CSS will be applied to the entire DOM even though you know with certainty that these style definitions are not needed anywhere else. You could keep adding CSS selector specificity to prevent these styles from bleeding elsewhere, but this is little more than a half-measure. Ideally, you'd prefer to restrict CSS to only a portion of the DOM: therein lies the raw utility of the shadow DOM.

Creating a Shadow DOM

For reasons involving security or preventing shadow DOM collisions, not all element types can have a shadow DOM attached to them. Attempting to attach a shadow DOM to an invalid element type, or an element with a shadow DOM already attached, will throw an error.

The following is a list of elements capable of hosting a shadow DOM:

  • Any autonomous custom element with a valid custom element name
  • <article>
  • <aside>
  • <blockquote>
  • <body>
  • <div>
  • <footer>
  • <h1> through <h6>
  • <header>
  • <main>
  • <nav>
  • <p>
  • <section>
  • <span>

A shadow DOM is created by attaching it to a valid HTML element using the attachShadow() method. The element to which a shadow DOM is attached is referred to as the shadow host. The root node of the shadow DOM is referred to as the shadow root.

The attachShadow() method expects a required shadowRootInit object and returns the instance of the shadow DOM. The shadowRootInit object must contain a single mode property specifying either "open" or "closed". A reference to an open shadow DOM can be obtained on an HTML element via the shadowRoot property; this is not possible with a closed shadow DOM.

The mode difference is demonstrated here:

document.body.innerHTML = `
 <div id="foo"></div>
 <div id="bar"></div>
`;

const foo = document.querySelector('#foo'); 
const bar = document.querySelector('#bar');

const openShadowDOM = foo.attachShadow({ mode: 'open' }); 
const closedShadowDOM = bar.attachShadow({ mode: 'closed' });

console.log(openShadowDOM);     // #shadow-root (open) 
console.log(closedShadowDOM);   // #shadow-root (closed)

console.log(foo.shadowRoot);    // #shadow-root (open)
console.log(bar.shadowRoot);    // null

In general, there is rarely a situation in which creating a closed shadow DOM is necessary. Although it confers the ability to restrict programmatic access to a shadow DOM from the shadow host, there are plenty of ways for malicious code to circumvent this and regain access to the shadow DOM. In short, a closed shadow DOM should not be relied upon for security purposes.

Using a Shadow DOM

Once attached to an element, a shadow DOM can be used as a normal DOM. Consider the following example, which recreates the red/green/blue example shown previously:

for (let color of ['red', 'green', 'blue']) {
 const div = document.createElement('div');
 const shadowDOM = div.attachShadow({ mode: 'open' });
 
 document.body.appendChild(div);
 
 shadowDOM.innerHTML = `
  <p>Make me ${color}</p>
  
  <style>
  p {
   color: ${color};
  }
  </style>
 `;
}

Although there are three identical selectors applying three different colors, the selectors will only be applied to the shadow DOM in which they are defined. As such, the three <p> elements will appear in three different colors.

You can verify that these elements exist inside their own shadow DOM as follows:

for (let color of ['red', 'green', 'blue']) {
 const div = document.createElement('div');
 const shadowDOM = div.attachShadow({ mode: 'open' });
 
 document.body.appendChild(div);
 
 shadowDOM.innerHTML = `
  <p>Make me ${color}</p>
  
  <style>
  p {
   color: ${color};
  }
  </style>
 `;
}

function countP(node) {
 console.log(node.querySelectorAll('p').length);
}

countP(document); // 0

for (let element of document.querySelectorAll('div')) {
 countP(element.shadowRoot);
}

// 1
// 1
// 1

Browser inspector tools will make it clear where a shadow DOM exists. For example, the preceding example will appear as the following in the browser inspector:

<body>
<div>
 #shadow-root (open)
  <p>Make me red!</p>

  <style>
  p {
    color: red;
  }
  </style>
</div>
<div>
 #shadow-root (open)
  <p>Make me green!</p>

  <style>
  p {
    color: green;
  }
  </style>
</div>
<div>
 #shadow-root (open)
  <p>Make me blue!</p>

  <style>
  p {
    color: blue;
  }
  </style>
</div>
</body>

Shadow DOMs are not an impermeable boundary. An HTML element can be moved between DOM trees without restriction:

document.body.innerHTML = `
<div></div>
<p id="foo">Move me</p>
`;

const divElement = document.querySelector('div');
const pElement = document.querySelector('p');

const shadowDOM = divElement.attachShadow({ mode: 'open' });

// Remove element from parent DOM
divElement.parentElement.removeChild(pElement);

// Add element to shadow DOM
shadowDOM.appendChild(pElement);

// Check to see that element was moved
console.log(shadowDOM.innerHTML); // <p id="foo">Move me</p>

Composition and Shadow DOM Slots

The shadow DOM is designed to enable customizable components, and this requires the ability to handle nested DOM fragments. Conceptually, this is relatively straightforward: HTML inside a shadow host element needs a way to render inside the shadow DOM without actually being part of the shadow DOM tree.

By default, nested content will be hidden. Consider the following example where the text becomes hidden after 1000 milliseconds:

document.body.innerHTML = `
<div>
 <p>Foo</p>
</div>
`;

setTimeout(() => document.querySelector('div').attachShadow({ mode: 'open' }), 1000);

Once the shadow DOM is attached, the browser gives priority to the shadow DOM and will render its contents instead of the text. In this example, the shadow DOM is empty, so the <div> will appear empty in turn.

To show this content, you can use a <slot> tag to instruct where the browser should place the HTML. In the code that follows, the previous example is reworked so the text reappears inside the shadow DOM:

document.body.innerHTML = `
<div id="foo">
 <p>Foo</p>
</div>
`;

document.querySelector('div')
  .attachShadow({ mode: 'open' })
  .innerHTML = `<div id="bar">
                  <slot></slot>
                </div>`

Now, the projected content will behave as if it exists inside the shadow DOM. Inspecting the page reveals that the content appears to actually replace the <slot>:

<body>
<div id="foo">
 #shadow-root (open)
  <div id="bar">
   <p>Foo</p>
  </div>
</div>
</body>

Note that, despite its appearance in the page inspector, this is only a projection of DOM content. The element remains attached to the outer DOM:

document.body.innerHTML = `
<div id="foo">
 <p>Foo</p>
</div>
`;

document.querySelector('div')
  .attachShadow({ mode: 'open' })
  .innerHTML = `
   <div id="bar">
    <slot></slot>
   </div>`

console.log(document.querySelector('p').parentElement);
// <div id="foo"></div>

The red-green-blue example from before can be reworked to use slots as follows:

for (let color of ['red', 'green', 'blue']) {
 const divElement = document.createElement('div');
 divElement.innerText = `Make me ${color}`;
 document.body.appendChild(divElement)
 
 divElement
   .attachShadow({ mode: 'open' })
   .innerHTML = `
   <p><slot></slot></p>
   
   <style>
    p {
      color: ${color};
    }
   </style>
   `;
}

It's also possible to use named slots to perform multiple projections. This is accomplished with matching slot/name attribute pairs. The element identified with a slot="foo" attribute will be projected into the <slot> with name="foo". The following example demonstrates this by switching the rendered order of the shadow host element's children:

document.body.innerHTML = `
<div>
 <p slot="foo">Foo</p>
 <p slot="bar">Bar</p>
</div>
`;

document.querySelector('div')
  .attachShadow({ mode: 'open' })
  .innerHTML = `
  <slot name="bar"></slot>
  <slot name="foo"></slot>
  `;

// Renders:
// Bar
// Foo

Event Retargeting

If a browser event like click occurs inside a shadow DOM, the browser needs a way for the parent DOM to handle the event. However, the implementation must also respect the shadow DOM boundary. To address this, events that escape a shadow DOM and are handled outside undergo event retargeting. Once escaped, this event will appear to have been thrown by the shadow host itself instead of the true encapsulated element. This behavior is demonstrated here:

// Create element to be shadow host
document.body.innerHTML = `
<div onclick="console.log('Handled outside:', event.target)"></div>
`;

// Attach shadow DOM and insert HTML into it
document.querySelector('div')
 .attachShadow({ mode: 'open' })
 .innerHTML = `
<button onclick="console.log('Handled inside:', event.target)">Foo</button>
`;

// When clicking the button:
// Handled inside: <button onclick="…"></button>
// Handled outside: <div onclick="…"></div>

Note that retargeting only occurs for elements that actually exist inside a shadow DOM. Elements projected from outside using a <slot> tag will not have their events retargeted, as they still technically exist outside the shadow DOM.
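
The following sketch demonstrates this: a button projected through a <slot> reports itself as the event target even when the click is handled outside the shadow DOM:

// Create shadow host with a child to be projected
document.body.innerHTML = `
<div onclick="console.log('Handled outside:', event.target)">
 <button onclick="console.log('Handled inside:', event.target)">Foo</button>
</div>
`;

// Attach a shadow DOM whose content projects the button through a slot
document.querySelector('div')
  .attachShadow({ mode: 'open' })
  .innerHTML = `<slot></slot>`;

// When clicking the button:
// Handled inside: <button onclick="…">Foo</button>
// Handled outside: <button onclick="…">Foo</button>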

Custom Elements

If you've used a JavaScript framework, you're likely familiar with the concept of custom elements, as all major frameworks provide this feature in some form. Custom elements introduce an object-oriented programming flavor to HTML elements. With them, it is possible to create custom, complex, and reusable elements and create instances with a simple HTML tag or attribute.

Defining a Custom Element

Browsers already will attempt to incorporate unrecognized elements into the DOM as generic elements. Of course, by default they won't do anything special that a generic HTML element doesn't already do. Consider the following example, where a nonsense HTML tag becomes an instance of an HTMLElement:

document.body.innerHTML = `
<x-foo>I'm inside a nonsense element.</x-foo>
`;

console.log(document.querySelector('x-foo') instanceof HTMLElement); // true

Custom elements take this further. They allow you to define complex behavior whenever an <x-foo> tag appears, and also to tap into the element's lifecycle with respect to the DOM. Custom element definition is accomplished using the global customElements property, which returns the CustomElementRegistry object.

console.log(customElements); // CustomElementRegistry {}

Defining a custom element is accomplished with the define() method. The following creates a trivial custom element, which inherits from a vanilla HTMLElement:

class FooElement extends HTMLElement {}
customElements.define('x-foo', FooElement);

document.body.innerHTML = `
<x-foo>I'm inside a nonsense element.</x-foo>
`;

console.log(document.querySelector('x-foo') instanceof FooElement); // true

The power of custom elements is housed in the class definition. For example, now each instance of this class in the DOM will call the constructor that you control:

class FooElement extends HTMLElement {
 constructor() {
  super();
  console.log('x-foo')
 }
}
customElements.define('x-foo', FooElement);

document.body.innerHTML = `
<x-foo></x-foo>
<x-foo></x-foo>
<x-foo></x-foo>
`;

// x-foo
// x-foo
// x-foo

If a custom element inherits from an element class, the tag can be specified as an instance of that custom element using the is attribute and the extends option:

class FooElement extends HTMLDivElement {
 constructor() {
  super();
  console.log('x-foo')
 }
}
customElements.define('x-foo', FooElement, { extends: 'div' });

document.body.innerHTML = `
<div is="x-foo"></div>
<div is="x-foo"></div>
<div is="x-foo"></div>
`;

// x-foo
// x-foo
// x-foo

Adding Web Component Content

Because the custom element class constructor is called each time the element is added to the DOM, it is easy to automatically populate a custom element with child DOM content. Although you are forbidden from adding DOM children inside the constructor (a DOMException will be thrown), you can attach a shadow DOM and place content inside:

class FooElement extends HTMLElement {
 constructor() {
  super();
  
  // 'this' refers to the web component node
  this.attachShadow({ mode: 'open' });
  
  this.shadowRoot.innerHTML = `
   <p>I'm inside a custom element!</p>
  `;
 }
}
customElements.define('x-foo', FooElement);

document.body.innerHTML += `<x-foo></x-foo>`;

// Resulting DOM:
// <body>
// <x-foo>
//    #shadow-root (open)
//      <p>I'm inside a custom element!</p>
// </x-foo>
// </body>

To avoid the nastiness of string templates and innerHTML, this example can be refactored to use HTML templates and document.createElement():

// (Initial HTML)
// <template id="x-foo-tpl">
//  <p>I'm inside a custom element template!</p>
// </template>

const template = document.querySelector('#x-foo-tpl');

class FooElement extends HTMLElement {
 constructor() {
  super();
  
  this.attachShadow({ mode: 'open' });
  
  this.shadowRoot.appendChild(template.content.cloneNode(true));
 }
}
customElements.define('x-foo', FooElement);

document.body.innerHTML += `<x-foo></x-foo>`;

// Resulting DOM:
// <body>
// <template id="x-foo-tpl">
//  <p>I'm inside a custom element template!</p>
// </template>
// <x-foo>
//    #shadow-root (open)
//       <p>I'm inside a custom element template!</p>
// </x-foo>
// </body>

This practice allows for a high degree of HTML and code reuse as well as DOM encapsulation inside your custom element. With it, you are free to build reusable widgets without fear of outside CSS trampling your styling.

Using Custom Element Lifecycle Hooks

It is possible to execute code at various points in the lifecycle of a custom element. Instance methods on the custom element class with the corresponding name will be called during that lifecycle phase. There are five available hooks:

  • constructor() is called when an element instance is created, or an existing DOM element is upgraded to a custom element.
  • connectedCallback() is called each time this instance of the custom element is added into the DOM.
  • disconnectedCallback() is called each time this instance of the custom element is removed from the DOM.
  • attributeChangedCallback() is called each time the value of an observed attribute is changed. When the element instance is instantiated, definition of the initial value counts as a change.
  • adoptedCallback() is called each time this instance is moved to a new document object with document.adoptNode().

The following example demonstrates the construction, connected, and disconnected callbacks:

class FooElement extends HTMLElement {
 constructor() {
  super();
  console.log('ctor');
 }
 
 connectedCallback() {
  console.log('connected');
 }
 
 disconnectedCallback() {
  console.log('disconnected');
 }
}
customElements.define('x-foo', FooElement);

const fooElement = document.createElement('x-foo');
// ctor

document.body.appendChild(fooElement);
// connected

document.body.removeChild(fooElement);
// disconnected
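
Of the remaining hooks, attributeChangedCallback() is demonstrated in the next section, and adoptedCallback() only fires when a node moves between documents. The following is a minimal sketch of the adopted hook, assuming a same-origin <iframe> is already present in the page:

class FooElement extends HTMLElement {
 adoptedCallback() {
  console.log('adopted');
 }
}
customElements.define('x-foo', FooElement);

const fooElement = document.createElement('x-foo');
document.body.appendChild(fooElement);

// Move the node into the iframe's document with adoptNode()
const frameDocument = document.querySelector('iframe').contentDocument;
frameDocument.body.appendChild(frameDocument.adoptNode(fooElement));
// adopted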

Reflecting Custom Element Attributes

Because an element exists as both a DOM entity and a JavaScript object, a common pattern is to reflect changes between the two. In other words, a change to the DOM should reflect a change in the object, and vice versa. To reflect from the object to the DOM, a common strategy is to use getters and setters. The following example reflects the bar property of the object up to the DOM:

document.body.innerHTML = `<x-foo></x-foo>`;

class FooElement extends HTMLElement {
 constructor() {
  super();
  
  this.bar = true;
 }
 
 get bar() {
  return this.getAttribute('bar');
 }
 
 set bar(value) {
  this.setAttribute('bar', value)
 }
}
customElements.define('x-foo', FooElement);

console.log(document.body.innerHTML);
// <x-foo bar="true"></x-foo>

Reflecting in the reverse direction, from the DOM to the object, requires setting a listener for that attribute. To accomplish this, you instruct the custom element to call attributeChangedCallback() each time the attribute's value changes by defining a static observedAttributes getter that returns the list of attribute names to watch:

class FooElement extends HTMLElement {
 static get observedAttributes() {
  // List attributes which should trigger attributeChangedCallback()
  return ['bar'];
 }
 
 get bar() {
  return this.getAttribute('bar');
 }
 
 set bar(value) {
  this.setAttribute('bar', value)
 }
 
 attributeChangedCallback(name, oldValue, newValue) {
  if (oldValue !== newValue) {
   console.log(`${oldValue} -> ${newValue}`);

   this[name] = newValue;
  }
 }
}
customElements.define('x-foo', FooElement);

document.body.innerHTML = `<x-foo bar="false"></x-foo>`;
// null -> false

document.querySelector('x-foo').setAttribute('bar', true);
// false -> true

Upgrading Custom Elements

It's not always possible to define a custom element before the custom element tag appears in the DOM. Web components address this ordering problem by exposing several additional methods on the CustomElementRegistry that allow you to detect when a custom element is eventually defined and upgrade existing elements.

The CustomElementRegistry.get() method returns the custom element class if it was already defined. In a similar vein, the CustomElementRegistry.whenDefined() method returns a promise that resolves when a custom element becomes defined:

customElements.whenDefined('x-foo').then(() => console.log('defined!'));

console.log(customElements.get('x-foo'));
// undefined

customElements.define('x-foo', class {});
// defined!

console.log(customElements.get('x-foo'));
// class {}

Elements connected to the DOM will be automatically upgraded when the custom element is defined. If you wish to forcibly upgrade an element before it is connected to the DOM, this can be accomplished with CustomElementRegistry.upgrade():

// Create HTMLUnknownElement object before custom element definition
const fooElement = document.createElement('x-foo');

// Define custom element
class FooElement extends HTMLElement {}
customElements.define('x-foo', FooElement);

console.log(fooElement instanceof FooElement); // false

// Force the upgrade
customElements.upgrade(fooElement);

console.log(fooElement instanceof FooElement); // true

THE WEB CRYPTOGRAPHY API

The Web Cryptography API (https://www.w3.org/TR/WebCryptoAPI) describes a suite of cryptography tools that standardize how JavaScript can wield cryptographic behavior in a secure and idiomatic fashion. These tools include the ability to generate, use, and apply cryptographic key pairs; encrypt and decrypt messages; and robustly generate random numbers.

Random Number Generation

When tasked with generating random values, most developers reach for Math.random(). This method is implemented in browsers as a pseudorandom number generator (PRNG). The "pseudo" designation stems from the nature of the value generation in that it is not truly random. Values emitted from a PRNG only emulate properties that are associated with randomness. This appearance of randomness is made possible through some clever engineering. The browser's PRNG doesn't utilize any true sources of randomness—it is purely a fixed algorithm applied to a hermetic internal state. Each time Math.random() is called, the internal state is mutated by an algorithm and the result is converted into a new random value. For example, the V8 engine uses an algorithm called xorshift128+ to perform this mutation.

Because this algorithm is fixed and its input is only the previous state, the order of random numbers is deterministic. xorshift128+ uses 128 bits of internal state, and the algorithm is designed such that any initial state will produce a sequence of 2^128 – 1 pseudorandom values before repeating itself. This looping behavior is called a permutation cycle, and the length of this cycle is referred to as the period. The implications of this are clear: If an attacker knows the internal state of the PRNG, they are able to predict the pseudorandom values that it will subsequently emit. If an unwitting developer were to use the PRNG to generate a private key for the purposes of encryption, the attacker could use the properties of the PRNG to determine the private key.
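
To illustrate this determinism, the following is a simplified sketch of the xorshift128+ recurrence written with BigInt for 64-bit arithmetic. The shift constants follow the published xorshift128+ description and are shown for illustration only; V8's actual implementation details may differ. Seeding two generators identically produces identical output forever:

const MASK64 = (1n << 64n) - 1n;

function makeXorshift128Plus(seed0, seed1) {
 // 128 bits of internal state, held as two 64-bit halves
 let s0 = BigInt(seed0) & MASK64;
 let s1 = BigInt(seed1) & MASK64;

 return function next() {
  let x = s0;
  const y = s1;
  const result = (x + y) & MASK64;

  s0 = y;
  x = (x ^ (x << 23n)) & MASK64;
  s1 = (x ^ y ^ (x >> 18n) ^ (y >> 5n)) & MASK64;

  return result;
 };
}

// Identical seeds yield an identical, fully predictable sequence
const prngA = makeXorshift128Plus(1, 2);
const prngB = makeXorshift128Plus(1, 2);

console.log(prngA() === prngB()); // true
console.log(prngA() === prngB()); // true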

Pseudorandom number generators are designed to be able to quickly calculate values that seem to be random. They are, however, unsuitable for the purposes of cryptographic computation. To address this, a cryptographically secure pseudorandom number generator (CSPRNG) additionally incorporates a source of entropy as its input, such as measuring hardware timings or other system properties that exhibit unpredictable behavior. Doing so is much slower than a regular PRNG, but values emitted by a CSPRNG are sufficiently unpredictable for cryptographic purposes.

The Web Cryptography API introduces a CSPRNG that can be accessed on the global Crypto object via crypto.getRandomValues(). Unlike Math.random(), which returns a floating point number between 0 and 1, getRandomValues() writes random numbers into the typed array provided as a parameter. The typed array class is not important, as the underlying buffer is being filled with random binary bits.

The following generates five 8-bit random values:

const array = new Uint8Array(1);

for (let i=0; i<5; ++i) {
 console.log(crypto.getRandomValues(array));
}

// Uint8Array [41]
// Uint8Array [250]
// Uint8Array [51]
// Uint8Array [129]
// Uint8Array [35]

getRandomValues() will generate up to 2^16 bytes; above that it will throw an error:

const fooArray = new Uint8Array(2 ** 16);
console.log(window.crypto.getRandomValues(fooArray)); // Uint8Array(65536) […]

const barArray = new Uint8Array((2 ** 16) + 1);
console.log(window.crypto.getRandomValues(barArray)); // Error

Reimplementing Math.random() using the CSPRNG could be accomplished by generating a single random 32-bit number and dividing it by the maximum possible 32-bit value, 0xFFFFFFFF. This yields a value between 0 and 1:

function randomFloat() {
 // Generate 32 random bits
 const fooArray = new Uint32Array(1);

 // Maximum value is 2^32 - 1
 const maxUint32 = 0xFFFFFFFF;
 
 // Divide by maximum possible value
 return crypto.getRandomValues(fooArray)[0] / maxUint32;
}

console.log(randomFloat()); // 0.5033651619458955

Using the SubtleCrypto Object

The overwhelming majority of the Web Cryptography API resides inside the SubtleCrypto object, accessible via window.crypto.subtle:

console.log(crypto.subtle); // SubtleCrypto {}

This object contains a collection of methods for performing common cryptographic functions such as encryption, hashing, signing, and key generation. Because all cryptographic operations are performed on raw binary data, every SubtleCrypto method deals in ArrayBuffer and ArrayBufferView types. Because strings are so frequently the subject of cryptographic operations, the TextEncoder and TextDecoder classes will often be used alongside SubtleCrypto to convert to and from strings.
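
As a quick reference, converting in each direction is a one-liner:

// string -> Uint8Array of UTF-8 bytes
const encoded = (new TextEncoder()).encode('foo');
console.log(encoded); // Uint8Array(3) [102, 111, 111]

// ArrayBuffer or typed array -> string
const decoded = (new TextDecoder()).decode(encoded);
console.log(decoded); // foo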

Generating Cryptographic Digests

One extremely common cryptography operation is to calculate a cryptographic digest of data. The specification supports four algorithms for this, SHA-1 and three flavors of SHA-2:

  • Secure Hash Algorithm 1 (SHA-1)—A hash function with an architecture similar to MD5. It takes an input of any size and generates a 160-bit message digest. This algorithm is no longer considered secure as it is vulnerable to collision attacks.
  • Secure Hash Algorithm 2 (SHA-2)—A family of hash functions all built upon the same collision-resistant one-way compression function. The specification supports three members of this family: SHA-256, SHA-384, and SHA-512. The size of the generated message digest can be 256 bits (SHA-256), 384 bits (SHA-384), or 512 bits (SHA-512). This algorithm is considered secure and widely used in many applications and protocols, including TLS, PGP, and cryptocurrencies like Bitcoin.

The SubtleCrypto.digest() method is used to generate a message digest. The hash algorithm is specified using a string: SHA-1, SHA-256, SHA-384, or SHA-512. The following example demonstrates a simple application of SHA-256 to generate a message digest of the string foo:

(async function() {
 const textEncoder = new TextEncoder();
 const message = textEncoder.encode('foo');
 const messageDigest = await crypto.subtle.digest('SHA-256', message);

 console.log(new Uint32Array(messageDigest));
})();
 
// Uint32Array(8) [1806968364, 2412183400, 1011194873, 876687389, 
//                 1882014227, 2696905572, 2287897337, 2934400610]

Commonly, the message digest binary will be used in a hex string format. Converting an array buffer to this format is accomplished by splitting the buffer up into 8-bit pieces and converting using toString() in base 16:

(async function() {
 const textEncoder = new TextEncoder();
 const message = textEncoder.encode('foo');
 const messageDigest = await crypto.subtle.digest('SHA-256', message);

 const hexDigest = Array.from(new Uint8Array(messageDigest))
  .map((x) => x.toString(16).padStart(2, '0'))
  .join('');

 console.log(hexDigest);
})();

// 2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae

Software companies usually publish a digest of their install binaries so that people who wish to safely install their software can verify that the binary they download is the version that the company actually published (and not one with injected malware). The following example downloads v67.0 of Firefox, hashes it with SHA-512, downloads Mozilla's published SHA-512 digest file, and checks that the computed digest matches the published one:

(async function() {
 const mozillaCdnUrl = '//download-origin.cdn.mozilla.net/pub/firefox/releases/67.0/';
 const firefoxBinaryFilename = 'linux-x86_64/en-US/firefox-67.0.tar.bz2';
 const firefoxShaFilename = 'SHA512SUMS';
 
 console.log('Fetching Firefox binary…');
 const fileArrayBuffer = await (await fetch(mozillaCdnUrl + firefoxBinaryFilename))
   .arrayBuffer();
 
 console.log('Calculating Firefox digest…');
 const firefoxBinaryDigest = await crypto.subtle.digest('SHA-512', fileArrayBuffer);
 
 const firefoxHexDigest = Array.from(new Uint8Array(firefoxBinaryDigest))
  .map((x) => x.toString(16).padStart(2, '0'))
  .join('');
 
 console.log('Fetching published binary digests…');
 // The SHA file contains digests of every firefox binary in this release,
 // so there is some organization performed.
 const shaPairs = (await (await fetch(mozillaCdnUrl + firefoxShaFilename)).text())
   .split(/\n/).map((x) => x.split(/\s+/));
 
 let verified = false;
 
 console.log('Checking calculated digest against published digests…');
 for (const [sha, filename] of shaPairs) {
  if (filename === firefoxBinaryFilename) {
   if (sha === firefoxHexDigest) {
    verified = true;
    break;
   }
  }
 }
 
 console.log('Verified:', verified);
})();

// Fetching Firefox binary…
// Calculating Firefox digest…
// Fetching published binary digests…
// Checking calculated digest against published digests…
// Verified: true

CryptoKeys and Algorithms

Cryptography would be meaningless without secret keys, and the SubtleCrypto object uses instances of the CryptoKey class to house these secrets. The CryptoKey class supports multiple types of encryption algorithms and allows for control over key extraction and usage.

The CryptoKey class supports the following algorithms, categorized by their parent cryptosystem:

  • RSA (Rivest-Shamir-Adleman)—A public-key cryptosystem in which two large prime numbers are used to derive a pair of public and private keys that can be used to sign/verify or encrypt/decrypt messages. The trapdoor function for RSA is called the factoring problem.
  • RSASSA-PKCS1-v1_5—An application of RSA used to sign messages with the private key and allow that signature to be verified with the public key.
    • SSA stands for signature schemes with appendix, indicating the algorithm supports signature generation and verification operations.
    • PKCS1 stands for Public-Key Cryptography Standards #1, indicating the algorithm exhibits mathematical properties that RSA keys must have.
    • RSASSA-PKCS1-v1_5 is deterministic, meaning the same message and key will produce an identical signature each time it is performed.
  • RSA-PSS—Another application of RSA used to sign and verify messages.
    • PSS stands for probabilistic signature scheme, indicating that the signature generation incorporates a salt to randomize the signature.
    • Unlike RSASSA-PKCS1-v1_5, the same message and key will produce a different signature each time it is performed.
    • Unlike RSASSA-PKCS1-v1_5, RSA-PSS is provably reducible to the hardness of the RSA factoring problem.
    • In general, although RSASSA-PKCS1-v1_5 is still considered secure, RSA-PSS should be used as a replacement for RSASSA-PKCS1-v1_5.
  • RSA-OAEP—An application of RSA used to encrypt messages with the public key and decrypt them with the private key.
    • OAEP stands for Optimal Asymmetric Encryption Padding, indicating that the algorithm utilizes a Feistel network to process the unencrypted message prior to encryption.
    • OAEP serves to convert the deterministic RSA encryption scheme to a probabilistic encryption scheme.
  • ECC (Elliptic-Curve Cryptography)—A public-key cryptosystem in which a prime number and an elliptic curve are used to derive a pair of public and private keys that can be used to sign/verify messages. The trapdoor function for ECC is called the elliptic curve discrete logarithm problem. ECC is considered to be superior to RSA: Although both RSA and ECC are cryptographically strong, ECC keys are shorter than RSA keys and ECC cryptographic operations are faster than RSA operations.
  • ECDSA (Elliptic Curve Digital Signature Algorithm)—An application of ECC used to sign and verify messages. This algorithm is an elliptic curve–flavored variant of the Digital Signature Algorithm (DSA).
  • ECDH (Elliptic Curve Diffie-Hellman)—A key-generation and key-agreement application of ECC that allows for two parties to establish a shared secret over a public communication channel. This algorithm is an elliptic curve-flavored variant of the Diffie-Hellman key exchange (DH) protocol.
  • AES (Advanced Encryption Standard)—A symmetric-key cryptosystem that encrypts/decrypts data using a block cipher derived from a substitution-permutation network. AES is used in different modes, which change the algorithm's characteristics.
  • AES-CTR—The counter mode of AES. This mode behaves as a stream cipher by using an incremented counter to generate its keystream. It must also be provided a nonce, which is effectively used as an initialization vector. AES-CTR encryption/decryption is able to be parallelized.
  • AES-CBC—The cipher block chaining mode of AES. Before encrypting each block of plaintext, it is XORed with the previous block of ciphertext—hence the "chaining" name. An initialization vector is used as the XOR input for the first block.
  • AES-GCM—The Galois/Counter mode of AES. This mode uses a counter and initialization vector to generate a value, which is XORed with the plaintext of each block. Unlike CBC, the XOR inputs do not have dependencies on the previous block's encryption and therefore the GCM mode can be parallelized. Because of its excellent performance characteristics, AES-GCM enjoys utilization in many networking security protocols.
  • AES-KW—The key wrapping mode of AES. This algorithm wraps a secret key into a portable and encrypted format that is safe for transmission on an untrusted channel. Once transmitted, the receiving party can then unwrap the key. Unlike other AES modes, AES-KW does not require an initialization vector.
  • HMAC (Hash-Based Message Authentication Code)—An algorithm that generates message authentication codes used to verify that a message arrives unaltered when sent over an untrusted network. Two parties use a hash function and a shared private key to sign and verify messages.
  • KDF (Key Derivation Functions)—Algorithms that can derive one or many keys from a master key using a hash function. KDFs are capable of generating keys of a different length or converting keys to different formats.
  • HKDF (HMAC-Based Key Derivation Function)—A key derivation function designed to be used with a high-entropy input such as an existing key.
  • PBKDF2 (Password-Based Key Derivation Function 2)—A key derivation function designed to be used with a low-entropy input such as a password string.

Generating CryptoKeys

Generating a random CryptoKey is accomplished with the SubtleCrypto.generateKey() method, which returns a promise that resolves to one or many CryptoKey instances. This method is passed a params object specifying the target algorithm, a boolean indicating whether or not the key should be extractable from the CryptoKey object, and an array of strings—keyUsages—indicating which SubtleCrypto methods the key can be used with.

Because different cryptosystems require different inputs to generate keys, the params object provides the required inputs for each cryptosystem:

  • The RSA cryptosystem uses a RsaHashedKeyGenParams object.
  • The ECC cryptosystem uses an EcKeyGenParams object.
  • The HMAC cryptosystem uses an HmacKeyGenParams object.
  • The AES cryptosystem uses an AesKeyGenParams object.

The keyUsages array describes which SubtleCrypto methods the key can be used with. It must include at least one of the following strings:

  • encrypt
  • decrypt
  • sign
  • verify
  • deriveKey
  • deriveBits
  • wrapKey
  • unwrapKey
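
Putting the params object and keyUsages together, the following minimal sketch (not taken from the specification's examples) generates a non-extractable HMAC signing key using an HmacKeyGenParams object, which names the digest the MAC is built on:

(async function() {
 const params = {
  name: 'HMAC',
  hash: 'SHA-256'
 };

 const keyUsages = ['sign', 'verify'];

 const key = await crypto.subtle.generateKey(params, false, keyUsages);

 console.log(key);
 // CryptoKey {type: "secret", extractable: false, algorithm: {…}, usages: Array(2)}
})();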

Suppose you want to generate a symmetric key with the following properties:

  • Supports the AES-CTR algorithm
  • Key length of 128 bits
  • Cannot be extracted from the CryptoKey object
  • Is able to be used with the encrypt() and decrypt() methods

Generating this key can be accomplished in the following fashion:

(async function() {
 const params = {
  name: 'AES-CTR', 
  length: 128
 };

 const keyUsages = ['encrypt', 'decrypt'];

 const key = await crypto.subtle.generateKey(params, false, keyUsages);

 console.log(key);
 // CryptoKey {type: "secret", extractable: true, algorithm: {…}, usages: Array(2)}
})();

Suppose you want to generate an asymmetric key pair with the following properties:

  • Supports the ECDSA algorithm
  • Uses the P-256 elliptic curve
  • Can be extracted from the CryptoKey object
  • Is able to be used with the sign() and verify() methods

Generating this key pair can be accomplished in the following fashion:

(async function() {
 const params = {
  name: 'ECDSA', 
  namedCurve: 'P-256'
 };

 const keyUsages = ['sign', 'verify'];

 const {publicKey, privateKey} = await crypto.subtle.generateKey(params, true, 
   keyUsages);

 console.log(publicKey);
 // CryptoKey {type: "public", extractable: true, algorithm: {…}, usages: Array(1)}

 console.log(privateKey);
 // CryptoKey {type: "private", extractable: true, algorithm: {…}, usages: Array(1)}
})();

Exporting and Importing Keys

If a key is extractable, it is possible to expose the raw key binary from inside the CryptoKey object. The exportKey() method allows you to do so while also specifying the target format (raw, pkcs8, spki, or jwk). The method returns a promise that resolves to an ArrayBuffer containing the key:

(async function() {
 const params = {
  name: 'AES-CTR', 
  length: 128
 };
 const keyUsages = ['encrypt', 'decrypt'];

 const key = await crypto.subtle.generateKey(params, true, keyUsages);

 const rawKey = await crypto.subtle.exportKey('raw', key);

 console.log(new Uint8Array(rawKey)); 
 // Uint8Array(16) [93, 122, 66, 135, 144, 182, 119, 196, 234, 73, 84, 7, 139, 43, 238, 110]
})();

The inverse operation of exportKey() is importKey(). This method's signature is essentially a combination of generateKey() and exportKey(). The following method generates a key, exports it, and imports it once again:

(async function() {
 const params = {
  name: 'AES-CTR', 
  length: 128
 };
 const keyUsages = ['encrypt', 'decrypt'];
 const keyFormat = 'raw';
 const isExtractable = true;

 const key = await crypto.subtle.generateKey(params, isExtractable, keyUsages);

 const rawKey = await crypto.subtle.exportKey(keyFormat, key);

 const importedKey = await crypto.subtle.importKey(keyFormat, rawKey, params.name, 
   isExtractable, keyUsages);

 console.log(importedKey);
 // CryptoKey {type: "secret", extractable: true, algorithm: {…}, usages: Array(2)}
})();

Deriving Keys from Master Keys

The SubtleCrypto object allows you to derive new keys with configurable properties from an existing secret. It supports a deriveKey() method that returns a promise resolving to a CryptoKey, and a deriveBits() method that returns a promise resolving to an ArrayBuffer.

The deriveBits() function accepts an algorithm params object, the master key, and the length in bits of the output. This can be used in situations where two people, each with their own key pairs, wish to obtain a shared secret key. The following example uses the ECDH algorithm to derive bits in both directions from two key pairs and verifies that both derivations yield the same key bits:

(async function() {
 const ellipticCurve = 'P-256';
 const algoIdentifier = 'ECDH';
 const derivedKeySize = 128;

 const params = {
  name: algoIdentifier, 
  namedCurve: ellipticCurve
 };

 const keyUsages = ['deriveBits'];

 const keyPairA = await crypto.subtle.generateKey(params, true, keyUsages);
 const keyPairB = await crypto.subtle.generateKey(params, true, keyUsages);

 // Derive key bits from A's public key and B's private key
 const derivedBitsAB = await crypto.subtle.deriveBits(
   Object.assign({ public: keyPairA.publicKey }, params), 
   keyPairB.privateKey, 
   derivedKeySize);
 
 // Derive key bits from B's public key and A's private key
 const derivedBitsBA = await crypto.subtle.deriveBits(
   Object.assign({ public: keyPairB.publicKey }, params), 
   keyPairA.privateKey, 
   derivedKeySize);
 
 const arrayAB = new Uint32Array(derivedBitsAB);
 const arrayBA = new Uint32Array(derivedBitsBA);
 
 // Ensure key arrays are identical
 console.log(
   arrayAB.length === arrayBA.length && 
   arrayAB.every((val, i) => val === arrayBA[i])); // true
})();

The deriveKey() method behaves similarly, returning an instance of CryptoKey instead of an ArrayBuffer. The following example imports a raw string password as a PBKDF2 master key and then derives from it a new 128-bit AES-GCM key:

(async function() {
 const password = 'foobar';
 const salt = crypto.getRandomValues(new Uint8Array(16));
 const algoIdentifier = 'PBKDF2';
 const keyFormat = 'raw';
 const isExtractable = false;
 
 const params = {
  name: algoIdentifier
 };
 
 const masterKey = await window.crypto.subtle.importKey(
  keyFormat,
  (new TextEncoder()).encode(password),
  params,
  isExtractable,
  ['deriveKey']
 );
 
 const deriveParams = {
  name: 'AES-GCM',
  length: 128
 };
 
 const derivedKey = await window.crypto.subtle.deriveKey(
  Object.assign({salt, iterations: 1E5, hash: 'SHA-256'}, params),
  masterKey,
  deriveParams,
  isExtractable,
  ['encrypt']
 );

 console.log(derivedKey);
 // CryptoKey {type: "secret", extractable: false, algorithm: {…}, usages: Array(1)}
})();

Signing and Verifying Messages with Asymmetric Keys

The SubtleCrypto object allows you to use public-key algorithms to generate signatures using a private key or to verify signatures using a public key. These are performed using the SubtleCrypto.sign() and SubtleCrypto.verify() methods, respectively.

Signing a message requires a params object to specify the algorithm and any necessary values, the private CryptoKey, and the ArrayBuffer or ArrayBufferView to be signed. The following example generates an elliptic curve key pair and uses the private key to sign a message:

(async function() {
 const keyParams = {
  name: 'ECDSA', 
  namedCurve: 'P-256'
 };

 const keyUsages = ['sign', 'verify'];

 const {publicKey, privateKey} = await crypto.subtle.generateKey(keyParams, true, 
   keyUsages);

 const message = (new TextEncoder()).encode('I am Satoshi Nakamoto');
 
 const signParams = {
  name: 'ECDSA',
  hash: 'SHA-256'
 };

 const signature = await crypto.subtle.sign(signParams, privateKey, message);

 console.log(new Uint32Array(signature)); 
 // Uint32Array(16) [2202267297, 698413658, 1501924384, 691450316, 778757775, … ]
})();

An individual wishing to verify this message against the signature could use the public key and the SubtleCrypto.verify() method. This method's signature is nearly identical to sign() with the exception that it must be provided the public key as well as the signature. The following example extends the previous example by verifying the generated signature:

(async function() {
 const keyParams = {
  name: 'ECDSA', 
  namedCurve: 'P-256'
 };

 const keyUsages = ['sign', 'verify'];

 const {publicKey, privateKey} = await crypto.subtle.generateKey(keyParams, true, 
   keyUsages);

 const message = (new TextEncoder()).encode('I am Satoshi Nakamoto');
 
 const signParams = {
  name: 'ECDSA',
  hash: 'SHA-256'
 };

 const signature = await crypto.subtle.sign(signParams, privateKey, message);

 const verified = await crypto.subtle.verify(signParams, publicKey, signature, 
   message);
 
 console.log(verified); // true
})();

Encrypting and Decrypting with Symmetric Keys

The SubtleCrypto object allows you to use both public-key and symmetric algorithms to encrypt and decrypt messages. These are performed using the SubtleCrypto.encrypt() and SubtleCrypto.decrypt() methods, respectively.

Encrypting a message requires a params object to specify the algorithm and any necessary values, the encryption key, and the data to be encrypted. The following example generates a symmetric AES-CBC key, uses it to encrypt a message, and finally decrypts the ciphertext:

(async function() {
 const algoIdentifier = 'AES-CBC';

 const keyParams = {
  name: algoIdentifier, 
  length: 256
 };

 const keyUsages = ['encrypt', 'decrypt'];

 const key = await crypto.subtle.generateKey(keyParams, true, 
   keyUsages);

 const originalPlaintext = (new TextEncoder()).encode('I am Satoshi Nakamoto');

 const encryptDecryptParams = {
  name: algoIdentifier, 
  iv: crypto.getRandomValues(new Uint8Array(16))
 };
 
 const ciphertext = await crypto.subtle.encrypt(encryptDecryptParams, key, 
   originalPlaintext);

 console.log(ciphertext);
 // ArrayBuffer(32) {}

 const decryptedPlaintext = await crypto.subtle.decrypt(encryptDecryptParams, key, 
   ciphertext);
 
 console.log((new TextDecoder()).decode(decryptedPlaintext));
 // I am Satoshi Nakamoto
})();
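
Because encrypt() and decrypt() also accept public-key algorithms, the same pattern works asymmetrically with RSA-OAEP, encrypting with the public key and decrypting with the private key. The following is a sketch of that variant:

(async function() {
 const keyParams = {
  name: 'RSA-OAEP',
  modulusLength: 2048,
  publicExponent: new Uint8Array([1, 0, 1]), // 65537
  hash: 'SHA-256'
 };

 const keyUsages = ['encrypt', 'decrypt'];

 const {publicKey, privateKey} = await crypto.subtle.generateKey(keyParams, true,
   keyUsages);

 const originalPlaintext = (new TextEncoder()).encode('I am Satoshi Nakamoto');

 const ciphertext = await crypto.subtle.encrypt({ name: 'RSA-OAEP' }, publicKey,
   originalPlaintext);

 const decryptedPlaintext = await crypto.subtle.decrypt({ name: 'RSA-OAEP' }, privateKey,
   ciphertext);

 console.log((new TextDecoder()).decode(decryptedPlaintext));
 // I am Satoshi Nakamoto
})();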

Wrapping and Unwrapping a Key

The SubtleCrypto object allows you to wrap and unwrap keys to allow for transmission over an untrusted channel. These are performed using the SubtleCrypto.wrapKey() and SubtleCrypto.unwrapKey() methods, respectively.

Wrapping a key requires a format string, the CryptoKey instance to be wrapped, the CryptoKey to perform the wrapping, and a params object to specify the algorithm and any necessary values. The following example generates a symmetric AES-GCM key, wraps the key with AES-KW, and finally unwraps the key:

(async function() {
 const keyFormat = 'raw';
 const extractable = true;

 const wrappingKeyAlgoIdentifier = 'AES-KW';
 const wrappingKeyUsages = ['wrapKey', 'unwrapKey'];
 const wrappingKeyParams = {
  name: wrappingKeyAlgoIdentifier, 
  length: 256
 };
 
 const keyAlgoIdentifier = 'AES-GCM';
 const keyUsages = ['encrypt'];
 const keyParams = {
  name: keyAlgoIdentifier, 
  length: 256
 };

 const wrappingKey = await crypto.subtle.generateKey(wrappingKeyParams, extractable, 
   wrappingKeyUsages);
   
 console.log(wrappingKey);
 // CryptoKey {type: "secret", extractable: true, algorithm: {…}, usages: Array(2)}
 
 const key = await crypto.subtle.generateKey(keyParams, extractable, keyUsages);
 
 console.log(key);
 // CryptoKey {type: "secret", extractable: true, algorithm: {…}, usages: Array(1)}
 
 const wrappedKey = await crypto.subtle.wrapKey(keyFormat, key, wrappingKey, 
   wrappingKeyAlgoIdentifier);
 
 console.log(wrappedKey);
 // ArrayBuffer(40) {}
 
 const unwrappedKey = await crypto.subtle.unwrapKey(keyFormat, wrappedKey, 
   wrappingKey, wrappingKeyParams, keyParams, extractable, keyUsages);
 
 console.log(unwrappedKey);
 // CryptoKey {type: "secret", extractable: true, algorithm: {…}, usages: Array(1)}
})();

SUMMARY

HTML5, in addition to defining new markup rules, also defines several JavaScript APIs. These APIs are designed to enable better web interfaces that can rival the capabilities of desktop applications. The APIs covered in this chapter are as follows:

  • The Atomics API allows you to protect your code from race conditions resulting from multithreaded memory access patterns.
  • The postMessage() API provides the ability to send messages across documents from different origins while keeping the security of the same-origin policy intact.
  • The Encoding API enables you to seamlessly convert between strings and buffers—an increasingly common pattern.
  • The File API affords you robust tools for sending, receiving, and reading large binary objects.
  • The media elements <audio> and <video> have their own APIs for interacting with the audio and video. Not all media formats are supported by all browsers, so make use of the canPlayType() method to properly detect browser support.
  • The Drag-and-Drop API allows you to easily indicate that an element is draggable and responds as the operating system does to drops. You can create custom draggable elements and drop targets.
  • The Notifications API gives you a browser-independent way of presenting interactive tiles to the user.
  • The Streams API affords an entirely new way of incrementally reading, writing, and processing data.
  • The Timing APIs provide a robust way of measuring latency in and around the browser.
  • The Web Components API introduces a gigantic leap forward for element reusability and encapsulation.
  • The Web Cryptography API makes cryptographic operations such as random number generation, encrypting, and signing messages first-class citizens.