Beginning with ECMAScript 2015, the TC39 committee moved to releasing a new ECMA specification each year. Doing so allows the committee to collect all individual proposals that have reached a sufficiently advanced stage and package them as a single bundle. This packaging is of limited importance, however, as browser vendors tend to adopt proposals piecemeal. When a proposal reaches stage 4, its behavior will no longer change, it is likely to be included in the next ECMAScript release, and browsers will begin adopting its features at their discretion.
The ECMAScript 2018 specification was finalized in January 2018 and features enhancements for asynchronous iteration, rest and spread operators for object literals, regular expressions, and promises. TC39 maintains a GitHub repository (https://github.com/tc39/ecma262) that you can use to track the status of the various proposals.
Asynchronous execution and the iterator protocol are two extremely pervasive themes in new ECMAScript features over recent releases. Asynchronous execution involves releasing control of the execution thread to allow slow operations to finish before regaining control, and the iterator protocol involves defining a canonical ordering for arbitrary objects. Asynchronous iteration is merely the logical assimilation of these two concepts.
A synchronous iterator provides you with a { value, done } pair each time next() is invoked. Of course, this requires that the computation and resource fetching needed to determine the contents of this pair be completed by the time the next() invocation exits; otherwise, these values are indeterminate. When using a synchronous iterator to iterate over values that are determined asynchronously, the main execution thread will be blocked while waiting for the asynchronous operation to complete.
With asynchronous iterators, this problem disappears. An asynchronous iterator provides you with a promise that resolves to a { value, done } pair each time next() is invoked. This way, the thread of execution can be released to perform work elsewhere while the current iteration's value is being resolved.
Async iterators are best understood through comparison with a traditional synchronous iterator. The following is a simple Emitter class, which contains a synchronous generator function that produces a synchronous iterator that will count from 0 to 4:
class Emitter {
  constructor(max) {
    this.max = max;
    this.syncIdx = 0;
  }

  *[Symbol.iterator]() {
    while (this.syncIdx < this.max) {
      yield this.syncIdx++;
    }
  }
}

const emitter = new Emitter(5);

function syncCount() {
  const syncCounter = emitter[Symbol.iterator]();
  for (const x of syncCounter) {
    console.log(x);
  }
}

syncCount();
// 0
// 1
// 2
// 3
// 4
The previous example works only because, in each iteration, the next value can be yielded immediately. If instead you did not want to block the main thread of execution while the next value is determined, you can define an asynchronous generator function that yields promise-wrapped values.
This can be accomplished using async-flavored versions of iterators and generators. ECMAScript 2018 defines Symbol.asyncIterator, which allows you to define and invoke promise-yielding generator functions. The specification also introduces the for-await-of loop, intended to consume such async iterators. Using these, the previous example can be extended to support both synchronous and asynchronous iteration:
class Emitter {
  constructor(max) {
    this.max = max;
    this.syncIdx = 0;
    this.asyncIdx = 0;
  }

  *[Symbol.iterator]() {
    while (this.syncIdx < this.max) {
      yield this.syncIdx++;
    }
  }

  async *[Symbol.asyncIterator]() {
    while (this.asyncIdx < this.max) {
      yield new Promise((resolve) => resolve(this.asyncIdx++));
    }
  }
}
const emitter = new Emitter(5);

function syncCount() {
  const syncCounter = emitter[Symbol.iterator]();
  for (const x of syncCounter) {
    console.log(x);
  }
}

async function asyncCount() {
  const asyncCounter = emitter[Symbol.asyncIterator]();
  for await (const x of asyncCounter) {
    console.log(x);
  }
}
syncCount();
// 0
// 1
// 2
// 3
// 4
asyncCount();
// 0
// 1
// 2
// 3
// 4
To further your understanding, modify the example so that the synchronous iterator is passed to a for-await-of loop:
const emitter = new Emitter(5);

async function asyncIteratorSyncCount() {
  const syncCounter = emitter[Symbol.iterator]();
  for await (const x of syncCounter) {
    console.log(x);
  }
}

asyncIteratorSyncCount();
// 0
// 1
// 2
// 3
// 4
Even though the sync counter iterates through primitive values, the for-await-of loop handles the values as if they were wrapped in promises. This demonstrates the flexibility of the for-await-of loop, which can fluently consume both synchronous and asynchronous iterables. The reverse is not true: a normal for loop cannot handle an asynchronous iterator:
function syncIteratorAsyncCount() {
  const asyncCounter = emitter[Symbol.asyncIterator]();
  for (const x of asyncCounter) {
    console.log(x);
  }
}
syncIteratorAsyncCount();
// TypeError: asyncCounter is not iterable
One of the most important concepts to understand about async iterators is that the Symbol.asyncIterator designation doesn't alter the behavior of the generator function or how the generator is consumed. Note that the generator function is defined as an async function and designated as a generator using an asterisk. Symbol.asyncIterator merely signals to an external construct, such as a for-await-of loop, that the associated iterator will return a sequence of promise objects.
Of course, the previous example is quite contrived: the promises returned from the iterator resolve instantly, so it is little more than a thinly wrapped synchronous iterator. Suppose instead that the yielded promises resolved after an indeterminate period of time; what's more, suppose they resolved out of order. An asynchronous iterator should emulate a synchronous iterator in every way possible, including in-order execution of the code associated with each iteration. To address this, asynchronous iterators maintain a queue of callbacks to ensure that the iterator handler for an earlier value always completes execution before proceeding to a later value, even if the later value's promise resolves first.
To prove this, the async iterator in the following example returns promises which resolve after a random period of time. The async iteration queue ensures that the promise resolution order does not interfere with the order of iteration. As a result, the integers will be printed in order (at random intervals):
class Emitter {
  constructor(max) {
    this.max = max;
    this.syncIdx = 0;
    this.asyncIdx = 0;
  }

  *[Symbol.iterator]() {
    while (this.syncIdx < this.max) {
      yield this.syncIdx++;
    }
  }

  async *[Symbol.asyncIterator]() {
    while (this.asyncIdx < this.max) {
      yield new Promise((resolve) => {
        setTimeout(() => {
          resolve(this.asyncIdx++);
        }, Math.floor(Math.random() * 1000));
      });
    }
  }
}

const emitter = new Emitter(5);

function syncCount() {
  const syncCounter = emitter[Symbol.iterator]();
  for (const x of syncCounter) {
    console.log(x);
  }
}

async function asyncCount() {
  const asyncCounter = emitter[Symbol.asyncIterator]();
  for await (const x of asyncCounter) {
    console.log(x);
  }
}
syncCount();
// 0
// 1
// 2
// 3
// 4
asyncCount();
// 0
// 1
// 2
// 3
// 4
Because asynchronous iterators are built on promises, one must consider the possibility that one of the promises produced by the iterator will reject. Because the design of asynchronous iteration insists on in-order completion, it would not make sense to continue the loop past a rejected promise; therefore, a rejected promise forces the iterator to exit:
class Emitter {
  constructor(max) {
    this.max = max;
    this.asyncIdx = 0;
  }

  async *[Symbol.asyncIterator]() {
    while (this.asyncIdx < this.max) {
      if (this.asyncIdx < 3) {
        yield this.asyncIdx++;
      } else {
        throw 'Exited loop';
      }
    }
  }
}

const emitter = new Emitter(5);

async function asyncCount() {
  const asyncCounter = emitter[Symbol.asyncIterator]();
  for await (const x of asyncCounter) {
    console.log(x);
  }
}
asyncCount();
// 0
// 1
// 2
// Uncaught (in promise) Exited loop
The for-await-of loop offers two useful features: it makes use of the async iterator queue to ensure in-order execution, and it hides the promise structure of the async iterator. However, using such a loop conceals much of the underlying behavior. Because the async iterator still follows the iterator protocol, you can just as easily progress through the async iterable using next(). As described earlier, next() returns a promise that resolves to a { value, done } pair. This means that you must use the Promise API to retrieve the values, but it also means that you are not forced to use the async iterator queue.
const emitter = new Emitter(5);
const asyncCounter = emitter[Symbol.asyncIterator]();
console.log(asyncCounter.next());
// Promise {<resolved>: { value: 0, done: false }}
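To make this concrete, you can drive an async iterator to completion manually by awaiting each next() call. The following sketch re-declares a minimal Emitter containing only the async iterator; the class shape mirrors the examples above:

```javascript
// A minimal async-iterable counter, mirroring the Emitter class above.
class Emitter {
  constructor(max) {
    this.max = max;
    this.asyncIdx = 0;
  }

  async *[Symbol.asyncIterator]() {
    while (this.asyncIdx < this.max) {
      yield new Promise((resolve) => resolve(this.asyncIdx++));
    }
  }
}

(async function() {
  const asyncCounter = new Emitter(3)[Symbol.asyncIterator]();

  // Drive the iterator manually: each next() call returns a promise
  // that resolves to a { value, done } pair.
  let result = await asyncCounter.next();
  while (!result.done) {
    console.log(result.value);
    result = await asyncCounter.next();
  }
})();
// 0
// 1
// 2
```

Note that because the async generator awaits each yielded promise, result.value holds the resolved number, not the promise itself.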
As a rule, async behavior, including for-await-of loops, cannot exist outside an async function. However, you may occasionally need to make use of async behavior in such a context. This can be accomplished by creating an async IIFE (immediately invoked function expression):
class Emitter {
  constructor(max) {
    this.max = max;
    this.asyncIdx = 0;
  }

  async *[Symbol.asyncIterator]() {
    while (this.asyncIdx < this.max) {
      yield new Promise((resolve) => resolve(this.asyncIdx++));
    }
  }
}

const emitter = new Emitter(5);

(async function() {
  const asyncCounter = emitter[Symbol.asyncIterator]();
  for await (const x of asyncCounter) {
    console.log(x);
  }
})();
// 0
// 1
// 2
// 3
// 4
Because asynchronous iterators patiently wait for the next iteration without incurring computational cost, they open an entirely new avenue for implementing an observable interface. At a high level, this takes the form of capturing events, wrapping them in promises, and feeding those promises through the iterator so that a listener can hook into the asynchronous stream. When an event fires, the next promise in the async iterator resolves with that event.
A simplistic example of this would be to capture an observable stream of browser events. This requires a queue of promises, each of which corresponds to a single event. The queue will also preserve the order in which events are generated, which is a desirable feature for this sort of problem.
class Observable {
  constructor() {
    this.promiseQueue = [];
    // Holds the resolver for the next promise in the queue
    this.resolve = null;
    // Pushes the initial promise on the queue, which will
    // resolve with the first observed event
    this.enqueue();
  }

  // Create a new promise, save its resolve method, and
  // store it on the queue
  enqueue() {
    this.promiseQueue.push(
      new Promise((resolve) => this.resolve = resolve));
  }

  // Remove the promise at the front of the queue and
  // return it
  dequeue() {
    return this.promiseQueue.shift();
  }
}
To make use of this promise queue, define an asynchronous generator method on this class. This generator should work for any event type:
class Observable {
  constructor() {
    this.promiseQueue = [];
    // Holds the resolver for the next promise in the queue
    this.resolve = null;
    // Pushes the initial promise on the queue, which will
    // resolve with the first observed event
    this.enqueue();
  }

  // Create a new promise, save its resolve method, and
  // store it on the queue
  enqueue() {
    this.promiseQueue.push(
      new Promise((resolve) => this.resolve = resolve));
  }

  // Remove the promise at the front of the queue and
  // return it
  dequeue() {
    return this.promiseQueue.shift();
  }

  async *fromEvent(element, eventType) {
    // Whenever an event is generated, resolve the promise
    // at the front of the queue with the event object and
    // enqueue another promise
    element.addEventListener(eventType, (event) => {
      this.resolve(event);
      this.enqueue();
    });

    // Each resolved promise at the front of the queue will
    // yield the event object to the async iterator
    while (1) {
      yield await this.dequeue();
    }
  }
}
With this fully defined class, it is now trivial to define an observable on DOM elements. Suppose the page has a <button> inside it; you could capture a stream of click events on this button and log each of them to the console as follows:
class Observable {
  constructor() {
    this.promiseQueue = [];
    // Holds the resolver for the next promise in the queue
    this.resolve = null;
    // Pushes the initial promise on the queue, which will
    // resolve with the first observed event
    this.enqueue();
  }

  // Create a new promise, save its resolve method, and
  // store it on the queue
  enqueue() {
    this.promiseQueue.push(
      new Promise((resolve) => this.resolve = resolve));
  }

  // Remove the promise at the front of the queue and
  // return it
  dequeue() {
    return this.promiseQueue.shift();
  }

  async *fromEvent(element, eventType) {
    // Whenever an event is generated, resolve the promise
    // at the front of the queue with the event object and
    // enqueue another promise
    element.addEventListener(eventType, (event) => {
      this.resolve(event);
      this.enqueue();
    });

    // Each resolved promise at the front of the queue will
    // yield the event object to the async iterator
    while (1) {
      yield await this.dequeue();
    }
  }
}

(async function() {
  const observable = new Observable();
  const button = document.querySelector('button');
  const mouseClickIterator = observable.fromEvent(button, 'click');

  for await (const clickEvent of mouseClickIterator) {
    console.log(clickEvent);
  }
})();
With the ECMAScript 2018 specification, the rest and spread operators already available for arrays are now also available inside object literals. These allow you to merge objects or collect properties into new objects.
The rest operator can be used when destructuring an object to collect all remaining unspecified enumerable properties into a single object:
const person = { name: 'Matt', age: 27, job: 'Engineer' };
const { name, ...remainingData } = person;
console.log(name); // Matt
console.log(remainingData); // { age: 27, job: 'Engineer' }
The rest operator can be used at most once per object literal and must be listed last. Although only a single rest operator may appear in any given object literal, rest operators can be nested inside destructured child objects. When nesting, there is no ambiguity about which elements of the property subtree are allocated to a given rest operator, so the resulting objects never overlap in their contents:
const person = { name: 'Matt', age: 27, job: { title: 'Engineer', level: 10 } };
const { name, job: { title, ...remainingJobData }, ...remainingPersonData } = person;
console.log(name); // Matt
console.log(title); // Engineer
console.log(remainingPersonData); // { age: 27 }
console.log(remainingJobData); // { level: 10 }
const { ...a, job } = person;
// SyntaxError: Rest element must be last element
The rest operator performs a shallow copy between objects, so object references will be copied instead of creating entire object clones:
const person = { name: 'Matt', age: 27, job: { title: 'Engineer', level: 10 } };
const { ...remainingData } = person;
console.log(person === remainingData); // false
console.log(person.job === remainingData.job); // true
The rest operator will copy all enumerable own properties, including symbols:
const s = Symbol();
const foo = { a: 1, [s]: 2, b: 3 };
const { a, ...remainingData } = foo;
console.log(remainingData);
// { b: 3, Symbol(): 2 }
The spread operator allows you to join two objects together in a fashion similar to array concatenation. The spread operator applied to an inner object will perform a shallow copy of all enumerable own properties, including symbols, into the outer object:
const s = Symbol();
const foo = { a: 1 };
const bar = { [s]: 2 };
const foobar = { ...foo, c: 3, ...bar };
console.log(foobar);
// { a: 1, c: 3, Symbol(): 2 }
The order in which the spread objects are listed matters for two reasons: first, when duplicate property names are encountered, the property assigned last wins; second, the resulting object's properties follow the order in which they were inserted. These ordering conventions are demonstrated here:
const foo = { a: 1 };
const bar = { b: 2 };
const foobar = { c: 3, ...bar, ...foo };
console.log(foobar);
// { c: 3, b: 2, a: 1 }
const baz = { c: 4 };
const foobarbaz = { ...foo, ...bar, c: 3, ...baz };
console.log(foobarbaz);
// { a: 1, b: 2, c: 4 }
As with the rest operator, all copies performed are shallow:
const foo = { a: 1 };
const bar = { b: 2, c: { d: 3 } };
const foobar = { ...foo, ...bar };
console.log(foobar.c === bar.c); // true
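A common application of the object spread operator is layering caller-supplied options over a set of defaults. The names below (defaults, options, settings) are hypothetical, chosen only to illustrate the overwrite behavior described above:

```javascript
// Merging a hypothetical options object over defaults: because 'options'
// is spread last, its properties overwrite same-named defaults.
const defaults = { method: 'GET', redirect: 'follow', timeout: 1000 };
const options = { method: 'POST', timeout: 5000 };

const settings = { ...defaults, ...options };
console.log(settings);
// { method: "POST", redirect: "follow", timeout: 5000 }
```

Reversing the spread order would instead let the defaults win, so the order should always reflect which source has priority.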
Formerly, there were only inelegant ways of defining behavior to execute once a promise exits the "pending" state, no matter the outcome. Usually, this took the form of reusing a shared handler:
let resolveA, rejectB;

function finalHandler() {
  console.log('finished');
}

function resolveHandler(val) {
  console.log('resolved');
  finalHandler();
}

function rejectHandler(err) {
  console.log('rejected');
  finalHandler();
}

new Promise((resolve, reject) => {
  resolveA = resolve;
})
.then(resolveHandler, rejectHandler);

new Promise((resolve, reject) => {
  rejectB = reject;
})
.then(resolveHandler, rejectHandler);

resolveA();
rejectB();
// resolved
// finished
// rejected
// finished
With Promise.prototype.finally(), you are able to unify the shared handler. The finally() handler is not passed any arguments and does not know whether it is handling a resolved or rejected promise. The preceding example can be refactored as follows:
let resolveA, rejectB;

function finalHandler() {
  console.log('finished');
}

function resolveHandler(val) {
  console.log('resolved');
}

function rejectHandler(err) {
  console.log('rejected');
}

new Promise((resolve, reject) => {
  resolveA = resolve;
})
.then(resolveHandler, rejectHandler)
.finally(finalHandler);

new Promise((resolve, reject) => {
  rejectB = reject;
})
.then(resolveHandler, rejectHandler)
.finally(finalHandler);

resolveA();
rejectB();
// resolved
// rejected
// finished
// finished
You'll note here that the order of logging has changed. Each finally() call creates a new promise instance; these new promises join the browser's microtask queue and resolve only after the promises from the preceding then() handlers have resolved.
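One more detail worth noting: the value returned from a finally() handler is ignored, and the promise's original settled value passes through unchanged to later handlers. A minimal sketch:

```javascript
// finally() neither receives nor alters the settled value; the
// original value propagates through to subsequent handlers.
Promise.resolve('foo')
  .finally(() => 'this return value is ignored')
  .then((val) => console.log(val));
// foo
```

The one exception is a finally() handler that throws or returns a rejected promise, which replaces the settlement with that rejection.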
ECMAScript 2018 features a handful of new bells and whistles for regular expressions.
One annoying quirk of regular expressions is that the single-character match token (the period, ".") does not match line terminator characters, such as \n and \r. (Matching non-BMP characters such as emoji as single characters is handled separately, by the u flag.)
const text = `
foo
bar
`;
const re = /foo.bar/;
console.log(re.test(text)); // false
This proposal introduces the s ("dotAll") flag, which corrects this behavior:
const text = `
foo
bar
`;
const re = /foo.bar/s;
console.log(re.test(text)); // true
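Before the s flag existed, a common workaround was a character class that matches any character, such as [\s\S]. The flag is also reflected on the regular expression object via its dotAll property:

```javascript
const text = 'foo\nbar';

// Pre-ES2018 workaround: [\s\S] matches any character,
// including line terminators.
console.log(/foo[\s\S]bar/.test(text)); // true

// The s flag achieves the same result and sets the dotAll property.
const re = /foo.bar/s;
console.log(re.dotAll);      // true
console.log(re.test(text));  // true
```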
Regular expressions support both positive and negative lookahead assertions, which allow declaration of expectations following a matched segment:
const text = 'foobar';
// Positive lookahead
// Assert that a value follows, but do not capture
const rePositiveMatch = /foo(?=bar)/;
const rePositiveNoMatch = /foo(?=baz)/;
console.log(rePositiveMatch.exec(text));
// ["foo"]
console.log(rePositiveNoMatch.exec(text));
// null
// Negative lookahead
// Assert that a value does not follow, but do not capture
const reNegativeNoMatch = /foo(?!bar)/;
const reNegativeMatch = /foo(?!baz)/;
console.log(reNegativeNoMatch.exec(text));
// null
console.log(reNegativeMatch.exec(text));
// ["foo"]
The new proposal introduces the mirror image of these assertions: positive and negative lookbehinds. These work identically to lookahead assertions, except that they inspect the content preceding the matched segment:
const text = 'foobar';
// Positive lookbehind
// Assert that a value precedes, but do not capture
const rePositiveMatch = /(?<=foo)bar/;
const rePositiveNoMatch = /(?<=baz)bar/;
console.log(rePositiveMatch.exec(text));
// ["bar"]
console.log(rePositiveNoMatch.exec(text));
// null
// Negative lookbehind
// Assert that a value does not precede, but do not capture
const reNegativeNoMatch = /(?<!foo)bar/;
const reNegativeMatch = /(?<!baz)bar/;
console.log(reNegativeNoMatch.exec(text));
// null
console.log(reNegativeMatch.exec(text));
// ["bar"]
Typically, multiple capture groups were accessed by index, which was a terrific exercise in frustration because indices offer no context as to what they actually contain:
const text = '2018-03-14';
const re = /(\d+)-(\d+)-(\d+)/;
console.log(re.exec(text));
// ["2018-03-14", "2018", "03", "14"]
The proposal allows for associating a valid JavaScript identifier with a capture group that can then be retrieved from the groups property of the result:
const text = '2018-03-14';
const re = /(?<year>\d+)-(?<month>\d+)-(?<day>\d+)/;
console.log(re.exec(text).groups);
// { year: "2018", month: "03", day: "14" }
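Named groups can also be destructured directly from the groups object and referenced in String.prototype.replace() using the $<name> syntax, which is part of the same proposal:

```javascript
const text = '2018-03-14';
const re = /(?<year>\d+)-(?<month>\d+)-(?<day>\d+)/;

// Destructure named groups straight out of the match result.
const { year, month, day } = re.exec(text).groups;
console.log(year, month, day); // 2018 03 14

// Reference named groups in a replacement string with $<name>.
console.log(text.replace(re, '$<day>/$<month>/$<year>'));
// 14/03/2018
```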
The Unicode standard defines properties for each character. Properties such as the character's name, its category, its white-space designation, and the script it is defined inside are all available. Unicode property escapes allow you to use these properties inside regular expressions.
Some properties are binary, meaning they can be applied standalone; examples are Uppercase and White_Space. Other properties behave as key/value pairs, where a property corresponds to a property value; an example is Script_Extensions=Greek.
A list of Unicode properties can be found at http://unicode.org/Public/UNIDATA/PropertyAliases.txt, and a list of Unicode property values at http://unicode.org/Public/UNIDATA/PropertyValueAliases.txt.
Unicode property escapes in regular expressions use \p{...} to select a match, or \P{...} to select a non-match; note that the u flag is required:
const pi = String.fromCharCode(0x03C0);
const linereturn = '\n';

const reWhiteSpace = /\p{White_Space}/u;
const reGreek = /\p{Script_Extensions=Greek}/u;
const reNotWhiteSpace = /\P{White_Space}/u;
const reNotGreek = /\P{Script_Extensions=Greek}/u;
console.log(reWhiteSpace.test(pi)); // false
console.log(reWhiteSpace.test(linereturn)); // true
console.log(reNotWhiteSpace.test(pi)); // true
console.log(reNotWhiteSpace.test(linereturn)); // false
console.log(reGreek.test(pi)); // true
console.log(reGreek.test(linereturn)); // false
console.log(reNotGreek.test(pi)); // false
console.log(reNotGreek.test(linereturn)); // true
ECMAScript 2019 added two methods to the Array prototype, flat() and flatMap(), which make array-flattening operations much easier. Without these methods, flattening arrays was a nasty business requiring either an iterative or a recursive solution.
The following is an example of how a simple recursive implementation might look without using these new methods:
function flatten(sourceArray, flattenedArray = []) {
  for (const element of sourceArray) {
    if (Array.isArray(element)) {
      flatten(element, flattenedArray);
    } else {
      flattenedArray.push(element);
    }
  }
  return flattenedArray;
}

const arr = [[0], 1, 2, [3, [4, 5]], 6];
console.log(flatten(arr));
// [0, 1, 2, 3, 4, 5, 6]
In many ways, this example resembles a tree data structure: each nested array behaves like an internal node, and non-array elements are leaves. In this example, the input array has two levels of nesting and 7 leaves. Flattening this array is in essence a depth-first, left-to-right traversal of the leaves.
It is sometimes useful to be able to specify how many levels of array nesting should be flattened. Consider the following example which modifies the initial implementation and allows for the flattening depth to be specified:
function flatten(sourceArray, depth, flattenedArray = []) {
  for (const element of sourceArray) {
    if (Array.isArray(element) && depth > 0) {
      flatten(element, depth - 1, flattenedArray);
    } else {
      flattenedArray.push(element);
    }
  }
  return flattenedArray;
}

const arr = [[0], 1, 2, [3, [4, 5]], 6];
console.log(flatten(arr, 1));
// [0, 1, 2, 3, [4, 5], 6]
// [0, 1, 2, 3, [4, 5], 6]
To address these use cases, the Array.prototype.flat() method was added. This method accepts a depth argument (defaulting to 1) and returns a shallow copy of the Array instance flattened to the specified depth. This is demonstrated here:
const arr = [[0], 1, 2, [3, [4, 5]], 6];
console.log(arr.flat(2));
// [0, 1, 2, 3, 4, 5, 6]
console.log(arr.flat());
// [0, 1, 2, 3, [4, 5], 6]
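To flatten arbitrarily deep (acyclic) nesting in one call, you can pass Infinity as the depth:

```javascript
// Passing Infinity as the depth flattens every level of nesting.
const deep = [1, [2, [3, [4, [5]]]]];
console.log(deep.flat(Infinity));
// [1, 2, 3, 4, 5]
```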
Because a shallow copy is performed, an array containing a cycle can still be flattened; the source array's values, including the circular reference itself, are copied into the result:
const arr = [[0], 1, 2, [3, [4, 5]], 6];
arr.push(arr);
console.log(arr.flat());
// [0, 1, 2, 3, [4, 5], 6, [0], 1, 2, [3, [4, 5]], 6, [Circular]]
The Array.prototype.flatMap() method allows you to perform a map operation before flattening the array. arr.flatMap(f) is functionally equivalent to arr.map(f).flat(), but arr.flatMap(f) is more efficient because the browser performs only a single traversal.
The function signature of flatMap() is identical to that of map(). A simple example is as follows:
const arr = [[1], [3], [5]];
console.log(arr.map(([x]) => [x, x + 1]));
// [[1, 2], [3, 4], [5, 6]]
console.log(arr.flatMap(([x]) => [x, x + 1]));
// [1, 2, 3, 4, 5, 6]
flatMap() is especially useful in situations where a non-array object's method returns an array, such as String.prototype.split(). Consider the following example, where a collection of input strings is split into words and joined into a single word array:
const arr = ['Lorem ipsum dolor sit amet,', 'consectetur adipiscing elit.'];
console.log(arr.flatMap((x) => x.split(/\W+/)));
// ["Lorem", "ipsum", "dolor", "sit", "amet", "", "consectetur", "adipiscing", "elit", ""]
A handy trick (albeit one which may incur a performance hit) is to use an empty array to filter out results after a map. The following example extends the above example to strip out the empty strings:
const arr = ['Lorem ipsum dolor sit amet,', 'consectetur adipiscing elit.'];
console.log(arr.flatMap((x) => x.split(/\W+/)).flatMap((x) => x || []));
// ["Lorem", "ipsum", "dolor", "sit", "amet", "consectetur", "adipiscing", "elit"]
Here, each empty string in the results is first mapped to an empty array. When flattening, these empty arrays are effectively skipped in the array which is eventually returned.
ECMAScript 2019 added a static method, fromEntries(), to the Object class, which builds an object from a collection of key-value array pairs. This method performs the opposite operation of Object.entries(). It is demonstrated here:
const obj = {
  foo: 'bar',
  baz: 'qux'
};
const objEntries = Object.entries(obj);
console.log(objEntries);
// [["foo", "bar"], ["baz", "qux"]]
console.log(Object.fromEntries(objEntries));
// { foo: "bar", baz: "qux" }
The static method expects an iterable object containing any number of iterable objects of size 2. This is especially useful when you wish to convert a Map instance to an Object instance, as the Map iterator's output exactly matches the signature that fromEntries() ingests:
const map = new Map().set('foo', 'bar');
console.log(Object.fromEntries(map));
// { foo: "bar" }
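Because fromEntries() is the inverse of entries(), the two can be composed to transform an object's contents in a single expression. The names below (prices, doubled) are illustrative only:

```javascript
// Transform every value in an object by round-tripping through
// entries(): object -> entry pairs -> mapped pairs -> object.
const prices = { apple: 1.25, banana: 0.5 };

const doubled = Object.fromEntries(
  Object.entries(prices).map(([key, value]) => [key, value * 2])
);

console.log(doubled);
// { apple: 2.5, banana: 1 }
```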
ECMAScript 2019 added two methods to the String prototype, trimStart() and trimEnd(), which allow for targeted whitespace removal. These methods are intended to replace trimLeft() and trimRight(), which have ambiguous meanings in the context of right-to-left languages such as Arabic and Hebrew.
These two methods are effectively the opposite of padStart() and padEnd(). The following example adds whitespace to a string and then removes it from either side:
let s = ' foo ';
console.log(s.trimStart()); // "foo "
console.log(s.trimEnd()); // " foo"
ECMAScript 2019 added the ability to inspect a Symbol's optional description via the description property. Formerly, the description was only available when the symbol was cast to a string:
const s = Symbol('foo');
console.log(s.toString());
// Symbol(foo)
With the ES2019 addition, each Symbol object has a read-only description property that exposes the description. If there is no description, this defaults to undefined:
const s = Symbol('foo');
console.log(s.description);
// foo
Prior to ES2019, the structure of a try/catch block was fairly rigid. Even if you did not wish to use the caught error object, the parser still required you to assign it a variable name inside the catch clause:
try {
  throw 'foo';
} catch (e) {
  // An error happened, but you don't care about the error object
}
In ES2019, you are now able to omit the error object assignment and simply ignore the error entirely:
try {
  throw 'foo';
} catch {
  // An error happened, but you don't care about the error object
}
ES2019 also introduces a handful of tweaks to existing tooling:

Stable Array.prototype.sort(): sorting is now required to be stable, meaning that equivalent elements will not be reordered in the output.

Well-formed JSON.stringify(): rather than returning unpaired surrogate code points as single UTF-16 code units, they are now represented with JSON escape sequences.

Revised Function.prototype.toString(): ES2019 requires that this method return the function's source code whenever possible, and otherwise { [native code] }.