Chapter 2. Modularity Principles

Modularity can be the answer to complexity, but what exactly do we mean when we’re talking about complexity?

Complexity is a loaded term for a nuanced topic. What does complex mean? A dictionary defines complex as something that’s “composed of many interconnected parts” but that’s not the problem we generally refer to when we speak of complexity in the context of programming. A program may have hundreds or thousands of files and still be considered relatively simple.1

The next two definitions, offered by that same dictionary, might be more revealing in the context of program design.

  • “Characterized by a very complicated or involved arrangement of parts, units, etc.”

  • “So complicated or intricate as to be hard to understand or deal with”

The first definition indicates that a program can become complex when its parts are arranged in a complicated manner; the interconnections among parts become a pain point. This could stem from convoluted interfaces or a lack of documentation, and it’s one of the aspects of complexity that we’ll tackle in this book.

We can interpret the second definition as the other side of the complexity coin. Components can be so complicated that their implementation is hard to understand, debug, or extend. Most of the book is devoted to counterbalancing and avoiding this aspect of complexity.

In broad terms, something is complex when it becomes hard to grasp or fully understand. By that definition, anything in a typical program can be complex: a block of code, a single statement, the API layer, its documentation, tests, the directory structure, coding conventions, or even a variable’s name.

Measuring complexity by lines of code proves to be trite: a file with thousands of lines of code can be simple if it’s just a list of constants like country codes or action types. Conversely, a file with two dozen lines of code could be insurmountably complex, not only in its interface but particularly in its implementation. Add together a few complex components and soon you’ll want nothing to do with the codebase.

Cyclomatic complexity is the number of unique code paths a program can take, and it may be a better metric when measuring the complexity of a component. Cyclomatic complexity allows us to measure only how complex a component has become. On its own, however, tracking this metric does little to significantly reduce complexity across our codebase or improve our coding style.
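As a rough sketch, the hypothetical function below has a cyclomatic complexity of 3: one path where neither condition holds, plus one path for each if branch:

function describeAge (age) {
  if (age < 13) {
    return 'child' // path 1
  }
  if (age < 20) {
    return 'teenager' // path 2
  }
  return 'adult' // path 3
}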

We must acknowledge that codebases are not fixed in time. Codebases typically grow along with time, much like the products we build with them. There is no such thing as a finished product or the perfect codebase. We should develop application architecture that embraces the passage of time through the ability to adjust to new conditions.

A significant body of changes to an implementation should be able to leave the API in front of that implementation unmodified. It should be possible to extend the API surface of a component with ease, and ironing out the wrinkles of an outdated API shouldn't be fraught with confusion or frustration. Scaling our program horizontally beyond single components should be straightforward, rather than requiring us to modify several existing components to accommodate each new one. How can modular design help us manage complexity both at the component level and at scale?

2.1 Modular Design Essentials

Modularity tackles the complexity problem in program design by opting for small modules with a clear-cut and well-tested API that’s also documented. Defining a precise API attacks interconnection complexity, while small modules aim to make programs easier to understand and work with.

2.1.1 Single Responsibility Principle

The single responsibility principle (SRP) is perhaps the most widely agreed upon principle of successful modular application design. Components are said to follow SRP when they have a single, narrow objective.

Modules that follow SRP do not necessarily have to export a single function as their API. As long as the methods and properties we export from a component are related, we aren’t breaking SRP.

When thinking in terms of SRP, it’s important to figure out what the responsibility is. Consider, as an example, a component used to send emails through the Simple Mail Transfer Protocol (SMTP). The choice to send emails using SMTP could be considered an implementation detail. If we later want the ability to render the HTML to be sent in those emails by using a template and a model, would that also pertain to the email-sending responsibility?

Imagine we developed email sending and templating in the same component. These would be tightly coupled. Furthermore, if we later wanted to switch from SMTP to the solution offered through the API for a transactional email provider, we’d have to be careful not to interfere with the templating capability that lies in the same module.

The following code snippet represents a tightly coupled piece of code that mixes templating, sanitization, email API client instantiation, and email sending:

import insane from 'insane'
import mailApi from 'mail-api'
import { mailApiSecret } from './secrets'

function sanitize (template, ...expressions) {
  return template.reduce((result, part, i) =>
    result + insane(expressions[i - 1]) + part
  )
}

export default function send (options, done) {
  const {
    to,
    subject,
    model: { title, body, tags }
  } = options
  const html = sanitize`
    <h1>${ title }</h1>
    <div>${ body }</div>
    <div>
    ${
      tags
        .map(tag => `<span>${ tag }</span>`)
        .join(` `)
    }
    </div>
  `
  const client = mailApi({ mailApiSecret })
  client.send({
    from: `[email protected]`,
    to,
    subject,
    html
  }, done)
}

It might be better to create a separate component that’s in charge of rendering HTML based on a template and a model, instead of adding templating directly in the email-sending component. We could then add a dependency on the email module so that we can send that HTML, or we could create a third module where we’re concerned only with the wiring.

Provided its consumer-facing interface remained the same, an independent SMTP email component would be interchangeable with a component that sent emails some other way, such as via an API, logging to a data store, or writing to standard output. In this scenario, the way in which emails are sent would be an implementation detail, while the interface becomes more rigid as it’s adopted by more modules. An inflexible interface gives us flexibility in the way the task is performed, while allowing implementations to be replaced with ease according to the use case at hand.

The following example shows an email component that’s concerned only with configuring the API client and adhering to a thoughtful interface that receives the to recipient, the email subject, and its html body, and then sends the email. This component has the sole purpose of sending email:

import mailApi from 'mail-api'
import { mailApiSecret } from './secrets'

export default function send(options, done) {
  const { to, subject, html } = options
  const client = mailApi({ mailApiSecret })
  client.send({
    from: `[email protected]`,
    to,
    subject,
    html
  }, done)
}

It wouldn’t be hard to create a drop-in replacement by developing a module that adheres to the same send API but sends email in a different way. The following example uses a different mechanism, whereby we simply log to the console. Even though it doesn’t actually send any emails, this component could be useful for debugging purposes:

export default function send(options, done) {
  const { to, subject, html } = options
  console.log(`
    Sending email.
    To: ${ to }
    Subject: ${ subject }
    ${ html }`
  )
  done()
}

By the same token, a templating component could be developed orthogonally, with an implementation that’s not directly tied into email sending. The following example is extracted from our original coupled implementation, but is concerned only with producing a piece of sanitized HTML by using a template and the user-provided model:

import insane from 'insane'

function sanitize(template, ...expressions) {
  return template.reduce((result, part, i) =>
    result + insane(expressions[i - 1]) + part
  )
}

export default function compile(model) {
  const { title, body, tags } = model
  const html = sanitize`
    <h1>${ title }</h1>
    <div>${ body }</div>
    <div>
    ${
      tags
        .map(tag => `<span>${ tag }</span>`)
        .join(` `)
    }
    </div>
  `
  return html
}

Slightly modifying the API shouldn’t be an issue, as long as it remains consistent across the components we want to make interchangeable. For instance, a different implementation could take a template identifier, in addition to the model object, so that the template itself is also decoupled from the compile function.
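A minimal sketch of what that could look like, assuming a hypothetical ./templates module that maps template identifiers to template functions:

// hypothetical registry mapping template identifiers to functions
// that take a model and return sanitized HTML
import templates from './templates'

export default function compile (templateId, model) {
  // the template is looked up by its identifier, decoupling
  // the template itself from the compile function
  return templates[templateId](model)
}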

When we keep the API consistent across implementations,2 using the same signature across every module, it’s easy to swap out implementations depending on context such as the execution environment (development versus staging versus production) or any other dynamic context that we need to rely upon.
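For instance, a wiring module might pick an email provider based on the execution environment. The following sketch assumes a hypothetical ./email/api-provider module alongside the log provider shown earlier:

import sendViaApi from './email/api-provider' // hypothetical provider
import sendToLog from './email/log-provider'

// both providers honor the same send(options, done) signature, so
// consumers remain oblivious to which implementation is in play
export default process.env.NODE_ENV === 'production'
  ? sendViaApi
  : sendToLog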

As mentioned earlier, a third module could plumb together different components that handle separate concerns, such as templating and email sending. The following example leverages the logging email provider and the static templating function to join both concerns together. Interestingly, this module doesn’t break SRP either, as its only concern is to plumb other modules together:

import send from './email/log-provider'
import compile from './templating/static'

export default function sendTemplatedEmail (options, done) {
  const { to, subject, model } = options
  const html = compile(model)
  send({ to, subject, html }, done)
}

We’ve been discussing API design in terms of responsibility, but something equally interesting is that we’ve hardly worried about the implementation of those interfaces. Is there merit to designing an interface before digging into its implementation?

2.1.2 API First

A module is only as good as its public interface. A poor implementation may hide behind an excellent interface. More important, a great interface means we can swap out a poor implementation as soon as we find time to introduce a better one. Since the API remains the same, we can decide whether to replace the existing implementation altogether or whether both should coexist while we upgrade consumers to use the newer one.

A flawed API is a lot harder to repair. Several implementations may follow the interface we intend to modify, meaning that we’d have to change the API calls in each consumer whenever we want to make changes to the API itself. The number of API calls that potentially have to adapt increases with time, entrenching the API as the project grows.

Having a mindful design focus on public interfaces is paramount to developing maintainable component systems. Well-designed interfaces can stand the test of time by introducing new implementations that conform to that same interface. A properly designed interface should make it simple to access the most basic or common use cases for the component, while being flexible enough to support other use cases as they arise.

An interface often doesn't need to support multiple implementations, but we must nonetheless think in terms of the public API first. Abstracting the implementation is only a small part of the puzzle. The answer to API design lies in figuring out which properties and methods consumers will need, while keeping the interface as small as possible.

When we need to implement a new component, a good rule of thumb is drawing up the API calls we’d need to make against that new component. For instance, we might want a component to interact with the Elasticsearch REST API. Elasticsearch is a database engine with advanced search and analytics capabilities, and its documents are stored in indices and arranged by type.

In the following piece of code, we're imagining an ./elasticsearch component that has a public createClient binding, which returns an object with a client#get method that returns a Promise. Note how detailed the query is, making up what could be a real-world keyword search for blog articles tagged modularity and javascript:

import { createClient } from './elasticsearch'
import { elasticsearchHost } from './secrets'

const client = createClient({
  host: elasticsearchHost
})
client
  .get({
    index: `blog`,
    type: `articles`,
    body: {
      query: {
        match: {
          tags: [`modularity`, `javascript`]
        }
      }
    }
  })
  .then(response => {
    // ...
  })

Using the createClient method, we could create a client, establishing a connection to an Elasticsearch server. If the connection is dropped, the component we’re envisioning will seamlessly reconnect to the server, but on the consumer side, we don’t necessarily want to worry about that.

Configuration options passed to createClient might tweak how aggressively the client attempts to reconnect. A backoff setting could toggle whether an exponential back-off mechanism should be used: the client waits for increasing periods of time if it’s unable to establish a connection.

An optimistic setting that’s enabled by default could prevent queries from settling in rejection when a server connection isn’t established, by having them wait until a connection is established before they can be made.

Even though the only setting explicitly outlined in our imagined API usage example is host, it would be simple for the implementation to support new settings in its API without breaking backward compatibility.
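For instance, the hypothetical backoff and optimistic settings from the previous paragraphs could be introduced without affecting existing consumers:

const client = createClient({
  host: elasticsearchHost,
  backoff: true, // hypothetical: exponential back-off between retries
  optimistic: true // hypothetical: hold queries until we're connected
})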

The client#get method returns a promise that’ll settle with the results of asking Elasticsearch about the provided index, type, and query. When the query results in an HTTP error or an Elasticsearch error, the promise is rejected. To construct the endpoint, we use the index, type, and the host that the client was created with. For the request payload, we use the body field, which follows the Elasticsearch Query DSL.3 Adding more client methods, such as put and delete, would be trivial.

Following an API-first methodology is crucial in understanding how the API might be used. By placing our foremost focus on the interface, we are purposely avoiding the implementation until there’s a clear idea of what interface the component should have. Then, once we have a desired interface in mind, we can begin implementing the component. Always write code against an interface.

Note how the focus is not only on what the example at hand addresses directly but also on what it doesn’t address: room for improvement, corner cases, how the API might change going forward, and whether the existing API can accommodate more uses without breaking backward compatibility.

2.1.3 Revealing Pattern

When everything in a component is made public, nothing can be considered an implementation detail, and thus making changes becomes hard. Prefixing properties with an underscore is not enough for consumers not to rely on them; a better approach is not to reveal private properties in the first place.

By exposing only what's meant to be used by external consumers, a component avoids a world of trouble. Consumers don't need to worry about undocumented touchpoints meant for internal use, however tempting, because they're not exposed in the first place. And component makers don't need to worry about consumers relying on touchpoints that were meant to be internal when the time comes to change or remove them.

Consider the following piece of code, which externalizes the entire implementation of a simple counter object. Even though it’s not meant to be part of the public API, as indicated by its underscore prefix, the _state property is still exposed:

const counter = {
  _state: 0,
  increment() { counter._state++ },
  decrement() { counter._state-- },
  read() { return counter._state }
}
export default counter

It’s better to explicitly expose the methods and properties we want to make public:

const counter = {
  _state: 0,
  increment() { counter._state++ },
  decrement() { counter._state-- },
  read() { return counter._state }
}
const { increment, decrement, read } = counter
const api = { increment, decrement, read }
export default api

This is akin to the way some libraries were written in the days before JavaScript had proper modules: we would wrap everything in a closure so that it wouldn’t leak globals and our implementation would stay private and then return a public API. For reference, the next code snippet shows an equivalent component using a closure instead:

const api = (function () {
  const counter = {
    _state: 0,
    increment() { counter._state++ },
    decrement() { counter._state-- },
    read() { return counter._state }
  }
  const { increment, decrement, read } = counter
  return { increment, decrement, read }
})()

When exposing touchpoints on an interface, it’s important to gauge whether consumers need the touchpoint at all, how it helps them, and whether it could be made simpler. For instance, instead of exposing several touchpoints the user can select from, the user might be better off with a single touchpoint that leverages the appropriate code path based on provided inputs. At the same time, the component would couple a smaller part of its implementation to its interface.

Thinking in API-first terms can help: if we have a decent idea of the kind of API surface we want, we can then decide how we want to allow consumers to interact with the component.

As new use cases arise and our component system grows, we should stick to an API-first mindset and the revealing pattern, so that the component doesn't suddenly become more complex. Gradually introducing complexity can help us design the right interface for our component. This interface doesn't offer every solution imaginable, but it elegantly solves the consumer's use cases, provided they fall within the responsibility of our component.

2.1.4 Finding the Right Abstractions

Open source software components often get feature requests that are overly specific to the needs of one particular user. Taking feature requests or requirements at face value is not enough. Instead, we need to dive deeper and find commonalities between the feature that’s being requested, features that we may have planned for our roadmap, and features we might want to adapt our component to support in the future.

Granted, it’s important for a component to satisfy the needs of most of its consumers, but this doesn’t mean we should attempt to satisfy use cases one by one, or in isolation. Almost invariably, doing so results in duplicated logic, inconsistency at the API level, and several ways of accomplishing the same goal, often with inconsistent observed results.

When a commonality can be found, abstractions involve less friction and help avoid the inconsistencies named earlier. Consider, for example, the case of DOM event listeners: we have an HTML attribute and matching JavaScript DOM element property for each event handler, such as onclick, onchange, oninput, and so on. Each property can be assigned a listener function that handles the event. Then there’s EventTarget#addEventListener, which has a signature like addEventListener(type, listener, options),4 centralizing all event-handling logic in a single method that takes the type of event as a parameter. Naturally, this API is better for several reasons. First off, EventTarget#addEventListener is a method, making its behavior clearly defined. Meanwhile, on* handlers are set through assignment, which isn’t as clearly defined: when does the effect of assigning an event handler begin? How is the handler removed? Are we limited to a single event handler, or is there a way around it? Are we going to get an error when we assign a nonfunction value as an event listener? Will the raised event result in an error when trying to invoke the nonfunction? Furthermore, new event types can be added transparently to addEventListener, without having to change the API surface, whereas with the on* technique, we would have to introduce yet another property.
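The following snippet illustrates the difference, assuming a button element is at hand:

const button = document.querySelector('button')

// handler assignment: a second assignment silently replaces the first
button.onclick = () => console.log('first')
button.onclick = () => console.log('second') // only 'second' ever fires

// addEventListener: handlers accumulate, and removal is explicit
const handler = () => console.log('clicked')
button.addEventListener('click', handler)
button.removeEventListener('click', handler)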

Another case in which abstractions come in handy might occur whenever we are dealing with quirks in cross-browser DOM manipulation. It would be superior to have a function like on(element, eventType, eventListener) rather than testing whether addEventListener is supported and deciding which of the various event-listening options is optimal for each case, every time. The abstraction drastically reduces code duplication while also handling every case consistently, limiting complexity.
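A minimal sketch of such an abstraction, assuming the quirk we need to paper over is the legacy attachEvent interface from old versions of Internet Explorer:

function on (element, eventType, eventListener) {
  if (element.addEventListener) {
    element.addEventListener(eventType, eventListener)
  } else if (element.attachEvent) {
    // legacy fallback: event names are prefixed with "on"
    element.attachEvent(`on${ eventType }`, eventListener)
  }
}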

The preceding cases are clear-cut examples of when an abstraction greatly improves poor interfaces, but that’s not always the end result. Abstractions can be a costly way of merging use cases when it’s unclear whether those are naturally related in the first place. If we merge use cases too early, we might find that the complexity we’re tucking away in an abstraction is quite small—and thus offset by the abstraction’s own complexity. If we merge cases that aren’t all that related to begin with, we effectively increase complexity and end up creating a tighter coupling than needed. Instead of lowering complexity as we set out to achieve, we end up obtaining the opposite result.

It is best to wait until a distinguishable pattern emerges and it becomes clear that introducing an abstraction will help diminish complexity. When such a pattern emerges, we can be confident that the use cases are indeed related, and we’ll have better information about whether an abstraction would simplify our code.

Abstractions can generate complexity by introducing new layers of indirection, chipping away at our ability to follow the different code flows around a program. On the other hand, state generates complexity by dynamically modifying the flow in our programs. Without state, programs would run in the same way from start to finish.

2.1.5 State Management

Applications wouldn’t do much of anything if we didn’t keep state. We need to keep track of things like user input or the page we’re currently on to determine what to display and how to help the user. In this sense, state is a function of user input: as the user interacts with our application, state grows and mutates.

Application state comes from stores such as a persistent database or an API server’s memory cache. This kind of state can be affected by user interaction, such as when a user decides to write a comment.

Besides state for an individual user and application-wide state, there’s also the intermediate state that lies in our program’s code. This state is transient and is typically bound to a particular transaction: a server-side web request, a client-side browser tab, and—at a lower level—a class instance, a function call, or an object’s property.

We shall think of state as our program's internal entropy. When state reigns, entropy reigns, and the application becomes unbearably hard to debug. One of the goals in modular design is to keep state to a bare minimum. As an application grows larger, so does its state, and the possible state permutations grow with it. Modularity takes aim at this issue by chopping a state tree into manageable bits and pieces; each branch of the tree deals with a particular subset of the state. This approach enables us to contain the growing application state as our codebase grows in size.

A function is deemed pure when its output depends solely on its input. Pure functions do not produce any side effects other than the output that’s returned. In the following example, the sum function receives a list of numbers and returns the sum of adding all of them together. It is a pure function because it doesn’t take into account any external state, and it doesn’t emit any side effects:

function sum(numbers) {
  return numbers.reduce((a, b) => a + b, 0)
}

Sometimes we have a requirement to keep state across function calls. For instance, a simple incremental counter might lead us to implement a module such as the following. The increment function isn’t pure, given that count is an external state:

let count = 0
const increment = () => count++
export default increment

An artifact of this module exporting an impure function is that the outcome of invoking increment hinges upon understanding how it is used elsewhere in the application, as each call to increment changes its expected output. As the amount of code in our program increases, so do the potential ways for an impure function like increment to behave, making impure functions increasingly undesirable.

One potential solution is to expose a factory that is itself pure, even when the objects returned by the factory aren’t pure. In this piece of code, we’re now returning a factory of counters; factory isn’t affected by external outputs and is thus considered pure:

const factory = () => {
  let count = 0
  const increment = () => count++
  return increment
}
export default factory
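A brief usage sketch, assuming the factory module lives at a hypothetical ./counter path. Since increment uses the postfix operator, each call returns the count before it's incremented:

import createCounter from './counter'

const increment = createCounter()
increment() // <- 0
increment() // <- 1

// a second factory call produces an entirely independent counter
const incrementOther = createCounter()
incrementOther() // <- 0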

As long as we limit the usage of each counter produced by factory to a given portion of the application that knows how it's being used, the state becomes more manageable, as we end up with fewer moving parts involved. When we eliminate impurity in public interfaces, we're effectively circumscribing entropy to the calling code. The consumer receives a brand-new counter every time, and it's entirely responsible for managing its state. It can still pass the counter down to its dependents, but it's in control of the way dependents get to manipulate that state, if at all.

This is something we observe in the wild, with popular libraries such as the request package in Node.js, which can be used to make HTTP requests.5 The request function relies largely on sensible defaults for the options you can pass to it. Sometimes we want to make requests using a different set of defaults.

The library might’ve offered a solution enabling us to change the default values for every call to request. This would’ve been poor design, as it’d make their handling of options more unstable; we’d have to take into account every corner of our codebase before we could be confident about the options we’d ultimately end up with when calling request.

request chose a solution that uses a request.defaults(options) method to return an API identical to that of request, but with the new defaults applied on top of the existing defaults. This approach avoids surprises, since usage of the modified request is constrained to the calling code and its dependents.
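In practice that might look like the following snippet, where the URL is made up but request.defaults behaves as just described:

import request from 'request'

// a request variant where JSON parsing is enabled by default
const jsonRequest = request.defaults({ json: true })

jsonRequest('https://example.com/api/items', (err, res, body) => {
  // body arrives parsed as JSON thanks to the new default
})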

2.2 CRUST: Consistent, Resilient, Unambiguous, Simple, and Tiny

A well-regarded API typically packs several of the following traits. It is consistent, meaning it is idempotent6 and has a similar signature shape as that of related functions. It is resilient, meaning its interface is flexible and accepts input expressed in a few ways, including optional parameters and overloading. Yet, it is unambiguous: there aren’t multiple interpretations of how the API should be used, what it does, how to provide inputs, or how to understand the output. Through all of this, it manages to stay simple: it’s straightforward to use and handles common use cases with little to no configuration, while allowing customization for advanced use cases. Lastly, a CRUST interface is also tiny: it meets its goals but isn’t overdesigned, it comprises the smallest possible surface area while allowing for future nonbreaking extensibility. CRUST mostly pertains to the outer layer of a system (be it a package, a file, or a function), but its principles will seep into the innards of its components and result in simpler code overall.

That’s a lot to take in. Let’s try to break down the CRUST principle. In this section, we explore these traits, detailing what they mean and why it’s important that our interfaces follow them.

2.2.1 Consistency

Humans excel at identifying patterns, and we do so while reading as well. That’s partly the reason—besides context—that we can read sentences even when most of the vowels are removed. Deliberately establishing consistent patterns makes our code easier to read, and eliminates surprises that would require us to investigate why two pieces of code look different even though they perform the same job. Could it be that the task they perform is slightly different, or is it just the code that’s different, but the end result is the same?

When a set of functions has the same API shape, consumers can intuitively deduce how the next function is used. Consider the native Array, where #forEach, #map, #filter, #find, #some, and #every all accept a callback as their first parameter and optionally take the context when calling that callback as their second parameter. Further, the callback receives the current item, that item’s index, and the array itself as parameters. The #reduce and #reduceRight methods are a little different, however, because the callback receives an accumulator parameter in the first position, but then it goes on to receive the current item, that item’s index, and the array, making the shape quite similar to what we are accustomed to.

As a result, we rarely need to reach for documentation in order to understand how these functions are shaped. The difference lies solely in how the consumer-provided callback is used, and what the return value for the method is. #forEach doesn’t return a value. #map returns a new array with the result of each invocation. #filter returns a new array with only the items for which the callback returned a truthy value. #some returns false unless the callback returns a truthy value for one of the items, in which case it returns true and breaks out of the loop. #every returns false unless the callback returns a truthy value for every item, in which case it returns true.
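The following calls all share the same callback shape, differing only in what they do with its return value:

const list = [1, 2, 3, 4]

list.map(item => item * 2) // <- [2, 4, 6, 8]
list.filter(item => item % 2 === 0) // <- [2, 4]
list.find(item => item > 2) // <- 3
list.some(item => item > 3) // <- true
list.every(item => item > 3) // <- false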

When we have different shapes for functions that perform similar tasks, we need to make an effort to remember each individual function’s shape instead of being able to focus on the task at hand. Consistency is valuable on every level of a codebase: a consistent code style reduces friction among developers and conflicts when merging code, consistent shapes optimize readability and pave the way for intuition, and consistent naming and architecture reduce surprises and keep the code uniform.

Uniformity is desirable for any given layer in an application because a uniform layer can be largely treated as a single, atomic portion of the codebase. If a layer isn’t uniform, then the consumer struggles to consume or feed data into that part of the application in a consistent manner.

The other side of this coin is resiliency.

2.2.2 Resiliency

Offering interfaces which are consistent with each other in terms of their shapes is important, and making those interfaces accept input in different ways is often just as important, although flexibility is not always the right call. Resiliency is about identifying the kinds of inputs that we should accept, and enforcing an interface where those are the only inputs we accept.

One prominent example of flexible inputs can be found in the jQuery library. With over ten polymorphic overloads7 on its main $ function, jQuery is able to handle virtually any parameters we throw at it. What follows is a complete list of overloads for the $ function, which is the main export of the jQuery library.

  • $()

  • $(selector)

  • $(selector, context)

  • $(element)

  • $(elementArray)

  • $(object)

  • $(selection)

  • $(html)

  • $(html, ownerDocument)

  • $(html, attributes)

  • $(callback)

Though it’s common for JavaScript libraries to offer a getter and a setter as overloads of the same method, API methods should generally have a single, well-defined responsibility. Most of the time, this translates into clean-cut API design. In the case of the dollar function, we have three use cases:

  • $(callback) binds a function to be executed when the DOM has finished loading.

  • $(html) overloads create elements out of the provided html.

  • Every other overload matches elements in the DOM against the provided input.

While we might consider selectors and element creation to play the role of getters and setters, the $(callback) overload feels out of place. We need to take a step back and realize that jQuery is a decades-old library that revolutionized frontend development due to, in no small part, its ease of use. Back in the day, the requirement to wait for the DOM-ready event was in heavy demand, and so letting consumers listen for the DOM-ready event with the dollar function made sense. Needless to say, jQuery is a unique case, but it’s nevertheless an excellent example of how providing multiple overloads can result in a dead-simple interface, even when there are more overloads than users can keep in the back of their heads. Most methods in jQuery offer several ways for consumers to present inputs without altering the responsibilities of those methods.

A new library with a shape similar to jQuery would be a rare find. Modern JavaScript libraries and applications favor a more modular approach, and so the DOM-ready callback would be its own function, and probably its own package. There’s still insight to be gained by analyzing jQuery, though. This library has a great user experience because the jQuery interface rarely misinterprets inputs or produces surprising output. One of the choices observed in jQuery’s architecture was to avoid throwing errors that resulted from bugs or user errors in our own code, so as not to frustrate users. Whenever jQuery finds an inappropriate input parameter, it prefers to return an empty list of matches instead. Silent failures can, however, be tricky: they might leave the consumer without any cues about the problem, whether it’s an issue in their code, a bug in the library they’re using, or something else.

Even when a library is as flexible as jQuery, it’s important to identify invalid input early. As an example, the next snippet shows how jQuery throws an error on selectors it can’t parse:

$('{div}')
// <- Uncaught Error: unrecognized expression: {div}

Besides overloading, jQuery also comes with a wealth of optional parameters. Although overloads are meant as different ways of accepting one particular input, optional parameters serve a different purpose, one of augmenting a function to support more use cases.

A good example of optional parameters is the native fetch API. In the next snippet, we have two fetch calls. The first one receives only a string for the HTTP resource we want to fetch, and a GET method is assumed. In the second example, we’ve specified the second parameter, indicating that we want to use the DELETE HTTP verb:

await fetch('/api/users')
await fetch('/api/users/rob', {
  method: 'DELETE'
})

Suppose that, as the API designers for fetch, we originally devised it as just a way of doing GET ${ resource }. When we get a requirement for a way of choosing the HTTP verb, we could avoid the options object and reach directly for a fetch(resource, verb) overload. Although this would serve our particular requirement, it would be shortsighted. As soon as we get a requirement to configure something else, we’d be left with the need to support both fetch(resource, verb) and fetch(resource, options) overloads, so that we avoid breaking backward compatibility. Worse still, we might be tempted to introduce a third parameter that configures our next requirement. Soon, we’d end up with an API such as the infamous KeyboardEvent#initKeyEvent method, whose signature is outlined here:8

event.initKeyEvent(type, bubbles, cancelable, viewArg,
                   ctrlKeyArg, altKeyArg, shiftKeyArg,
                   metaKeyArg, keyCodeArg, charCodeArg)

To avoid this trap, it is paramount to identify the core use case for a function—say, parsing Markdown—and then allow ourselves only one or two important parameters before going for an options object. In the case of initKeyEvent, the only parameter that we should consider important is the type, and everything else can be placed in an options object:

event.initKeyEvent(type, { bubbles, cancelable, viewArg,
                   ctrlKeyArg, altKeyArg, shiftKeyArg,
                   metaKeyArg, keyCodeArg, charCodeArg })

A key aspect of API design is readability. How far can users get without having to reach for the documentation? In the case of initKeyEvent, not very; unless they memorize the position of each of 10 parameters and their default values, chances are they’re going to reach for the documentation every time. When designing an interface that might otherwise end up with four or more parameters, an options object carries a multitude of benefits:

  • Consumers can declare options in any order, as the arguments are no longer positional inside the options object.

  • The API can offer default values for each option. This helps the consumer avoid specifying defaults just so that they can change another positional parameter.9

  • Consumers don’t need to concern themselves with options they don’t need.

  • Developers reading pieces of code that consume the API can immediately understand which parameters are being used, because they’re explicitly named in the options object.

As we make progress, we naturally keep coming back to the options object in API design.

2.2.3 Unambiguity

The output shape for a function shouldn’t depend on how it received its input or the result that was produced. This rule is almost universally agreed upon: you should aim to surprise consumers of your API as little as possible. In a couple of cases, we may slip up and end up with an ambiguous API. For the same kind of result, we should return the same kind of output.

For instance, Array#find always returns undefined when it doesn’t find any items that match the provided predicate function. If it instead returned null when the array is empty, for example, that’d be inconsistent with other use cases, and thus wrong. We’d be making consumers unsure about whether they should test for undefined or null, and they might end up being tempted to use a loose equality comparison because of that uncertainty, given == null matches both null and undefined.
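The following snippet shows that consistency at play: no matter why the item isn’t found, the result is always undefined:

[1, 2, 3].find(item => item > 2) // <- 3
[1, 2, 3].find(item => item > 3) // <- undefined
[].find(item => item > 3) // <- undefined, never null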

In the same vein, we should avoid optional input parameters that transform the result into a different data type. Favor composability, or a new method, instead, where possible. Unless there are technical reasons that force our hand, it’s ill-advised to offer an option that wraps a raw object such as a Date or a DOM element in an instance of a library like jQuery or moment before returning it, or a json option that turns the result into a JSON-formatted string when true and an object otherwise.

It isn’t necessary to treat failure and success with the same response shape, meaning that failure results can always be null or undefined, while success results might be an array list. However, consistency should be required across all failure cases and across all success cases, respectively.

Having consistent data types mitigates surprises and improves the confidence a consumer has in our API.

2.2.4 Simplicity

Note how simple it is to use fetch in the most basic case: it receives the resource we want to GET and returns a promise that settles with the result of fetching that resource:

const res = await fetch('/api/users/john')
console.log(res.status)
// <- 200

If we want to take things a bit further, we can call .json() on the response object, which returns a promise for the parsed response body:

const res = await fetch('/api/users/john')
const data = await res.json()
console.log(data.name)
// <- 'John Doe'

If we instead want to remove the user, we need to provide the method option:

await fetch('/api/users/john', {
  method: `DELETE`
})

The fetch function can’t do much without a specified resource, which is why this parameter is required and not part of an options object. Having sensible defaults for every other parameter is a key component of keeping the fetch interface simple. The method defaults to GET, which is the most common HTTP verb and thus the one we’re most likely to use. Good defaults are conservative, and good options are additive. The fetch function doesn’t transmit any cookies by default (a conservative default) but a credentials option set to include makes cookies work (an additive option).

In another example, we could implement a Markdown compiler function with a default option that supports autolinking resource locators, which can be disabled by the consumer with an autolinking: false option. In this case, the implicit default would be autolinking: true. Negated option names such as avoidAutolinking are sometimes justified because they make it so that the default value is false, which on the surface sounds correct for options that aren’t user-provided. Negated options, however, tend to confuse users who are confronted with the double negative in avoidAutolinking: false. It’s best to use additive or positive options, preventing the double negative: autolinking: true.
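A hypothetical compile function makes the contrast concrete:

// positive, additive option: easy to read at a glance
compile(text, { autolinking: true }) // the implicit default
compile(text, { autolinking: false }) // plainly opting out

// negated option: the double negative obscures the intent
compile(text, { avoidAutolinking: false }) // so autolinking is enabled?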

Going back to fetch, note how little configuration or implementation-specific knowledge we need for the simplest case. This hardly changes when we need to choose the HTTP verb, since we just need to add an option. Well-designed interfaces make it appear effortless for consumers to use the API for its simplest use case, and have them spend a little more effort for slightly more complicated use cases. As the use case becomes more complicated, so does the way in which the interface needs to be bent. This is because we’re taking the interface to the limit, but it goes to show how much work can be put into keeping an interface simple by optimizing for common use cases.

2.2.5 Tiny Surface Areas

Any interface benefits from being its smallest possible self. A small surface area means fewer test cases that could fail, fewer bugs that may arise, fewer ways in which consumers might abuse the interface, less documentation, and more ease of use since there’s less to choose from.

The malleability of an interface depends on the way it is consumed. Functions and variables that are private to a module are depended upon only by other parts of that module, and are thus highly malleable. The bits that make up the public API of a module are not as malleable, since we might need to change the way each dependent uses our module. If those bits make up the public API of the package, then we’re looking at bumping our library’s major version so that we can break its public API without unexpected repercussions for existing consumers.

Not all changes are breaking changes, however. We might learn from an interface like the one in fetch, for example, which remains highly malleable even in the face of change. Even though the interface is tiny for its simplest use case (GET /resource), the options parameter can grow by leaps and bounds without causing trouble for consumers, while extending the capabilities of fetch.

We can avoid creating interfaces that contain several slightly different solutions for similar problems by holistically designing the interface to solve the underlying common denominator, maximizing the reusability of a component’s internals in the process.

Having established a few fundamentals of module thinking and interface design principles, it’s time for us to shift our attention to module internals and implementation concerns.

1 Further details of the dictionary definition might help shed light on this topic.

2 For example, one implementation might merely compile an HTML email by using inline templates, another might use HTML template files, another could rely on a third-party service, and yet another could compile emails as plain-text instead.

3 You can check out the Elasticsearch Query DSL documentation.

4 The options parameter is an optional configuration object that’s relatively new to the web API. We can set flags such as capture, which has the same behavior as passing a useCapture flag; passive, which suppresses calls to event.preventDefault() in the listener; and once, which indicates that the event listener should be removed after being invoked for the first time.

5 You can find request on GitHub.

6 For a given set of inputs, an idempotent function always produces the same output.

7 When a function has overloaded signatures which can handle two or more types (such as an array or an object) in the same position, the parameter is said to be polymorphic. Polymorphic parameters make functions harder for compilers to optimize, resulting in slower code execution. When this polymorphism is in a hot path—that is, a function that gets called very often—the performance implications have a larger negative impact. Read more about the compiler implications in “What’s Up with Monomorphism” by Vyacheslav Egorov.

8 See the MDN documentation.

9 Assuming we have a createButton(size = 'normal', type = 'primary', color = 'red') method and we want to change its color, we’d have to use createButton('normal', 'primary', 'blue') to accomplish that, only because the API doesn’t have an options object. If the API ever changes its defaults, we’d have to change any function calls accordingly as well.
