Provisioning and Deployment Services

In Part III of this book, we look at how to design services and applications to be deployable. Here let’s look at the supporting infrastructure to perform the deployments themselves.

Deployment may be the most well-trodden area of operations tools. It’s an obvious nexus between development and production. To some organizations, deployment is “DevOps.” It’s understandable. In many organizations deployment is ridiculously painful, so it’s a good place to start making life better.

Consequently, a host of deployment tools has emerged, broadly split into “push” and “pull” styles. A push-style tool uses SSH or its own agent so a central server can reach out and run scripts on the target machines. The machines may not even know their own roles; the server assigns them.
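To make the push style concrete, here is a minimal sketch of the central server’s side of the job. The role map, script names, and addresses are all hypothetical; a real tool would execute these commands over SSH rather than just printing them.

```python
# Hypothetical role map held on the central server. The machines
# themselves don't know which role they play; the server decides.
ROLE_MAP = {
    "10.0.1.11": "web",
    "10.0.1.12": "web",
    "10.0.2.21": "worker",
}

# Hypothetical per-role deployment scripts.
DEPLOY_SCRIPTS = {
    "web": "deploy_web.sh v1.4.2",
    "worker": "deploy_worker.sh v1.4.2",
}

def plan_push_deploy(role_map, scripts):
    """Pair each target machine with the script its assigned role requires."""
    return [(host, scripts[role]) for host, role in role_map.items()]

# In a real push tool, each of these would be run remotely over SSH.
for host, cmd in plan_push_deploy(ROLE_MAP, DEPLOY_SCRIPTS):
    print(f"ssh {host} '{cmd}'")
```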

In contrast, pull-based deployment tools rely more on the machines to know their own roles. Software on the machine reaches out to a configuration service to grab the latest bits for its role.
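The pull style inverts that relationship. A sketch of the agent loop running on each machine follows; the configuration service is stubbed out as a plain dictionary, and all names are illustrative. In practice the fetch would be an HTTP call and `deploy` would download and install the artifact.

```python
def fetch_desired_version(config_service, role):
    """Ask the configuration service which version this role should run."""
    return config_service[role]

def reconcile(config_service, role, installed_version, deploy):
    """If the desired version differs from what's installed, deploy it."""
    desired = fetch_desired_version(config_service, role)
    if desired != installed_version:
        deploy(role, desired)
        return desired
    return installed_version

# The machine knows its own role ("web"); the service only publishes
# the latest version per role.
config = {"web": "v1.4.2"}
actions = []
new_version = reconcile(config, "web", "v1.4.1",
                        lambda r, v: actions.append((r, v)))
```

Note that a second call with the version already current deploys nothing, so the agent can safely run this loop on a timer.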

Pull-based tools work especially well with elastic scaling. Elastically scaled virtual machines or containers have ephemeral identities, so there’s no point in having a push-based tool maintain a mapping from machine identity to role—the machine identity will shortly disappear, never to be seen again! With long-lived virtual machines or even physical hosts, push-based tools can be simpler to set up and administer. That’s because they use commodity software like SSH rather than agents that require their own configuration and authentication techniques.

The deployment tool should be augmented with a package repository. Whether that’s an official “artifact repository” tool or an S3 bucket is up to you. But it’s important to have a location for blessed binary bits that isn’t populated from a developer’s laptop. Production builds need to be run on a clean build server using libraries with known provenance. The build pipeline should tag the build as it passes various stages, especially verification steps like unit or integration tests.
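Stage tagging can be sketched as follows. The stage names and the notion of a “blessed” artifact are illustrative; the point is that an artifact becomes eligible for the repository only after it carries tags from the required verification stages.

```python
# Hypothetical verification stages an artifact must pass before it is
# considered "blessed" and eligible for the package repository.
REQUIRED_STAGES = {"unit-tests", "integration-tests"}

def tag(artifact, stage):
    """Record that the artifact passed a pipeline stage."""
    artifact.setdefault("tags", set()).add(stage)
    return artifact

def is_blessed(artifact):
    """An artifact is blessed once all required stage tags are present."""
    return REQUIRED_STAGES <= artifact.get("tags", set())

artifact = {"name": "shop-service", "version": "1.4.2", "tags": set()}
tag(artifact, "unit-tests")
assert not is_blessed(artifact)      # integration tests haven't run yet
tag(artifact, "integration-tests")
assert is_blessed(artifact)          # now eligible for the repository
```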

This isn’t just being pedantic or jumping through hoops to satisfy a security department. Repeatable builds are important so code that works on your machine works in production, too.

Canary deployments are an important job of the build tooling. The “canary” is a small set of instances that get the new build first. For a period of time, the instances running the new build coexist with instances running the old build. (See Chapter 14, Handling Versions, for ways to enable peaceful coexistence.) If the canary instances behave oddly, or their metrics go south, then the build is not rolled out to the remaining population.

Like every other stage of build and deployment, the purpose of the canary deployment is to reject a bad build before it reaches the users.
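The “metrics go south” check can be as simple as comparing the canary group’s error rate against the baseline population. This sketch assumes a hypothetical metrics source and uses illustrative thresholds; real tooling would look at latency, saturation, and business metrics as well.

```python
def canary_passes(canary_error_rate, baseline_error_rate,
                  max_ratio=1.5, absolute_cap=0.05):
    """Reject the build if the canary errs noticeably more than baseline.

    Two illustrative gates: an absolute cap on the canary's error rate,
    and a relative cap of max_ratio times the baseline's error rate.
    """
    if canary_error_rate > absolute_cap:
        return False
    if baseline_error_rate > 0 and canary_error_rate > max_ratio * baseline_error_rate:
        return False
    return True
```

A passing canary lets the rollout continue to the rest of the population; a failing one halts it before most users ever see the new build.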

At a larger scale, the deployment tool needs to interact with another service to decide on placement. That placement service will determine how many instances of a service to run. It should be network-aware so it can place instances across network regions for availability. Typically, it’ll also drive the interconnect layer to set up IP addresses, VLANs, load balancers, and firewall rules.
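Network-aware placement can be sketched as spreading the requested instance count round-robin across regions, so a single region failure doesn’t take out the whole service. The region names here are hypothetical, and a real placement service would also weigh capacity, cost, and affinity rules.

```python
from itertools import cycle

def place_instances(count, regions):
    """Distribute `count` instances across regions round-robin."""
    placement = {region: 0 for region in regions}
    for _, region in zip(range(count), cycle(regions)):
        placement[region] += 1
    return placement

# e.g. five instances across three regions lands 2/2/1, so losing any
# one region still leaves a majority of instances running.
place_instances(5, ["us-east", "us-west", "eu-central"])
```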

When you get to this scale, it’s probably time to look at the platform players. We’ll cover those a bit later in The Platform Players. Even though a dedicated team will sustain and operate the platform, you’ll want to learn what it can do. That’s because your software needs to include a description of its needs and wants for the platform to provide (usually as a JSON or YAML file in the build artifacts).
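Such a “needs and wants” descriptor might look like the following. The schema here is entirely made up for illustration; real platforms define their own formats, and the file would ship alongside the build artifacts.

```python
import json

# Hypothetical descriptor a service bundles with its build artifacts,
# telling the platform what to provide.
manifest = {
    "service": "shop-service",
    "instances": {"min": 3, "max": 12},
    "resources": {"cpu": "2", "memory": "1Gi"},
    "network": {"port": 8080, "load_balancer": True},
}

print(json.dumps(manifest, indent=2))
```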
