10
Building a delivery pipeline for microservices

This chapter covers

  • Designing a continuous delivery pipeline for a microservice
  • Using Jenkins and Kubernetes to automate deployment tasks
  • Managing staging and production environments
  • Using feature flags and dark launches to distinguish between deployment and release

Rapidly and reliably releasing new microservices and new features to production is crucial to successfully maintaining a microservice application. Unlike a monolithic application, where you can optimize deployment for a single use case, microservice deployment practices need to scale to multiple services, written in different languages, and each with their own dependencies. Investing in consistent and robust deployment tooling and infrastructure will go a long way toward making a success of any microservice project.

You can achieve reliable microservice releases by applying the principles of continuous delivery. The fundamental building block of continuous delivery is a deployment pipeline. Picture a factory production line: a conveyor belt takes your software from code commit to deployable artifact to running software, continually assessing the quality of the output at each stage. Doing this leads to frequent, small deployments to production, rather than big-bang changes.

So far, you’ve built and deployed a service using Docker, Kubernetes, and command-line scripts. In this chapter, you’ll combine those steps into an end-to-end build pipeline, using Jenkins, a widely used open source build automation tool. Along the way, we’ll examine how this approach minimizes risk and increases the stability of your overall application. After that, we’ll examine the difference between deploying new code and releasing new features.

10.1 Making deploys boring

Deploying software should be boring. You should be able to roll out changes and new features without peering through your fingers or obsessively watching error dashboards.

Unfortunately, as we mentioned in chapter 8, human error causes most issues in production, and microservice deployments leave plenty of room for that! Consider the big picture: teams are developing and deploying tens — if not hundreds — of independent services on their own schedule, without explicit coordination or collaboration between teams. Any bad change to a service might have a wide-ranging impact on the performance of other services and the wider application.

An ideal microservice deployment process should meet two goals:

  • Safety at pace — The faster you can deploy new services and changes, the quicker you can iterate and deliver value to your end users. Deployment should maximize safety: you should validate, as much as feasible, that a given change won’t negatively impact the stability of a service.
  • Consistency — Consistency of deployment process across different services, regardless of underlying tech stack, helps alleviate technical isolation and makes operations more predictable and scalable.

It’s not easy to maintain the fine balance between safety and pace. You could move quickly without safety by deploying code changes directly to production, but that’d be crazy. Likewise, you could achieve stability by investing in a time-consuming change-control and approval process, but that wouldn’t scale well to the high volume of change in a large, complex microservice application.

10.1.1 A deployment pipeline

Continuous delivery strikes an ideal balance between reducing risk and increasing speed:

  • Releasing smaller sets of commits increases safety by reducing the amount of change happening at any one time. Smaller changesets are also easier to reason through.
  • An automated pipeline of commit validation increases the probability that a given changeset is free from defects.

Releasing small changesets and systematically verifying their quality gives teams the confidence to release features rapidly. Smaller, more atomic releases are less risky. The continuous delivery approach empowers teams to ship services rapidly and independently.

One of the weaknesses of monolithic development is that releases often become large, coupling together disparate features at release time. Likewise, even small changes in a large application can have an unintentionally broad impact, particularly when made to cross-cutting concerns. At worst, commits in monolithic development become stale while waiting for a deployment; they’re no longer relevant to the needs of the application or business by the time they reach customers.

Let’s look at an example in figure 10.1. Most of the steps in this pipeline should look familiar:

  1. First, an engineer commits some code to a microservice repository.
  2. Next, a build automation server builds the code.
  3. If the build is successful, the automation server runs unit tests to validate that code.
  4. If these tests pass, the automation server packages the service for deployment and stores this package in an artifact repository.
  5. The automation server deploys code to a staging environment, where you can test the service against other live collaborators.
  6. If this is successful, the automation server will deploy the code to a production environment.

Figure 10.1 An example deployment pipeline builds, validates, and deploys a commit to production, providing feedback to engineers.

Each step in this pipeline provides feedback to the engineering team on the correctness of their code. For example, if step 3 fails, you’ll receive a list of failed test assertions to correct.

Implementing this pipeline should make the process of deployment highly visible and transparent — crucial for an audit trail, or if something goes wrong. Regardless of the underlying language or technology, every service you deploy should be able to follow a similar process.

10.2 Building a pipeline with Jenkins

In the previous chapter, you ran command-line scripts to perform steps in deployment: building containers, publishing artifacts, and deploying code. Now you’ll use Jenkins — a build automation tool — to connect those steps together into a coherent, reusable, and extensible deployment pipeline. We’ve picked Jenkins because it’s open source, is easy to get running, supports scriptable build jobs, and is widely used.

Unfortunately, no perfect out-of-the-box solution for deployment is available: any pipeline is usually a combination of multiple tools, depending on both the service’s tech stack and the target deployment platform. In your case, you’ll be using Jenkins to assemble tools you’ve (mostly) already used. Figure 10.2 illustrates the components of your deployment pipeline.

In the next few sections, we’re going to cover a lot of ground:

  • Using Jenkins to script complex deployment pipelines
  • Building a pipeline that builds, tests, and deploys your service to different environments
  • Managing staging environments for microservices
  • Reusing your deployment pipeline across multiple services

Figure 10.2 The deployment pipeline you’ll use, which combines multiple tools dependent on the tech stack and target deployment platform you’re using

You’ll need to have access to a running Jenkins instance to run the examples in this chapter. The appendix walks you through Jenkins setup on a local Minikube cluster — we’ll assume you’re using that approach in the following sections.

10.2.1 Configuring a build pipeline

The Jenkins application consists of a master node and, optionally, any number of agents. Running a Jenkins job executes scripts (using common tools, such as make or Maven) across these agent nodes to perform deployment activities. A job operates within a workspace — a local copy of your code repository. Figure 10.3 illustrates this architecture.

To write your build pipeline, you're going to use a feature called Scripted Pipeline. In Scripted Pipeline, you express a build pipeline using a domain-specific language (DSL) written in Groovy, a general-purpose language. This DSL defines common methods for writing build jobs, such as sh (for executing shell scripts) and stage (for identifying different parts of a build pipeline). The Scripted Pipeline approach is more powerful than you might think — by the end of the chapter, you'll use it to build your own higher-level, declarative DSL.

Jenkins will execute build jobs by executing a pipeline script defined in a Jenkinsfile. Try it yourself! First, copy the contents of chapter-10/market-data into a new directory and push that to a Git repo. It’s easiest if you push it to somewhere public, like GitHub. This is the service you’ll be deploying in this chapter.


Figure 10.3 A Jenkins deployment consists of a master node, which manages execution, and agents that perform build tasks within a workspace — a clone of the repository being built.

Next, you’ll want to create a Jenkinsfile in the root of your repository, and it should look like the following listing.

Listing 10.1 A basic Jenkinsfile

stage("Build Info") {    ①  
  node {    ②  
    def commit = checkout scm    ③  
    echo "Latest commit id: ${commit.GIT_COMMIT}"
  }
}

When Jenkins runs this script, the script will check out a code repository as a workspace and write the latest commit ID to the console.

You can try this out by setting up a pipeline job for a service. Commit the Jenkinsfile you just created, then push your changes to origin. Now, open the Jenkins UI. (Remember, you can do this with minikube service jenkins.) Follow these steps to create a multibranch pipeline job:

  1. Navigate to the Create New Jobs page.
  2. Enter an item name, market-data; select Multibranch Pipeline as the job type; and click OK.
  3. On the following page (see figure 10.4), select a Branch Source of Git and add your repository’s clone URL to the Project Repository field. If you’re using a private Git repository, you’ll also need to configure your credentials.
  4. Configure the job to periodically scan the repository, with an interval of one minute. This will trigger builds when changes are detected.
  5. Save your changes.

Once you’ve saved your changes, Jenkins will scan your repository for branches containing a Jenkinsfile. The multibranch pipeline job type will generate a unique build for each branch in your repository — later, this will let you treat feature branches differently from the master branch.
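For example, because multibranch jobs expose the branch being built as env.BRANCH_NAME, a Jenkinsfile can gate stages on it. The following is an illustrative sketch, not part of the pipeline you'll build in this chapter:

node {
  checkout scm

  stage('Build') {
    echo "Building ${env.BRANCH_NAME}..."
  }

  // Only the master branch proceeds past the build stage;
  // feature branches stop here.
  if (env.BRANCH_NAME == 'master') {
    stage('Deploy') {
      echo 'On master: a full pipeline would deploy from here'
    }
  }
}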

Once the indexing is complete, Jenkins will run a build for your master branch. Clicking on the name of the branch will take you to the build history for that branch (figure 10.5).


Figure 10.4 New project configuration screen, showing Branch Sources options

Click the build number and then Console Output. This traces the output of the build. Within that output, you should be able to see how your Jenkinsfile has been executed:

Agent default-q3ccc is provisioned from template Kubernetes Pod Template
Agent specification [Kubernetes Pod Template] (jenkins-jenkins-slave): 
* [jnlp] jenkins/jnlp-slave:3.10-1(resourceRequestCpu: 200m, resourceRequest
➥Memory: 256Mi, resourceLimitCpu: 200m, resourceLimitMemory: 256Mi)

Running on default-q3ccc in /home/jenkins/workspace/market-data_master
➥-27MDVADAYDBX5WJSRWQIFEL3T7GD4LWPU5CXCZNTJ4CKBDLP3LVA
[Pipeline] {
[Pipeline] checkout
Cloning the remote Git repository
Cloning with configured refspecs honoured and without tags
Cloning repository https://github.com/morganjbruce/market-data.git
 > git init /home/jenkins/workspace/market-data_master
➥-27MDVADAYDBX5WJSRWQIFEL3T7GD4LWPU5CXCZNTJ4CKBDLP3LVA # timeout=10
Fetching upstream changes from https://github.com/morganjbruce/
➥market-data.git
 > git --version # timeout=10
 > git fetch --no-tags --progress https://github.com/morganjbruce/
➥market-data.git +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://github.com/morganjbruce/
➥market-data.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/
➥* # timeout=10
 > git config remote.origin.url https://github.com/morganjbruce/
➥market-data.git # timeout=10
Fetching without tags
Fetching upstream changes from https://github.com/morganjbruce/
➥market-data.git
 > git fetch --no-tags --progress https://github.com/morganjbruce/
➥market-data.git +refs/heads/*:refs/remotes/origin/*
Checking out Revision 80bfb7bdc4fa0b92dcf360393e5d49e0f348b43b (master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 80bfb7bdc4fa0b92dcf360393e5d49e0f348b43b
Commit message: "working through ch10"
First time build. Skipping changelog.
[Pipeline] echo
Latest commit id: 80bfb7bdc4fa0b92dcf360393e5d49e0f348b43b
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // stage
[Pipeline] End of Pipeline
Finished: SUCCESS

Figure 10.5 Build history for the master branch of your repository

Each [Pipeline] step traces the execution of your code. Awesome — you’ve deployed a build automation tool, configured it against a service repository, and run your first build pipeline! Next, let’s look at the first stage of your pipeline: build.

10.2.2 Building your image

You’re going to use Docker to build and package your images. First, let’s change your Jenkinsfile, as shown in the following listing.

Listing 10.2 Jenkinsfile for build step

def withPod(body) {    ①  
  podTemplate(label: 'pod', serviceAccount: 'jenkins', containers: [
      containerTemplate(name: 'docker', image: 'docker', command: 'cat', 
➥ttyEnabled: true),
      containerTemplate(name: 'kubectl', image: 'morganjbruce/kubectl', 
➥command: 'cat', ttyEnabled: true)
    ],
    volumes: [
      hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: 
➥'/var/run/docker.sock'),
    ]
 ) { body() }
}

withPod {
  node('pod') {    ②  
    def tag = "${env.BRANCH_NAME}.${env.BUILD_NUMBER}"
    def service = "market-data:${tag}"
    checkout scm    ③  
    container('docker') {    ④  
      stage('Build') {    ⑤  
        sh("docker build -t ${service} .")    ⑥  
      }
    }
  }
}

This script will build your service and tag the resulting Docker container with the current build number. It’s definitely more complex than the earlier version, so let’s take a quick walk through what you’re doing:

  1. You define a pod template for your build, which Jenkins will use to create pods on Kubernetes for a build agent. This pod contains two containers — Docker and kubectl.
  2. Within that pod, you check out the latest version of your code from Git.
  3. You then start a new pipeline stage, which you’ve called Build.
  4. Within that stage, you enter the Docker container and run a docker command to build your service image.

Commit this new Jenkinsfile to your Git repo and navigate to the build job on Jenkins. Wait for a rerun — or trigger the job manually — and in the console output, you should see your container image being built successfully.

10.2.3 Running tests

Next, you should run some tests. This should be like any other continuous integration job: if the tests are green, deployment can proceed; if not, you halt the pipeline. At this stage, you aim to provide rapid and accurate feedback on the quality of a changeset. Fast test suites help engineers iterate quickly.

Building your code and performing unit tests are only two of the possible activities you might perform during this commit stage of the build pipeline. Table 10.1 outlines other possibilities.

Table 10.1 Possible activities in the commit stage of a deployment pipeline
Activity                Description
Unit tests              Code-level tests
Compilation             Compiling source code into an executable artifact
Dependency resolution   Resolving external dependencies — for example, open source packages
Static analysis         Evaluating code against quality metrics
Linting                 Checking syntax and stylistic principles of code

For now, you should get your unit tests running. Add a Test stage to your Jenkinsfile, immediately after the Build stage, as shown in the next listing.

Listing 10.3 Test stage

stage('Test') {
  sh("docker run --rm ${service} python setup.py test")
}

Commit your Jenkinsfile and run the build. This will add a new stage to your pipeline, which executes the test cases defined in /tests (figure 10.6).


Figure 10.6 Your pipeline so far with Build and Test stages

Unfortunately, this code alone won't make the test results visible in the build; you'll only see overall success or failure. You can archive the XML results you're generating by adding the following to your Jenkinsfile.

Listing 10.4 Archiving results from test stage

stage('Test') {
  try {
    sh("docker run -v `pwd`:/workspace --rm ${service} 
➥python setup.py test")    ①  
  } finally {
    step([$class: 'JUnitResultArchiver', testResults: 
➥'results.xml'])    ②  
  }
} 

This code mounts the current workspace as a volume within the Docker container. The python test process writes its output to that volume as /workspace/results.xml, so you can access the results even after Docker has stopped and removed the service container. The try...finally statement ensures that the results are archived whether the tests pass or fail.

Committing your changed Jenkinsfile and running a fresh build will store test results in Jenkins. You can view them on the build page. Great — you’ve validated your underlying code. Now you’re almost ready to deploy.

10.2.4 Publishing artifacts

You need to publish an artifact — in this case, your Docker container image — to be able to deploy it. If you used a private Docker registry in chapter 9, you'll need to configure your Docker credentials within Jenkins:

  1. Navigate to Credentials > System > Global Credentials > Add Credentials.
  2. Add username and password credentials, using your credentials for https://hub.docker.com.
  3. Set the ID as dockerhub and click OK to save these credentials.

If you’re using a public registry, you can skip this step. Either way, when you’re ready, add a third step to your Jenkinsfile, as follows.

Listing 10.5 Publishing artifacts

def tagToDeploy = "[your-account]/${service}"    ①  

stage('Publish') {
  withDockerRegistry(registry: [credentialsId: 
➥'dockerhub']) {    ②  
    sh("docker tag ${service} ${tagToDeploy}")    ③  
    sh("docker push ${tagToDeploy}")
  }
}

When you have that ready, commit and run your build. Jenkins will publish your container to the public Docker registry.

10.2.5 Deploying to staging

At this point, you’ve tested the service internally but in complete isolation; you haven’t interacted with any of the service’s upstream or downstream collaborators. You could deploy directly to production and hope for the best, but you probably shouldn’t. Instead, you can deploy to a staging environment where you can run further automated and manual tests against real collaborators.

You're going to use Kubernetes namespaces to logically segregate your staging and production environments. To deploy your service, you'll use kubectl, taking an approach similar to the one from chapter 9. Rather than installing the tool on Jenkins, you can wrap this command-line tool in a Docker container. This is quite a useful technique.

First, let’s look at your deployment and service definition. You should save the following listing to deploy/staging/market-data.yml within your market-data repo.

Listing 10.6 Deployment specification for market-data

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: market-data
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 50%
      maxSurge: 50%
  template:
    metadata:
      labels:
        app: market-data
        tier: backend
        track: stable
    spec:
      containers:
      - name: market-data
        image: BUILD_TAG    ①  
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 8000
        livenessProbe:
          httpGet:
            path: /ping
            port: 8000
          initialDelaySeconds: 10
          timeoutSeconds: 15
        readinessProbe:
          httpGet:
            path: /ping
            port: 8000
          initialDelaySeconds: 10
          timeoutSeconds: 15

If you compare this with the specification from chapter 9, you'll notice one key difference: you don't set a specific image tag to deploy, only a placeholder of BUILD_TAG. Your pipeline will replace this placeholder with the version you're deploying. This is a little unsophisticated — as you build more complex deployments, you might want to explore higher level templating tools, such as ksonnet (https://ksonnet.io).

You’ll also want to add market-data-service.yml, as shown in the following listing, to the same location.

Listing 10.7 market-data service definition

---
apiVersion: v1
kind: Service
metadata:
  name: market-data
spec:
  type: NodePort
  selector:
    app: market-data
    tier: backend
  ports:
    - protocol: TCP
      port: 8000
      nodePort: 30623

Before you deploy, create distinct namespaces to segregate your workloads, using kubectl:

kubectl create namespace staging
kubectl create namespace canary
kubectl create namespace production

Now, add a deploy stage to your pipeline, as follows.

Listing 10.8 Deployment to staging (Jenkinsfile)

stage('Deploy') {
  sh("sed -i.bak 's#BUILD_TAG#${tagToDeploy}#' ./
➥deploy/staging/*.yml")    ①  

  container('kubectl') {
    sh("kubectl --namespace=staging apply -f deploy/
➥staging/")    ②  
  }
} 

Again, commit and run the build. This time, a Kubernetes deploy should be triggered! You can check the status of this deployment using kubectl rollout status:

$ kubectl rollout status -n staging deployment/market-data
Waiting for rollout to finish: 2 of 3 updated replicas are available... 
deployment "market-data" successfully rolled out

As you can see, although your build was marked as complete, the deployment itself takes some time to roll out. This is because kubectl apply works asynchronously: it doesn't wait for the cluster to finish updating to the new state. If you like, you can add a kubectl rollout status call to your Jenkinsfile so that Jenkins waits for rollouts to complete before proceeding, as in the following sketch.
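This stage is a minimal sketch that reuses the kubectl container from the pod template in listing 10.2; kubectl rollout status blocks until the rollout succeeds or fails, and a failure fails the build.

stage('Wait for rollout') {
  container('kubectl') {
    // Blocks until the staging rollout completes; a failed rollout
    // causes this shell step, and therefore the build, to fail.
    sh("kubectl --namespace=staging rollout status deployment/market-data")
  }
}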

Either way, once the rollout is complete, you can access this service:

$ curl `minikube service --namespace staging --url market-data`/ping
HTTP/1.0 200 OK
Content-Type: text/plain
Server: Werkzeug/0.12.2 Python/3.6.1

This example service doesn’t do much. For your own services, you might trigger further automated testing or perform further exploratory testing of the service and code changes you’ve just deployed. Table 10.2 outlines some of the activities you might perform at this stage of a deployment pipeline. For now, great work — you’ve automated your first microservice deployment!

Table 10.2 Possible activities to perform to validate a staging release of a microservice
Acceptance testing      Automated tests        Running automated tests to check expectations, either regression or acceptance
                        Manual tests           Some services may require manual validation or exploratory testing.
Nonfunctional testing   Security tests         Testing the security posture of the service
                        Load/capacity tests    Validating expectations about capacity and load on a service

10.2.6 Staging environments

Let's take a break for a moment to discuss staging environments. Any new release of a service should go to staging first. Microservices need to be tested together, and production isn't the first place where that should happen.

The infrastructure configuration of your staging environment should be an exact copy of production, albeit with less real traffic. It doesn’t need to run at the same scale. The volume and type of testing you’ll use to put your services through their paces can determine the necessary size. As well as conducting various types of automated testing, you might manually validate services in staging to ensure they meet acceptance criteria.

Along with shared staging environments, you might also run isolated staging environments for individual or small sets of closely related services. Unlike full staging, these environments might be ephemeral and spun up on-demand for the duration of testing. This is useful for testing a feature in relative isolation, with tighter control of the state of the environment. Figure 10.7 compares these different approaches to staging environments.

Although staging environments are crucial, they can be hard to manage in a microservice application and can be a source of significant contention between teams. A microservice might have many dependencies, all of which should be present and stable in full staging. Although a service in staging will have passed testing, code review, and other quality gates, it's still possible that services in staging will be less stable than their production equivalents, and that can cause chaos. Any engineer deploying to a shared environment needs to act as a good neighbor, ensuring that issues with services they own don't substantially impact another team's ability to smoothly test (and therefore deliver) their services.


Figure 10.7 Isolated versus full staging environments

10.2.7 Deploying to production

You can use what you’ve learned so far to take this service to production. Table 10.3 outlines some of the different actions you might perform at this stage of your pipeline.

Table 10.3 Possible activities to perform in deployment
Activity          Description
Code deployment   Deploying code to a runtime environment
Rollback          Rolling back code to a previous version, if errors or unexpected behavior occurs
Smoke tests       Validating the behavior of a system using light-touch tests

In this case, if a deployment to staging is successful, here’s what should happen next:

  1. Your pipeline should wait for human approval to proceed to production.
  2. When you have approval, you’ll release a canary instance first. This helps you validate that your new build is stable when it faces real production traffic.
  3. If you’re happy with the performance of the canary instance, the pipeline can proceed to deploy the remaining instances to production.
  4. If you’re not happy, you can roll back your canary instance.

First, you should add an approval stage. In continuous delivery — unlike continuous deployment — you don’t necessarily want to push every commit immediately to production. Add the following to your Jenkinsfile.

Listing 10.9 Approving a production release

stage('Approve release?') {
  input message: "Release ${tagToDeploy} to production?"
}

Running this code in Jenkins will show a dialog box in the build pipeline view, with two options: Proceed or Abort. Clicking Abort will cancel the build; clicking Proceed will, for now, cause the build to finish successfully — you haven’t added a deploy step!

First, try a production deploy without a canary instance. Copy the YAML files you created earlier, from listings 10.6 and 10.7, to a new deploy/production directory. Feel free to increase the number of replicas you’ll deploy.

Next, add the code in the next listing to your Jenkinsfile, after the approval stage. This is similar to the code you used in staging. Don’t worry about the code duplication for now — you can work on that in a moment.

Listing 10.10 Production release stage

stage('Deploy to production') {
  sh("sed -i.bak 's#BUILD_TAG#${tagToDeploy}#' ./deploy/production/*.yml")

  container('kubectl') {
    sh("kubectl --namespace=production apply -f deploy/production/")
  }
}

As always, commit and run the build in Jenkins. If successful, you’ve released to production! Let’s take this a few steps further and add some code to release a canary instance. But before you add a new stage, let’s DRY up your code a little bit. You can move your release-related code into a separate file called deploy.groovy, as shown in the following listing.

Listing 10.11 deploy.groovy

def toKubernetes(tagToDeploy, namespace, deploymentName) {    ①  
  sh("sed -i.bak 's#BUILD_TAG#${tagToDeploy}#' ./deploy/${namespace}/*.yml")

  container('kubectl') {
    kubectl(namespace, "apply -f deploy/${namespace}/")
  }
}

def kubectl(namespace, command) {    ①  
  sh("kubectl --namespace=${namespace} ${command}")    ②  
}

def rollback(namespace, deploymentName) {
  kubectl(namespace, "rollout undo deployment/${deploymentName}")
}

return this;

Then you can load it in your Jenkinsfile, as shown in the following listing.

Listing 10.12 Using deploy.groovy in your Jenkinsfile

def deploy = load('deploy.groovy')

stage('Deploy to staging') {
  deploy.toKubernetes(tagToDeploy, 'staging', 'market-data')
}

stage('Approve release?') {
  input "Release ${tagToDeploy} to production?"
}

stage('Deploy to production') {
  deploy.toKubernetes(tagToDeploy, 'production', 'market-data')
}

That’s much cleaner. This isn’t the only way to reuse pipeline code — we’ll discuss a better approach in section 10.3.

Next, create a canary deployment file. If you’ve read through chapter 9, you’ll remember that you use a distinct deployment with unique labels to identify this instance. In deploy/canary, create a deployment YAML file like the one you used for production but with three changes:

  1. Add a label track: canary to the pod specification.
  2. Reduce the number of replicas to 1.
  3. Change the name of the deployment to market-data-canary.

After you've added that file, add a new stage to your pipeline, before the production release stage, as shown in the following listing.

Listing 10.13 Canary release stage (Jenkinsfile)

stage('Deploy canary') {
  deploy.toKubernetes(tagToDeploy, 'canary', 'market-data-canary')

  try {
    input message: "Continue releasing ${tagToDeploy} to 
➥production?"    ①  
  } catch (Exception e) {
    deploy.rollback('canary', 'market-data-canary')    ②  
  }
}

In this example, we're assuming human approval for moving from canary to production. In the real world, this might be an automated decision: for example, you could write code that monitors key metrics, such as error rate, for some time after a canary deploy, as in the sketch that follows.
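As a sketch of that idea, the stage below polls a hypothetical metrics endpoint (http://metrics.internal) and rolls the canary back if its error rate breaches an arbitrary 1% threshold; substitute whatever your monitoring system actually exposes. It assumes the deploy.groovy helpers from listing 10.11 are loaded.

stage('Canary analysis') {
  // Let the canary serve real production traffic for a while.
  sleep(time: 5, unit: 'MINUTES')

  // Hypothetical metrics endpoint; replace with your monitoring system's API.
  def errorRate = sh(
    script: "curl -s 'http://metrics.internal/error-rate?deployment=market-data-canary'",
    returnStdout: true
  ).trim().toFloat()

  if (errorRate > 0.01) {
    deploy.rollback('canary', 'market-data-canary')
    error("Canary error rate ${errorRate} breached threshold; rolled back")
  }
}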

Once you’ve committed this code, you should be able to run the whole pipeline. Figure 10.8 illustrates the full journey of your code to production.

Let’s take a breather so you can reflect on what you’ve learned:

  • You’ve automated the delivery of code from commit to production by using Jenkins to build a structured deployment pipeline.
  • You’ve built different stages to validate the quality of that code and provide appropriate feedback to an engineering team.
  • You’ve learned about the importance of — and challenges in operating — a staging environment when developing microservices.

These techniques provide a consistent and reliable foundation for delivering code safely and rapidly to production. This helps ensure overall stability and robustness in a microservice application. But it’s far from ideal if every microservice copies and pastes the same deployment code or reinvents the wheel for every new service. In the next section, we’ll discuss patterns for making deployment approaches reusable across a fleet of services.


Figure 10.8 Successful deployment pipeline from commit to production release

10.3 Building reusable pipeline steps

Microservices enable independence and technical heterogeneity, but these advantages come at a cost:

  • It's harder for developers to move between teams, as tech stacks can differ vastly.
  • It’s more complex for engineers to reason through the behavior of different services.
  • You have to invest more time in different implementations of the same concerns, such as deployment, logging, and monitoring.
  • People may make technical decisions in isolation, risking local, rather than global, optimization.

To balance these risks while maintaining technical freedom and flexibility, you should aggressively standardize the platform and tooling that services operate on. Doing so will ensure that, even if the technology stack changes, common abstractions remain as close as possible across different services. Figure 10.9 illustrates this approach.


Figure 10.9 You can standardize many elements of a microservice application to reduce complexity, increase reuse, and reduce ongoing operational cost.

Over the past few chapters, you’ve applied this thinking in a few areas:

  • Using a microservice chassis to abstract common, nonbusiness logic functionality, such as monitoring and service discovery
  • Using containers — with Docker — as a standardized service artifact for deployment
  • Using a container scheduler — Kubernetes — as a common deployment platform

You also can apply this approach to your deployment pipelines.

10.3.1 Procedural versus declarative build pipelines

The pipeline scripts you’ve written so far have three weaknesses:

  1. Specific — They're tied to a single repository, so other services can't share them.
  2. Procedural — They explicitly describe how you want the build to be carried out.
  3. Leaky — They don't abstract internals: they assume a lot of knowledge about Jenkins itself, such as how to start nodes, run commands, and use command-line tools.

Ideally, a service deployment pipeline should be declarative: an engineer describes what they expect to happen (test my service, release it, and so on), and your framework decides how to execute those steps. This approach also abstracts away changes to how those steps happen: if you want to tweak how a step works, you can change the underlying framework implementation. Abstracting these implementation decisions away from individual services leads to greater consistency across the microservice application.

Compare the following script to the Jenkinsfile you wrote earlier in the chapter.

Listing 10.14 Example declarative build pipeline

service {
  name('market-data')

  stages {
    build()
    test(command: 'python setup.py test', results: 'results.xml')
    publish()
    deploy()
  }
}

This script defines some common configuration (service name) and a series of steps (build, test, publish, deploy) but hides the complexity of executing those steps from a service developer. This allows any engineer to quickly follow best practice to reliably and rapidly take a new service to production.

With Jenkins Pipeline, you can implement declarative pipelines using shared libraries. We won't go into detail in this chapter — not enough pages left! — but this book's GitHub repository includes an example pipeline library (http://mng.bz/P7hD). In addition, the Jenkins documentation (http://mng.bz/p3wz) provides a detailed reference on using shared libraries.
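To give a flavor of how this works, the following sketch shows the entry-point pattern a shared library could use to back the service { ... } block from listing 10.14. The vars/service.groovy location follows Jenkins' shared-library conventions, but the ServiceSpec class and stage bodies are simplified, illustrative placeholders.

// vars/service.groovy -- illustrative entry point for the declarative DSL

class ServiceSpec {
  String serviceName
  void name(String n) { serviceName = n }  // captures name('market-data')
}

def call(Closure body) {
  // Delegate the closure to a spec object, so statements inside the
  // service { ... } block record configuration instead of running directly.
  def spec = new ServiceSpec()
  body.resolveStrategy = Closure.DELEGATE_FIRST
  body.delegate = spec
  body()

  node {
    checkout scm
    stage('Build') {
      sh("docker build -t ${spec.serviceName}:${env.BUILD_NUMBER} .")
    }
    // ...test, publish, and deploy stages would be driven the same way,
    // with a real implementation also capturing the stages { ... } block
  }
}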

10.4 Techniques for low-impact deployment and feature release

Throughout the past few chapters, we’ve used the terms deployment and release interchangeably. But in a microservice application, it’s important to distinguish between the technical activity of deployment — updating the software version running in a production environment — and the decision to release a new feature to customers or consuming services.

You can use two techniques — dark launches and feature flags — to complement your continuous delivery pipeline. These techniques will allow you to deploy new features without impacting customers and provide a flexible mechanism for rollback.

10.4.1 Dark launches

Dark launching is the practice of deploying a service to a production environment significantly prior to making it available to consumers. At our company, we practice this regularly and try to deploy within the first few days of building a new service, regardless of whether it’s feature-complete. Doing this allows us to perform exploratory testing from an early stage, which helps us understand how a service behaves and makes a new service visible to our internal collaborators.

In addition, dark launching to a production environment allows you to test your services against real production traffic. Let’s say that SimpleBank wants to offer a new financial prediction algorithm as a service. By passing production traffic in parallel with the existing service, they can easily benchmark the new algorithm and understand how it performs in the real world, rather than against limited and artificial test scenarios (figure 10.10).

Whether you validate this output manually or automatically depends on the nature of the feature and the volume and distribution of requests required to adequately exhaust possible scenarios. The dark launch approach is also useful for verifying that refactoring doesn't regress sensitive functionality.
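To make the idea concrete, here's a minimal sketch of the parallel-validation technique; the service endpoints are hypothetical placeholders, and the equality check is deliberately naive.

// Replay a request against both the live and the dark-launched service.
// Both URLs are hypothetical; the dark service never affects the caller.
def liveResponse = new URL('http://prediction-v1.internal/predict?account=123').text
def darkResponse = new URL('http://prediction-v2.internal/predict?account=123').text

if (liveResponse != darkResponse) {
  // Record the divergence for offline analysis rather than failing anything.
  println("Dark launch divergence: live=${liveResponse}, dark=${darkResponse}")
}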


Figure 10.10 Dark launches enable validation of new service behavior against real production traffic without exposing features to customers.

10.4.2 Feature flags

Feature flags control the availability of features to customers. Unlike dark launches, you can use them at any point in the lifecycle of a service, such as a feature release. A feature flag (or toggle) wraps a feature in conditional logic, enabling it only for a certain set of users, as in the sketch below. Many companies use them to control rollout; for example, releasing a feature to internal staff first, or progressively increasing the number of users who can access a feature over time.
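To illustrate the pattern, here's a minimal sketch in Groovy; the in-memory FlagStore is a stand-in for a real flag library backed by a persistent store, like those mentioned below.

// Illustrative feature-flag check; FlagStore stands in for a real library.
class FlagStore {
  Map<String, Set<String>> enabledUsers = [:]
  boolean isEnabled(String flag, String userId) {
    enabledUsers.get(flag, [] as Set).contains(userId)
  }
}

def flags = new FlagStore(enabledUsers: ['new-pricing': ['staff-42'] as Set])

// Wrap the feature in conditional logic: only flagged users see it.
if (flags.isEnabled('new-pricing', 'staff-42')) {
  println 'serving the new pricing algorithm'
} else {
  println 'serving the existing pricing algorithm'
}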

Several libraries are available to implement feature flags, such as Flipper (http://github.com/jnunemaker/flipper) or Togglz (http://github.com/togglz/togglz). These libraries typically use a persistent backing store, like Redis, to maintain the state of feature flags for an application. In a larger microservice application, you may find it desirable to have a single feature store to synchronize the rollout of features that involve the interaction of multiple services, rather than independently managing features per service. Figure 10.11 illustrates these different approaches.

Managing features per service is likely to be easier in a small microservice system than in a larger one. As your system grows, centralizing feature configuration in a single service reduces coordination overhead when a feature rollout necessitates changes in multiple microservices.


Figure 10.11 You can store feature flags centrally — owned by one service — or maintain them in separate applications.

By controlling which users see a change, feature flags can aid in minimizing the potential impact of any change to a system, as you have partial control over code execution and feature availability. If errors occur, feature flags often allow for more rapid recovery than typical rollback. For microservices, they can enable safer release of new functionality without adversely affecting service consumers.

Summary

  • A microservice deployment process should meet two goals: safety at pace and consistency.
  • The time it takes to deploy a new service is often a barrier in microservice applications.
  • Continuous delivery is an ideal deployment practice for microservices, reducing risk through the rapid delivery of small, validated changesets.
  • A good continuous delivery pipeline ensures visibility, correctness, and rich feedback to an engineering team.
  • Jenkins is a popular build automation tool that uses a scripting language to tie multiple tools together into a delivery pipeline.
  • Staging environments are invaluable but can be challenging to maintain when they face a high volume of independent change.
  • You can reuse declarative pipeline steps across multiple services; aggressive standardization makes deployment predictable across teams.
  • To provide fine-grained control over rollout and rollback, you should manage the technical activity of deployment separately from the business activity of releasing a feature.