Chapter 8. Automate Deployment

There should be two tasks for a human being to perform to deploy software into a development, test, or production environment: to pick the version and environment and to press the “deploy” button.

Jez Humble and David Farley in Continuous Delivery

Best Practice:

  • Use appropriate tools to automate deployment to various environments.

  • Keep track of deployment times.

This improves the development process because no time is wasted on manual deployment tasks or on errors caused by manual work.

Once your DTAP street is under control, including automated tests that are kicked off in the development pipeline by a CI server, a further step in automation is automated deployment. By this we mean, in its general sense, automatically transferring code from one environment to the next. Because this saves the time otherwise spent on manual configuration steps, deployment times to new environments may drop from hours to minutes.

Consider the difference with Continuous Integration: CI is about automating builds and the tests that go with them, within an environment. Automated deployment is about automating the deployment to each environment, typically after integration (including tests) has succeeded in the previous environment.
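
As a simple illustration of this pipeline idea, the sketch below shows a promotion script that a CI server could run: it deploys a version to the next environment only after the tests in the current one pass. It is a sketch only; the environment names and the run-tests.sh and deploy.sh helper scripts are hypothetical placeholders, not the interface of any particular CI product.

    # Sketch: promote a build through the DTAP street, one environment at a time.
    # The environment names and helper scripts are hypothetical placeholders.
    import subprocess
    import sys

    ENVIRONMENTS = ["development", "test", "acceptance", "production"]

    def tests_pass(environment, version):
        """Run the automated test suite against the given environment."""
        result = subprocess.run(
            ["./run-tests.sh", "--env", environment, "--version", version])
        return result.returncode == 0

    def deploy(environment, version):
        """Deploy the given version to the given environment."""
        subprocess.run(
            ["./deploy.sh", "--env", environment, "--version", version], check=True)

    def promote(version):
        """Move up the street, stopping as soon as tests fail somewhere."""
        for current, nxt in zip(ENVIRONMENTS, ENVIRONMENTS[1:]):
            if not tests_pass(current, version):
                sys.exit(f"Tests failed on {current}; not promoting {version} to {nxt}")
            deploy(nxt, version)

    if __name__ == "__main__":
        promote(sys.argv[1])  # e.g., promote.py 1.4.2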

Motivation

Compared with manual deployment, automated deployment is more reliable, faster, and more flexible, and it simplifies root cause analysis.

Automated Deployment Is Reliable

Ideally, deployment automation requires no manual intervention at all. In that case, once a build is finished and fully tested, the code is automatically pushed to the next environment. Such a process is typically controlled by a CI server. Because it is automated and repeatable, it minimizes the number of (manual) mistakes and errors.

Automated Deployment Is Fast and Efficient

Automated processes run faster than manual ones, and when they need to be executed repeatedly, the initial investment in automating them is quickly earned back. Typically, deployment is the specialty of deployment/infrastructure experts who know their way around different environments: there is always “that one person” who knows how to do it. When the process is automated, such expertise is only needed for significant changes (in the deployment scripts) and for root cause analysis of problems.

Automated Deployment Is Flexible

By extension of its reliability and efficiency, automated deployment makes a system more portable (that is, easier to move to another environment). This is relevant because systems may change infrastructure multiple times in their lifecycle (mainly the production environment). The causes vary, but often they involve scalability (dealing with temporarily high loads), transfer of a hosting contract to a new party, cost reduction, or strategic decisions (moving infrastructure to the cloud or outsourcing it altogether).

Automated Deployment Simplifies Root Cause Analysis

Given that automation makes for a reliable (or at least consistent) process, finding the cause of problems is easier, because human error is largely excluded. Causes are then more likely to be found in changed circumstances (e.g., different frameworks, data, or configurations) that have somehow rendered the automation process outdated.

How to Apply the Best Practice

In this section, we discuss specifics on how to achieve automated deployment. The main principle behind this is that you need to think ahead about what steps need to be performed in the pipeline, and what that requires (e.g., in terms of configuration).

You will need at least the following controls:

Define your environments clearly

Define for each environment its goals and intended usage. An inconsistent or unstable environment may disrupt your deployment process and make root cause analysis difficult. It is preferable to use virtual environments that are stateless in the sense that they clean themselves up after use. By “reinstalling” them after use, you do not need to make assumptions about their configuration. See Chapter 5 for the separation of different environments.
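
As a minimal sketch of such a stateless, throwaway environment, the snippet below runs tests inside a fresh container that is discarded afterward, so no assumptions about leftover state are needed. It assumes Docker is available; the image name and test command are hypothetical placeholders.

    # Sketch: run tests in a throwaway container that is rebuilt from a known
    # image every time and removed afterward (no leftover state).
    # The image name and test command are hypothetical placeholders.
    import subprocess

    def run_in_clean_environment(image, command):
        """Start a fresh container, run the command, and remove the container."""
        result = subprocess.run(["docker", "run", "--rm", image] + command)
        return result.returncode

    if __name__ == "__main__":
        exit_code = run_in_clean_environment(
            "registry.example.org/myteam/acceptance-env:latest",  # hypothetical image
            ["./run-tests.sh", "--env", "acceptance"],             # hypothetical command
        )
        raise SystemExit(exit_code)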

Define all necessary steps

What special needs does the system have for deployment? Are different versions run in production? If so, how do they vary? Use such assumptions and invariants for tests that check whether configuration and deployment are set up as intended.
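<br/>
Such assumptions can be captured in a small post-deployment check. The sketch below verifies that the deployed version and configuration are as intended; the health endpoint URL and the JSON fields it returns are hypothetical placeholders.

    # Sketch: post-deployment check of deployment and configuration assumptions.
    # The URL, JSON fields, and expected values are hypothetical placeholders.
    import json
    import sys
    import urllib.request

    HEALTH_URL = "https://acceptance.example.org/health"  # hypothetical endpoint

    def check_deployment(expected_version):
        with urllib.request.urlopen(HEALTH_URL, timeout=10) as response:
            status = json.load(response)
        assert status["version"] == expected_version, (
            f"expected version {expected_version}, found {status['version']}")
        assert status["database"] == "connected", "database is not reachable"

    if __name__ == "__main__":
        check_deployment(sys.argv[1])  # e.g., check_deployment.py 1.4.2
        print("Deployment checks passed")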

Get your configuration right

The more uniform development environments are, the better. Script configuration defaults in a uniform manner, so that an environment can be built/rebuilt quickly without human intervention. Your tooling should include provisioning functionality that checks (third-party code) dependencies and installs their appropriate versions.
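
A minimal sketch of such a provisioning check follows. It assumes third-party dependencies are pinned with exact versions in a requirements.txt-style file, compares those pins with what is installed, and (re)installs packages where the versions differ.

    # Sketch: verify that pinned third-party dependencies are installed in the
    # expected versions, and (re)install them when they are not.
    # Pinning dependencies in requirements.txt is an assumption of this sketch.
    import subprocess
    import sys
    from importlib import metadata

    def check_and_install(requirements_file="requirements.txt"):
        with open(requirements_file) as handle:
            pins = [line.strip() for line in handle if "==" in line]
        for pin in pins:
            name, expected = pin.split("==")
            try:
                installed = metadata.version(name)
            except metadata.PackageNotFoundError:
                installed = None
            if installed != expected:
                print(f"{name}: found {installed}, installing {expected}")
                subprocess.run([sys.executable, "-m", "pip", "install", pin], check=True)

    if __name__ == "__main__":
        check_and_install()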

Make sure you can do rollbacks early

In case of unexpected issues in production, such as performance issues or security bugs, you need to be able to quickly revert to an earlier version of your system. Also allow for a rollback during the process itself, for example when the deployment freezes and you want to return to the initial state.
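
One common way to keep rollbacks cheap is to deploy each version into its own release directory and point a “current” symlink at the active one; rolling back is then just re-pointing that symlink. The sketch below illustrates the idea; the /opt/myapp paths are hypothetical placeholders.

    # Sketch: symlink-based releases, so a rollback is a single symlink switch.
    # The /opt/myapp paths are hypothetical placeholders.
    import os

    RELEASES_DIR = "/opt/myapp/releases"
    CURRENT_LINK = "/opt/myapp/current"

    def switch_to(version):
        """Atomically point the 'current' symlink at the given release."""
        temp_link = CURRENT_LINK + ".new"
        if os.path.lexists(temp_link):
            os.remove(temp_link)
        os.symlink(os.path.join(RELEASES_DIR, version), temp_link)
        os.replace(temp_link, CURRENT_LINK)  # atomic rename on POSIX filesystems

    def rollback():
        """Point 'current' back at the release that preceded the active one."""
        releases = sorted(os.listdir(RELEASES_DIR))  # assumes sortable version names
        active = os.path.basename(os.path.realpath(CURRENT_LINK))
        position = releases.index(active)
        if position == 0:
            raise RuntimeError("no previous release to roll back to")
        switch_to(releases[position - 1])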

Use deployment-specialized tooling

Tooling will help you supervise and configure deployment in a consistent manner (e.g., Chef). When portability to other environments is of special interest, tooling that “containerizes” your application is especially useful (e.g., Docker). Such tooling can package deployment configurations, required software components, and other dependencies in such a way that testing in different environments can be done consistently. This benefits you if you intend to use, for example, different operating systems or different kinds of servers.
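
As an illustration of driving such tooling from a deployment script, the sketch below builds a container image for a given version and pushes it to a registry, so that every environment deploys the same packaged artifact. It assumes Docker; the registry and image name are hypothetical placeholders.

    # Sketch: package the application as a container image and push it to a
    # registry, so every environment deploys the same artifact.
    # The registry and image name are hypothetical placeholders.
    import subprocess
    import sys

    IMAGE = "registry.example.org/myteam/myapp"  # hypothetical image name

    def build_and_push(version):
        tag = f"{IMAGE}:{version}"
        subprocess.run(["docker", "build", "-t", tag, "."], check=True)
        subprocess.run(["docker", "push", tag], check=True)

    if __name__ == "__main__":
        build_and_push(sys.argv[1])  # e.g., build_and_push.py 1.4.2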

Measuring the Deployment Process

With deployment automation, you would expect deployment times between environments to decrease. By extension, you can also expect overall delivery time (from development to production) to decrease. With added reliability you would also expect a decrease in the frequency of deployment errors and in the time needed to analyze them. The main interest, then, is knowing whether deployment automation is helping you achieve improved productivity and reliability. Consider the following GQM model:

  • Goal A: Understand the impact of automated deployment on development effort and reliability of the deployment process.

    • Question 1: How much time are we gaining by automating deployment?

      • Metric 1a: Difference between (former) average time spent and current time spent on deployment from acceptance (after a manual or automatic “go”) to production. Expect an upward trend in the beginning because of the investment in writing deployment scripts, testing them, and setting up tooling. Then expect a downward trend as the scripts and process become more solid and dependable. If the savings are marginal, consider where most manual work is still required (e.g., troubleshooting, fine-tuning, error analysis) and focus on automating those parts.

      • Metric 1b: Amount of time it takes for code to move from development to production (as measured by the CI server/deployment tooling). When all tests (including acceptance tests) are automated, the gains in time will be apparent when you compare this metric with a process in which each step needs manual intervention. Expect a downward trend over time as the team gains experience with automating and less manual work is needed.

      • Metric 1c: Percentage of overall development time spent on deployment. By automating, expect this percentage to decrease over time. If absolute delivery time decreases together with this percentage (that is, the effort for deployment relative to development as a whole), you can assume you have implemented automated deployment well.

Consider a situation in which the average deployment time (the baseline) has been identified at 8 hours of effort and you want deployment time on average to stay below the baseline. A trend report may then look like Figure 8-1.

Figure 8-1. Deployment time after automation (in week 10, deployment scripts are ready and only minor fixes need to be performed)
  • Goal A (continued):

    • Question 2: Is our deployment process more reliable after automating it?

      • Metric 2a: Number of deployment issues. Expect this number to drop. If you went from manual to automated deployment, you may typically expect a significant decrease in the number of bugs arising during deployment, simply because human error is much less of an issue. If this change is not visible, your deployment process may not be stable yet, which calls for issue analysis. Most probably the deployment scripts are still unstable (faulty) or insufficiently tested. Deeper issues could be that the production environment is inconsistent over time or inconsistent compared to the test environments. You would likely have identified that earlier, because it will distort other measurements as well.

      • Metric 2b: Amount of time spent on solving deployment bugs. Expect a downward trend. This metric should follow the trend of the previous metric: having fewer issues means less time fixing them. If this is not the case, bugs apparently are harder to fix when they do appear, and you should investigate why.
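
To make these metrics concrete, the sketch below computes metric 1a and metric 2a from a simple log of deployments. The CSV format and its column names are assumptions made for illustration; in practice the data would come from your CI server or deployment tooling.

    # Sketch: compute deployment metrics from a simple log of deployments.
    # The CSV columns (week, hours_spent, issues) are hypothetical; real data
    # would come from the CI server or deployment tooling.
    import csv
    from statistics import mean

    BASELINE_HOURS = 8.0  # former average deployment effort (the baseline)

    def deployment_metrics(log_file):
        with open(log_file, newline="") as handle:
            rows = list(csv.DictReader(handle))
        hours = [float(row["hours_spent"]) for row in rows]
        issues = [int(row["issues"]) for row in rows]
        print(f"Metric 1a: average effort {mean(hours):.1f}h against a "
              f"baseline of {BASELINE_HOURS}h")
        print(f"Metric 2a: {sum(issues)} deployment issues over "
              f"{len(rows)} deployments")

    if __name__ == "__main__":
        deployment_metrics("deployments.csv")  # hypothetical log file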

Common Objections to Deployment Automation Metrics

Common objections to deployment automation (and its measurement) are that it creates unnecessary overhead, that automated deployment to production is prohibited, or that the developer’s technology of choice cannot be automatically deployed.

Objection: Single Platform Deployment Does Not Need Automation

“We only deploy on a single platform so there is no gain in automating our deployment process.”

While it is generally easier to deploy on a single platform than on several, the benefits of deployment automation still hold. Automated deployment will still be faster than manual deployment, it makes issue analysis easier (more structured), and it allows for flexibility such as switching deployment platforms. Just like code itself, platform requirements will change over time, though not that often. Consider deployment platform updates, new operating systems, switches to mobile applications, cloud hosting, 32-bit versus 64-bit systems, and so on.

Objection: Time Spent on Fixing Deployment Issues Is Increasing

“After automating deployment, we now spend more time on fixing deployment bugs!”

Automating deployment requires a considerable investment, and time spent on fixing deployment issues is initially part of the deal. Gain experience by doing it more often: that experience will ease the process and solidify the scripts. This is one of the reasons why releases should be done regularly and frequently, given that tests of sufficient width and depth add certainty about the system’s behavior.

Objection: We Are Not Allowed to Deploy in Production By Ourselves

“For security reasons, we cannot deploy new releases of our software without permission of our user representatives.”

This objection concerns the last step in the development pipeline in which code is pushed to production. However, the advantages of automating the pushes between development, test, and acceptance environments should still hold. When your team and the user representatives gain positive experience with the effects of deployment automation, eventually this last push may also be (allowed to be) more automated.

In practice we see that some large organizations use acceptance and production environments that are controlled by different parties or departments. These environments typically require strict change requests, out of concern that changes hurt KPIs such as uptime and the number of incidents. Although deployment automation and test automation should help manage those feared consequences, such a situation may be outside the development team’s span of control.

Objection: No Need to Automate Because of Infrequent Releases

“We only release a few times a year so there is little to gain by automating deployment.”

Clearly, this should not be the case when you are practicing Agile development. We often hear this argument with legacy systems, though; those are the archetypes of “anti-continuous-deployment.” This situation is likely due to a combination of technology/system properties, available expertise, and constrained resources. The more infrequent your deployments, the more likely they are to be difficult, dependent on expertise, and thus expensive. Because of these costs, the business may not be willing to pay (“releases are really expensive, which is why we only have three of them”), which reinforces the cycle.

The “right solution” is to do it more often, to “diminish the pain of the experience.” But when technical or organizational constraints force you to release only a few times a year, you may have bigger problems to solve in your development process.

Objection: Automating Deployment Is Too Costly

“It is very hard to get all the people and resources together to automate the deployment.”

It could be that your development involves multiple disparate systems that require manual steps for deployment. This is a particular type of complexity in which each system may require specific knowledge or skills, so automating those steps, and therefore the process, can seem costly in terms of resources. But it should also be evident that this complexity can benefit greatly from automation. Moreover, most systems can be automated at least partially; those parts should be your initial focus, to reap the immediate benefits.

Metrics Overview

As a recap, Table 8-1 shows an overview of the metrics discussed in this chapter, with their corresponding goals.

Table 8-1. Summary of metrics and goals in this chapter
Metric # in text | Metric description                                                | Corresponding goal
AD 1a            | Deployment time acceptance to production compared with baseline  | Deployment effort and reliability
AD 1b            | Deployment time to production                                     | Deployment effort and reliability
AD 1c            | Percentage of time spent on deployment                            | Deployment effort and reliability
AD 2a            | Number of deployment issues                                       | Deployment effort and reliability
AD 2b            | Time spent on fixing deployment issues                            | Deployment effort and reliability

This is the last chapter on automation of the development process. After you have automated testing, integration, and deployment, you are right on track to deliver software (almost) continuously. In the subsequent chapters, we will look at best practices in software development from an organizational perspective. This includes practices for standardization (Chapter 9), managing usage of third-party code (Chapter 10), and proper yet minimal documentation (Chapter 11).
