Chapter 7
Deployment as Code

THE AWS CERTIFIED DEVELOPER – ASSOCIATE EXAM TOPICS COVERED IN THIS CHAPTER MAY INCLUDE, BUT ARE NOT LIMITED TO, THE FOLLOWING:

  • Domain 1: Deployment
  • ✓ 1.1 Deploy written code in AWS using existing CI/CD pipelines, processes, and patterns.
  • ✓ 1.3 Prepare the application deployment package to be deployed to AWS.
  • ✓ 1.4 Deploy serverless applications.
  • Domain 3: Development with AWS services
  • ✓ 3.4 Write code that interacts with AWS services by using Application Programming Interfaces (APIs), Software Development Kits (SDKs), and AWS Command Line Interface (CLI).

Introduction to AWS Code Services

In the previous chapter, you learned about deploying code packages to AWS Elastic Beanstalk. This is a great way to migrate existing applications to highly available, fault-tolerant infrastructure. As your experience with Amazon Web Services (AWS) deployment increases over time, you may find a need to customize your deployment workflow further than what is supported within a single service. AWS provides a number of deployment services designed for flexibility, empowering customers with complex infrastructure and application deployment requirements.

This chapter introduces the AWS “Code” services. These services form the foundation of a repeatable application, infrastructure, and configuration deployment process. As each service is explained, you will see how it fits into an “enterprise as code” philosophy, in which each aspect of an enterprise is deployed, configured, and maintained over time via versioned code. (This includes the process to deploy code itself.) The primary components of an enterprise as code are application, infrastructure, and configuration, though you can take advantage of many more, such as monitoring, compliance, and audit practices.

Continuous Delivery with AWS CodePipeline

The AWS “Code” services lay the foundation to deploy different parts of an enterprise starting from a source repository. You start with AWS CodePipeline to create a continuous integration/continuous delivery (CI/CD) pipeline that integrates various sources, tests, deployments, or other components. AWS CodeCommit serves as a source for AWS CodePipeline, acting as the initialization point of your deployment process. AWS CodeBuild allows you to pull code and packages from various sources to create publishable build artifacts. Lastly, AWS CodeDeploy allows you to deploy compiled artifacts to infrastructure in your environment. AWS CodePipeline is not limited to deploying application code; it can also be used to provision, configure, and manage infrastructure.

In a fully realized enterprise as code, a single commit to a source repository can kick off processes, such as those shown in Figure 7.1.


Figure 7.1 Branch view

Benefits of Continuous Delivery

Organizations can realize a number of benefits from automating the process of testing and preparing software changes. First, less manual effort is required to ensure code changes are tested prior to release. Because tests are automated, they run consistently against every change made to a code repository.

Second, developers are no longer tasked with completing steps other than checking in code changes. After the change has been pushed to a source repository, initiation of the build/test process automatically begins. This allows the developers to focus on what they do best: develop software.

Third, because changes are tested immediately after check-in, more bugs are caught earlier in the development process. The effort and cost to remediate errors increase the further those errors progress through the release process.

Lastly, continuous delivery ensures that quality changes are delivered faster, which increases quality while decreasing time to market.

Using AWS CodePipeline to Automate Deployments

AWS CodePipeline is a continuous integration and continuous delivery service for fast and reliable application and infrastructure updates. AWS CodePipeline builds, tests, and deploys your code every time there is a code change, based on the release process models you define. This enables you to deliver features and updates rapidly and reliably. You can easily build an end-to-end solution with prebuilt plugins for popular third-party services like GitHub, or you can integrate your own custom plugins into any stage of your release process. With AWS CodePipeline, you pay only for what you use. There are no up-front fees or long-term commitments.

What Is AWS CodePipeline?

AWS CodePipeline is the underpinning of CI/CD processes in AWS. Because you define your delivery workflow as a set of stages and actions, multiple changes can be run simultaneously through the same set of processing steps every time. In Figure 7.2, the development team is responsible for committing changes to a source repository. AWS CodePipeline automatically detects the change and moves it into the source stage. The code change (revision) passes to the build stage, where changes are built into a package or product ready for deployment. The revision then deploys to a staging environment, where users can manually review the functionality that the changes introduce or modify. Before final production release, an authorized user provides a manual approval. After production release, further code changes can reliably pass through the same pipeline.


Figure 7.2 AWS CodePipeline workflow

AWS CodePipeline provides a number of built-in integrations to other AWS services, such as AWS CloudFormation, AWS CodeBuild, AWS CodeCommit, AWS CodeDeploy, Amazon Elastic Container Service (ECS), Elastic Beanstalk, AWS Lambda, AWS OpsWorks Stacks, and Amazon Simple Storage Service (Amazon S3). Some partner tools include GitHub (https://github.com) and Jenkins (https://jenkins.io). Customers also have the ability to create their own integrations, which provides a great degree of flexibility.

You define workflow steps through a visual editor within the AWS Management Console or via a JavaScript Object Notation (JSON) structure for use in the AWS CLI or AWS SDKs. Access to create and manage release workflows is controlled by AWS Identity and Access Management (IAM). You can grant users fine-grained permissions, controlling what actions they can perform and on which workflows.
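As a sketch of how this works in practice, you can retrieve a pipeline's JSON definition with the AWS CLI, edit it locally, and push the changes back (the pipeline name MyFirstPipeline is a placeholder, and the commands assume configured AWS credentials):

```shell
# Fetch the JSON definition of an existing pipeline (name is hypothetical)
aws codepipeline get-pipeline --name MyFirstPipeline > pipeline.json

# Edit pipeline.json as needed, then delete the "metadata" section that
# get-pipeline includes, because update-pipeline does not accept it.

# Push the edited definition back to AWS CodePipeline
aws codepipeline update-pipeline --cli-input-json file://pipeline.json
```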

AWS CodePipeline provides a dashboard where you can review real-time progress of revisions, attempt to retry failed actions, and review version information about revisions that pass through the pipeline.

AWS CodePipeline Concepts

There are a number of different components that make up AWS CodePipeline and the workflows (pipelines) created by customers. Figure 7.3 displays the AWS CodePipeline concepts.


Figure 7.3 Pipeline structure

Pipeline

A pipeline is the overall workflow that defines what transformations software changes will undergo.

Note: You cannot change the name of a pipeline. If you would like to change the name, you must create a new pipeline.

Revision

A revision is the work item that passes through a pipeline. It can be a change to your source code or data stored in AWS CodeCommit or GitHub or a change to the version of an archive in Amazon S3. A pipeline can have multiple revisions flowing through it at the same time, but a single stage can process one revision at a time. A revision is immediately picked up by a source action when a change is detected in the source itself (such as a commit to an AWS CodeCommit repository).

Note: If you use Amazon S3 as a source action, you must enable versioning on the bucket.

Details of the most recent revision to pass through a stage are kept within the stage itself and are accessible from the console or AWS CLI. To see the last revision that was passed through a source stage, for example, you can select the revision details at the bottom of the stage, as shown in Figure 7.4.


Figure 7.4 Source stage

Depending on the source type (Amazon S3, AWS CodeCommit, or GitHub), additional information will be accessible from the revision details pane (such as a link to the commit on https://github.com), as shown in Figure 7.5.


Figure 7.5 Revision details

Stage

A stage is a group of one or more actions. Each stage must have a unique name. Should any one action in a stage fail, the entire stage fails for this revision.

Action

An action defines the work to perform on the revision. You can configure pipeline actions to run in series or in parallel. If all actions in a stage complete successfully for a revision, it passes to the next stage in the pipeline. However, if one action fails in the stage, the revision will not pass further through the pipeline. At this point, the stage that contains the failed action can be retried for the same revision. Otherwise, a new revision is able to pass through the stage.

Note: A pipeline must have two or more stages. The first stage includes one or more source actions only, and only the first stage may include source actions.

Warning: Every action in the same stage must have a unique name.

Source

The source action defines the location where you store and update source files. Modifications to files in a source repository or archive trigger deployments to a pipeline. AWS CodePipeline supports these sources for your pipeline:

  • Amazon S3
  • AWS CodeCommit
  • GitHub

Note: A single pipeline can contain multiple source actions. If a change is detected in one of the sources, all source actions will be invoked.

To use GitHub as a source provider for AWS CodePipeline, you must authenticate to GitHub when you create a pipeline. You provide GitHub credentials to authorize AWS CodePipeline to connect to GitHub to list and view repositories accessible by the authenticating account. For this link, AWS recommends that you create a service account user so that the lifecycle of personal accounts is not tied to the link between AWS CodePipeline and GitHub.

After you authenticate to GitHub, a link is created between AWS CodePipeline in this AWS Region and GitHub. This allows IAM users to list repositories and branches accessible by the authenticated GitHub user.

Build

You use a build action to define tasks such as compiling source code, running unit tests, and performing other tasks that produce output artifacts for later use in your pipeline. For example, you can use a build stage to import large assets that are not part of the source bundle into the artifact that you deploy to Amazon Elastic Compute Cloud (Amazon EC2) instances. AWS CodePipeline supports integrations for the following build actions:

  • AWS CodeBuild
  • CloudBees
  • Jenkins
  • Solano CI
  • TeamCity

Test

You can use test actions to run various tests against source and compiled code, such as lint or syntax tests on source code and functional tests against compiled, running applications. AWS CodePipeline supports the following test integrations:

  • AWS CodeBuild
  • BlazeMeter
  • Ghost Inspector
  • Hewlett Packard Enterprise (HPE) StormRunner Load
  • Nouvola
  • Runscope

Deploy

The deploy action is responsible for taking compiled or prepared assets and installing them on instances, on-premises servers, or serverless functions, or for deploying and updating infrastructure with AWS CloudFormation templates. The following services are supported as deploy actions:

  • AWS CloudFormation
  • AWS CodeDeploy
  • Amazon Elastic Container Service
  • AWS Elastic Beanstalk
  • OpsWorks Stacks
  • Xebia Labs

Approval

An approval action is a manual gate that controls whether a revision can proceed to the next stage in a pipeline. Further progress by a revision is halted until a manual approval by an IAM user or IAM role occurs.

Note: Specifically, the codepipeline:PutApprovalResult action must be included in the IAM policy.

Upon approval, AWS CodePipeline allows the revision to proceed to the next stage in the pipeline. However, if the revision is not approved (it is rejected or the approval expires), the revision halts and does not progress further through the pipeline. The purpose of this action is to allow manual review of the code or other quality assurance tasks prior to moving further down the pipeline.

Note: Approval actions cannot occur within source stages.

You must approve actions manually within seven days; otherwise, AWS CodePipeline rejects the revision. When an approval action is rejected, the outcome is equivalent to a failed stage. You can retry the action, which initiates the approval process again. Approval actions provide several options that you can use to provide additional information about what you choose to approve.

Publish approval notifications Amazon Simple Notification Service (Amazon SNS) sends notices to one or more targets that approval is pending.

Specify a Universal Resource Locator (URL) for review You can include a URL in the approval action notification, for example, to review a website published to a fleet of test instances.

Enter comments for approvers You can add additional comments in the notifications for the reviewer’s reference.
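An approver with the codepipeline:PutApprovalResult permission can record the decision from the console or the AWS CLI. A sketch follows; the pipeline, stage, and action names are placeholders, and the token value comes from the get-pipeline-state output:

```shell
# Look up the token for the pending approval action
aws codepipeline get-pipeline-state --name MyFirstPipeline

# Approve the revision (names and token are hypothetical)
aws codepipeline put-approval-result \
    --pipeline-name MyFirstPipeline \
    --stage-name Production \
    --action-name ManualApproval \
    --result summary="Reviewed and approved",status=Approved \
    --token 1a2b3c4d-5e6f-7a8b-9c0d-example11111
```

To reject instead, set status=Rejected in the --result parameter.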

Invoke

The invoke action lets you customize your pipeline by leveraging the power and flexibility of AWS Lambda. Invoke actions execute AWS Lambda functions, which allows arbitrary code to be run as part of the pipeline execution. Uses for custom actions in your pipeline can include the following:

  • Backing up data volumes, Amazon S3 buckets, or databases
  • Interacting with third-party products, such as posting messages to Slack channels
  • Running through test interactions with deployed web applications, such as executing a test transaction on a shopping site
  • Updating IAM Roles to allow permissions to newly created resources

Tip: When you deploy changes to multiple AWS Elastic Beanstalk environments, for example, you can use an invoke action with AWS Lambda to swap the environment CNAMEs (SwapEnvironmentCNAMEs). This effectively implements blue/green deployments via AWS CodePipeline.

Artifact

Artifacts are the files or sets of files that actions act on or produce. Artifacts pass between actions and stages in a pipeline to provide a final result or version of the files. For example, an artifact produced by a build action would be deployed to Amazon EC2 during a deploy action.

Note: Multiple actions in a single pipeline cannot output artifacts with the same name.

Every stage makes use of the Amazon S3 artifact bucket that you define when you create the pipeline. Depending on the type of action(s) in the stage, AWS CodePipeline will package the output artifact. For example, the output artifact of a source action would be an archive (.zip) containing the repository contents, which would then act as the input artifact to a build action.

For an artifact to transition between stages successfully, the artifact names must line up between actions. In Figure 7.6, the output artifact name for the source action must match the input artifact name for the corresponding build action.


Figure 7.6 Artifact transition

Transition

Transitions connect stages in a pipeline and define which stages should transition to one another. When all actions in a stage complete successfully, the revision passes to the next stage(s) in the pipeline.

You can manually disable transitions, which stops all revisions in the pipeline once they complete the preceding stage (successfully or unsuccessfully). Once you enable the transition again, the most recent successful revision resumes. Other previous successful revisions will not resume through the pipeline at this time. This concept also applies to stages that are not yet available by the time the next revision completes. If more than one revision completes while the next stage is unavailable, they will be batched. This means that the most current revision will continue through the pipeline once the next stage becomes available.
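Transitions can be toggled from the console or the AWS CLI. The following sketch uses hypothetical pipeline and stage names:

```shell
# Disable the inbound transition into the Production stage
aws codepipeline disable-stage-transition \
    --pipeline-name MyFirstPipeline \
    --stage-name Production \
    --transition-type Inbound \
    --reason "Holding releases during maintenance window"

# Re-enable it later; the most recent successful revision then resumes
aws codepipeline enable-stage-transition \
    --pipeline-name MyFirstPipeline \
    --stage-name Production \
    --transition-type Inbound
```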

Managing Approval Actions

Approval actions halt further progress through a pipeline until an authorized IAM user or IAM role approves the transition. You can use approvals to review changes manually before final release into production, or as a code review step.

Figure 7.7 shows a pipeline with three stages: Source, Staging, and LambdaStage. The Source stage contains a source action referencing an Amazon S3 bucket. The source action has already completed and passed the source artifact to Staging. In Staging, the deploy action deploys the source artifact to Amazon EC2 with AWS CodeDeploy. If this action completes successfully, the LambdaStage stage begins, which also deploys to Amazon EC2 via AWS CodeDeploy.


Figure 7.7 Full pipeline

AWS CodePipeline Service Limits

Table 7.1 lists the AWS CodePipeline service limits.

Table 7.1 AWS CodePipeline Service Limits

Limit                         Value
Pipelines per region          US East (N. Virginia) (us-east-1): 40
                              US West (Oregon) (us-west-2): 60
                              EU (Ireland) (eu-west-1): 60
                              Other supported regions: 20
Stages per pipeline           Minimum: 2; Maximum: 10
Actions per stage             Minimum: 1; Maximum: 20
Parallel actions per stage    Maximum: 10
Sequential actions per stage  Maximum: 10
Maximum artifact size         Amazon S3 source: 2 GB
                              AWS CodeCommit source: 1 GB
                              GitHub source: 1 GB

Warning: When you deploy to AWS CloudFormation, the maximum artifact size is 256 MB.

AWS CodePipeline Tasks

The remainder of this section will focus on the tasks you need to build and execute a simple pipeline and how to outline the requirements to build cross-account pipelines. This concept is particularly important for organizations that have multiple AWS accounts, especially when you separate environments across accounts (such as Account A for development, Account B for Quality Assurance [QA], and Account C for production), as AWS CodePipeline will need access to resources in each account to automate deployments successfully.

Note: Before you start the next steps, make sure that you have an IAM user with an access key and secret access key and that the user has sufficient AWS CodePipeline permissions.

Create an AWS CodePipeline

It is best to name your pipeline something meaningful, such as Dev_S3_Bucket. After you select a source provider (Amazon S3, AWS CodeCommit, or GitHub), you provide the source details. For Amazon S3, you must enter a full object path, which corresponds to the .zip archive that will be tracked for changes.

When you select Amazon S3, AWS CodePipeline creates an Amazon CloudWatch Events rule, an IAM role, and an AWS CloudTrail trail. These are the default mechanisms that notify AWS CodePipeline of changes to the source archive. You can instead configure AWS CodePipeline to check regularly for changes; this, however, provides a slower update experience.
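A minimal sketch of preparing an Amazon S3 source follows; the bucket and key names are hypothetical, and versioning is required for S3 source actions:

```shell
# Enable versioning on the source bucket (required for S3 source actions)
aws s3api put-bucket-versioning \
    --bucket my-pipeline-source-bucket \
    --versioning-configuration Status=Enabled

# Package the application and upload it; each new object version
# becomes a new revision for the pipeline to process
zip -r app.zip src/
aws s3 cp app.zip s3://my-pipeline-source-bucket/app.zip
```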

You select AWS CodeBuild, Jenkins, or Solano CI for a build provider.

Note: The Jenkins build provider requires you to install the AWS CodePipeline plugin for Jenkins on the server.

The Solano CI build provider requires authentication to GitHub with a valid user. After authenticating to GitHub, you must authenticate to Solano CI.

If you do not select a build provider, you must select a deployment provider (if you select a build provider, the deployment step is optional). Skipping deployment is useful if you want the output of the pipeline execution to be a finished build artifact, as is the case with custom media transcoding via AWS CodeBuild. The available providers for the deployment stage are Amazon ECS, AWS CloudFormation, AWS CodeDeploy, AWS Elastic Beanstalk, and AWS OpsWorks Stacks.

AWS Elastic Beanstalk allows customers to automate deployment of application archives to one or more Amazon EC2 instances. It also handles health checks, load balancing, log gathering, and other important tasks automatically. Since it requires a bundled application archive to upload to instances for deployment, it is a natural fit for AWS CodePipeline, which provides artifacts as archives. To deploy to AWS Elastic Beanstalk from AWS CodePipeline, simply provide the application and environment name.

Note: For deployment to AWS Elastic Beanstalk, the maximum application archive size is 512 MB. The deployment artifact must not exceed this size, or the deployment will fail.

You select a service role for AWS CodePipeline to access AWS resources within your account. You can select an existing IAM role or create a new role.

Note: You can only select IAM roles with a trust policy that allows AWS CodePipeline to assume them.

Start a Pipeline

After you create a pipeline, any update to the source repository or archive causes the pipeline to begin execution automatically. To rerun the pipeline for the most recent revision, select Release Change in the AWS CodePipeline console, or invoke the aws codepipeline start-pipeline-execution AWS CLI command.

aws codepipeline start-pipeline-execution --name SamplePipeline
Retry a Failed Action

If a pipeline action fails for any reason, you can retry that action on the same revision in the console or use the aws codepipeline retry-stage-execution AWS CLI command. However, there are certain situations where a failed action may become ineligible for retries.

  • The pipeline itself has changed after the action failed.
  • Other actions in the same stage have not completed.
  • The retry attempt is already in progress.
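When a failed action is still eligible, you can retry it with the AWS CLI. A sketch follows; the pipeline name, stage name, and execution ID are placeholders:

```shell
# Find the execution ID of the failed run
aws codepipeline get-pipeline-state --name MyFirstPipeline

# Retry only the failed actions in the Staging stage
aws codepipeline retry-stage-execution \
    --pipeline-name MyFirstPipeline \
    --stage-name Staging \
    --pipeline-execution-id 12345678-90ab-cdef-1234-567890abcdef \
    --retry-mode FAILED_ACTIONS
```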

Create a Cross-Account Pipeline

In some architectures, environments may be spread across two or more AWS accounts. You can implement a single CI/CD workflow with AWS CodePipeline that interacts with resources in multiple AWS accounts.

Note: If an organization has separate accounts for development, test, and production workloads, you can leverage one pipeline to deploy to resources in all three. To do so, you must create and share several components between accounts.

Note: A source action of Amazon S3 cannot reference buckets in accounts other than the pipeline account.

Pipeline Account Steps

In the following steps, the account that contains the pipeline will be the pipeline account. The account to deploy resources will be the target account.

  1. Create an AWS Key Management Service (AWS KMS) key in the pipeline account, and apply it to the pipeline. This key encrypts artifacts that pass between stages, and you configure it to allow access to the target account in a later step. After you create the AWS KMS key, you apply a key policy that allows access to the key by both the AWS CodePipeline service role in the pipeline account and the Amazon Resource Name (ARN) of the target account.
  2. Apply a bucket policy to the Amazon S3 bucket for the pipeline. This policy must grant access to the bucket by the target account.
  3. Create a policy that allows the pipeline account to assume a role in the target account. You attach this policy to the AWS CodePipeline service role.
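The policy from step 3, attached to the AWS CodePipeline service role in the pipeline account, might look like the following sketch; the target account ID and role name are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::TARGET_ACCOUNT_ID:role/CrossAccountDeployRole"
    }
  ]
}
```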
Target Account Steps

Note: If you deploy revisions to Amazon EC2 instances (as with AWS CodeDeploy), you apply a policy to the instance role that allows access to the Amazon S3 bucket that AWS CodePipeline uses in the pipeline account. Additionally, the instance role must also have a policy that allows access to the AWS KMS key.

  1. Create an IAM role that contains a trust relationship policy that allows the pipeline account to assume the role.
  2. Create an IAM policy that allows access to deploy to the pipeline’s resources. Attach this policy to the IAM role.
  3. Create an IAM policy that allows access to the Amazon S3 bucket in the pipeline account, and attach it to the IAM role. After completing the previous steps, revisions that pass through the pipeline account will be accessible by the target account.
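The trust relationship policy from step 1, applied to the IAM role in the target account, might look like the following sketch; the pipeline account ID is a placeholder:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::PIPELINE_ACCOUNT_ID:root" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```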

Using AWS CodeCommit as a Source Repository

AWS CodeCommit is a fully managed source control service that makes it easy for companies to host secure and highly scalable private Git repositories. AWS CodeCommit eliminates the need to operate your own source control system or worry about scaling its infrastructure. You can use AWS CodeCommit to store anything securely, from source code to binaries, and it works seamlessly with your existing Git tools.

What Is AWS CodeCommit?

Before any activities can occur to deploy applications, you must first have a location where you can store and version application code in a reliable fashion. AWS CodeCommit is a cloud-based, highly available, and redundant version control service. AWS CodeCommit leverages the Git framework, and it is fully compatible with existing tooling. There are a number of benefits to this service, such as the following:

  • Automatic encryption in-transit and at rest.
  • Scaling to handle rapid release cycles and large repositories.
  • Access control to the repository using IAM users, IAM roles, and IAM policies.
  • Hypertext Transfer Protocol Secure (HTTPS) and Secure Shell (SSH) connectivity.

However, the biggest benefit of AWS CodeCommit is built-in integration with multiple other AWS services, like AWS CodePipeline. With these integrations, AWS CodeCommit acts as the initial step to automate application code releases.

AWS CodeCommit Concepts

This section details the concepts behind AWS CodeCommit.

Credentials

When you interact with AWS, you specify your AWS security credentials to verify who you are and whether you have permission to access the resources that you request. AWS uses the security credentials to authenticate and authorize your requests.

HTTPS

HTTPS connectivity to a Git-based repository requires a username and password, which pass to the repository as part of a request. To use AWS CodeCommit with HTTPS credentials, you must first add them to an IAM user with sufficient permissions to interact with the repository. To create Git credentials for your IAM user, you open the IAM console, and select the user who will need to authenticate to the AWS CodeCommit repository via HTTPS.

AWS generates security credentials for the usernames and passwords, and they cannot be set to custom values.

Warning: Make sure to download or copy the credentials because the password will be lost after you close the success window.

After you configure your Git CLI/application to use the repository’s HTTPS endpoint and the username/password, you will have access to the AWS CodeCommit repository.
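For example, with Git credentials configured, cloning over HTTPS looks like this; the region and repository name are placeholders, and Git prompts for the generated username and password:

```shell
# Clone an AWS CodeCommit repository over HTTPS (names are hypothetical)
git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/MyDemoRepo
```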

SSH

With SSH authentication, there is no need to install the AWS CLI to connect to your repository. However, you must perform some additional configuration tasks:

  • Your IAM user must have the ability to manage their own SSH keys. To accomplish this, you add the IAMUserSSHKeys managed policy to the user.
  • Generate an SSH public/private key pair on your workstation.
  • For Windows users, install a bash emulator, such as Git Bash.

To configure SSH authentication to AWS CodeCommit repositories, follow these steps:

  1. In the IAM console, select the user account you want to modify.
  2. Upload the public SSH key on the Security Credentials tab.
  3. Copy the SSH key identity (ID). This follows the form APKAEIGHANK3EXAMPLE.
  4. Update the ~/.ssh/config file on your workstation to include these contents:
    Host git-codecommit.*.amazonaws.com
    User YOUR_SSH_KEY_ID
    IdentityFile YOUR_PRIVATE_KEY_FILE
  5. To verify the configuration, test a simple SSH connection to the AWS CodeCommit endpoint, as shown here:
    # Format: ssh git-codecommit.[REGION_CODE].amazonaws.com
    ssh git-codecommit.us-east-1.amazonaws.com
Use the Credential Helper

The previous HTTPS and SSH authentication methods both rely on additional credentials aside from IAM access/secret keys. It is also possible to authenticate to AWS CodeCommit with IAM credentials and the AWS CodeCommit credential helper. The credential helper translates IAM credentials to those that AWS CodeCommit can use to perform Git actions, such as to clone a repository or merge a pull request. To configure the credential helper on your workstation, do the following:

  1. Install and configure the AWS CLI.
  2. Install Git.
  3. Configure Git to leverage the credential helper from the AWS CLI with these commands:
    git config --global credential.helper '!aws codecommit credential-helper $@'
    git config --global credential.UseHttpPath true

Once complete, HTTPS interactions with the AWS CodeCommit repository should work as expected.

Note: The credential helper is the only authentication method available for the root user and federated users.

Development Tools and Integrated Development Environment

AWS CodeCommit integrates automatically with any development tools that support IAM credentials. Additionally, after you set up HTTPS Git credentials, you are able to use any tools that support this authentication mechanism instead. Examples of supported integrated development environments (IDEs) include the following:

  • AWS Cloud9
  • Eclipse
  • IntelliJ
  • Visual Studio

Repository

A repository (repo) is the foundation of AWS CodeCommit. This is the location where you store source code files, track revisions, and merge contributions (commits). When you create a repository, it will contain an empty master branch by default. To configure additional branches and commit code changes, you connect the repository to a local workstation where changes can be made before you upload or push them.

Note: Repository names must be unique within an individual AWS account; however, you can change them without re-creating the repository. When you change a repository name, you need to update any local copies of the repository so that their remote points to the new HTTPS or SSH URL, using the git remote set-url command.
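A sketch of renaming a repository and pointing a local clone at the new URL follows; the repository names and region are hypothetical:

```shell
# Rename the repository with the AWS CLI
aws codecommit update-repository-name \
    --old-name MyDemoRepo --new-name MyRenamedRepo

# Update the local clone's remote to the new HTTPS URL
git remote set-url origin \
    https://git-codecommit.us-east-1.amazonaws.com/v1/repos/MyRenamedRepo
```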

Repository Notifications

AWS CodeCommit supports triggers via Amazon SNS, which you can use to leverage other AWS services for post-commit actions, such as firing a webhook with AWS Lambda after a commit is pushed to a development branch. To implement this, AWS CodeCommit uses AWS CloudWatch Events. You create event rules that trigger for each of the event types that you select in AWS CodeCommit. Event types that will fire notifications include the following:

  • Pull Request Update Events
    • Create a Pull Request
    • Close a Pull Request
    • Update Code in a Pull Request
    • Title or Description Changes
  • Pull Request Comment Events
  • Commit Comment Events
    • Comments on Code Changes
    • Comments on Files in a Commit
    • Comments on the Commit Itself

Note: If you change the name of the repository through the AWS CLI or SDK, the notifications cease to function. (This behavior is not present when you change names in the AWS Management Console.) To restore lost notifications, delete the settings and configure them a second time.

Repository Triggers

Repository triggers are not the same as notifications, as the events that fire each differ greatly. Use repository triggers to send notifications to Amazon SNS or AWS Lambda during these events:

  • Push to Existing Branch
  • Create a Branch or Tag
  • Delete a Branch or Tag

Triggers are similar in functionality to webhooks used by other Git providers, like GitHub. You can use triggers to perform automated tasks such as to start external builds, to notify administrators of code pushes, or to perform unit tests. There are some restrictions on how to configure triggers.

  • The trigger destination, Amazon SNS or AWS Lambda, must exist in the same AWS region as the repository.
  • If the destination is Amazon SNS in another AWS account, the Amazon SNS topic must have a policy that allows notifications from the account that contains the repository.
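A trigger can be created with the AWS CLI; the repository name, trigger name, and SNS topic ARN below are hypothetical, and the topic must already exist in the same region as the repository:

```shell
# Fire an SNS notification whenever a commit is pushed to the master branch.
aws codecommit put-repository-triggers \
    --repository-name MyDemoRepo \
    --triggers name=NotifyOnPush,destinationArn=arn:aws:sns:us-east-2:111122223333:MyRepoTopic,branches=master,events=updateReference
```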

Cross-Account Access to a Different Account

In some situations, the repository that contains the application source code may be located in a separate AWS account from the IAM user/role attempting to access it. In these situations, there are several steps that you must perform in the repository account and the user account.

Repository Account Actions
  1. Create a policy for access to the repository. This policy should allow users in the user account to access one or more specific repositories, as well as (optionally) to view a list of all repositories.
  2. Attach this policy to a role in the same account, and allow users in the user account to assume this role.
User Account Actions
  1. Create an IAM user or IAM group. This user or group will be able to access the repository after the next step.
  2. Assign a policy to the user or group that allows them to assume the role created in the repository account as part of the previous steps.

Once these steps are complete, the IAM user must first assume the cross-account role before attempting to clone or otherwise access the repository. To do so, you adjust the AWS credentials file ~/.aws/config (Linux/macOS) or drive:\Users\username\.aws\config (Windows). A profile is added to this config file that specifies the cross-account role to assume.

[profile MyCrossAccountProfile]
region = us-east-2
role_arn=arn:aws:iam::111122223333:role/MyCrossAccountContributorRole
source_profile=default

Lastly, you need to modify the AWS CLI credential helper so that you use MyCrossAccountProfile.

git config --global credential.helper \
  '!aws codecommit credential-helper --profile MyCrossAccountProfile $@'

From this point, the IAM user in the user account will be able to clone and interact with the repository in the repository account.

Files

A file is a piece of data that is subject to version control by AWS CodeCommit. AWS CodeCommit tracks any modifications made to this file on a per-line level. You use the Git client to push changes in a file to the repository, where it tracks against other changes in previous commits.

Pull Requests

Pull requests are the primary vehicle on which you review and merge code changes between branches. Unlike branch merging, pull requests allow multiple users to comment on changes before they merge with the destination branch. The typical workflow of a pull request is as follows:

  1. Create a new branch off the default branch for the feature or bug fix.
  2. Make changes to the branch files, commit, and push to the remote repository.
  3. Create a pull request for the changes to integrate them with the default branch, as shown in Figure 7.8.
  4. Other users can review the changes in the pull request and provide comments, as shown in Figure 7.9.
  5. You can push any additional changes from user feedback to the same branch to include them in the pull request.
  6. Once all reviewers provide approval, the pull request merges into the destination branch and closes. You can close pull requests when you merge the branches locally or when you close the request via the AWS CodeCommit console or the AWS CLI.
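The create and merge steps above can also be performed from the AWS CLI; the repository name, branch names, and pull request ID here are hypothetical:

```shell
# Open a pull request from a feature branch into the default branch.
aws codecommit create-pull-request \
    --title "My feature" \
    --description "Adds the new feature" \
    --targets repositoryName=MyDemoRepo,sourceReference=feature-branch,destinationReference=master

# After approval, merge and close the pull request with a fast-forward merge.
aws codecommit merge-pull-request-by-fast-forward \
    --pull-request-id 42 \
    --repository-name MyDemoRepo
```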

The figure shows a screenshot illustrating how to create a pull request for the changes to integrate them with the default branch.

Figure 7.8 Creating a pull request

The figure shows a screenshot illustrating how other users can review the changes in the pull request and provide comments.

Figure 7.9 Reviewing changes

Commits

Commits are point-in-time changes to contents of files in a repository. A commit is not a new copy of the file; it is instead a record of which line(s) in a file changed, by whom, and when. When you push a commit to the repository, AWS CodeCommit tracks the following metadata along with the file changes:

  • Author Name
  • Author Email
  • Commit Message

Commits to a repository in AWS CodeCommit can be made in one of two ways. The most common workflow is to use the Git CLI and update the repository using git push. The AWS CLI supports the aws codecommit put-file action, which allows you to update a file on the repository with a local copy and specify a branch, parent commit, and message.

aws codecommit put-file --repository-name MyDemoRepo \
  --branch-name feature-branch \
  --file-content file://MyDirectory/ExampleFile.txt \
  --file-path /solutions/ExampleFile.txt \
  --parent-commit-id 11112222EXAMPLE \
  --name "Developer" \
  --email \
  --commit-message "Fixed a bug"

The AWS CodeCommit console supports viewing differences between commits. To view differences between a commit and its parent, open the Commits pane on the repository dashboard and then select the commit ID, as shown in Figure 7.10.

The figure shows a screenshot illustrating how to select the commit ID.

Figure 7.10 Selecting the commit ID

After doing so, you can view changes in this commit either side by side (Split view) or in the same pane (Unified view), as shown in Figure 7.11.

The figure shows a screenshot illustrating how to view changes in the commit ID either side by side (Split view) or in the same pane (Unified view).

Figure 7.11 Split view

You can also view differences between arbitrary commit IDs in the same repository. In the Compare window of the repository dashboard, you can choose two commit IDs for comparison, as shown in Figure 7.12.

The figure shows a screenshot illustrating how to select and compare two commit IDs.

Figure 7.12 Select and compare

After you select two commit IDs, click the Compare button. This will provide a similar split or unified view of changes.
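The same comparison is available from the AWS CLI; the repository name and commit IDs below are placeholders:

```shell
# List the differences between two commits in the same repository.
aws codecommit get-differences \
    --repository-name MyDemoRepo \
    --before-commit-specifier 11112222EXAMPLE \
    --after-commit-specifier 33334444EXAMPLE
```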

Branches

Branches are ways to separate and organize groups of commits. This allows developers to organize work in a meaningful fashion, separating changes into logical groups based on the feature or bug-fix being developed. For example, as you can see in Figure 7.13, a single repository may have branches for each environment: dev, test, and prod. Or, individual features and bug fixes can have separate branches.

The figure shows a screenshot illustrating how to view branches.

Figure 7.13 Branch view

A default branch is the base when you clone the repository. When you clone a repository to your local machine, the default branch (such as "master" or "prod") is the branch that clones. You cannot delete the default branch until you either set a new branch as the default or delete the entire repository.

Symbol of Note You can change the default branch for a repository, but first the new default branch must exist in the remote repository.
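Changing the default branch can be done from the AWS CLI; the repository and branch names are hypothetical, and the branch must already exist in the remote repository:

```shell
# Make the existing "release" branch the new default branch.
aws codecommit update-default-branch \
    --repository-name MyDemoRepo \
    --default-branch-name release
```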

Migrate to AWS CodeCommit

This section details how to migrate a Git repository, unversioned files, or another repository type to AWS CodeCommit, which is especially important when you migrate a large number of files or very large files.

Migrate a Git Repository

You can use AWS CodeCommit as the destination when you migrate from an existing Git repository, as shown in Figure 7.14.

The figure shows how to use an AWS CodeCommit to migrate from a Git repository.

Figure 7.14 Migrating from a Git repository

The first step is to create the AWS CodeCommit repository (via either the AWS Management Console or the AWS CLI or AWS SDK). After you create the repository, clone the project to a local workstation. To push this repository to AWS CodeCommit, set the repository’s remote to the AWS CodeCommit repository’s HTTPS or SSH URL.

git push \
  https://git-codecommit.us-east-2.amazonaws.com/v1/repos/MyClonedRepository \
  --all

If you need to push any tags to the new repository, run the following code:

git push \
  https://git-codecommit.us-east-2.amazonaws.com/v1/repos/MyClonedRepository \
  --tags

Migrate Unversioned Content

You can migrate any local or unversioned content to AWS CodeCommit in much the same way as content that exists in another Git-based repository service. The primary difference is that you create a new repository instead of cloning an existing one. Refer to Figure 7.15.

The figure shows how to migrate unversioned content.

Figure 7.15 Migrating unversioned content

First create the AWS CodeCommit repository (either via the AWS Management Console or via the AWS CLI or AWS SDK). Next, create a local directory with the files to migrate, and run git init from the command line or terminal in that directory. This will initialize the directory to work with Git so that any file changes are tracked. After the directory initializes, run git add . to add all current files to Git. Run git commit -m 'Initial Commit' to generate a commit. Lastly, push the commit to AWS CodeCommit with git push https://git-codecommit.us-east-2.amazonaws.com/v1/repos/MyFirstRepo --all.
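The steps above can be sketched as a runnable sequence; a local bare repository stands in for the remote AWS CodeCommit repository so the sketch runs without AWS credentials (in practice, the CodeCommit HTTPS URL from the text would replace it):

```shell
set -e

# A local bare repository stands in for the remote CodeCommit repository.
remote=$(mktemp -d)/MyFirstRepo.git
git init --bare "$remote" >/dev/null

# Create a directory with the files to migrate.
work=$(mktemp -d)
cd "$work"
echo "hello" > README.txt

git init >/dev/null                                  # initialize the directory for Git
git add .                                            # add all current files
git -c user.name=Developer -c user.email=dev@example.com \
    commit -m 'Initial Commit' >/dev/null            # generate a commit
git push "$remote" --all                             # push the commit to the remote
```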

Migrate Incrementally

For large repositories, you can migrate in incremental steps, pushing a series of smaller commits. This reduces the chance that a network issue causes the entire push to fail, and if a smaller push does fail, restarting it is trivial compared to restarting a single, monolithic push.

Additionally, when you push large repositories, AWS recommends that you use SSH over HTTPS, as there is a chance that the HTTPS connection may terminate because of various network or firewall issues.

AWS CodeCommit Service Limits

AWS CodeCommit enforces the service limits in Table 7.2. An asterisk (*) indicates limits that require you to submit a request to AWS Support to increase the limits.

Table 7.2 AWS CodeCommit Service Limits

Limit Value
Repositories per account* 1,000
References per push 4,000
Triggers per repository 10
Git blob size 2 GB

Using AWS CodeCommit with AWS CodePipeline

You can use AWS CodeCommit as a source action in your pipeline. This allows you to utilize a highly available, redundant version control system as the initialization point of your CI/CD pipeline.

When you select AWS CodeCommit as the source provider, you must provide a repository name and branch. AWS CodePipeline then creates an Amazon CloudWatch Events rule and an IAM role to monitor the repository and branch for changes, as shown in Figure 7.16.

The figure shows a screenshot illustrating the source location.

Figure 7.16 Source location

One issue that can arise involves large binary files. Because of the way Git tracks file changes, it stores a complete copy of every modified version of a binary file within the repository. Over time, this can cause repositories to grow rapidly in size. Instead of storing large binary files in AWS CodeCommit, add an additional Amazon S3 source action to the pipeline. Storing large binary files in Amazon S3 can reduce overall cost and development time because pushing and pulling commits takes less time. Since Amazon S3 already supports versioning (and requires it for use with AWS CodePipeline), changes to binary objects are still tracked, so rollbacks remain straightforward.

Using AWS CodeBuild to Create Build Artifacts

AWS CodeBuild is a fully managed build service that compiles source code, runs tests, and produces software packages that are ready to deploy. With AWS CodeBuild, you do not need to provision, manage, and scale your own build servers. AWS CodeBuild scales continuously and processes multiple builds concurrently, so your builds do not wait in a queue. AWS CodeBuild has prepackaged build environments, or you can create custom build environments that use your own build tools. With AWS CodeBuild, AWS charges by the minute for the compute resources you use.

What Is AWS CodeBuild?

AWS CodeBuild enables you to define the build environment to perform build tasks and the actual tasks that it will perform. AWS CodeBuild comes with prepackaged build environments for most common workloads and build tools (Apache Maven, Gradle, and others), and it allows you to create custom environments for any custom tools or processes.

AWS CodePipeline includes built-in integration with AWS CodeBuild, which can act as a provider for any build or test actions in your pipeline, as shown in Figure 7.17.

The figure shows how to use AWS CodeBuild in AWS CodePipeline.

Figure 7.17 Using AWS CodeBuild in AWS CodePipeline

AWS CodeBuild Concepts

AWS CodeBuild initiates build tasks inside a build project, which defines the environmental settings, build steps to perform, and any output artifacts. The build container’s operating system, runtime, and build tools make up the build environment.

Build Projects

Build projects define all aspects of a build. This includes the environment in which to perform builds, any tools to include in the environment, the actual build steps to perform, and outputs to save.

Create a Build Project

When you create a build project, you first select a source provider. AWS CodeBuild supports AWS CodeCommit, Amazon S3, GitHub, and Bitbucket as source providers. When you use GitHub or Bitbucket, a separate authentication flow is invoked to allow AWS CodeBuild access to the source repository. GitHub source repositories also support webhooks to trigger builds automatically any time you push a commit to a specific repository and branch.

After AWS CodeBuild successfully connects to the source repository or location, you select a build environment. AWS CodeBuild provides preconfigured build environments for some operating systems, runtimes, and runtime versions, such as Ubuntu with Java 9.

Next, you will configure the build specification. This can be done in one of two ways. You can insert build commands in the console or specify a buildspec.yml file in your source code. Both options are valid, but if you use a buildspec.yml file, you will see additional configuration options.

If your build creates artifacts you would like to use in later steps of your pipeline/process, you can specify output artifacts to save to Amazon S3. Otherwise, you can choose not to save any artifacts. You will need to specify individual filename(s) for AWS CodeBuild to save on your behalf.

AWS CodeBuild supports caching, which you can configure in the next step. Caching saves some components of the build environment to reduce the time to create environments when you submit build jobs.

Every build project requires an IAM service role that is accessible by AWS CodeBuild. When you create new projects, you can automatically create a service role that you restrict to this project only. You can update service roles to work with up to 10 build projects at a time.

Lastly, you can configure AWS CodeBuild to create build environments with connectivity to an Amazon Virtual Private Cloud (Amazon VPC) in your account. To do so, specify the Amazon VPC ID, subnets, and security groups to assign to the build environment. You can configure other settings when you create the build, such as to run the Docker daemon in privileged mode to build Docker images.

After you set the build project properties, you can select the compute type (memory and vCPU settings), any environment variables to pass to the build container, and tags to apply to the project.

Symbol of Note When you set environment variables, they will be visible in plain text in the AWS CodeBuild console and AWS CLI or SDK. If there is sensitive information that you would like to pass to build jobs, consider using the AWS Systems Manager Parameter Store. This will require the build project’s IAM role to have permissions to access the parameter store.
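For instance, a secret could be stored ahead of time with the AWS CLI; the parameter name matches the later buildspec example, and the value is a placeholder:

```shell
# Store a secret as an encrypted parameter; a build specification can then
# reference it by name in its parameter-store mapping.
aws ssm put-parameter \
    --name dockerLoginPassword \
    --type SecureString \
    --value "example-secret-value"
```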

Build Specification (buildspec.yml)

You can provide the build specification to your build project in one of two ways: enter build commands directly in the AWS CodeBuild console, AWS CLI, or AWS SDK when you create the build project, or include it as part of your source repository in a YAML-formatted buildspec.yml file. You can supply only one build specification to a build project. A build specification's format is as follows:

version: 0.2
 
env:
  variables:
    key: "value"
  parameter-store:
    key: "value"
            
phases:
  install:
    commands:
      - command
  pre_build:
    commands:
      - command
  build:
    commands:
      - command
  post_build:
    commands:
      - command
artifacts:
  files:
    - location
  discard-paths: yes
  base-directory: location
cache:
  paths:
    - path

Version

AWS supports multiple build specification versions; however, AWS recommends you use the latest version whenever possible.

Environment Variables (env)

You can add optional environment variables to build jobs. Any key/value pairs that you provide in the variables section are available as environment variables in plain text.

Symbol of Note Any environment variables that you define here will overwrite those you define elsewhere in the build project, such as those in the container itself or by Docker.

The parameter-store mapping specifies parameters to query in AWS Systems Manager Parameter Store.

Phases

The phases mapping specifies commands to run at each stage of the build job. When you specify build settings in the AWS CodeBuild console, AWS CLI, or AWS SDK, you are not able to separate commands into phases. With a build specifications file, you can separate commands into phases.

install Commands to execute during installation of the build environment.

pre_build Commands to be run before the build begins.

build Commands to be run during the build.

post_build Commands to be run after the build completes.

Symbol of Note If a command fails in any stage, subsequent stages will not run.

Artifacts

The artifacts mapping specifies where AWS CodeBuild will place output artifacts, if any. This is required only if your build job produces actual outputs. For example, unit tests would not produce output artifacts for later use in a pipeline. The files list specifies individual files in the build environment that will act as output artifacts. You can specify individual files, directories, or recursive directories. You can use discard-paths and base-directory to specify a different directory structure to package output artifacts.

Cache

If you configure caching for the build project, the cache map specifies which files to upload to Amazon S3 for use in subsequent builds.

Build Project Cache

This example sets the JAVA_HOME and LOGIN_PASSWORD environment variables (the latter is retrieved from AWS Systems Manager Parameter Store), installs updates in the build environment, runs a Maven installation, and saves the .jar output to Amazon S3 as a build artifact. For future builds, the content of the /root/.m2 directory (and any subdirectories) is cached to Amazon S3.

version: 0.2
 
env:
  variables:
    JAVA_HOME: "/usr/lib/jvm/java-8-openjdk-amd64"
  parameter-store:
    LOGIN_PASSWORD: "dockerLoginPassword"
 
phases:
  install:
    commands:
      - echo Entered the install phase...
      - apt-get update -y
      - apt-get install -y maven
  pre_build:
    commands:
      - echo Entered the pre_build phase...
      - docker login -u User -p $LOGIN_PASSWORD
  build:
    commands:
      - echo Entered the build phase...
      - echo Build started on `date`
      - mvn install
  post_build:
    commands:
      - echo Entered the post_build phase...
      - echo Build completed on `date`
artifacts:
  files:
    - target/messageUtil-1.0.jar
  discard-paths: yes
cache:
  paths:
    - '/root/.m2/**/*'

Build Environments

A build environment is a Docker image with a preconfigured operating system, programming language runtime, and any other tools that AWS CodeBuild uses to perform build tasks and communicate with the service, along with other metadata for the environment, such as the compute settings. AWS CodeBuild maintains its own repository of preconfigured build environments. If these environments do not meet your requirements, you can use public Docker Hub images. Alternatively, you can use container images in Amazon Elastic Container Registry (Amazon ECR).

AWS CodeBuild Environments

AWS CodeBuild provides build environments for Ubuntu and Amazon Linux operating systems, and it supports the following:

  • Android
  • Docker
  • Golang
  • Java
  • Node.js
  • PHP
  • Python
  • Ruby
  • .NET Core

Symbol of Note Not all programming languages support both Ubuntu and Amazon Linux build environments.

Compute Types

Table 7.3 lists the memory, virtual central processing unit (vCPU), and disk space configurations for build environments.

Table 7.3 Compute Configurations for Build Environments

Compute Type Memory vCPUs Disk Space
BUILD_GENERAL1_SMALL 3 GB 2 64 GB
BUILD_GENERAL1_MEDIUM 7 GB 4 128 GB
BUILD_GENERAL1_LARGE 15 GB 8 128 GB
Environment Variables

AWS CodeBuild provides several environment variables by default, such as AWS_REGION, CODEBUILD_BUILD_ID, and HOME.

Symbol of Note When you create your own environment variables, AWS CodeBuild reserves the CODEBUILD_ prefix.

Builds

When you initiate a build, AWS CodeBuild copies the input artifact(s) into the build environment. AWS CodeBuild uses the build specification to run the build process, which includes any steps to perform and outputs to provide after the build completes. Build logs are made available to Amazon CloudWatch Logs for real-time monitoring.

When you run builds manually in the AWS CodeBuild console, AWS CLI, or AWS SDK, you have the option to change several properties before you run a build job.

  • Source version (Amazon S3)
  • Source branch, version, and Git clone depth (AWS CodeCommit, GitHub, and Bitbucket)
  • Output artifact type, name, or location
  • Build timeout
  • Environment variables
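For example, the following AWS CLI call (project name and override values are hypothetical) starts a build with several of these overrides:

```shell
# Start a build of a specific branch with an environment-variable
# override and a shorter timeout.
aws codebuild start-build \
    --project-name MyDemoProject \
    --source-version feature-branch \
    --environment-variables-override name=STAGE,value=test,type=PLAINTEXT \
    --timeout-in-minutes-override 30
```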

AWS CodeBuild Service Limits

AWS CodeBuild enforces service limits in Table 7.4. An asterisk (*) indicates that you can increase limits if you submit a request to AWS Support.

Table 7.4 AWS CodeBuild Service Limits

Limit Value
Build projects per region per account* 1,000
Build timeout 8 hours
Concurrently running builds* 20

Using AWS CodeBuild with AWS CodePipeline

AWS CodeBuild projects can act as the provider for both build and test actions in AWS CodePipeline. Both action types require exactly one input artifact and may return zero or one output artifacts. When you create a build or test action in your pipeline with your build projects, the only required input is the build project name. The AWS CodePipeline console also gives you the option to create new build projects when you create the action, as shown in Figure 7.18.

The figure shows a screenshot illustrating how to create new build projects when creating the action.

Figure 7.18 Build provider

Using AWS CodeDeploy to Deploy Applications

AWS CodeDeploy is a service that automates software deployments to a variety of compute services, such as Amazon EC2, AWS Lambda, and instances running on-premises. AWS CodeDeploy makes it easier for you to release new features rapidly, helps you avoid downtime during application deployment, and handles the complexity of updating your applications. You can use AWS CodeDeploy to automate software deployments and eliminate the need for error-prone manual operations. The service scales to match your deployment needs, from a single AWS Lambda function to thousands of Amazon EC2 instances.

What Is AWS CodeDeploy?

AWS CodeDeploy standardizes and automates deployments of any types of content or configuration to Amazon EC2 instances, on-premises servers, or AWS Lambda functions. Because of its flexibility, it is not restricted to deploy only application code, and it can perform various administrative tasks that are part of your deployment process. Additionally, you can create custom deployment configurations tailored to your specific infrastructure needs.

AWS CodeDeploy and NGINX

You can install and enable NGINX as part of a deployment of configuration files to reverse proxy instances. The service itself does not involve any changes to your current source code, and it only requires you to install a lightweight agent on any managed instances or on-premises servers.

Should deployments fail in your environment, you can configure AWS CodeDeploy with a predetermined failure tolerance. Once this tolerance is breached, the deployment automatically rolls back to the last working version.

You can automate deployments of AWS Lambda functions with AWS CodeDeploy through traffic shifting. When updates to functions deploy, AWS CodeDeploy creates new versions of each updated function and gradually routes requests from the previous version to the updated one. AWS Lambda deployments also support custom deployment configurations, which specify the rate and percentage of traffic to shift.

AWS CodeDeploy Concepts

When you deploy to Amazon EC2 or on-premises instances, the unit of deployment is a revision.

Revision

A revision is an artifact that contains both application files to deploy and an AppSpec configuration file. Application files can include compiled libraries, configuration files, installation packages, static media, and other content. The AppSpec file specifies what steps AWS CodeDeploy will follow when it performs deployments of an individual revision.

A revision must contain any source files and scripts to execute on the target instance inside a root directory. Within this root directory, the appspec.yml file must exist at the topmost level and not in any subfolders.

/tmp/ or c:\temp (root folder)
  |--content (subfolder)
  |    |--myTextFile.txt
  |    |--mySourceFile.rb
  |    |--myExecutableFile.exe
  |    |--myInstallerFile.msi
  |    |--myPackage.rpm
  |    |--myImageFile.png
  |--scripts (subfolder)
  |    |--myShellScript.sh
  |    |--myBatchScript.bat
  |    |--myPowerShellScript.ps1
  |--appspec.yml

When you deploy to AWS Lambda, a revision contains only the AppSpec file. It contains information about the functions to deploy, as well as the steps to validate that the deployment was successful.

In either case, when a code revision is ready to deploy, you package it into an archive file and store it in one of these three repositories:

  • Amazon S3
  • GitHub
  • Bitbucket

When you use GitHub or Bitbucket, the source code does not need to be a .zip archive, as AWS CodeDeploy will package the repository contents on your behalf. Amazon S3, however, requires a .zip archive file.

Symbol of Note AWS Lambda deployments support only Amazon S3 buckets as a source repository.
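For Amazon EC2/on-premises revisions stored in Amazon S3, the AWS CLI can bundle a local directory into a .zip archive and upload it in one step; the bucket, key, and application names below are hypothetical:

```shell
# Package the contents of the current directory and upload the revision.
aws deploy push \
    --application-name MyDemoApplication \
    --s3-location s3://mybucket/MyDemoRevision.zip \
    --source .
```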

Deployments

A deployment is the process of copying content and executing scripts on instances in your deployment group. To accomplish this, AWS CodeDeploy performs the tasks outlined in the AppSpec configuration file. For both Amazon EC2 on-premises instances and AWS Lambda functions, the deployment succeeds or fails based on whether individual AppSpec tasks complete successfully. There are two types of deployments supported by AWS CodeDeploy: in-place and blue/green.

In-Place Deployments

In in-place deployments, revisions deploy to the existing instances in the deployment group. On each instance, the running application is stopped, the latest revision installs, and the new version of the application starts and is validated. If the instances sit behind a load balancer, each instance can be deregistered during its deployment and restored to service afterward.

Symbol of Note On-premises instances do not support blue/green deployments.

Blue/Green Deployments

In blue/green deployments, revisions deploy to new, replacement infrastructure instead of the existing instances. After deployment completes successfully, traffic gradually reroutes to the replacement instances; you can then keep the original instances for review or terminate them. When you deploy to AWS Lambda functions, blue/green deployments publish new versions of each function, after which traffic shifting routes requests to the new function versions according to the deployment configuration that you define.

Stop Deployments

You can stop deployments via the AWS CodeDeploy console or AWS CLI. If you stop a deployment to Amazon EC2 or on-premises instances, some deployment groups can be left in an undesired state. For example, when you deploy to instances in an Auto Scaling group, stopping the deployment partway may leave instances running different application versions. In situations where this occurs, you can configure the application to roll back to the last valid deployment automatically. When a rollback occurs, AWS CodeDeploy redeploys the previous revision to the instances, and it appears as a new deployment in the console.

Symbol of Note Some instances that fail the most recent deployment may still have scripts run or files placed that are part of the failed deployment. If you configure automatic rollbacks, AWS CodeDeploy will attempt to remove any successfully created files.

Rollbacks

AWS CodeDeploy achieves automatic rollbacks by redeploying the last working revision to any instances in the deployment group (this will generate a new deployment ID). If you do not configure automatic rollbacks for the application, you can perform a manual rollback by redeploying a previous revision as a new deployment. This will accomplish the same result as an automatic rollback.
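A manual rollback is simply a new deployment of the previous revision; the application, deployment group, bucket, and object names below are hypothetical:

```shell
# Redeploy the last working revision as a new deployment.
aws deploy create-deployment \
    --application-name MyDemoApplication \
    --deployment-group-name Production \
    --s3-location bucket=mybucket,key=previous-revision.zip,bundleType=zip
```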

During the rollback process, AWS CodeDeploy attempts to remove any files that were created on the instance during the failed deployment. A record of the created files is kept in the following location on your instances:

  • Linux: /opt/codedeploy-agent/deployment-root/deployment-instructions/[deployment-group-id]-cleanup
  • Windows: C:\ProgramData\Amazon\CodeDeploy\deployment-instructions\[deployment-group-id]-cleanup

The AWS CodeDeploy agent that runs on the instance will reference this cleanup file as a record of what files were created during the last deployment.

Symbol of Note By default, AWS CodeDeploy will not overwrite any files that were not created as part of a deployment. You can override this setting for new deployments.

AWS CodeDeploy tracks cleanup files; however, script executions are not tracked. Any configuration or modification to the instance that is done by scripts run on your instance cannot be rolled back automatically by AWS CodeDeploy. As an administrator, you will be responsible for implementing logic in your deployment scripts to ensure that the desired state is reached during deployments and rollbacks.

Test Deployments Locally

If you would like to test whether a revision will successfully deploy to an instance you are able to access, you can use the codedeploy-local command in the AWS CodeDeploy agent. This command will search the execution path for an AppSpec file and any content to deploy. If this is found, the agent will attempt a deployment on the instance and provide feedback on the results. This provides a useful alternative to executing the full workflow when you want simply to validate the deployment package.

The following example command attempts to perform a local deployment of an archive file located in Amazon S3:

codedeploy-local --bundle-location s3://mybucket/bundle.tgz --type tgz

Symbol of Note The codedeploy-local command requires the AWS CodeDeploy agent that you install on the instance or on-premises server where you execute the command.

Deployment Group

A deployment group designates the Amazon EC2 or on-premises instances to which a revision deploys. When you deploy to AWS Lambda functions, it specifies which functions will have new versions deployed. Deployment groups also specify alarms that trigger automatic rollbacks after a specified number or percentage of instances or functions fail their deployment.

For Amazon EC2 on-premises deployments, you can add instances to a deployment group based on tag name/value pairs or Amazon EC2 Auto Scaling group names. An individual application can have one or more deployment groups defined. This allows you to separate groups of instances into environments so that changes can be progressively rolled out and tested before going to production. You can identify instances by individual tags or tag groups. If an instance matches one or more tags in a tag group, it is associated with the deployment group. If you would like to require that an instance match multiple tags, each tag must be in a separate tag group. A single deployment group supports up to 10 tags in up to three tag groups.

In Figure 7.19, if tags Environment, Region, and Type are present in tag groups 1, 2, and 3 respectively, then instances must have at least one tag in each tag group to identify with the deployment group.

Figure 7.19 Selecting instances with multiple tags
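The tag-group rule described above can be sketched in a few lines (tag names and values are illustrative, not tied to any real deployment group):

```python
# Sketch of the deployment-group matching rule: an instance belongs to
# the group only if it carries at least one tag from EVERY tag group.

def matches_deployment_group(instance_tags, tag_groups):
    """instance_tags: dict of tag name -> value.
    tag_groups: list of tag groups, each a list of (name, value) pairs."""
    return all(
        any(instance_tags.get(name) == value for name, value in group)
        for group in tag_groups
    )

tag_groups = [
    [("Environment", "Production")],   # tag group 1
    [("Region", "us-west-2")],         # tag group 2
    [("Type", "WebServer")],           # tag group 3
]

prod_web = {"Environment": "Production", "Region": "us-west-2", "Type": "WebServer"}
test_web = {"Environment": "Test", "Region": "us-west-2", "Type": "WebServer"}

print(matches_deployment_group(prod_web, tag_groups))  # True
print(matches_deployment_group(test_web, tag_groups))  # False (fails tag group 1)
```

To require an instance to match multiple tags, each tag goes in its own tag group, as in the three single-tag groups above.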

When you create deployment groups, you can also configure the following:

Amazon SNS notifications Any recipients that subscribe to the topic will receive notifications when deployment events occur. You must create the topic before you configure this notification, and the AWS CodeDeploy service role must have permission to publish messages to the topic.

Amazon CloudWatch alarms You can configure alarms to trigger cancellation and rollback of deployments whenever the metric has passed a certain threshold. For example, you could configure an alarm to trigger when CPU utilization exceeds a certain percentage for instances in an AWS Auto Scaling group. If this alarm triggers, the deployment automatically rolls back. For AWS Lambda deployments, you can configure alarms to monitor function invocation errors.

Automatic rollbacks You can configure rollbacks to initiate automatically when a deployment fails or based on Amazon CloudWatch alarms. To test deployments, you can disable automatic rollbacks when you create a new deployment.
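As a sketch only, the three options above map onto parameters of the CodeDeploy CreateDeploymentGroup API (boto3 naming). Every name and ARN below is a placeholder, not a real resource:

```python
import json

# Hypothetical parameter set for CreateDeploymentGroup illustrating SNS
# triggers, CloudWatch alarms, and automatic rollback configuration.
params = {
    "applicationName": "MyApp",
    "deploymentGroupName": "Production",
    "serviceRoleArn": "arn:aws:iam::111122223333:role/CodeDeployServiceRole",
    "ec2TagFilters": [
        {"Key": "Environment", "Value": "Production", "Type": "KEY_AND_VALUE"}
    ],
    # Amazon SNS notifications for deployment events (topic must exist,
    # and the service role must be allowed to publish to it)
    "triggerConfigurations": [{
        "triggerName": "DeploymentFailures",
        "triggerTargetArn": "arn:aws:sns:us-west-2:111122223333:deploy-topic",
        "triggerEvents": ["DeploymentFailure"],
    }],
    # Amazon CloudWatch alarms that cancel and roll back a deployment
    "alarmConfiguration": {"enabled": True, "alarms": [{"name": "HighCpuAlarm"}]},
    # Automatic rollback on deployment failure or alarm
    "autoRollbackConfiguration": {
        "enabled": True,
        "events": ["DEPLOYMENT_FAILURE", "DEPLOYMENT_STOP_ON_ALARM"],
    },
}

# With credentials configured, these would be passed to:
#   boto3.client("codedeploy").create_deployment_group(**params)
print(json.dumps(params, indent=2))
```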

On-Premises Instances

You can host instances for an Amazon EC2 on-premises deployment group in either an AWS account or your own data center. To configure an on-premises instance to work with AWS CodeDeploy, you must complete several tasks. Before you begin, ensure that the instance can communicate with AWS CodeDeploy service endpoints over HTTPS (port 443). You will also need to create an IAM user whose credentials the instance uses and that has permissions to interact with AWS CodeDeploy.

  1. Install the AWS CLI on the instance.
  2. Configure the AWS CLI with the IAM user. Call the aws configure command, and specify the access key ID and secret access key of the IAM user.
  3. Register the instance with AWS CodeDeploy. Call the aws deploy register AWS CLI command from the on-premises instance. Provide a unique name with the --instance-name property. When you execute this command, include the IAM user to associate with the instance and any tags to apply.
    aws deploy register --instance-name AssetTag12010298EX \
    --iam-user-arn arn:aws:iam::8039EXAMPLE:user/CodeDeployUser-OnPrem \
    --tags Key=Name,Value=CodeDeployDemo-OnPrem \
    --region us-west-2
  4. Install the AWS CodeDeploy agent. Run the aws deploy install AWS CLI command. By default, it installs a basic configuration file with preconfigured settings. To override this, provide your own configuration file with the --config-file parameter. If you specify the --override-config parameter, the command overrides the current configuration file on the instance.
    aws deploy install --override-config \
    --config-file /tmp/codedeploy.onpremises.yml \
    --region us-west-2

After you complete the previous steps, the instance will be available for deployments to the deployment group(s).

Deploy to Amazon EC2 Auto Scaling Groups

When you deploy to Amazon EC2 Auto Scaling groups, AWS CodeDeploy automatically runs the latest successful deployment on any new instances created when the group scales out. If the deployment fails on a new instance, that instance is terminated and replaced to maintain the count of healthy instances. For this reason, AWS does not recommend associating the same Auto Scaling group with multiple deployment groups (for example, to deploy multiple applications to the same Auto Scaling group). If both deployment groups perform a deployment at roughly the same time and the first deployment fails on a new instance, AWS CodeDeploy terminates that instance. The second deployment, unaware that the instance has been terminated, will not fail until the deployment times out (the default timeout is 1 hour). Instead, combine your application deployments into one, or consider using multiple Auto Scaling groups with smaller instance types.

Deployment Configuration

You use deployment configurations to control how quickly AWS CodeDeploy updates Amazon EC2 or on-premises instances. You can deploy to all instances in a deployment group at once, to subgroups of instances at a time, or to an entirely new group of instances (blue/green deployment). A deployment configuration also specifies the fault tolerance of deployments, so you can roll back changes if a specified number or percentage of instances or functions in your deployment group fail to complete their deployments and signal success back to AWS CodeDeploy.

Amazon EC2 On-Premises Deployment Configurations

When you deploy to Amazon EC2 on-premises instances, you can configure either in-place or blue/green deployments.

In-Place deployments These deployments deploy revisions onto existing, currently running instances.

Blue/Green deployments These deployments replace currently running instances with sets of newly created instances.

In both scenarios, you can specify wait times between groups of deployed instances (batches). Additionally, if you register the deployment group with an elastic load balancer, newly deployed instances also register with the load balancer and are subject to its health checks.

The deployment configuration specifies success criteria for deployments, such as the minimum number of healthy instances that must pass health checks during the deployment process. This is done to maintain required availability during application updates. AWS CodeDeploy provides three built-in deployment configurations.

CodeDeployDefault.AllAtOnce

For in-place deployments, AWS CodeDeploy will attempt to deploy to all instances in the deployment group at the same time. The success criteria for this deployment configuration requires that at least one instance succeed for the deployment to be successful. If all instances fail the deployment, then the deployment itself fails.

For blue/green deployments, AWS CodeDeploy will attempt to deploy to the entire set of replacement instances at the same time and follows the same success criteria as in-place deployments. Once deployment to the replacement instances succeeds (at least one instance deploys successfully), traffic routes to all replacement instances at the same time. The deployment fails only if all traffic routing to replacement instances fails.

CodeDeployDefault.HalfAtATime

For in-place deployments, up to half of the instances in the deployment group deploy at the same time (rounded down). Success criteria for this deployment configuration requires that at least half of the instances (rounded up) deploy successfully.

Blue/green deployments use the same rules for the replacement environment, with the exception that the deployment will fail if less than half of the instances in the replacement environment successfully handle rerouted traffic.

CodeDeployDefault.OneAtATime

For in-place and blue/green deployments, this is the most stringent of the built-in deployment configurations, as it requires all instances to deploy the new application revision successfully, with the exception of the final instance in the deployment. For deployment groups with only one instance, the instance must complete successfully for the deployment to complete.

For blue/green deployments, the same rule applies for traffic routing. If all but the last instance registers successfully, the deployment is successful (with the exception of single-instance environments, where it must register without error).
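As a sketch of the success criteria above for an in-place deployment group of n instances (this mirrors the rounding rules described, not an AWS API):

```python
import math

# Batch size and minimum successful instances for each built-in
# deployment configuration, for a deployment group of n instances.
def success_criteria(n):
    return {
        # Deploy to all at once; at least one instance must succeed.
        "CodeDeployDefault.AllAtOnce": {"batch": n, "min_successful": 1},
        # Up to half deploy at once (rounded down); at least half
        # (rounded up) must succeed.
        "CodeDeployDefault.HalfAtATime": {
            "batch": n // 2,
            "min_successful": math.ceil(n / 2),
        },
        # One instance at a time; all but the final instance must
        # succeed (a single-instance group must succeed entirely).
        "CodeDeployDefault.OneAtATime": {
            "batch": 1,
            "min_successful": n - 1 if n > 1 else 1,
        },
    }

for name, rule in success_criteria(9).items():
    print(name, rule)
```

For a nine-instance group, HalfAtATime deploys in batches of four and requires five successes, while OneAtATime requires eight of the nine instances to succeed.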

AWS Lambda Deployment Configurations

AWS CodeDeploy handles updates to AWS Lambda functions differently than updates to Amazon EC2 or on-premises instances. When you deploy to AWS Lambda, the deployment configuration specifies the traffic switching policy to follow, which stipulates how quickly to route requests from the original function versions to the new versions. AWS Lambda deployments are always blue/green; in-place deployments are not supported, because AWS CodeDeploy deploys updates as new function versions.

AWS CodeDeploy supports three methods for handling traffic switching in an AWS Lambda environment.

Canary

Traffic shifts in two percentage-based increments. The first increment routes to the new function version and is monitored for the number of minutes you define. After this time period, the remainder of traffic routes to the new version if the initial increment of requests executes successfully.

AWS CodeDeploy provides a number of built-in canary-based deployment configurations, such as CodeDeployDefault.LambdaCanary10Percent15Minutes. If you use this deployment configuration, 10 percent of traffic shifts in the first increment and is monitored for 15 minutes. After this time period, the 90 percent of traffic that remains shifts to the new function version. You can create additional configurations as needed.

Linear

Traffic can be shifted in a number of percentage-based increments, with a set number of minutes between each increment. During the waiting period between each increment, the requests routed to the new function versions must complete successfully for the deployment to continue.

AWS CodeDeploy provides a number of built-in linear deployment configurations, such as CodeDeployDefault.LambdaLinear10PercentEvery1Minute. With this configuration, 10 percent of traffic is routed to the new function version every minute, until all traffic is routed after 10 minutes.

All-at-Once

All traffic is shifted at once to the new function versions.
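The three traffic-shifting policies above can be sketched as schedules of (minutes elapsed, percent of traffic on the new version); the configuration names in the docstrings are the built-in ones mentioned earlier:

```python
# Traffic-shifting schedules for AWS Lambda deployment configurations.

def canary(percent, interval_minutes):
    """E.g. CodeDeployDefault.LambdaCanary10Percent15Minutes: shift the
    first increment immediately, monitor, then shift the remainder."""
    return [(0, percent), (interval_minutes, 100)]

def linear(percent, interval_minutes):
    """E.g. CodeDeployDefault.LambdaLinear10PercentEvery1Minute: shift a
    fixed increment at each interval until all traffic has moved."""
    steps = 100 // percent
    return [((i + 1) * interval_minutes, (i + 1) * percent) for i in range(steps)]

def all_at_once():
    """Shift all traffic to the new version immediately."""
    return [(0, 100)]

print(canary(10, 15))   # [(0, 10), (15, 100)]
print(linear(10, 1))    # 10% more each minute; 100% after 10 minutes
print(all_at_once())    # [(0, 100)]
```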

Application

An application is a logical grouping of a deployment group, revision, and deployment configuration. This serves as a reference to the entire set of objects needed to complete a deployment to your instances or functions.

AppSpec File

The AppSpec configuration file is a JSON or YAML file that defines how deployments proceed on the instances or functions in your environment. The format and purpose of an AppSpec file differ between Amazon EC2/on-premises and AWS Lambda deployments.

Amazon EC2 On-Premises AppSpec

For Amazon EC2 on-premises deployments, the AppSpec file must be YAML formatted and follow the YAML specifications for spacing and indentation. You place the AppSpec file (appspec.yml) in the root of the revision’s source code directory structure (it cannot be in a subfolder).

When you deploy to Amazon EC2 on-premises instances, the AppSpec file defines the following:

  • A mapping of files from the revision to locations on the instance
  • The permissions of files to deploy
  • Scripts to execute throughout the lifecycle of the deployment

The AppSpec file specifies scripts to execute at each stage of the deployment lifecycle. These scripts must exist in the revision for AWS CodeDeploy to call them successfully; however, they can call any other scripts, commands, or tools present on the instance. The AWS CodeDeploy agent uses the hooks section of the AppSpec file to reference which scripts must execute at specific times in the deployment lifecycle. When the deployment is at the specified stage (such as ApplicationStop), the AWS CodeDeploy agent will execute any scripts in that stage in the hooks section of the AppSpec file. All scripts must return an exit code of 0 to be successful.

For any files to place on the instance, the AWS CodeDeploy agent refers to the files section of the AppSpec file, where a mapping of files and directories in the revision dictates where on the instance these files reside and with what permissions. Here’s an example of an appspec.yml file:

version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/html/WordPress
hooks:
  BeforeInstall:
    - location: scripts/install_dependencies.sh
      timeout: 300
      runas: root
  AfterInstall:
    - location: scripts/change_permissions.sh
      timeout: 300
      runas: root
  ApplicationStart:
    - location: scripts/start_server.sh
    - location: scripts/create_test_db.sh
      timeout: 300
      runas: root
  ApplicationStop:
    - location: scripts/stop_server.sh
      timeout: 300
      runas: root

In the previous example, the following events occur during deployment:

  • During the install phase of the deployment, all files from the revision (source: /) are placed on the instance in the /var/www/html/WordPress directory.
  • The install_dependencies.sh script (located in the scripts directory of the revision) executes during the BeforeInstall phase.
  • The change_permissions.sh script executes in the AfterInstall phase.
  • The start_server.sh and create_test_db.sh scripts execute in the ApplicationStart phase.
  • The stop_server.sh script executes in the ApplicationStop phase.

The high-level structure of an Amazon EC2 on-premises AppSpec file is as follows:

version: 0.0
os: operating-system-name
files:
  source-destination-files-mappings
permissions:
  permissions-specifications
hooks:
  deployment-lifecycle-event-mappings

version Currently the only supported version number is 0.0.

os The os section defines the target operating system of the deployment group. Either windows or linux (Amazon Linux, Ubuntu, or Red Hat Enterprise Linux) is supported.

files The files section defines the mapping of revision files and their location to deploy on-instance during the install lifecycle event. This section is not required if no files are being copied from the revision to your instance. The files section supports a list of source/destination pairs.

files:
  - source: source-file-location
    destination: destination-file-location

The source key refers to a file or a directory’s local path within the revision (use / for all files in the revision). If source refers to a file, the file copies to destination, specified as the fully qualified path on the instance. If source refers to a directory, the directory contents copy to the instance.

permissions For any deployed files or directories, the permissions section specifies the permissions to apply to them on the target instance. Permissions apply only to objects that AWS CodeDeploy places on the instance through the files section of the AppSpec configuration.

permissions:
  - object: object-specification
    pattern: pattern-specification
    except: exception-specification
    owner: owner-account-name
    group: group-name
    mode: mode-specification
    acls:
      - acls-specification
    context:
      user: user-specification
      type: type-specification
      range: range-specification
    type:
      - object-type

Each object specification includes a set of files or directories to which the permissions will apply. You can select files based on a pattern expression and ignore them with a comma-delimited list in the except property. The owner, group, and mode properties correspond to their Linux equivalents. You can apply access control lists with the acls property, providing a list of user/group permissions assignments (such as u:ec2-user:rw). The context property is reserved for SELinux-enabled instances. This property corresponds to a set of context labels to apply to objects. Lastly, you use the type property to specify to which types of objects (file or directory) the specified permissions will apply.

Symbol of Note Windows instances do not support permissions.
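As an illustration, the following hypothetical permissions entry (path, owner, group, and mode are examples only) applies ownership and mode settings to all deployed files under a directory:

```yaml
permissions:
  - object: /var/www/html/WordPress
    pattern: "**"
    owner: apache
    group: apache
    mode: 644
    type:
      - file
```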

hooks The hooks section specifies the scripts to run at each lifecycle event and under what user context to execute them.

One or more scripts can execute for each lifecycle hook.

ApplicationStop Before the application revision downloads to the instance, this lifecycle event can stop any running services on the instance that would be affected by the update. It is important to note that, since the revision has not yet been downloaded, the scripts execute from the previous revision. Because of this, the ApplicationStop hook does not run on the first deployment to an instance.

DownloadBundle The AWS CodeDeploy agent uses this lifecycle event to copy application revision files to a temporary location on the instance.

Linux /opt/codedeploy-agent/deployment-root/[deployment-group-id]/[deployment-id]/deployment-archive

Windows C:\ProgramData\Amazon\CodeDeploy\[deployment-group-id]\[deployment-id]\deployment-archive

This event cannot run custom scripts, as it is reserved for the AWS CodeDeploy agent.

BeforeInstall Use this event for any pre-installation tasks, such as to clear log files or to create backups.

Install This event is reserved for the AWS CodeDeploy agent.

AfterInstall Use this event for any post-installation tasks, such as to modify the application configuration.

ApplicationStart Use this event to start any services that were stopped during the ApplicationStop event.

ValidateService Use this event to verify deployment completed successfully.

If your deployment group is registered with a load balancer, additional lifecycle events become available. These can be used to control certain behaviors as the instance is registered or deregistered from the load balancer.

BeforeBlockTraffic Use this event to run tasks before the instance is deregistered from the load balancer.

BlockTraffic This event is reserved for the AWS CodeDeploy agent.

AfterBlockTraffic Use this event to run tasks after the instance is deregistered from the load balancer.

BeforeAllowTraffic Similar in concept to BeforeBlockTraffic, this event occurs before instances register with the load balancer.

AllowTraffic This event is reserved for the AWS CodeDeploy agent.

AfterAllowTraffic Similar in concept to AfterBlockTraffic, this event occurs after instances register with the load balancer.

hooks:
   deployment-lifecycle-event-name:
     - location: script-location
       timeout: timeout-in-seconds
       runas: user-name

In the hooks section, the lifecycle event name must match one of the previous event names that are not reserved for the AWS CodeDeploy agent. The location property refers to the relative path in the revision archive where the script is located. You can configure an optional timeout to limit how long a script can run before it is considered failed. (Note that this does not stop the script’s execution.) The maximum script duration is 1 hour (3,600 seconds) for each lifecycle event. Lastly, the runas property can specify the user to execute the script. This user must exist on the instance and cannot require a password.

Figure 7.20 displays lifecycle hooks and their availability for in-place deployments with and without a load balancer.

Figure 7.20 Lifecycle hook availability with load balancer

Figure 7.21 displays lifecycle hooks and their availability for blue/green deployments.

Figure 7.21 Lifecycle hook availability with blue/green deployments

AWS Lambda AppSpec

When you deploy to AWS Lambda functions, the AppSpec file can be in JSON or YAML format, and it specifies the function versions to deploy as well as other functions to execute for validation testing.

Symbol of Note AWS Lambda deployments do not use the AWS CodeDeploy agent.

The high-level structure of an AWS Lambda deployment AppSpec file is as follows:

version: 0.0
resources:
  lambda-function-specifications
hooks:
  deployment-lifecycle-event-mappings

version Currently the only supported version number is 0.0.

resources The resources section defines the AWS Lambda functions to deploy.

resources:
  - name-of-function-to-deploy:
      type: "AWS::Lambda::Function"
      properties:
        name: name-of-lambda-function-to-deploy
        alias: alias-of-lambda-function-to-deploy
        currentversion: lambda-function-version-traffic-currently-points-to
        targetversion: lambda-function-version-to-shift-traffic-to

Name each function in the resources list both as the list item name and in the name property. The alias property specifies the function alias, which maps from the version specified in currentversion to the version specified in targetversion after the update deploys.
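A filled-in sketch of this structure (the function name, alias, and version numbers are hypothetical):

```yaml
resources:
  - myLambdaFunction:
      type: "AWS::Lambda::Function"
      properties:
        name: myLambdaFunction
        alias: live
        currentversion: 1
        targetversion: 2
```

After the deployment completes, the live alias points to version 2 instead of version 1.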

hooks The hooks section specifies the additional AWS Lambda functions to run at specific stages of the deployment lifecycle to validate success. The following lifecycle events support hooks in AWS Lambda deployments:

BeforeAllowTraffic For running any tasks prior to traffic shifting taking place

AfterAllowTraffic For any tasks after all traffic shifting has completed

hooks:
   - BeforeAllowTraffic: BeforeAllowTrafficHookFunctionName
   - AfterAllowTraffic: AfterAllowTrafficHookFunctionName

Figure 7.22 displays the lifecycle hook availability for AWS Lambda deployments.

Symbol of Note AWS CodeDeploy reserves the Start, AllowTraffic, and End lifecycle events.

Figure 7.22 Lifecycle hook availability for AWS Lambda deployments

For any function in the hooks section, that function is responsible for notifying AWS CodeDeploy of success or failure with the PutLifecycleEventHookExecutionStatus API call from within your validation function. Here’s an example for Node.js:

// Pass AWS CodeDeploy the prepared validation test results.
// Illustrative setup (built earlier in the handler):
//   var codedeploy = new AWS.CodeDeploy();
//   var params = {
//       deploymentId: event.DeploymentId,
//       lifecycleEventHookExecutionId: event.LifecycleEventHookExecutionId,
//       status: validationPassed ? 'Succeeded' : 'Failed'
//   };
codedeploy.putLifecycleEventHookExecutionStatus(params, function(err, data) {
    if (err) {
        // Validation failed.
        callback('Validation test failed');
    } else {
        // Validation succeeded.
        callback(null, 'Validation test succeeded');
    }
});

AWS CodeDeploy Agent

The AWS CodeDeploy agent is responsible for driving and validating deployments on Amazon EC2 on-premises instances. The agent currently supports Amazon Linux (Amazon EC2 only), Ubuntu Server, Microsoft Windows Server, and Red Hat Enterprise Linux, and it is available as an open source repository on GitHub (https://github.com/aws/aws-codedeploy-agent).

When the agent is installed, a codedeployagent.yml configuration file is copied to the instance. You can use this file to adjust the behavior of the AWS CodeDeploy agent across deployments. This configuration file is stored in /etc/codedeploy-agent/conf on Linux instances and C:\ProgramData\Amazon\CodeDeploy on Windows Server instances.

The most common settings are as follows:

max_revisions Use this to configure how many application revisions to archive on an instance. If you are experiencing storage limitations on your instances, turn this value down and release some storage space consumed by the agent.

root_dir Use this to change the default storage location for revisions, scripts, and archives.

verbose Set this to true to enable verbose logging output for debugging purposes.

proxy_url For environments that use an HTTP proxy, this specifies the URL and credentials to authenticate to the proxy and connect to the AWS CodeDeploy service.

AWS CodeDeploy Service Limits

AWS CodeDeploy enforces the service limits, as shown in Table 7.5. An asterisk (*) indicates limits that you can increase with a request to AWS Support.

Table 7.5 AWS CodeDeploy Service Limits

Limit Value
Applications per account per region 100
Allowed revision file types .zip, .tar, and .tgz
Concurrent deployments per deployment group 1
Concurrent deployments per account 100
Maximum deployment lifecycle event duration 3600 seconds
Custom deployment configurations per account 25
Deployment groups per application* 100
Tags per deployment group 10
Auto Scaling groups per deployment group 10
Instances per deployment 500

Using AWS CodeDeploy with AWS CodePipeline

AWS CodeDeploy integrates with AWS CodePipeline as a deployment action to deploy changes to Amazon EC2 or on-premises instances or AWS Lambda functions. You can configure applications, deployment groups, and deployments directly in the AWS CodePipeline console when you create or edit a pipeline, or you can do this ahead of time with the AWS CodeDeploy console, the AWS CLI, or an AWS SDK.

After you define the deployment provider, application name, and deployment group in the AWS CodePipeline console, the pipeline is automatically configured to pass a pipeline artifact to AWS CodeDeploy for deployment to the specified application/group, as shown in Figure 7.23.

Figure 7.23 Deployment provider

AWS CodeDeploy monitors the progress of any revisions being deployed and reports success or failure to AWS CodePipeline.

Summary

In this chapter, you learned about these deployment services:

  • AWS CodePipeline
  • AWS CodeCommit
  • AWS CodeBuild
  • AWS CodeDeploy

AWS CodePipeline drives application deployments starting with a source repository (AWS CodeCommit), performing builds with AWS CodeBuild, and finally deploying to Amazon EC2 instances or AWS Lambda functions using AWS CodeDeploy. You can use AWS CloudFormation to provision and manage infrastructure in your environment. By integrating this with AWS CodePipeline, you can automate the entire process of creating development, testing, and production environments into a fully hands-off process. In a fully realized enterprise as code, a single commit to a source repository can kick off processes such as those shown in Figure 7.1.

Exam Essentials

Know the difference between continuous integration, continuous delivery, and continuous deployment. Continuous integration is the practice where all code changes merge into a repository. Continuous delivery is the practice where all code changes are prepared for release. Continuous deployment is the practice where all code is prepared for release and automatically released to production environments.

Know the basics of AWS CodePipeline. AWS CodePipeline contains the steps in the continuous integration and deployment pipeline (CI/CD) workflow, driving automation between different tasks after assets have been committed to a repository or saved in a bucket. AWS CodePipeline uses stages, which correspond to different steps in a workflow. Within each stage, different actions can perform tasks in series or in parallel. Transitions between stages can be automatic or require manual approval by an authorized user.

Understand how revisions can move through a pipeline. Revisions move automatically between stages in a pipeline, provided that all actions in the preceding stage complete successfully. If a manual approval is required, the revision will not proceed until an authorized user allows it to do so. When two changes are pushed to a source repository in a short time span, the latest of the two changes will proceed through the pipeline.

Know the different pipeline actions that are available. A pipeline stage can include one or more actions: build, test, deploy, and invoke. You can also create custom actions.

Know how to deploy a cross-account pipeline. The account containing the pipeline must create a KMS key that can be used by both AWS CodePipeline and the other account. The pipeline account must also specify a bucket policy on the assets bucket that the pipeline uses, which allows the second account to access assets. The AWS CodePipeline service IAM role must include a policy that allows it to assume a role in the second account. The second account must have a role that can be assumed by the pipeline account, which allows the pipeline account to deploy resources and access the assets bucket.

Know the basic concepts of AWS CodeCommit. AWS CodeCommit is a Git-based repository service. It is fully compatible with existing Git tooling. AWS CodeCommit provides various benefits, such as encryption in transit and at rest; automatic scaling to handle increases in activity; access control using IAM users, roles, and policies; and HTTPS/SSH connectivity. AWS CodeCommit supports normal Git workflows, such as pull requests.

Know how to use the credential helper to connect to repositories. It is possible to connect to AWS CodeCommit repositories using IAM credentials. The AWS CodeCommit credential helper translates an IAM access key and secret access key into valid Git credentials. This requires the AWS CLI and a Git configuration file that specifies the credential helper.
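In practice, the credential helper is wired up with two git config commands (this assumes the AWS CLI is installed and an IAM access key has already been configured with aws configure):

```shell
# Route Git HTTPS authentication for CodeCommit through the AWS CLI
# credential helper, which signs requests with your IAM credentials.
git config --global credential.helper '!aws codecommit credential-helper $@'
git config --global credential.UseHttpPath true
```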

Understand the different strategies for migrating to AWS CodeCommit. You can migrate an existing Git repository by cloning it to your local workstation and adding a new remote that points to the AWS CodeCommit repository you create, then pushing the repository contents to the new remote. You can migrate unversioned content in a similar manner; however, you must create a new local Git repository (instead of cloning an existing one). Large repositories can be migrated incrementally, since large pushes may fail because of network issues.

Know the basics of AWS CodeBuild. AWS CodeBuild allows you to perform long-running build tasks repeatedly and reliably without having to manage the underlying infrastructure. You are responsible only for specifying the build environment settings and the actual tasks to perform.

Know the basics of AWS CodeDeploy. AWS CodeDeploy standardizes and automates deployments to Amazon EC2 instances, on-premises servers, and AWS Lambda functions. Deployments can include application/static files, configuration tasks, or arbitrary scripts to execute. For Amazon EC2 on-premises deployments, a lightweight agent is required.

Understand how AWS CodeDeploy works with Amazon EC2 Auto Scaling groups. When you deploy to Amazon EC2 Auto Scaling groups, AWS CodeDeploy will automatically run the last successful deployment on any new instances that you add to the group. If the deployment fails on the instance, it will be terminated and replaced (to maintain the desired count of healthy instances). If two deployment groups for separate AWS CodeDeploy applications specify the same Auto Scaling group, issues can occur. If both applications deploy at roughly the same time and one fails, the instance will be terminated before success/failure can be reported for the second application deployment. This will result in AWS CodeDeploy waiting until the timeout period expires before taking any further action.

Resources to Review

Exercises

Exercise 7.1

Create an AWS CodeCommit Repository and Submit a Pull Request

This exercise demonstrates how to use AWS CodeCommit to submit and merge pull requests to a repository.

  1. Create an AWS CodeCommit repository with a name and description. You do not need to configure email notifications for repository events.
  2. In the AWS CodeCommit console, select Create File to add a simple markdown file to test the repository.
  3. Clone the repository to your local machine with HTTPS or SSH authentication.
  4. Create a file locally, commit it, and push it to the AWS CodeCommit repository to test your access.
  5. Create a feature branch from the master branch in the repository.
  6. Edit the file and commit the changes to the feature branch.
  7. Use the AWS CodeCommit console to create a pull request. Use the master branch of the repository as the destination and the feature branch as the source.
  8. After the pull request is created successfully, merge the changes from the feature branch into the master branch.

The pull request has now been merged into the master branch, which you can confirm by viewing the source of the markdown file on the master branch.
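Steps 5 through 8 above can be mirrored locally with plain Git. The branch name, file contents, and demo identity below are illustrative, and in AWS CodeCommit the merge itself is performed through the pull request (in the console, or with CLI commands such as `aws codecommit create-pull-request`) rather than a local `git merge`:

```shell
# Offline sketch of the branch-and-merge flow from Exercise 7.1.
rm -rf /tmp/demo-repo
mkdir -p /tmp/demo-repo
cd /tmp/demo-repo
git init
echo "# Demo" > README.md
git add README.md
git -c user.name=demo -c user.email=demo@example.com commit -m "initial commit"

# Step 5: create a feature branch from the default branch.
git checkout -b feature/update-readme

# Step 6: edit the file and commit the change to the feature branch.
echo "An update made on the feature branch." >> README.md
git -c user.name=demo -c user.email=demo@example.com commit -am "update README"

# Steps 7-8: a CodeCommit pull request from feature/update-readme into
# the default branch produces the same result as this local merge.
git checkout -
git merge feature/update-readme
```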

Exercise 7.2

Create an Application in AWS CodeDeploy

This exercise demonstrates how to use AWS CodeDeploy to perform an in-place deployment to Amazon EC2 instances in your account.

  1. Create a new application in the AWS CodeDeploy console.

    For the compute platform type, select EC2/On-Premises.

  2. Create a new deployment group for your application. Specify the following values:

    Deployment type In-place

    Environment configuration Amazon EC2 instances

    Tag group Create a tag group that is easy to identify, such as “Name” for the key and “CodeDeployInstance” for the value.

    Load balancer Clear the Enable load balancing check box.

  3. Launch a new Amazon EC2 instance.

    Make sure to specify the tag value chosen in the previous step, and confirm that the AWS CodeDeploy agent is installed and running on the instance.

  4. Download the sample application bundle to your local machine for future updates.

    Sample application bundles for each operating system can be found using the following links:

    Windows Server https://docs.aws.amazon.com/codedeploy/latest/userguide/tutorials-windows.html

    Amazon Linux or Red Hat Enterprise Linux (RHEL) https://docs.aws.amazon.com/codedeploy/latest/userguide/tutorials-wordpress.html

  5. Create a deployment, and verify that the sample application deploys.
  6. Update the application code, and submit a new deployment to the deployment group.
  7. Verify your changes after the deployment completes.

Exercise 7.3

Create an AWS CodeBuild Project

This exercise demonstrates how to use AWS CodeBuild to perform builds and compile artifacts prior to deployment to Amazon EC2 instances.

  1. Create an Amazon S3 bucket to hold artifacts.
  2. Upload two or more arbitrary files to the bucket.
  3. Use the AWS CodeBuild console to create a build project with the following settings:

    Project name Provide a name of your choice.

    Source Use Amazon S3.

    Bucket Provide the name of the bucket you created.

    S3 object key Provide the name of one of the objects you uploaded.

    Environment image Select the Managed Image type.

    Operating system Use Ubuntu.

    Runtime Use Python.

    Runtime version Select a version of your choice.

    Service role Select New Service Role.

    Role name Provide a name for your service role.

    Build specifications Select Insert Build Commands.

    Build commands Select Switch To Editor and enter the following. Replace the Amazon S3 object paths with paths to the objects you uploaded to your bucket.

    version: 0.2
     
    phases:
      build:
        commands:
          - aws s3 cp s3://yourbucket/file1 /tmp/file1
          - aws s3 cp s3://yourbucket/file2 /tmp/file2
    artifacts:
      files:
        - /tmp/file1
        - /tmp/file2

    Artifact Type Use Amazon S3.

    Bucket name Select your Amazon S3 bucket.

    Artifacts packaging Select Zip.

  4. Save your build project.
  5. Run your build project, and observe the output archive file created in your Amazon S3 bucket.

Review Questions

  1. You have two AWS CodeDeploy applications that deploy to the same Amazon EC2 Auto Scaling group. The first deploys an e-commerce app, while the second deploys custom administration software. You are attempting to deploy an update to one application but cannot do so because another deployment is already in progress. You do not see any instances undergoing deployment at this time. What could be the cause of this?

    1. If both deployment groups reference the same Auto Scaling group, a failure of the first group’s deployment can block the second until the deployment times out. Since the instance that failed deployment has been terminated from the Auto Scaling group, the AWS CodeDeploy agent is unable to provide results to the service.
    2. The AWS CodeDeploy agent is not installed on the instances as part of the launch configuration user data script.
    3. If both deployment groups reference the same Auto Scaling group, a failure of the first group’s deployment can block the second until the deployment times out. Since the instance that failed deployment has been terminated from the Auto Scaling group, the AWS CodeDeploy service is unable to request status updates from the Amazon EC2 API.
    4. The AWS CodeDeploy agent is not installed in the Amazon Machine Image (AMI) being used.
  2. If you specify a hook script in the ApplicationStop lifecycle event of an AWS CodeDeploy appspec.yml, will it run on the first deployment to your instance(s)?

    1. Yes
    2. No
    3. The ApplicationStop lifecycle event does not exist.
    4. It will run only if your application is running.
  3. If a single pipeline contains multiple sources, such as an AWS CodeCommit repository and an Amazon S3 archive, under what circumstances will the pipeline be triggered?

    1. When either a commit is pushed to the repository or the archive is updated, regardless of timing.
    2. When a commit is pushed to the repository and the archive is updated at the same time.
    3. When either a commit is pushed to the repository or the archive is updated, but not when both are updated at the same time.
    4. AWS CodePipeline does not support multiple sources in the same pipeline.
  4. If you want to implement a deployment pipeline that deploys both source files and large binary objects to instance(s), how would you best achieve this while taking cost into consideration?

    1. Store both the source files and binary objects in AWS CodeCommit.
    2. Build the binary objects into the AMI of the instance(s) being deployed. Store the source files in AWS CodeCommit.
    3. Store the source files in AWS CodeCommit. Store the binary objects in an Amazon S3 archive.
    4. Store the source files in AWS CodeCommit. Store the binary objects on an Amazon Elastic Block Store (Amazon EBS) volume, taking snapshots of the volume whenever a new one needs to be created.
    5. Store the source files in AWS CodeCommit. Store the binary objects in Amazon S3 and access them from an Amazon CloudFront distribution.
  5. Your team is building a deployment pipeline for a sensitive application in your environment using AWS CodeDeploy. The application consists of an Amazon EC2 Auto Scaling group of instances behind an Elastic Load Balancing load balancer. The nature of the application requires 100 percent availability for both successful and failed deployments. The development team wants to deploy changes multiple times per day. How would this be achieved at the lowest cost and with the fastest deployments?

    1. Rolling deployments with an additional batch
    2. Rolling deployments without an additional batch
    3. Blue/green deployments
    4. Immutable updates
  6. What would cause an access denied error when attempting to download an archive file from Amazon S3 during a pipeline execution?

    1. Insufficient user permissions for the user initiating the pipeline
    2. Insufficient user permissions for the user uploading the Amazon S3 archive
    3. Insufficient role permissions for the Amazon S3 service role
    4. Insufficient role permissions for the AWS CodePipeline service role
  7. How do you output build artifacts from AWS CodeBuild to AWS CodePipeline?

    1. Write the outputs to STDOUT from the build container.
    2. Specify artifact files in the buildspec.yml configuration file.
    3. Upload the files to Amazon S3 from the build environment.
    4. Output artifacts are not supported with AWS CodeBuild.
  8. What would be the most secure means of providing secrets to an AWS CodeBuild environment?

    1. Create a custom build environment with the secrets included in configuration files.
    2. Upload the secrets to Amazon S3 and download the object when the build job runs. Protect the bucket and object with an appropriate bucket policy.
    3. Save the secrets in AWS Systems Manager Parameter Store and query them as needed. Encrypt the secrets with an AWS Key Management Service (AWS KMS) key. Include appropriate AWS KMS permissions to your build environment’s IAM role.
    4. Include the secrets in the source repository or archive.
  9. In which of the pipeline actions can you execute AWS Lambda functions?

    1. Invoke
    2. Deploy
    3. Build
    4. Approval
    5. Test
  10. In what ways can pipeline actions be ordered in a stage? (Select TWO.)

    1. Series
    2. Parallel
    3. Stages support only one action each
    4. First-in-first-out (FIFO)
    5. Last-in-first-out (LIFO)
  11. If you would like to delete an AWS CloudFormation stack before you deploy a new one in your pipeline, what would be the correct set of actions?

    1. One action that specifies “Create or update a stack.”
    2. Two actions: the first specifies “Create or update a stack,” and the second specifies “Delete a stack.”
    3. Three actions: the first specifies “Delete a stack,” the second specifies “Create or update a stack,” and the third specifies “Replace a failed stack.”
    4. Two actions: the first specifies “Delete a stack,” and the second specifies “Create or update a stack.”
  12. How can you connect to an AWS CodeCommit repository without Git credentials?

    1. It is not possible.
    2. HTTPS
    3. SSH
    4. AWS CodeCommit credential helper
  13. Of the following, which event cannot be used to generate notifications to an Amazon Simple Notification Service (SNS) topic from AWS CodeCommit without using a trigger?

    1. Pull Request Creation
    2. Commit Comments
    3. Commit Creation
    4. Pull Request Comments
  14. Which pipeline actions support AWS CodeBuild projects? (Select TWO.)

    1. Invoke
    2. Deploy
    3. Build
    4. Approval
    5. Test
  15. Can data passed to build projects using environment variables be encrypted or protected?

    1. Yes, this is supported natively by AWS CodeBuild.
    2. No, it is not supported.
    3. No, but this can be enabled in the console.
    4. No, but this can be supported using other AWS products and services.
  16. What is the only deployment type supported by on-premises instances?

    1. In-place
    2. Blue/green
    3. Immutable
    4. Progressive
  17. If your AWS CodeDeploy configuration includes creation of a file, nginx.conf, but the file already exists on the server (prior to the use of AWS CodeDeploy), what is the default behavior that will occur during deployment?

    1. The file will be replaced.
    2. The file will be renamed nginx.conf.bak, and the new file will be created.
    3. The deployment will fail.
    4. The deployment will continue, but the file will not be modified.
  18. How does AWS Lambda support in-place deployments?

    1. Function versions are overwritten during the deployment.
    2. New function versions are created, and then version numbers are switched.
    3. AWS Lambda does not support in-place deployments.
    4. Function aliases are overwritten during the deployment.
  19. What is the minimum number of stages required by a pipeline in AWS CodePipeline?

    1. 0
    2. 1
    3. 2
    4. 3
  20. If an instance is running low on storage, and you find that there are a large number of deployment revisions stored by AWS CodeDeploy, what can be done to free up this space permanently?

    1. Delete the old revisions.
    2. Add an additional Amazon EBS volume.
    3. Configure the AWS CodeDeploy agent to store fewer revisions.
    4. Delete all of the revisions, and push all new code.