Automating the creation of containers with CodeBuild

AWS CodeBuild is a managed service geared toward compiling source code. It is comparable to Jenkins, but because it is a managed service that conforms to AWS standards, it presents a different set of features and benefits. In our case, using CodeBuild over Jenkins will allow us to create containers without having to spin up and manage an extra EC2 instance. The service also integrates well with CodePipeline which, as before, will drive our process.

We will use CloudFormation through the intermediary of Troposphere to create our CodeBuild project.

We will create a new script and call it helloworld-codebuild-cf-template.py.

We will start with our usual import, template variable creation, and description, shown as follows:

"""Generating CloudFormation template.""" 
 
from awacs.aws import ( 
    Allow, 
    Policy, 
    Principal, 
    Statement 
) 

from awacs.sts import AssumeRole
from troposphere import ( Join, Ref, Template )
from troposphere.codebuild import ( Artifacts, Environment, Project, Source )
from troposphere.iam import Role

t = Template()

t.add_description("Effective DevOps in AWS: CodeBuild - Helloworld container")

We will now define a new role to grant the proper permissions to our CodeBuild project. The CodeBuild project will interact with a number of AWS services, such as ECR, CodePipeline, S3, and CloudWatch Logs. To speed up the process, we will rely on the vanilla AWS managed policies to configure the permissions. This gives us the following:

t.add_resource(Role( 
    "ServiceRole", 
    AssumeRolePolicyDocument=Policy( 
        Statement=[ 
            Statement( 
                Effect=Allow, 
                Action=[AssumeRole], 
                Principal=Principal("Service", ["codebuild.amazonaws.com"]) 
            ) 
        ] 
    ), 
    Path="/", 
    ManagedPolicyArns=[ 
        'arn:aws:iam::aws:policy/AWSCodePipelineReadOnlyAccess', 
        'arn:aws:iam::aws:policy/AWSCodeBuildDeveloperAccess', 
        'arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryPowerUser', 
        'arn:aws:iam::aws:policy/AmazonS3FullAccess', 
        'arn:aws:iam::aws:policy/CloudWatchLogsFullAccess' 
    ] 
)) 

CodeBuild projects require defining a number of elements. The first one we will define is the environment. This tells CodeBuild what type of hardware and OS we need to build our project, and what needs to be preinstalled. It will also let us define extra environment variables. We will use a Docker image that AWS provides, which gives us everything we need to get our work done. That image comes with the AWS and Docker CLIs preinstalled and configured. We will also define an environment variable to find our ECR repository endpoint:

environment = Environment( 
    ComputeType='BUILD_GENERAL1_SMALL', 
    Image='aws/codebuild/docker:1.12.1', 
    Type='LINUX_CONTAINER', 
    EnvironmentVariables=[ 
        {'Name': 'REPOSITORY_NAME', 'Value': 'helloworld'}, 
        {'Name': 'REPOSITORY_URI', 
            'Value': Join("", [ 
                Ref("AWS::AccountId"), 
                ".dkr.ecr.", 
                Ref("AWS::Region"), 
                ".amazonaws.com", 
                "/", 
                "helloworld"])}, 
    ], 
) 
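To make the Join("", [...]) expression above more concrete, here is a plain-Python sketch of the string it resolves to at deploy time. The account ID and region are hypothetical stand-ins for the AWS::AccountId and AWS::Region pseudo parameters:

```python
# Hypothetical values standing in for the AWS::AccountId and AWS::Region
# pseudo parameters that CloudFormation resolves at deploy time.
account_id = "123456789012"
region = "us-east-1"

# Mirrors the Join("", [...]) expression in the Environment definition.
repository_uri = "".join([
    account_id,
    ".dkr.ecr.",
    region,
    ".amazonaws.com",
    "/",
    "helloworld",
])
print(repository_uri)  # 123456789012.dkr.ecr.us-east-1.amazonaws.com/helloworld
```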

In CodeBuild, most of the logic is defined in a resource called a buildspec. The buildspec defines the different phases of the build and what to run during those phases. It is very similar to the Jenkinsfile we created in Chapter 4, Adding Continuous Integration and Continuous Deployment. The buildspec can be created as part of the CodeBuild project, or added as a YAML file to the root directory of the project being built. We will opt for the first option and define the buildspec inside our CloudFormation template. We will create a variable and store a YAML string in it. Since it's going to be a multiline variable, we will use the Python triple quote syntax.

The first key-value pair we need to specify is the version of the template. The current version of the CodeBuild buildspec is 0.1, as follows:

buildspec = """version: 0.1 

The goal of our build process is to generate a new container, tag it, and push it to the ECR repository. This will be done in three phases:

  1. The pre-build phase: Will generate the container tag and log in to ECR
  2. The build phase: Will build the new container
  3. The post-build phase: Will push the new container to ECR and update the latest tag to point to the new container

To make it easy to understand what's in each container, we will tag each one with the SHA of the most recent Git commit of the helloworld project. We will then be able to run commands such as git checkout <container tag> or git log <container tag> to inspect the code a container was built from.

Because of how CodeBuild and CodePipeline are architected, getting this tag in CodeBuild requires a bit of work. We will need to run the following two complex commands:

  • The first one will extract the execution ID of the current CodePipeline execution. We do that by combining the AWS CLI with the environment variables CODEBUILD_BUILD_ID and CODEBUILD_INITIATOR, which are defined by the CodeBuild service when a build starts
  • Next, we will use that execution ID to extract the artifact revision ID, which happens to be the commit SHA we are looking for

Those commands use some of the most advanced features of the query filter option (you can read more about it at http://amzn.to/2k7SoLE).

In CodeBuild, each command runs in its own environment, and therefore, the easiest way to share data across steps is to use temporary files.
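This pattern can be illustrated with a short Python sketch that runs two independent shells, the way CodeBuild runs each buildspec command (the tag value is a hypothetical commit SHA):

```python
import os
import subprocess
import tempfile

# Each buildspec command runs in its own shell, so shell variables set by one
# command are gone by the next; files under /tmp persist for the whole build.
tag_file = os.path.join(tempfile.gettempdir(), "tag.txt")

# First "command": write the (hypothetical) tag to a temporary file.
subprocess.run('printf "f42e4b2" > {}'.format(tag_file), shell=True, check=True)

# Second "command": a brand new shell reads the value back.
result = subprocess.run('cat {}'.format(tag_file), shell=True,
                        capture_output=True, text=True, check=True)
print(result.stdout)  # f42e4b2
```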

Right after the buildspec version definition, add the following to generate the first part of our pre-build phase, and extract the tag:

phases: 
  pre_build: 
    commands: 
      - aws codepipeline get-pipeline-state --name "${CODEBUILD_INITIATOR##*/}" --query "stageStates[?actionStates[0].latestExecution.externalExecutionId==\`$CODEBUILD_BUILD_ID\`].latestExecution.pipelineExecutionId" --output=text > /tmp/execution_id.txt 
      - aws codepipeline get-pipeline-execution --pipeline-name "${CODEBUILD_INITIATOR##*/}" --pipeline-execution-id $(cat /tmp/execution_id.txt) --query 'pipelineExecution.artifactRevisions[0].revisionId' --output=text > /tmp/tag.txt  

Our tag is now present in the /tmp/tag.txt file. We now need to generate two files. The first one will contain the argument for the Docker tag command (something like <AWS::AccountId>.dkr.ecr.us-east-1.amazonaws.com/helloworld:<tag>). To do that, we will take advantage of the environment variable defined earlier in our template. The second file will be a JSON file, which will define a key-value pair with the tag. We will use that file a bit later when we work on deploying our new containers to ECS.

After the previous commands, add the following commands to generate those files:

      - printf "%s:%s" "$REPOSITORY_URI" "$(cat /tmp/tag.txt)" > /tmp/build_tag.txt 
      - printf '{"tag":"%s"}' "$(cat /tmp/tag.txt)" > /tmp/build.json 
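The two printf commands above can be mirrored in plain Python to show exactly what lands in each file. The repository URI and commit SHA used here are hypothetical:

```python
import json

# Hypothetical values standing in for the REPOSITORY_URI environment variable
# and the commit SHA stored in /tmp/tag.txt.
repository_uri = "123456789012.dkr.ecr.us-east-1.amazonaws.com/helloworld"
tag = "f42e4b2"

# Mirrors: printf "%s:%s" "$REPOSITORY_URI" "$(cat /tmp/tag.txt)"
build_tag = "%s:%s" % (repository_uri, tag)

# Mirrors: printf '{"tag":"%s"}' "$(cat /tmp/tag.txt)"
build_json = json.dumps({"tag": tag}, separators=(",", ":"))

print(build_tag)   # 123456789012.dkr.ecr.us-east-1.amazonaws.com/helloworld:f42e4b2
print(build_json)  # {"tag":"f42e4b2"}
```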

To conclude the pre_build section, we will log in to our ECR registry:

      - $(aws ecr get-login --no-include-email) 

We will now define our build phase. Thanks to the build_tag file created earlier, the build phase will be straightforward. We will simply call the docker build command, as we did in the first section of this chapter:

  build: 
    commands: 
      - docker build -t "$(cat /tmp/build_tag.txt)" . 

We will now add the post_build phase to complete the build. In this section, we will push the newly built container to our ECR repository:

  post_build: 
    commands: 
      - docker push "$(cat /tmp/build_tag.txt)" 
      - aws ecr batch-get-image --repository-name $REPOSITORY_NAME --image-ids imageTag="$(cat /tmp/tag.txt)" --query 'images[].imageManifest' --output text | tee /tmp/latest_manifest.json 
      - aws ecr put-image --repository-name $REPOSITORY_NAME --image-tag latest --image-manifest "$(cat /tmp/latest_manifest.json)" 

In addition to the phases, one of the sections that is also defined in a buildspec is the artifacts section. This section is used to define what needs to be uploaded to S3 after the build succeeds and how to prepare it. We will export the build.json file and set the discard-path variable to true so we don't preserve the /tmp/ directory information. In the end, we will close our triple quote string:

artifacts: 
  files: /tmp/build.json 
  discard-paths: yes 
""" 

Now that our buildspec variable is defined, we can add our CodeBuild project resource. Through the instantiation of the project, we will set a name for our project, set its environment by calling the variable previously defined, set the service role, and configure the source and artifact resources, which define how to handle the build process and its output:

t.add_resource(Project( 
    "CodeBuild", 
    Name='HelloWorldContainer', 
    Environment=environment, 
    ServiceRole=Ref("ServiceRole"), 
    Source=Source( 
        Type="CODEPIPELINE", 
        BuildSpec=buildspec 
    ), 
    Artifacts=Artifacts( 
        Type="CODEPIPELINE", 
        Name="output" 
    ), 
))  

As always, we will conclude the creation of the script with the following print command:

print(t.to_json()) 

Our script is now complete. It should look like this: http://bit.ly/2w3nDfk.

We can save the file, add it to git, generate the CloudFormation template, and create our stack:

$ git add helloworld-codebuild-cf-template.py
$ git commit -m "Adding CodeBuild Template for our helloworld application"
$ git push
$ python helloworld-codebuild-cf-template.py > helloworld-codebuild-cf.template
$ aws cloudformation create-stack \
    --stack-name helloworld-codebuild \
    --capabilities CAPABILITY_IAM \
    --template-body file://helloworld-codebuild-cf.template

In a matter of minutes, our stack will be created. We will now want to take advantage of it. To do so, we will turn to CodePipeline once again and create a brand new and container-aware pipeline.
