Preparing Jenkins code

Before we proceed with Jenkins, allow me to introduce the two new helpers – Packer and Serverspec.

Packer

As described:

 

"Packer is a tool for creating machine and container images for multiple platforms from a single source configuration."

 
 --https://www.packer.io

Essentially, Packer is going to, well, pack things for us. We will feed it a template, based on which it will launch an EC2 instance, perform the requested tasks on it (over SSH), and then create an AMI from it. Packer can talk to various platforms (AWS, GCE, OpenStack, and so on) to provision resources, and can configure them via local or remote (SSH) shell, Salt, Ansible, Chef, and other provisioners. Packer being a HashiCorp product, it comes as no surprise that it uses a templating system very similar to Terraform's.
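
If you would like to run Packer by hand while following along, it ships as a single binary. Here is a minimal installation sketch; the version and download URL are assumptions, while the /opt/packer path matches what our Jenkinsfile calls later in this section:

# Assumed Packer version; adjust as needed
curl -o /tmp/packer.zip \
    https://releases.hashicorp.com/packer/0.12.2/packer_0.12.2_linux_amd64.zip
sudo unzip -d /opt /tmp/packer.zip
/opt/packer version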

demo-app.json

Here, we define what should be provisioned and how. At the top, we set our variables:

  "variables": { 
    "srcAmiId": null, 
    "amiName": null, 
    "sshUser": null, 
    "instanceProfile": null, 
    "subnetId": null, 
    "vpcId": null, 
    "userDataFile": null, 
    "appVersion": null 
  } 
... 

We have exported the actual values to a variables file (see later). Setting a value to null here makes it required. We could also fix values here or make use of environment variables (refer to https://www.packer.io/docs/templates/user-variables.html). Once defined, a variable can be referenced with this syntax: {{user `srcAmiId`}}.

The next section lists the builders, in our case, AWS EC2:

  "builders": [{ 
    "type": "amazon-ebs", 
    "region": "us-east-1", 
    "source_ami": "{{user `srcAmiId`}}", 
    "instance_type": "t2.nano", 
    "ssh_username": "{{user `sshUser`}}", 
    "ami_name": "{{user `amiName`}}-{{timestamp}}", 
    "iam_instance_profile": "{{user `instanceProfile`}}", 
    "subnet_id": "{{user `subnetId`}}", 
    "vpc_id": "{{user `vpcId`}}", 
    "user_data_file": "{{user `userDataFile`}}", 
    "run_tags": { 
      "Name": "Packer ({{user `amiName`}}-{{timestamp}})", 
      "CreatedBy": "Jenkins" 
      }, 
    "tags": { 
      "Name": "{{user `amiName`}}-{{timestamp}}", 
      "CreatedBy": "Jenkins" 
      } 
  }] 

We are asking for an EBS-backed t2.nano instance in the us-east-1 region. It is to be bootstrapped via UserData (see later in the text) and tagged with "CreatedBy": "Jenkins".

Naturally, after launching the instance, we would like to provision it:

"provisioners": [ 
    { 
      "type": "shell", 
      "inline": [  
        "echo 'Waiting for the instance to fully boot up...'", 
        "sleep 30" , 
        "echo "Setting APP_VERSION to {{user `appVersion`}}"", 
        "echo "{{user `appVersion`}}" > /tmp/APP_VERSION" 
        ] 
    } 

Here, our first provisioner is a shell command to be executed over SSH by Packer (refer to https://www.packer.io/docs/provisioners/shell.html). It pauses for 30 seconds to allow the node to complete its boot process, then creates the APP_VERSION file needed by the Salt php-fpm State.

Next, we run SaltStack:

{ 
      "type": "salt-masterless", 
      "skip_bootstrap": true, 
      "local_state_tree": "salt/states", 
      "local_pillar_roots": "salt/pillars" 
} 

Packer already knows how to run Salt via the salt-masterless provisioner; it only needs a source of States and Pillars (refer to https://www.packer.io/docs/provisioners/salt-masterless.html). We point it at the relative path salt/, which is checked out into the Jenkins workspace alongside the demo-app-cdelivery repository (see the Jenkinsfile later in this section). We are opting to install Salt via UserData, hence skip_bootstrap: true.
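
To help visualize the paths referenced above, here is the approximate layout of the Jenkins workspace during the Delivery job, inferred from the template and the Jenkinsfile; your repository may differ slightly:

demo-app-cdelivery/
├── Jenkinsfile
├── packer/
│   ├── demo-app.json
│   ├── demo-app_vars.json
│   └── demo-app_userdata.sh
├── salt/                 # checked out separately by the Jenkinsfile
│   ├── states/
│   └── pillars/
└── serverspec/
    ├── Rakefile
    └── spec/
        ├── spec_helper.rb
        └── localhost/
            └── demo-app_spec.rb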

We will get to Serverspec in a moment, but here is how we run it:

{ 
      "type": "file", 
      "source": "serverspec", 
      "destination": "/tmp/" 
}, 
{ 
      "type": "shell", 
      "inline": [  
        "echo 'Installing Serverspec tests...'", 
        "sudo gem install --no-document rake serverspec", 
        "echo 'Running Serverspec tests...'", 
        "cd /tmp/serverspec && sudo /usr/local/bin/rake spec" 
  ] 
} 

The file provisioner is used to transfer files between the machine running Packer and the remote instance (refer to https://www.packer.io/docs/provisioners/file.html). We push the local "serverspec/" folder containing our Serverspec tests to "/tmp" on the remote side, then run a few shell commands to install the Serverspec Ruby gem and execute the tests.

demo-app_vars.json

The values for the variables we defined earlier (alternatively, you could pass these as a list of -var 'key=value' command-line arguments):

{  
  "srcAmiId": "ami-6869aa05", 
  "amiName": "demo-app", 
  "sshUser": "ec2-user", 
  "instanceProfile": "demo-app", 
  "subnetId": "subnet-4d1c2467", 
  "vpcId": "vpc-bd6f0bda", 
  "userDataFile": "packer/demo-app_userdata.sh" 
} 
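
For reference, the same template can be validated and built by hand outside of Jenkins, provided AWS credentials are available in your environment. A sketch (the appVersion value here is only an example):

/opt/packer validate -var 'appVersion=abc1234-1' \
    -var-file=packer/demo-app_vars.json packer/demo-app.json
/opt/packer build -var 'appVersion=abc1234-1' \
    -var-file=packer/demo-app_vars.json packer/demo-app.json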

demo-app_userdata.sh

The EC2 UserData to bootstrap our test instance:

#!/bin/bash 
 
set -euf -o pipefail 
exec 1> >(logger -s -t $(basename $0)) 2>&1 
 
# Install SaltStack 
yum -y install https://repo.saltstack.com/yum/amazon/salt-amzn-repo-latest-1.ami.noarch.rpm 
yum clean expire-cache; yum -y install salt-minion; chkconfig salt-minion off 
 
# Put custom grains in place 
echo -e 'grains:
 roles:
  - demo-app' > /etc/salt/minion.d/grains.conf 

This is much like the UserData we used for the Jenkins node itself: it gets SaltStack installed and puts the roles Grain in place.
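
Should you ever need to confirm that the Grain landed correctly on a booted instance, Salt can query it locally, for example:

# Run on the instance itself; should return the demo-app role
sudo salt-call --local grains.get roles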

Serverspec

Straight out of the front page:

 

"RSpec tests for your servers configured by CFEngine, Puppet, Ansible, Itamae or anything else. With Serverspec, you can write RSpec tests for checking your servers are configured correctly. Serverspec tests your servers' actual state by executing command locally, via SSH, via WinRM, via Docker API and so on. So you don't need to install any agent softwares on your servers and can use any configuration management tools, Puppet, Ansible, CFEngine, Itamae and so on. But the true aim of Serverspec is to help refactoring infrastructure code."

 
 --http://serverspec.org

We are going to use Serverspec to assert the final state of the EC2 instance after all other configuration tasks have been completed. It should help verify both that any non-configuration-management changes (for example, ad hoc shell commands) have taken effect and that configuration management has been applied correctly (for example, no race conditions, overlaps, or conflicts between States). This does introduce some overhead, and some will rightly question whether it is needed in addition to a SaltStack run, so it remains a personal preference. I see it as a second layer of verification or a safety net.

The content under the serverspec/ folder has been created by running serverspec-init (refer to http://serverspec.org), selecting UNIX and then SSH. We replace the sample spec.rb file with our own:

spec/localhost/demo-app_spec.rb

require 'spec_helper' 
 
versionFile = open('/tmp/APP_VERSION') 
appVersion = versionFile.read.chomp 
 
describe package("demo-app-#{appVersion}") do 
  it { should be_installed } 
end 
 
describe service('php-fpm') do 
  it { should be_enabled } 
  it { should be_running } 
end 
 
describe service('nginx') do 
  it { should be_enabled } 
  it { should be_running } 
end 
 
describe user('veselin') do 
  it { should exist } 
  it { should have_authorized_key 'ssh-rsa ...' } 
end 

Serverspec performs tests on supported resource types (refer to http://serverspec.org/resource_types.html).

In the preceding brief example we assert that:

  • A specific version of our demo-app package has been installed
  • PHP-FPM and NGINX are running and enabled on boot
  • The SSH authorized_keys file for a given user has the expected contents

Our Serverspec tests can be run from the containing folder like so:

cd /tmp/serverspec && sudo /usr/local/bin/rake spec

It will parse any files it finds ending in _spec.rb. We use sudo only because, in this case, we are trying to read a private file (authorized_keys).
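
For illustration, a successful run produces standard RSpec output along these lines (the package version and timings here are made up):

$ cd /tmp/serverspec && sudo /usr/local/bin/rake spec

Package "demo-app-abc1234-1"
  should be installed

Service "php-fpm"
  should be enabled
  should be running
...
Finished in 1.52 seconds (files took 0.8 seconds to load)
7 examples, 0 failures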

And back to Jenkins. We are already familiar with the concept of a Jenkinsfile (as used by our Integration job). In this example, we will be adding a second (Delivery) pipeline using the same approach.

Let us examine both pipeline jobs.

demo-app

This is our old Integration job, which downloads the application code, runs tests against it, produces an RPM package, and uploads the package to a YUM repository. We are going to add one more stage to this process:

stage "Trigger downstream" 
    build job: "demo-app-cdelivery", 
    parameters: [[$class: "StringParameterValue", name: "APP_VERSION", value: 
    "${gitHash}-1"]], wait: false 

This final stage triggers our next job, that is, the Delivery pipeline, and passes an APP_VERSION parameter to it.

The value of this parameter is the gitHash, which we have been using so far as the version string for our demo-app RPM package.

The -1 you see appended to the gitHash represents the RPM release number, which you can safely ignore at this time.

Setting wait to false means that we don't want to keep the current job running, waiting for the subsequently triggered one to complete.

demo-app-cdelivery

Now the fun part. The Delivery job has been passed an APP_VERSION and is ready to start, so let us follow the process described in its Jenkinsfile.

We start by cleaning up our workspace, checking out the demo-app-cdelivery repository, then adding the SaltStack code on top of it. We need both codebases in order to launch an instance and configure it to be a web server:

#!groovy 
 
node { 
 
  step([$class: 'WsCleanup']) 
 
  stage "Checkout Git repo" 
    checkout scm 
 
  stage "Checkout additional repos" 
    dir("salt") { 
      git "https://git-codecommit.us-east-1.amazonaws.com/v1/repos/salt" 
    } 

After this, we are ready to run Packer:

stage "Run Packer" 
    sh "/opt/packer validate -var="appVersion=$APP_VERSION" -var-file=packer/demo-app_vars.json packer/demo-app.json" 
    sh "/opt/packer build -machine-readable -var="appVersion=$APP_VERSION" -var-file=packer/demo-app_vars.json packer/demo-app.json | tee packer/packer.log" 

First, we validate our template and then execute it, requesting machine-readable output. Packer is going to spin up an instance, connect to it over SSH, apply all relevant Salt States, run the Serverspec tests, and produce an AMI of what is essentially a web server that has the demo-app and all its prerequisites installed.
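
If you would like to double-check the result outside of the pipeline, the freshly baked image can be listed with the AWS CLI, for example by filtering on the tags we set in the template:

aws ec2 describe-images --region us-east-1 --owners self \
    --filters "Name=tag:CreatedBy,Values=Jenkins" \
    --query 'Images[].[ImageId,Name]' --output table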

Then, we go ahead and launch a second EC2 instance; this time, from the AMI we just created:

stage "Deploy AMI" 
    def amiId = sh returnStdout: true, script:"tail -n1 packer/packer.log | awk '{printf \$NF}'"
    def ec2Keypair = "terraform" 
    def secGroup = "sg-2708ef5d" 
    def instanceType = "t2.nano" 
    def subnetId = "subnet-4d1c2467" 
    def instanceProfile = "demo-app" 
    echo "Launching an instance from ${amiId}" 
    sh "aws ec2 run-instances  
        --region us-east-1  
        --image-id ${amiId}  
        --key-name ${ec2Keypair}  
        --security-group-ids ${secGroup}  
        --instance-type ${instanceType}  
        --subnet-id ${subnetId}  
        --iam-instance-profile Name=${instanceProfile}  
        | tee .ec2_run-instances.log  
       " 
    def instanceId = sh returnStdout: true, script: "printf \$(jq .Instances[0].InstanceId < .ec2_run-instances.log)"

The values assigned to the variables at the top come from our Terraform deployment (terraform show).
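
One quick, if crude, way to look these up is to grep the Terraform state; a sketch (the exact attribute names depend on your Terraform code):

terraform show | grep -E 'vpc-|subnet-|sg-'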

We use the AWS CLI to launch the instance inside the private VPC subnet, attaching the demo-app security group, the terraform key pair, and the demo-app instance profile to it. You will notice that we need not pass any EC2 credentials here, as Jenkins is already authorized via the IAM role we assigned to it earlier.

Next, we retrieve the instanceId by parsing the AWS CLI's JSON output with jq (refer to https://stedolan.github.io/jq).
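
The relevant part of the run-instances response that jq navigates looks roughly like this (trimmed, with a made-up instance ID):

# .ec2_run-instances.log (trimmed):
#   { "Instances": [ { "InstanceId": "i-0123456789abcdef0", ... } ], ... }
jq .Instances[0].InstanceId < .ec2_run-instances.log
# => "i-0123456789abcdef0"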

After we have launched the instance, we set its tags, register it with ELB, and loop until its ELB status becomes InService:

sh "aws ec2 create-tags --resources ${instanceId}  
        --region us-east-1  
        --tags Key=Name,Value="Jenkins (demo-app-$APP_VERSION)" 
        Key=CreatedBy,Value=Jenkins   
       " 
 
    echo "Registering with ELB" 
    def elbId = "demo-app-elb" 
    sh "aws elb register-instances-with-load-balancer  
        --region us-east-1  
        --load-balancer-name ${elbId}  
        --instances ${instanceId}  
       " 
 
    echo "Waiting for the instance to come into service" 
    sh "while [ "x$(aws elb describe-instance-health --region us-east-1 --load-
    balancer-name ${elbId} --instances ${instanceId} | 
    jq .InstanceStates[].State | tr -d '"')" != "xInService" ]; do : ; sleep 60; 
    done" 

Now that the node is ready to serve, we can launch our improvised load test using Apache Bench (ab):

  stage "Run AB test" 
    def elbUri = "http://demo-app-elb-1931064195.us-east-1.elb.amazonaws.com/"   
    sh "ab -c5 -n1000 -d -S ${elbUri} | tee .ab.log" 
    def non2xx = sh returnStdout: true, script:"set -o pipefail; (grep 'Non-2xx' .ab.log | awk '{printf \$NF}') || (printf 0)"
    def writeErr = sh returnStdout: true, script:"grep 'Write errors' .ab.log | awk '{printf \$NF}'"
    def failedReqs = sh returnStdout: true, script:"grep 'Failed requests' .ab.log | awk '{printf \$NF}'"
    def rps = sh returnStdout: true, script:"grep 'Requests per second' .ab.log | awk '{printf \$4}' | awk -F. '{printf \$1}'"
    def docLen = sh returnStdout: true, script:"grep 'Document Length' .ab.log | awk '{printf \$3}'"

    echo "Non2xx=${non2xx}, WriteErrors=${writeErr}, FailedReqs=${failedReqs}, ReqsPerSec=${rps}, DocLength=${docLen}"
    sh """if [ ${non2xx} -gt 10 ] || [ ${writeErr} -gt 10 ] || [ ${failedReqs} -gt 10 ] || [ ${rps} -lt 1000 ] || [ ${docLen} -lt 10 ]; then
          echo "ERR: AB test failed" | tee -a .error.log;
        fi
       """

At the end of the ab run, the reported metrics are compared against preset thresholds, and any breach is recorded in the .error.log file.
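
The grep patterns above target the standard ab summary lines, which look roughly like this (the numbers are purely illustrative):

Document Length:        12 bytes
Failed requests:        0
Write errors:           0
Non-2xx responses:      3
Requests per second:    1843.07 [#/sec] (mean)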

The EC2 instance is no longer needed, so it can be terminated:

 stage "Terminate test instance" 
    sh "aws ec2 terminate-instances --region us-east-1 --instance-ids ${instanceId}" 

In the final stage, the job's exit code is determined by the AB test results:

  stage "Verify test results" 
    sh "if [ -s '.error.log' ]; then  
          cat '.error.log';  
          :> '.error.log';  
          exit 100;  
        else  
          echo 'Tests OK';  
        fi  
       " 