The emphasis of this chapter will be the value of quick iteration: speed of iteration beats quality of iteration, as per Boyd's Law (you might recall the OODA loop mentioned in Chapter 1, What Is DevOps and Should You Care?).
By iteration, I mean one software development cycle: from the moment a piece of code is written, through being committed to version control, compiled (if needed) and tested, to finally being deployed.
Continuous Integration (CI) defines the routines developers should adopt, along with the tools needed, to make this iteration as fast as possible.
Let us start with the human factor, before adding a bit of automation (a CI server):
Committing smaller changes helps detect problems earlier, when they are easier to solve, and gives a developer more frequent feedback on their work, which builds confidence that the code is in a good state.
Testing locally, where possible, greatly reduces team distraction caused by the CI pipeline tripping over minor issues.
Code reviews are beneficial at many levels. They eliminate bad coding habits as peers ensure code complies with agreed standards. They increase visibility; peers get a lot more exposure to the work of others. They help catch the errors which a machine would miss.
The Toyota Way teaches us to Stop the Line whenever a problem is detected. In CI terms, this translates into halting the pipeline on errors and concentrating resources on fixing them. At first this might look like an obvious drag on productivity, but it has been proven again and again that the initial overhead is ultimately worth it: you keep your technical debt to a minimum and improve code as you go, preventing issues from accumulating and resurfacing at a later stage. This is also a good moment to restate the test-locally point made earlier: you would not want to interrupt your colleagues over something trivial that could easily have been spotted before committing.
As you succeed in building this team discipline (the hard part), it is time to add some automation flavor by setting up a CI pipeline.
The CI server tirelessly monitors your code repository and reacts to changes by performing a set of tasks over and over again. This saves engineers a great amount of time and effort, and spares them the monotony of such work.
A pipeline, in Jenkins for example, normally consists of a number of stages: checking out the latest code, running build tasks on it, performing tests, then building artefacts, with each stage running only if the previous one completed successfully.
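That stage-by-stage, stop-on-failure behaviour is easy to model. Here is a minimal, hypothetical sketch of it in Python; the stage names are illustrative and this is not Jenkins API code:

```python
# A minimal model of a CI pipeline: stages run in order, and a
# failing stage halts everything after it ("stop the line").

def run_pipeline(stages):
    """Run (name, check) pairs in order; stop at the first failure."""
    completed = []
    for name, check in stages:
        if not check():
            print(f"Stage '{name}' failed - halting pipeline")
            break
        completed.append(name)
    return completed

stages = [
    ("checkout", lambda: True),
    ("build",    lambda: True),
    ("test",     lambda: False),  # a failing test stops the line
    ("package",  lambda: True),   # never reached
]

print(run_pipeline(stages))       # ['checkout', 'build']
```

The point of the sketch is the control flow: a red build never produces artefacts, so broken code cannot quietly move further down the line.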
This, in general terms, is how a combination of engineering habits and some tooling can greatly improve a software development cycle. Continuous Integration helps us collaborate better, write better code, ship more often, and get feedback sooner.
Users get new features released faster, and developers see the results of their work in production: everybody wins.
We have discussed the theory; now let us bring our focus back to the title of this chapter. We are going to use our acquired Terraform and Salt skills to deploy a CI environment on AWS featuring a Jenkins (v2) CI server.
Jenkins (ref: https://jenkins.io) is a popular, well established open source project focusing on automation. It comes with a long list of integrations, catering to a variety of platforms and programming languages. Meet Jenkins: https://wiki.jenkins-ci.org/display/JENKINS/Meet+Jenkins.
The deployment of our CI environment can be broken down into three main stages:
In accordance with our Infrastructure as Code principles, this deployment will also be mostly template driven. We will try to reuse some of the Terraform and Salt code from previous chapters.
For this particular setup we can simplify our template as we will only need the VPC, some networking bits, and an EC2 instance.
Let's browse through the files in our TF repository:
The few variables we need can be grouped into VPC and EC2 related ones:
VPC
variable "aws-region" {
  type        = "string"
  description = "AWS region"
}
variable "vpc-cidr" {
  type        = "string"
  description = "VPC CIDR"
}
variable "vpc-name" {
  type        = "string"
  description = "VPC name"
}
variable "aws-availability-zones" {
  type        = "string"
  description = "AWS zones"
}
EC2
variable "jenkins-ami-id" {
  type        = "string"
  description = "EC2 AMI identifier"
}
variable "jenkins-instance-type" {
  type        = "string"
  description = "EC2 instance type"
}
variable "jenkins-key-name" {
  type        = "string"
  description = "EC2 ssh key name"
}
Following the bare variable definitions, we now supply some values:
VPC
We'll keep our deployment in US East:
aws-region = "us-east-1"
vpc-cidr = "10.0.0.0/16"
vpc-name = "Terraform"
aws-availability-zones = "us-east-1b,us-east-1c"
EC2
A Nano instance will be sufficient for testing. Ensure the referenced key-pair exists:
jenkins-ami-id = "ami-6869aa05"
jenkins-instance-type = "t2.nano"
jenkins-key-name = "terraform"
As a matter of good practice, we create all our resources inside a VPC:
# Set a Provider
provider "aws" {
  region = "${var.aws-region}"
}

# Create a VPC
resource "aws_vpc" "terraform-vpc" {
  cidr_block = "${var.vpc-cidr}"

  tags {
    Name = "${var.vpc-name}"
  }
}
We add a gateway, a route table, and an Internet facing subnet from where our Jenkins instance will be launched:
IGW
# Create an Internet Gateway
resource "aws_internet_gateway" "terraform-igw" {
  vpc_id = "${aws_vpc.terraform-vpc.id}"
}
Route table
# Create public route tables
resource "aws_route_table" "public" {
  vpc_id = "${aws_vpc.terraform-vpc.id}"

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = "${aws_internet_gateway.terraform-igw.id}"
  }

  tags {
    Name = "Public"
  }
}
Subnet
# Create and associate public subnets with a route table
resource "aws_subnet" "public-1" {
  vpc_id                  = "${aws_vpc.terraform-vpc.id}"
  cidr_block              = "${cidrsubnet(var.vpc-cidr, 8, 1)}"
  availability_zone       = "${element(split(",",var.aws-availability-zones), count.index)}"
  map_public_ip_on_launch = true

  tags {
    Name = "Public"
  }
}

resource "aws_route_table_association" "public-1" {
  subnet_id      = "${aws_subnet.public-1.id}"
  route_table_id = "${aws_route_table.public.id}"
}
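The cidrsubnet() call above extends the /16 VPC prefix by 8 bits and takes subnet number 1, yielding a /24. If you want to double-check that arithmetic without running Terraform, Python's standard library can reproduce it:

```python
import ipaddress

# Mimic Terraform's cidrsubnet("10.0.0.0/16", 8, 1):
# extend the /16 prefix by 8 bits, then take subnet index 1.
vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc.subnets(prefixlen_diff=8))  # 256 possible /24s

print(subnets[1])  # 10.0.1.0/24 - the CIDR our public subnet will get
```

Changing the third argument to cidrsubnet() simply picks a different index from this list, which is handy when adding further subnets later.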
The security group for our Jenkins node needs to permit HTTP/S traffic, plus SSH so that we can reach the command line if needed (here everything is open to the world for simplicity; in practice you would restrict SSH to known source addresses):
Security Group
resource "aws_security_group" "jenkins" {
  name        = "jenkins"
  description = "ec2 instance security group"
  vpc_id      = "${aws_vpc.terraform-vpc.id}"

  ingress {
    from_port   = "22"
    to_port     = "22"
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = "80"
    to_port     = "80"
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = "443"
    to_port     = "443"
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
IAM Role
We will use an IAM Role to grant Jenkins access to AWS services:
resource "aws_iam_role" "jenkins" {
  name = "jenkins"
  path = "/"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}
IAM Role Policy
This policy will allow Jenkins to read from a CodeCommit repository and, via its NotAction statement, to perform all actions except s3:DeleteBucket. Note that, as written, that second statement is not limited to S3 at all; in production you would scope it down to the specific bucket:
resource "aws_iam_role_policy" "jenkins" {
  name = "jenkins"
  role = "${aws_iam_role.jenkins.id}"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "codecommit:Get*",
        "codecommit:GitPull",
        "codecommit:List*"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "NotAction": [
        "s3:DeleteBucket"
      ],
      "Resource": "*"
    }
  ]
}
EOF
}
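IAM's NotAction element is easy to misread, so here is a tiny, purely illustrative Python sketch of how that second statement evaluates (real AWS policy evaluation is considerably more involved; the helper below is a hypothetical simplification):

```python
# "Effect: Allow" combined with "NotAction" permits every action
# EXCEPT those listed - it is an inverted match, not a deny.

def statement_allows(action, not_actions):
    """Simplified model of the policy's second statement."""
    return action not in not_actions

not_actions = ["s3:DeleteBucket"]

print(statement_allows("s3:PutObject", not_actions))     # True
print(statement_allows("s3:DeleteBucket", not_actions))  # False
# Note the statement is not restricted to S3 actions at all:
print(statement_allows("ec2:TerminateInstances", not_actions))  # True
```

The last line is exactly why such statements should carry a tight Resource (or an action prefix) in anything beyond a test environment.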
IAM Profile
resource "aws_iam_instance_profile" "jenkins" {
  name  = "jenkins"
  roles = ["${aws_iam_role.jenkins.name}"]
}
EC2 instance
Here we define a single instance along with its bootstrap UserData script:
resource "aws_instance" "jenkins" {
  ami                    = "${var.jenkins-ami-id}"
  instance_type          = "${var.jenkins-instance-type}"
  key_name               = "${var.jenkins-key-name}"
  vpc_security_group_ids = ["${aws_security_group.jenkins.id}"]
  iam_instance_profile   = "${aws_iam_instance_profile.jenkins.id}"
  subnet_id              = "${aws_subnet.public-1.id}"

  tags {
    Name = "jenkins"
  }
Here we set the attributes needed to launch an EC2 instance, such as the instance type, the AMI to be used, security group(s), subnet and so on.
Next, we add the bootstrap shell script to help us install required packages, checkout Git repositories and run Salt:
  user_data = <<EOF
#!/bin/bash
set -euf -o pipefail
exec 1> >(logger -s -t $(basename $0)) 2>&1

# Install Git and set CodeCommit connection settings
# (required for access via IAM roles)
yum -y install git
git config --system credential.helper '!aws codecommit credential-helper $@'
git config --system credential.UseHttpPath true

# Clone the Salt repository
git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/salt /srv/salt
chmod 700 /srv/salt

# Install SaltStack
yum -y install https://repo.saltstack.com/yum/amazon/salt-amzn-repo-latest-1.ami.noarch.rpm
yum clean expire-cache
yum -y install salt-minion
chkconfig salt-minion off

# Put custom minion config in place (for enabling masterless mode)
cp -r /srv/salt/minion.d /etc/salt/

# Trigger a full Salt run
salt-call state.apply
EOF

  lifecycle {
    create_before_destroy = true
  }
}
Elastic IP
Finally, we provision a static IP for Jenkins:
resource "aws_eip" "jenkins" {
  instance = "${aws_instance.jenkins.id}"
  vpc      = true
}
Plus some useful outputs: the VPC ID and the public address of the Jenkins node:
output "VPC ID" {
  value = "${aws_vpc.terraform-vpc.id}"
}
output "JENKINS EIP" {
  value = "${aws_eip.jenkins.public_ip}"
}
And that is our VPC infrastructure defined. Now we can move on to Salt and the application stack.
You'll remember our favorite Configuration Management tool from the previous chapter. We will use SaltStack to configure the EC2 Jenkins node for us.
top.sls
We are working with a single minion, and all our states apply to it:
base:
  '*':
    - users
    - yum-s3
    - jenkins
    - nginx
    - docker
users
We add a Linux user account, configure its SSH keys and sudo access:
veselin:
  user.present:
    - fullname: Veselin Kantsev
    - uid: {{ salt['pillar.get']('users:veselin:uid') }}
...
yum-s3
As part of our CI pipeline, we will be storing RPM artefacts in S3. Cob (ref: https://github.com/henrysher/cob) is a Yum package manager plugin that makes it possible to access S3-based RPM repositories using an IAM Role.
We deploy the plugin, its configuration and a repository definition (disabled for now) as managed files:
yum-s3_cob.py:
  file.managed:
    - name: /usr/lib/yum-plugins/cob.py
    - source: salt://yum-s3/files/cob.py

yum-s3_cob.conf:
  file.managed:
    - name: /etc/yum/pluginconf.d/cob.conf
    - source: salt://yum-s3/files/cob.conf

yum-s3_s3.repo:
  file.managed:
    - name: /etc/yum.repos.d/s3.repo
    - source: salt://yum-s3/files/s3.repo
Jenkins
Here comes the lead character: Mr. Jenkins. We make use of Docker in our CI pipeline, hence the include that follows. Docker allows us to run the different pipeline steps in isolation, which makes dependency management much easier and helps keep the Jenkins node clean.
include:
  - docker
Also we ensure Java and a few other prerequisites get installed:
jenkins_prereq:
  pkg.installed:
    - pkgs:
      - java-1.7.0-openjdk
      - gcc
      - make
      - createrepo
Then, install Jenkins itself:
jenkins:
  pkg.installed:
    - sources:
      - jenkins: http://mirrors.jenkins-ci.org/redhat-stable/jenkins-2.7.1-1.1.noarch.rpm
    - require:
      - pkg: jenkins_prereq
...
NGINX
We will use NGINX as a reverse proxy and an SSL termination point. That is not to say that Jenkins cannot serve traffic on its own; it is simply considered better practice to separate the two roles:
include:
  - jenkins

nginx:
  pkg.installed: []
...
{% for FIL in ['crt','key'] %}
/etc/nginx/ssl/server.{{ FIL }}:
...
{% endfor %}
Docker
It is about time we mentioned Docker, given its (deserved) popularity nowadays. It is very well suited to our CI needs, providing isolated environments for the various tests and builds that may be required:
docker:
  pkg.installed: []
  service.running:
    - enable: True
    - reload: True
top.sls
Our standalone minion gets it all:
base:
  '*':
    - users
    - nginx
users
Setting a password hash and a consistent UID for the Linux account:
users:
  veselin:
    uid: 5001
    password: ...
NGINX
We store the SSL data in this Pillar:
nginx:
  crt: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
  key: |
    -----BEGIN RSA PRIVATE KEY-----
    ...
    -----END RSA PRIVATE KEY-----
masterless.conf
We are still using Salt in standalone (masterless) mode, so this is our extra minion configuration:
file_client: local

file_roots:
  base:
    - /srv/salt/states

pillar_roots:
  base:
    - /srv/salt/pillars
Thanks to all of the preceding code, we should be able to run Terraform and end up with a Jenkins service ready for use.
Let us give that a try.