Welcome to the final stage of the CI/CD workflow: Continuous Deployment. We are now ready to take the AMI we produced during the Continuous Delivery step and deploy it to production.
For this process, we are going to use a blue/green deployment approach. Our production environment will consist of an ELB and two Auto Scaling Groups (blue and green):
If we assume that the blue group holds our current production nodes, then upon deployment, we do the following:
As we are building on top of our existing CI pipelines, there are only a few changes we need to make to the code from the previous chapter. We need to add a few extra Terraform resources; let us take a look at those.
We add a second public and a matching private subnet so that we can distribute the production instances across multiple availability zones.
The aws_subnet resource creates a subnet named public-2. It takes attributes such as a VPC ID, a CIDR block, and an availability zone, the values of which we pull from variables. To compute the CIDR and AZ values, we use Terraform's interpolation functions (ref: https://www.terraform.io/docs/configuration/interpolation.html):
resource "aws_subnet" "public-2" {
  vpc_id                  = "${aws_vpc.terraform-vpc.id}"
  cidr_block              = "${cidrsubnet(var.vpc-cidr, 8, 3)}"
  availability_zone       = "${element(split(",", var.aws-availability-zones), count.index + 1)}"
  map_public_ip_on_launch = true

  tags {
    Name = "Public"
  }
}
Next, we associate the newly created subnet with a routing table:
resource "aws_route_table_association" "public-2" {
  subnet_id      = "${aws_subnet.public-2.id}"
  route_table_id = "${aws_route_table.public.id}"
}
Then repeat for the Private subnet:

resource "aws_subnet" "private-2" {
  vpc_id                  = "${aws_vpc.terraform-vpc.id}"
  cidr_block              = "${cidrsubnet(var.vpc-cidr, 8, 4)}"
  availability_zone       = "${element(split(",", var.aws-availability-zones), count.index + 1)}"
  map_public_ip_on_launch = false

  tags {
    Name = "Private"
  }
}

resource "aws_route_table_association" "private-2" {
  subnet_id      = "${aws_subnet.private-2.id}"
  route_table_id = "${aws_route_table.private.id}"
}
In this VPC, we end up with subnets 1 and 3 being public, and subnets 2 and 4 being private.
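To make that numbering concrete, here is what the interpolation functions evaluate to under an assumed vpc-cidr of 10.0.0.0/16 and an assumed two-zone aws-availability-zones value (both values are illustrative, not taken from the chapter's variable files):

```hcl
# Illustrative evaluation of the interpolation functions used above,
# assuming vpc-cidr = "10.0.0.0/16" and
# aws-availability-zones = "us-east-1a,us-east-1b":
#
#   cidrsubnet("10.0.0.0/16", 8, 1)   # => 10.0.1.0/24  (public-1)
#   cidrsubnet("10.0.0.0/16", 8, 2)   # => 10.0.2.0/24  (private-1)
#   cidrsubnet("10.0.0.0/16", 8, 3)   # => 10.0.3.0/24  (public-2)
#   cidrsubnet("10.0.0.0/16", 8, 4)   # => 10.0.4.0/24  (private-2)
#
# With no count set on the resource, count.index evaluates to 0,
# so the "-2" subnets land in the second availability zone:
#
#   element(split(",", "us-east-1a,us-east-1b"), 0 + 1)   # => "us-east-1b"
```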
The next change is the addition of a prod ELB and a security group for it:
resource "aws_security_group" "demo-app-elb-prod" {
  name        = "demo-app-elb-prod"
  description = "ELB security group"
  vpc_id      = "${aws_vpc.terraform-vpc.id}"

  ingress {
    from_port   = "80"
    to_port     = "80"
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
Note the protocol value of "-1", meaning "all":
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_elb" "demo-app-elb-prod" {
  name                        = "demo-app-elb-prod"
  security_groups             = ["${aws_security_group.demo-app-elb-prod.id}"]
  subnets                     = ["${aws_subnet.public-1.id}", "${aws_subnet.public-2.id}"]
  cross_zone_load_balancing   = true
  connection_draining         = true
  connection_draining_timeout = 30

  listener {
    instance_port     = 80
    instance_protocol = "http"
    lb_port           = 80
    lb_protocol       = "http"
  }

  tags {
    Name = "demo-app-elb-prod"
  }
}
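One refinement worth considering, although it is not part of the chapter's code, is an explicit health_check block on the ELB, so that instances are only kept in rotation while their HTTP endpoint responds. A sketch of what that could look like:

```hcl
# Hypothetical addition, not part of the original demo-app-elb-prod definition:
resource "aws_elb" "demo-app-elb-prod" {
  # ... same attributes as above, plus:

  health_check {
    target              = "HTTP:80/"  # expect an HTTP 200 from the app root
    healthy_threshold   = 2
    unhealthy_threshold = 3
    timeout             = 5
    interval            = 10
  }
}
```

Tuning the thresholds and interval controls how quickly a misbehaving instance is taken out of service during a blue/green switchover.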
Let us also update the demo-app security group's ingress rules to allow traffic from the ELB. To help visualize, here is our earlier diagram with more labels:
And in code:
resource "aws_security_group" "demo-app" {
  name        = "demo-app"
  description = "ec2 instance security group"
  vpc_id      = "${aws_vpc.terraform-vpc.id}"

  ingress {
    from_port       = "80"
    to_port         = "80"
    protocol        = "tcp"
    security_groups = ["${aws_security_group.demo-app-elb.id}", "${aws_security_group.demo-app-elb-prod.id}"]
  }
Then we introduce our blue/green Auto Scaling Groups (ASGs) and a temporary launch configuration:
resource "aws_launch_configuration" "demo-app-lcfg" {
  name                 = "placeholder_launch_config"
  image_id             = "${var.jenkins-ami-id}"
  instance_type        = "${var.jenkins-instance-type}"
  iam_instance_profile = "${aws_iam_instance_profile.demo-app.id}"
  security_groups      = ["${aws_security_group.demo-app.id}"]
}

resource "aws_autoscaling_group" "demo-app-blue" {
  name                 = "demo-app-blue"
  launch_configuration = "${aws_launch_configuration.demo-app-lcfg.id}"
  vpc_zone_identifier  = ["${aws_subnet.private-1.id}", "${aws_subnet.private-2.id}"]
  min_size             = 0
  max_size             = 0

  tag {
    key                 = "ASG"
    value               = "demo-app-blue"
    propagate_at_launch = true
  }
}

resource "aws_autoscaling_group" "demo-app-green" {
  name                 = "demo-app-green"
  launch_configuration = "${aws_launch_configuration.demo-app-lcfg.id}"
  vpc_zone_identifier  = ["${aws_subnet.private-1.id}", "${aws_subnet.private-2.id}"]
  min_size             = 0
  max_size             = 0

  tag {
    key                 = "ASG"
    value               = "demo-app-green"
    propagate_at_launch = true
  }
}
The launch configuration here is really only a placeholder, so that we can define the Auto Scaling Groups (which is why we reuse the Jenkins variables). We are going to create a new, real launch configuration to serve the demo-app later on, as part of the pipeline.
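For reference, the eventual "real" launch configuration would point at the AMI baked during the Continuous Delivery step rather than the Jenkins one. Expressed in Terraform, it might look roughly like this (the demo-app-ami-id and demo-app-instance-type variables are hypothetical, used here purely for illustration; the chapter's pipeline creates its launch configuration by other means):

```hcl
# Hypothetical sketch of a "real" production launch configuration:
resource "aws_launch_configuration" "demo-app-prod" {
  name_prefix          = "demo-app-prod-"
  image_id             = "${var.demo-app-ami-id}"         # AMI baked by Packer during CDelivery (illustrative variable)
  instance_type        = "${var.demo-app-instance-type}"  # illustrative variable
  iam_instance_profile = "${aws_iam_instance_profile.demo-app.id}"
  security_groups      = ["${aws_security_group.demo-app.id}"]

  lifecycle {
    # Launch configurations are immutable; create the replacement
    # before destroying the old one to avoid breaking the ASGs.
    create_before_destroy = true
  }
}
```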
A minor addition to the outputs, to give us the Production ELB endpoint:
output "ELB URI PROD" {
  value = "${aws_elb.demo-app-elb-prod.dns_name}"
}
It is time for an exercise. Using the earlier-mentioned templates and the rest of the familiar code from https://github.com/PacktPublishing/Implementing-DevOps-on-AWS/tree/master/5585_06_CodeFiles, plus your previous experience, you should be able to bring up a VPC plus a Jenkins instance with two pipelines, exactly as we did in the chapter on Continuous Delivery. Do not forget to update any deployment-specific details, such as the following:
- salt:states:users:files
- serverspec test specification
- salt:states:yum-s3:files:s3.repo
- demo-app/Jenkinsfile
- packer:demo-app_vars.json
- demo-app-cdelivery/Jenkinsfile
I would recommend disabling the SCM polling in the demo-app job, so that we don't trigger a run before all of our downstream jobs have been configured.
Assuming that all went well, we are back where we left off: