Bootstrapping nodes under Configuration Management (end-to-end IaC)

Without further delay, let us get our old VPC re-deployed along with a configuration-managed web service inside it.

Terraform will spawn the VPC, ELB, and EC2 nodes, then bootstrap the SaltStack workflow using EC2 UserData. Naturally, we strive to reuse as much code as possible; however, our next deployment requires some changes to the TF templates.

resources.tf:

  • We do not need the private subnets/route tables, NAT, or RDS resources this time, so we have removed them, making the deployment a bit faster.
  • We will be using an IAM Role to grant permission to the EC2 node to access the CodeCommit repository.
    • We have declared the role:
      resource "aws_iam_role" "terraform-role" {
        name = "terraform-role"
        path = "/"
        ...

    • We have added and associated a policy (granting read access to CodeCommit) with that role:
      resource "aws_iam_role_policy" "terraform-policy" {
        name = "terraform-policy"
        role = "${aws_iam_role.terraform-role.id}"
        ...

    • We have created and associated an instance profile with the role:
      resource "aws_iam_instance_profile" "terraform-profile" {
        name = "terraform-profile"
        roles = ["${aws_iam_role.terraform-role.name}"]
        ...

      
    • We have updated the Auto Scaling launch-configuration with the instance profile ID:
      resource "aws_launch_configuration" "terraform-lcfg" {
        ...
        iam_instance_profile = "${aws_iam_instance_profile.terraform-profile.id}"
        ...

  • We have updated the UserData script with some SaltStack bootstrap instructions to install Git and SaltStack, check out and put our Salt code in place, and finally run Salt:
       user_data = <<EOF
       #!/bin/bash
       set -euf -o pipefail
       exec 1> >(logger -s -t $(basename $0)) 2>&1
       # Install Git and set CodeCommit connection settings
       # (required for access via IAM roles)
       yum -y install git
       git config --system credential.helper '!aws codecommit credential-helper $@'
       git config --system credential.UseHttpPath true
       # Clone the Salt repository
       git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/salt /srv/salt
       chmod 700 /srv/salt
       # Install SaltStack
       yum -y install https://repo.saltstack.com/yum/amazon/salt-amzn-repo-latest-1.ami.noarch.rpm
       yum clean expire-cache
       yum -y install salt-minion
       chkconfig salt-minion off
       # Put custom minion config in place (for enabling masterless mode)
       cp -r /srv/salt/minion.d /etc/salt/
       # Trigger a full Salt run
       salt-call state.apply
       EOF
  • We have moved our EC2 node (the Auto Scaling group) to a public subnet and allowed incoming SSH traffic so that we can connect and play with Salt on it:
       resource "aws_security_group" "terraform-ec2" {
         ingress {
           from_port = "22"
           to_port = "22"
       ...
       resource "aws_autoscaling_group" "terraform-asg" {
         ...
         vpc_zone_identifier = ["${aws_subnet.public-1.id}",
         ...
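
Pieced together, the IAM wiring above looks roughly like the following. The assume-role document and the `codecommit:GitPull` policy body are our assumptions about the elided parts, so treat this as a sketch rather than the exact template code:

```hcl
# Sketch: how the role, policy, instance profile, and launch configuration
# fit together. Policy bodies here are illustrative assumptions.
resource "aws_iam_role" "terraform-role" {
  name = "terraform-role"
  path = "/"
  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "ec2.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}
POLICY
}

resource "aws_iam_role_policy" "terraform-policy" {
  name = "terraform-policy"
  role = "${aws_iam_role.terraform-role.id}"
  policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["codecommit:GitPull"],
    "Resource": "*"
  }]
}
POLICY
}

resource "aws_iam_instance_profile" "terraform-profile" {
  name = "terraform-profile"
  roles = ["${aws_iam_role.terraform-role.name}"]
}

resource "aws_launch_configuration" "terraform-lcfg" {
  # ...other launch configuration arguments elided...
  iam_instance_profile = "${aws_iam_instance_profile.terraform-profile.id}"
}
```

The read-only `codecommit:GitPull` action keeps the node's privileges minimal: it can clone the Salt repository but not push to it.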
    

variables.tf:

We have removed all RDS-related variables.

outputs.tf:

We have removed the RDS- and NAT-related outputs.

iam_user_policy.json:

This document will become useful shortly, as we will need to create a new user for the deployment. We have removed the RDS permissions from it and added IAM ones.
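
The added IAM statement might look roughly like this; the action list is our guess at the minimum needed to manage the role and instance profile declared above, not the book's exact file:

```json
{
  "Effect": "Allow",
  "Action": [
    "iam:CreateRole",
    "iam:DeleteRole",
    "iam:PutRolePolicy",
    "iam:DeleteRolePolicy",
    "iam:CreateInstanceProfile",
    "iam:DeleteInstanceProfile",
    "iam:AddRoleToInstanceProfile",
    "iam:RemoveRoleFromInstanceProfile",
    "iam:PassRole"
  ],
  "Resource": "*"
}
```

Note `iam:PassRole` in particular: without it, the deployment user cannot attach the instance profile to EC2 nodes.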

We are now ready for deployment. Pre-flight check:

Let us export our credentials and launch Terraform:

$ export AWS_ACCESS_KEY_ID='user_access_key'
$ export AWS_SECRET_ACCESS_KEY='user_secret_access_key'
$ export AWS_DEFAULT_REGION='us-east-1'
$ cd Terraform/
$ terraform validate
$ terraform plan
...
Plan: 15 to add, 0 to change, 0 to destroy.
$ terraform apply
...
Outputs:
ELB URI = terraform-elb-xxxxxx.us-east-1.elb.amazonaws.com
VPC ID = vpc-xxxxxx

Allow 3-5 minutes for the t2.nano node to come into shape, then browse to the ELB URI from the preceding output.


Victory!

Try increasing the autoscaling-group-minsize and autoscaling-group-maxsize in terraform.tfvars, then re-applying the template. You should start seeing different IPs when the page is refreshed.
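
For example, in terraform.tfvars (the values here are illustrative):

```hcl
autoscaling-group-minsize = 2
autoscaling-group-maxsize = 3
```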

Given the preceding test page, we can be reasonably confident that Salt bootstrapped and applied our set of States successfully.

We did, however, enable SSH access in order to be able to experiment more with Salt, so let us do that.

We see the public IP of the node on our test page. You could SSH into it using the terraform EC2 key pair and the default ec2-user Linux account, or, if you dared to create a user for yourself in the users/init.sls state earlier, you could use that one now.

Once connected, we can use the salt-call command (as root) to interact with Salt locally:

  • How about some Pillars:
# salt-call pillar.items
  • Or let us see what Grains we have:
# salt-call grains.items
  • Run individual States:
# salt-call state.apply nginx
  • Or execute a full run, that is, apply all assigned States as per the Top file:
# salt-call state.apply
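
These local runs work without a master because of the minion.d config we copied into place in UserData. The filename and roots below are assumptions about that repository, but `file_client: local` is the key setting that enables masterless mode:

```yaml
# e.g. /etc/salt/minion.d/masterless.conf (filename is an assumption)
file_client: local
file_roots:
  base:
    - /srv/salt
pillar_roots:
  base:
    - /srv/salt/pillars
```

With `file_client: local`, salt-call reads States and Pillars from the local filesystem instead of requesting them from a Salt-master.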

After playing with our new deployment for a bit, I suspect you are going to want to try adding or changing States/Pillars or other parts of the Salt code. As per the IaC rules we agreed upon earlier, every change we make goes through Git, but let us examine what options we have for deploying those changes afterwards:

  • Pull the changes down to each minion and run salt-call
  • Provision new minions which will pull down the latest code
  • Push changes via a Salt-master

It is easy to see that the first option will work with the couple of nodes we use for testing, but is quickly going to become hard to manage at scale.
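
That first option essentially amounts to a loop over your inventory. A rough sketch, with placeholder hostnames and the actual ssh line commented out so it is safe to run as-is:

```shell
#!/bin/bash
# Illustrative only: have each minion pull the latest Salt code and
# re-apply state. Replace the hostnames with your real inventory
# (e.g. fetched from the Auto Scaling group via the AWS CLI).
update_minions() {
  local host
  for host in "$@"; do
    echo "Updating $host"
    # ssh ec2-user@"$host" 'sudo git -C /srv/salt pull && sudo salt-call state.apply'
  done
}

update_minions web-01 web-02
```

Even with a wrapper like this, you still carry the burden of maintaining the host list and serializing SSH connections, which is exactly what stops scaling.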

Provisioning new minions on each deployment is a valid option if a masterless Salt setup is preferred; however, you need to consider the frequency of deployments in your environment and the associated cost of replacing EC2 nodes. One benefit worth noting here is that of blue/green deployments. By provisioning new minions to serve your code changes, you get to keep the old ones around for a while, which allows you to shift traffic gradually and roll back safely if needed.

Having a Salt-master would be my recommended approach for any non-dev environments. The Salt code is kept on it, so any Git changes you make, need to be pulled down only once. You can then deploy the changed States/Pillars by targeting the minions you want from the Salt-master. You could still do blue/green for major releases or you could choose to deploy to your current minions directly if it is just a minor, safe amendment, or perhaps something critical that needs to reach all minions as soon as possible.
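
With a Salt-master you could even skip the manual pull entirely by pointing the master's fileserver at the repository via gitfs. The filename below is an assumption, and gitfs additionally requires a Python git library (such as pygit2 or GitPython) on the master:

```yaml
# e.g. /etc/salt/master.d/gitfs.conf
fileserver_backend:
  - gitfs
gitfs_remotes:
  - https://git-codecommit.us-east-1.amazonaws.com/v1/repos/salt
```

The master then serves States straight from Git, so a push to the repository is all it takes to make new code available to the minions.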

Another powerful feature of the Salt-master is orchestration, more specifically, remote execution. With all your minions connected to it, the Salt-master becomes a command center from which you have more or less full control over them.

Executing commands on the minions is done via modules, ranging from generic ones such as cmd.run, which essentially allows you to run arbitrary shell commands, to more specialized ones such as nginx, postfix, selinux, or zfs. The list is quite long, as you can see here: https://docs.saltstack.com/en/latest/ref/modules/all/index.html.

And if you recall the earlier section on hostnames and naming conventions, this is where one can appreciate their value. It is quite convenient to be able to execute statements like:

salt 'webserver-*' nginx.status  
salt 'db-*' postgres.db_list 

You can also use Pillars and/or Grains to add tags to your hosts, so you could further group them per location, role, department, or something similar.
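
Static custom grains, for instance, can be set in /etc/salt/grains on each minion; the values here are examples:

```yaml
role: webserver
environment: staging
datacenter: us-east-1
```

You could then target by grain, for example `salt -G 'role:webserver' test.ping`, instead of relying on hostname patterns alone.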

In brief, here are a few key points of a masterless versus a Salt-master arrangement:

Salt Master:

  • A powerful, centralized control platform (which must be secured adequately) that allows for quick, parallel access to a vast network of minions
  • Advanced features such as Salt Engines, Runners, Beacons, and the Reactor System
  • API access

Masterless:

  • No salt-master node to maintain
  • Not having a single node which provides full access to the rest of them is more secure in some sense
  • Simpler Salt operation
  • After the initial Salt execution, the minions can be considered immutable

For many FOR LOOP gurus out there, parallel execution tools like Salt are very appealing. They allow you to rapidly reach out to nodes at a massive scale, whether you simply want to query their uptime, reload a service, or react to a threat alert by stopping sshd across your cluster.

Note

Before you go, please remember to delete any AWS resources used in the preceding examples (VPC, ELB, EC2, IAM, CodeCommit, and so on) to avoid unexpected charges.
