Installing Jenkins

We already automated the Jenkins installation so that it provides all the features we need out-of-the-box. Therefore, the exercises that follow should be very straightforward. If you are a Docker for Mac or Windows, minikube, or minishift user, we'll need to bring back the VM we suspended in the previous chapter. Feel free to skip the commands that follow if you are hosting your cluster in AWS or GCP.

cd cd/docker-build

vagrant up

cd ../../

export DOCKER_VM=true

If you prefer running your cluster in AWS with kops or EKS, we'll need to retrieve the AMI ID we stored in docker-ami.log in the previous chapter.

AMI_ID=$(grep 'artifact,0,id' \
    cluster/docker-ami.log \
    | cut -d: -f2)

echo $AMI_ID

If GKE is your cluster of choice, we'll need to define the variables G_PROJECT and G_AUTH_FILE, which we'll pass to the Helm chart. We'll retrieve the project using the gcloud CLI, and the authentication file is a reference to the one we stored in the cluster/jenkins/secrets directory in the previous chapter.

export G_PROJECT=$(gcloud info \
    --format='value(config.project)')

echo $G_PROJECT

G_AUTH_FILE=$(\
    ls cluster/jenkins/secrets/key*json \
    | xargs -n 1 basename \
    | tail -n 1)

echo $G_AUTH_FILE

Next, we'll create the Namespaces we need. Let's take a look at the definition we'll use.

cat ../go-demo-3/k8s/ns.yml

You'll notice that the definition is a combination of a few we used in the previous chapters. It contains three Namespaces.

The go-demo-3-build Namespace is where we'll run the Pods from which we'll execute most of the steps of our pipeline. Those Pods will contain tools like kubectl, helm, and the Go compiler. We'll use the same Namespace to deploy our releases under test. All in all, the go-demo-3-build Namespace is for short-lived Pods. The tools will be removed when a build is finished, just as installations of releases under test will be deleted when tests finish executing. This Namespace will be like a trash can that needs to be emptied whenever it gets filled or starts smelling.
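
Emptying that trash can is usually automated by the pipeline itself, but it can also be done by hand. As a minimal sketch (assuming a kubectl version that supports --field-selector with delete; nothing in ns.yml mandates this), we could remove all the Pods that finished successfully.

kubectl -n go-demo-3-build \
    delete pods \
    --field-selector status.phase=Succeeded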

The second Namespace is go-demo-3. That is the Namespace dedicated to the applications developed by the go-demo-3 team. We'll work only on their primary product, named after the team, but we can imagine that they might be in charge of other applications as well. Therefore, do not think of this Namespace as dedicated to a single application, but as assigned to a team. The team has full permissions to operate inside that Namespace, just as inside the others defined in ns.yml. They own them, and go-demo-3 is dedicated to production releases.

While we already used the first two Namespaces, the third one is a bit new. The go-demo-3-jenkins Namespace is dedicated to Jenkins, and you might wonder why we do not use the jenkins Namespace as we did so far. The answer lies in my belief that it is a good idea to give each team their own Jenkins. That way, we do not need to create an elaborate system of user permissions, we do not need to wonder whether a plugin desired by one team will break a job owned by another, and we do not need to worry about performance issues when Jenkins is stressed by hundreds or thousands of parallel builds. So, we'll apply the "every team gets Jenkins" type of logic. "It's your Jenkins, do whatever you want to do with it," is the message we want to transmit to the teams in our company. Now, if your organization has only twenty developers, there's probably no need to split Jenkins into multiple instances. Fifty should be OK as well. But, when that number rises to hundreds or even thousands, having multiple Jenkins masters has clear benefits. Traditionally, that would not be practical due to the increased operational cost. But now that we are deep into Kubernetes, and we already saw that a fully functional and configured Jenkins is only a few commands away, we can agree that monster instances do not make much sense. If you are small and that logic does not apply, the processes we'll explore are still the same, no matter whether you have one or a hundred Jenkins masters. Only the Namespace will be different (for example, jenkins).

The rest of the definition is the same as what we used before. We have ServiceAccounts and RoleBindings that allow containers to interact with KubeAPI. We have LimitRanges and ResourceQuotas that protect the cluster from rogue Pods.
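
As a reminder of how those pieces fit together, a ServiceAccount paired with a RoleBinding might look like the snippet that follows. This is an illustrative sketch, not a verbatim excerpt from ns.yml; the names mirror the ones we use, but the bound role might differ.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: build
  namespace: go-demo-3-build

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: build
  namespace: go-demo-3-build
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin # illustrative; the role referenced in ns.yml might differ
subjects:
- kind: ServiceAccount
  name: build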

The LimitRange defined for the go-demo-3-build Namespace is especially important. We can assume that many of the Pods created through the continuous deployment pipeline will not have memory and CPU requests and limits. It's only human to forget to define those things in pipelines. Still, that can be disastrous since it might produce undesired effects in the cluster. If nothing else, it would limit Kubernetes' capacity to schedule Pods. So, defining the LimitRange default and defaultRequest entries is a crucial step.
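
To make that concrete, a LimitRange with both entries might look like the sketch that follows. The values are illustrative; the ones in ns.yml might differ.

apiVersion: v1
kind: LimitRange
metadata:
  name: build
  namespace: go-demo-3-build
spec:
  limits:
  - type: Container
    default:        # applied as limits when a container defines none
      memory: 200Mi
      cpu: 0.2
    defaultRequest: # applied as requests when a container defines none
      memory: 100Mi
      cpu: 0.1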

Please go through the whole ns.yml definition to refresh your memory of the things we explored in the previous chapters. We'll apply it once you're back.

kubectl apply \
    -f ../go-demo-3/k8s/ns.yml \
    --record
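
If you'd like to double-check the outcome, listing the resources is enough. The commands that follow are an optional sanity check, not a required step.

kubectl get ns

kubectl -n go-demo-3-build get \
    sa,rolebindings,limitranges,resourcequotas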

Now that we have the Namespaces, the ServiceAccounts, the RoleBindings, the LimitRanges, and the ResourceQuotas, we can proceed and create the secrets and the credentials required by Jenkins.

kubectl -n go-demo-3-jenkins \
    create secret generic \
    jenkins-credentials \
    --from-file cluster/jenkins/credentials.xml

kubectl -n go-demo-3-jenkins \
    create secret generic \
    jenkins-secrets \
    --from-file cluster/jenkins/secrets
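
Both Secrets should now be in place. If you want to confirm that (an optional check), list them.

kubectl -n go-demo-3-jenkins get secrets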

Only one more thing is missing before we install Jenkins. We need to install Tiller in the go-demo-3-build Namespace.

helm init --service-account build \
    --tiller-namespace go-demo-3-build
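
Helm needs a few moments to bring Tiller up. If you'd like to wait until it's ready (an optional step, assuming the tiller-deploy Deployment name that helm init creates by default), execute the command that follows.

kubectl -n go-demo-3-build \
    rollout status deployment tiller-deploy
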
A note to minishift users
Helm will try to install the Jenkins chart with a process in a container running as user 0. By default, that is not allowed in OpenShift. We'll skip discussing the best way to correct the issue, and I'll assume you already know how to set permissions on a per-Pod basis. Instead, we'll apply the most straightforward fix by executing the command that follows, which will allow restricted Pods to run as any user.
oc patch scc restricted -p '{"runAsUser":{"type": "RunAsAny"}}'

Now we are ready to install Jenkins.

JENKINS_ADDR="go-demo-3-jenkins.$LB_IP.nip.io"

helm install helm/jenkins \
    --name go-demo-3-jenkins \
    --namespace go-demo-3-jenkins \
    --values helm/jenkins/values.yaml \
    --set jenkins.master.hostName=$JENKINS_ADDR \
    --set jenkins.master.DockerVM=$DOCKER_VM \
    --set jenkins.master.DockerAMI=$AMI_ID \
    --set jenkins.master.GProject=$G_PROJECT \
    --set jenkins.master.GAuthFile=$G_AUTH_FILE

We generated a nip.io address and installed Jenkins in the go-demo-3-jenkins Namespace. Remember, this Jenkins is dedicated to the go-demo-3 team, and we might have many other instances serving the needs of other teams.

A note to minishift users
OpenShift requires Routes to make services accessible outside the cluster. To make things more complicated, they are not part of "standard Kubernetes" so we'll need to create one using oc. Please execute the command that follows.
oc -n go-demo-3-jenkins create route edge --service go-demo-3-jenkins --insecure-policy Allow --hostname $JENKINS_ADDR
That command created an edge Route tied to the go-demo-3-jenkins Service. Since we do not have SSL certificates for HTTPS communication, we also specified that it is OK to use an insecure policy, which will allow us to access Jenkins through plain HTTP. The last argument defined the address through which we'd like to access the Jenkins UI.

So far, everything we did is almost the same as what we've done in the previous chapters. The only difference is the Namespace where we deployed Jenkins. Now, the only thing left before we jump into pipelines is to wait until Jenkins rolls out and to confirm a few things.

kubectl -n go-demo-3-jenkins \
    rollout status deployment \
    go-demo-3-jenkins
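
As an additional, optional check (assuming nip.io addresses are reachable from your machine), we can confirm that Jenkins responds at the generated address.

curl -I "http://$JENKINS_ADDR/login"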

The only thing we'll validate right now is whether the node that we'll use to build and push Docker images is indeed connected to Jenkins.

A note to Windows users
Don't forget that the open command might not work in Windows and that you might need to replace it with echo, copy the output, and paste it into a tab of your favorite browser.
open "http://$JENKINS_ADDR/computer"

Just as before, we'll need the auto-generated password.

JENKINS_PASS=$(kubectl -n go-demo-3-jenkins \
    get secret go-demo-3-jenkins \
    -o jsonpath="{.data.jenkins-admin-password}" \
    | base64 --decode; echo)

echo $JENKINS_PASS

Please copy the output of the echo command, go back to the browser, and use it to log in as the admin user.

Once inside the nodes screen, you'll see different results depending on how you set up the node for building and pushing Docker images.

If you are a Docker for Mac or Windows, minikube, or minishift user, you'll see a node called docker-build. That confirms that we successfully connected Jenkins with the VM we created with Vagrant.

If you created a cluster in AWS using kops, you should see a drop-down list called docker-agents.

GKE users should see a drop-down list called docker.

A note to AWS EC2 users
Unlike on-premises and GKE solutions, AWS requires a single manual step to complete the Jenkins setup.
cat cluster/devops24.pem
Copy the output.
open "http://$JENKINS_ADDR/configure"
Scroll to the EC2 Key Pair's Private Key field, and paste the key. Don't forget to click the Apply button to persist the change.

Now that we confirmed that a node (static or dynamic) is available for building and pushing Docker images, we can start designing our first stage of the continuous deployment pipeline.
