Defining the whole production environment

All the chapters until this one followed the same pattern. We'd learn about a new tool and, from there on, we'd streamline its installation through Gists in all subsequent chapters. As an example, we introduced ChartMuseum a few chapters ago, we learned how to install it, and, from then on, there was no point in reiterating the same set of steps in the chapters that followed. Instead, we put the installation steps in Gists. Knowing that, you might be wondering why we did not follow the same pattern now. Why was ChartMuseum excluded from the Gists we're using in this chapter? Why isn't Jenkins there as well? Are we going to install ChartMuseum and Jenkins with a different configuration? We're not. Both will have the same configuration, but they will be installed in a slightly different way.

We already saw the benefits provided by Helm. Among other things, it features a templating mechanism that allows us to customize our Kubernetes definitions. We used the requirements.yaml file to create our own Jenkins distribution. Helm requirements are a nifty feature initially designed to provide a means to define the dependencies of our application. As an example, if we created an application that uses a Redis database, our application would be defined in the templates, and Redis would be a dependency defined in requirements.yaml. After all, if the community already has a Chart for Redis, why would we reinvent the wheel by creating our own definitions? Instead, we'd put it as an entry in requirements.yaml. Even though our motivation was slightly different, we did just that with Jenkins. As you might have guessed, dependencies in requirements.yaml are not limited to a single entry. We can define as many dependencies as we need.
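As an illustration, a requirements.yaml for that hypothetical Redis-backed application could look like the following sketch (the version is arbitrary; use whatever the stable repository offers).

dependencies:
- name: redis
  repository: "@stable"
  version: 3.7.5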

We could, for example, create a Chart that would define Namespaces, RoleBindings, and all the other infrastructure-level things that our production environment needs. Such a Chart could treat all production releases as dependencies. If we could do something like that, we could store everything related to production in a single repository. That would simplify the initial installation as well as upgrades of the production applications. Such an approach does not need to be limited to production. There could be another repository for other environments. Testing would be a good example if we still rely on manual tasks in that area.

Since we'd keep those Charts in Git repositories, changes to what constitutes production could be reviewed and, if necessary, approved before they're merged to the master branch. There are indeed other benefits to having a whole environment in a Git repository. I'll leave it to your imagination to figure them out.

The beauty of Helm requirements is that they still allow us to keep the definition of an application in the same repository as its code. If we take our go-demo application as an example, the Chart that defines the application can and should continue residing in its repository. However, a different repository could define all the applications running in the production environment as dependencies, including go-demo. That way, we'll accomplish two things. Everything related to an application, including its Chart, would remain in the same repository, and we would stop breaking the everything-in-Git rule. So far, our continuous deployment pipeline (the one we defined in the previous chapter) breaks that rule. Jenkins was upgrading production releases without storing that information in Git. We had undocumented deployments. While releases under test are temporary and live only for the duration of those automated tests, production releases last longer and should be documented, even if their lifespan is also potentially short (until the next commit).

All in all, our next task is to have the whole production environment in a single repository, without duplicating the information already available in repositories where we keep the code and definitions of our applications.

I already created a repository vfarcic/k8s-prod (https://github.com/vfarcic/k8s-prod) that defines a production environment. Since we'll have to make some changes to a few files, our first task is to fork it. Otherwise, I'd need to give you my GitHub credentials so that you could push those changes to my repo. As you can probably guess, that is not going to happen.

Please open vfarcic/k8s-prod in a browser and fork the repository. I'm sure you already know how to do that. If you don't, all you have to do is to click on the Fork button located in the top-right corner and follow the wizard instructions.

Next, we'll clone the forked repository before we explore some of its files.

Please replace [...] with your GitHub username before running the commands that follow.

GH_USER=[...]

cd ..

git clone https://github.com/$GH_USER/k8s-prod.git

cd k8s-prod

We cloned the forked repository and entered into its root directory.

Let's see what we have.

cat helm/Chart.yaml

The output is as follows.

apiVersion: v1
name: prod-env
version: 0.0.1
description: Docker For Mac or Windows Production Environment
maintainers:
- name: Viktor Farcic
  email: viktor@farcic.com

The Chart.yaml file is very uneventful, so we'll skip explaining it. The only thing that truly matters is the version.

You might see a different version than the one from the output above. Don't panic! I probably bumped it in one of my tests.

Let's take a look at the requirements.yaml.

cp helm/requirements-orig.yaml \
    helm/requirements.yaml

cat helm/requirements.yaml

We copied the original requirements as a precaution since I might have changed requirements.yaml during one of my experiments.

The output of the latter command is as follows.

dependencies:
- name: chartmuseum
  repository: "@stable"
  version: 1.6.0
- name: jenkins
  repository: "@stable"
  version: 0.16.6

We can see that the requirements for our production environment are chartmuseum and jenkins. Both are located in the stable repository (the official Helm repo).
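A note on the @stable reference: it points to a repository by the name under which it is registered in the local Helm configuration, not to a hardcoded URL. helm init adds stable by default, but if your setup happens to be missing it, you can register it yourself with the command that follows (the URL is the official stable repository used throughout this chapter).

helm repo add stable \
    https://kubernetes-charts.storage.googleapis.com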

Of course, just stating the requirements is not enough. Our applications almost always require customized versions of both public and private Charts.

We already know from the previous chapters that we can leverage a values.yaml file to customize Charts. The repository already has one, so let's take a quick look.

cat helm/values-orig.yaml

The output is as follows.

chartmuseum:
  env:
    open:
      DISABLE_API: false
      AUTH_ANONYMOUS_GET: true
    secret:
      BASIC_AUTH_USER: admin # Change me!
      BASIC_AUTH_PASS: admin # Change me!
  resources:
    limits:
      cpu: 100m
      memory: 128Mi
    requests:
      cpu: 80m
      memory: 64Mi
  persistence:
    enabled: true
  ingress:
    enabled: true
    annotations:
      kubernetes.io/ingress.class: "nginx"
      ingress.kubernetes.io/ssl-redirect: "false"
      nginx.ingress.kubernetes.io/ssl-redirect: "false"
    hosts:
    - name: cm.acme.com # Change me!
      path: /  
jenkins:
  master:
    imageTag: "2.129-alpine"
    cpu: "500m"
    memory: "500Mi"
    serviceType: ClusterIP
    serviceAnnotations:
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    GlobalLibraries: true
    installPlugins:
    - durable-task:1.22
    - blueocean:1.7.1
    - credentials:2.1.18
    - ec2:1.39
    - git:3.9.1
    - git-client:2.7.3
    - github:1.29.2
    - kubernetes:1.12.0
    - pipeline-utility-steps:2.1.0
    - pipeline-model-definition:1.3.1
    - script-security:1.44
    - slack:2.3
    - thinBackup:1.9
    - workflow-aggregator:2.5
    - ssh-slaves:1.26
    - ssh-agent:1.15
    - jdk-tool:1.1
    - command-launcher:1.2
    - github-oauth:0.29
    - google-compute-engine:1.0.4
    - pegdown-formatter:1.3
    ingress:
      enabled: true
      annotations:
        kubernetes.io/ingress.class: "nginx"
        nginx.ingress.kubernetes.io/ssl-redirect: "false"
        nginx.ingress.kubernetes.io/proxy-body-size: 50m
        nginx.ingress.kubernetes.io/proxy-request-buffering: "off"
        ingress.kubernetes.io/ssl-redirect: "false"
        ingress.kubernetes.io/proxy-body-size: 50m
        ingress.kubernetes.io/proxy-request-buffering: "off"
    hostName: jenkins.acme.com # Change me!
    CustomConfigMap: true
    CredentialsXmlSecret: jenkins-credentials
    SecretsFilesSecret: jenkins-secrets
    DockerVM: false
  rbac:
    install: true

We can see that the values are split into two groups: chartmuseum and jenkins. Other than that, they are almost the same as the values we used in the previous chapters. The only important difference is that both are now defined in the same file and that they will be used as the values for the requirements.
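Helm matches each top-level key with the name of a dependency and passes everything nested below it as the values of that Chart. As a minimal illustration (not a file from the repository), installing chartmuseum on its own with the same customization would require a values file that starts one level higher.

env:
  open:
    DISABLE_API: false
    AUTH_ANONYMOUS_GET: true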

I hope you noticed that the file is named values-orig.yaml instead of values.yaml. I could not predict in advance the address through which you'll be able to access the cluster. We'll combine that file with a bit of sed magic to generate a values.yaml that contains the correct address.

Next, we'll take a look at the templates of this Chart.

ls -1 helm/templates

The output is as follows.

config.tpl
ns.yaml

The config.tpl file is the same Jenkins configuration template we used before, so there should be no need to explain it. We'll skip it and jump straight into ns.yaml.

cat helm/templates/ns.yaml

The output is as follows.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: build
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: build
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
subjects:
- kind: ServiceAccount
  name: build
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: build
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
subjects:
- kind: ServiceAccount
  name: build
  namespace: {{ .Release.Namespace }}

That definition holds no mysteries. It is very similar to those we used before. The first two entries provide the permissions Jenkins builds need to run in the same Namespace, while the third allows builds to interact with tiller running in the kube-system Namespace. You can see that through the namespace entry that is set to kube-system, and through the reference to the ServiceAccount in the Namespace where we'll install this Chart.
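If you'd like to verify those permissions later on, kubectl auth can-i can check access while impersonating the build ServiceAccount. The commands that follow are a sketch assuming the Chart is installed in the prod Namespace (as we'll do soon); both should return yes.

kubectl -n prod auth can-i create pods \
    --as system:serviceaccount:prod:build

kubectl -n kube-system auth can-i create pods \
    --as system:serviceaccount:prod:build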

All in all, this Chart is a combination of custom templates meant to provide permissions and a set of requirements that will install the applications our production environment needs. For now, those requirements list only two applications (ChartMuseum and Jenkins), and we are likely going to expand the list later.

I already mentioned that values-orig.yaml is too generic and that we should update it with the cluster address before we convert it into values.yaml. That's our next mission.

ADDR=$LB_IP.nip.io

echo $ADDR

We defined the address of the cluster (ADDR) by attaching the nip.io domain to the IP of the load balancer. As you already know from the previous chapters, nip.io resolves such an address to the IP embedded in it.

Now that we have the address of your cluster, we can use sed to modify values-orig.yaml and output the result to values.yaml.

cat helm/values-orig.yaml \
    | sed -e "s@acme.com@$ADDR@g" \
    | tee helm/values.yaml
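If you want to confirm that sed did its job, a quick grep should show your cluster address in all the places where acme.com used to be.

grep "$ADDR" helm/values.yaml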

Later on, we'll use Jenkins to install (or upgrade) the Chart, so we should push the changes to GitHub.

git add .

git commit -m "Address"

git push

All Helm dependencies need to be downloaded to the charts directory before they are installed. We'll do that through the helm dependency update command.

helm dependency update helm

The relevant part of the output is as follows.

...
Saving 2 charts
Downloading chartmuseum from repo https://kubernetes-charts.storage.googleapis.com
Downloading jenkins from repo https://kubernetes-charts.storage.googleapis.com
Deleting outdated charts

Don't worry if some of the repositories are not reachable. You might see messages stating that Helm was unable to get an update from the local or chartmuseum repositories. Your local Helm configuration probably has those (and maybe other) references left over from previous exercises.

The last lines of the output are essential. We can see that Helm saved two Charts (chartmuseum and jenkins). Those are the Charts we specified as dependencies in requirements.yaml.

We can confirm that the dependencies were indeed downloaded by listing the files in the charts directory.

ls -1 helm/charts

The output is as follows.

chartmuseum-1.6.0.tgz
jenkins-0.16.6.tgz
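The helm dependency update command also generated a requirements.lock file that pins the exact versions it resolved. Committing that file together with requirements.yaml keeps the environment reproducible. Feel free to take a look.

cat helm/requirements.lock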

Now that the dependencies are downloaded and saved to the charts directory, we can proceed and install our full production environment. It consists of only two applications. We'll increase that number soon, and I expect that you'll add other applications you need to your "real" environment if you choose to use this approach.

A note to minishift users
Helm will try to install the Jenkins dependency Chart with the process in a container running as user 0. By default, that is not allowed in OpenShift. We'll skip discussing the best approach to correct the issue, and I'll assume you already know how to set the permissions on a per-Pod basis. Instead, we'll apply the most straightforward fix by executing the command that follows, which will allow restricted Pods to run as any user.
oc patch scc restricted -p '{"runAsUser":{"type": "RunAsAny"}}'
helm install helm \
    -n prod \
    --namespace prod

The output, limited to the Pods, is as follows.

...
==> v1/Pod(related)
NAME                               READY  STATUS   RESTARTS  AGE
prod-chartmuseum-68bc575fb7-jgs98  0/1    Pending  0         1s
prod-jenkins-6dbc74554d-gbzp4      0/1    Pending  0         1s
...

We can see that Helm sent requests to the Kube API to create all the resources defined in our Chart. As a result, among other resources, we got the Pods which run the containers with Jenkins and ChartMuseum.

However, Jenkins will fail to start without the secrets we were using in previous chapters, so we'll create them next.

kubectl -n prod \
    create secret generic \
    jenkins-credentials \
    --from-file ../k8s-specs/cluster/jenkins/credentials.xml

kubectl -n prod \
    create secret generic \
    jenkins-secrets \
    --from-file ../k8s-specs/cluster/jenkins/secrets
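As a quick sanity check, we can confirm that both Secrets were created.

kubectl -n prod get secrets \
    jenkins-credentials jenkins-secrets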

Let's list the Charts running inside the cluster and thus confirm that prod was indeed deployed.

helm ls

The output is as follows.

NAME REVISION UPDATED        STATUS   CHART          NAMESPACE
prod 1        Tue Aug  7 ... DEPLOYED prod-env-0.0.1 prod

Now that we know that the Chart was installed, the only thing left is to confirm that the two applications are indeed running correctly. We won't do real testing of the two applications, only superficial checks that will give us peace of mind. We'll start with ChartMuseum.

First, we'll wait for ChartMuseum to roll out (if it didn't already).

kubectl -n prod \
    rollout status \
    deploy prod-chartmuseum

The output should state that the deployment "prod-chartmuseum" was successfully rolled out.

A note to minishift users
OpenShift ignores Ingress resources, so we'll have to create a Route to accomplish the same effect. Please execute the command that follows.
oc -n prod create route edge --service prod-chartmuseum --hostname cm.$ADDR --insecure-policy Allow
curl "http://cm.$ADDR/health"

The output is {"healthy":true}, so ChartMuseum seems to be working correctly.
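If you'd like a slightly stronger confirmation, ChartMuseum also serves the repository index. Since we haven't pushed any charts to it yet, we should get a valid but empty index.

curl "http://cm.$ADDR/index.yaml"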

Next, we'll turn our attention to Jenkins.

kubectl -n prod \
    rollout status \
    deploy prod-jenkins

Once the deployment "prod-jenkins" is successfully rolled out, we can open it in a browser as a very light validation.

A note to minishift users
OpenShift requires Routes to make services accessible outside the cluster. To make things more complicated, they are not part of "standard Kubernetes", so we'll need to create one using oc. Please execute the command that follows.
oc -n prod create route edge --service prod-jenkins --insecure-policy Allow --hostname jenkins.$ADDR
That command created an edge Route tied to the prod-jenkins Service. Since we do not have SSL certificates for HTTPS communication, we also specified that it is OK to use an insecure policy, which will allow us to access Jenkins through plain HTTP. The last argument defined the address through which we'd like to access the Jenkins UI.
JENKINS_ADDR="jenkins.$ADDR"

open "http://$JENKINS_ADDR"

We'll need the initial admin password to log in. Just as we did countless times before, we'll fetch it from the Secret generated through the Chart.

JENKINS_PASS=$(kubectl -n prod \
    get secret prod-jenkins \
    -o jsonpath="{.data.jenkins-admin-password}" \
    | base64 --decode; echo)

echo $JENKINS_PASS

Please go back to the Jenkins UI in your favorite browser and log in using admin as the username and the output of JENKINS_PASS as the password. If, later on, your Jenkins session expires and you need to log in again, all you have to do is output the JENKINS_PASS variable to find the password.

Now that we have the base production environment, we can turn our attention towards defining a continuous delivery pipeline.
