Updating live containers

One benefit of containers is that we can easily publish a new version of a program by running the latest image, which also reduces the headache of environment setup. But what about updating a program that is already running in containers? Using native Docker commands, we would have to stop the running containers before booting up new ones with the latest images and the same configuration. Kubernetes offers a simple and efficient zero-downtime way to update your program: it is called rolling-update, and we will show you this solution in this recipe.

Getting ready

Rolling-update works at the level of replication controllers. It replaces the old pods, one by one, with new pods. The new pods in the target replication controller carry the original labels, so any service that exposes this replication controller picks up the newly created pods directly.

For the demonstration that follows, we are going to update to a new nginx image. To make sure the nodes can pull your customized image, push it to Docker Hub (the public Docker registry) or to your private registry.

For example, you can create the image by writing your own Dockerfile:

$ cat Dockerfile
FROM nginx
RUN echo "Happy Programming!" > /usr/share/nginx/html/index.html

In this Docker image, we change the content of the default index.html page. You can then build the image and push it with the following commands:

// push to Docker Hub
$ docker build -t <DOCKERHUB_ACCOUNT>/common-nginx . && docker push <DOCKERHUB_ACCOUNT>/common-nginx
// Or, you can push to your private Docker registry
$ docker build -t <REGISTRY_NAME>/common-nginx . && docker push <REGISTRY_NAME>/common-nginx

To give the nodes access credentials for a private Docker registry, please refer to the Working with the private Docker registry recipe in Chapter 5, Building a Continuous Delivery Pipeline.

How to do it…

You'll now learn how to roll a new Docker image out to a running replication controller. The following steps will help you publish it without downtime:

  1. At the beginning, create a replication controller and a service for the rolling-update test. As shown in the following statements, a replication controller with five replicas will be created. The nginx program listens on port 80 in the container, while the Kubernetes service exposes it as port 8080 on the cluster's internal network:
    // Create a replication controller named nginx-rc
    # kubectl run nginx-rc --image=nginx --replicas=5 --port=80 --labels="User=Amy,App=Web,State=Testing"
    replicationcontroller "nginx-rc" created
    // Create a service supporting nginx-rc
    # kubectl expose rc nginx-rc --port=8080 --target-port=80 --name="nginx-service"
    service "nginx-service" exposed
    # kubectl get service nginx-service
    NAME            CLUSTER_IP       EXTERNAL_IP   PORT(S)    SELECTOR                         AGE
    nginx-service   192.168.163.46   <none>        8080/TCP   App=Web,State=Testing,User=Amy   35s
    

    You can verify that the components work by examining <POD_IP>:80 and <CLUSTER_IP>:8080.
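
    For example, a minimal check from any node might look like this; the placeholder IPs are whatever kubectl get pods -o wide and kubectl get service nginx-service report in your cluster:
    // Check a single pod directly, then the service through its cluster IP
    # curl <POD_IP>:80
    # curl <CLUSTER_IP>:8080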

  2. Now, we are ready to move on to the container update step! The Kubernetes subcommand rolling-update helps to keep a live replication controller up to date. In the following command, you specify the name of the replication controller and the new image. Here, we will use the image that we pushed to Docker Hub earlier:
    # kubectl rolling-update nginx-rc --image=<DOCKERHUB_ACCOUNT>/common-nginx
    Created nginx-rc-b6610813702bab5ad49d4aadd2e5b375
    Scaling up nginx-rc-b6610813702bab5ad49d4aadd2e5b375 from 0 to 5, scaling down nginx-rc from 5 to 0 (keep 5 pods available, don't exceed 6 pods)
    Scaling nginx-rc-b6610813702bab5ad49d4aadd2e5b375 up to 1
    
  3. You may see that the process appears to hang. That is because rolling-update starts a single new pod at a time and then waits for a period of time, one minute by default, before stopping an old pod and creating the next new pod. Consequently, while the update is in progress, there is always one more pod serving than the desired state of the replication controller; in this case, there would be six pods. While the replication controller is updating, open another terminal to observe the progress, as shown in the sketch below.
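    In the second terminal, a simple way to watch the handover is to stream pod changes; this is a minimal sketch, and the hash suffixes in the pod names will differ in your cluster:
    // Stream pod state changes while the rolling-update runs
    # kubectl get pods --watch
    // Or take point-in-time snapshots to see old and new pods side by side
    # kubectl get pods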
  4. Check the state of the replication controllers to see what is happening:
    # kubectl get rc
    CONTROLLER                                  CONTAINER(S)      IMAGE(S)                                                   SELECTOR                                                                     REPLICAS   AGE
    nginx-rc                                    nginx-rc          nginx                                                      App=Web,State=Testing,User=Amy,deployment=313da350dea9227b89b4f0340699a388   5          1m
    nginx-rc-b6610813702bab5ad49d4aadd2e5b375   nginx-rc          <DOCKERHUB_ACCOUNT>/common-nginx                                         App=Web,State=Testing,User=Amy,deployment=b6610813702bab5ad49d4aadd2e5b375   1          16s
    
  5. As you can see, the system creates an almost identical replication controller whose name carries a hash suffix. A new label key, deployment, is added to both replication controllers to distinguish them, while the new controller keeps the other original labels. The service therefore covers the new pods at the same time:
    // Check service nginx-service while updating
    # kubectl describe service nginx-service
    Name:      nginx-service
    Namespace:    default
    Labels:      App=Web,State=Testing,User=Amy
    Selector:    App=Web,State=Testing,User=Amy
    Type:      ClusterIP
    IP:      192.168.163.46
    Port:      <unnamed>  8080/TCP
    Endpoints:    192.168.15.5:80,192.168.15.6:80,192.168.15.7:80 + 3 more...
    Session Affinity:  None
    No events.
    

    There are six pod endpoints covered by nginx-service, which matches the rolling-update behavior described earlier.
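
    You can also confirm the endpoint count directly; this is a quick, minimal check, and the IPs will differ in your cluster:
    # kubectl get endpoints nginx-service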

  6. Go back to the terminal running the update process. After the update completes, you will see output like the following:
    Created nginx-rc-b6610813702bab5ad49d4aadd2e5b375
    Scaling up nginx-rc-b6610813702bab5ad49d4aadd2e5b375 from 0 to 5, scaling down nginx-rc from 5 to 0 (keep 5 pods available, don't exceed 6 pods)
    Scaling nginx-rc-b6610813702bab5ad49d4aadd2e5b375 up to 1
    Scaling nginx-rc down to 4
    Scaling nginx-rc-b6610813702bab5ad49d4aadd2e5b375 up to 2
    Scaling nginx-rc down to 3
    Scaling nginx-rc-b6610813702bab5ad49d4aadd2e5b375 up to 3
    Scaling nginx-rc down to 2
    Scaling nginx-rc-b6610813702bab5ad49d4aadd2e5b375 up to 4
    Scaling nginx-rc down to 1
    Scaling nginx-rc-b6610813702bab5ad49d4aadd2e5b375 up to 5
    Scaling nginx-rc down to 0
    Update succeeded. Deleting old controller: nginx-rc
    Renaming nginx-rc-b6610813702bab5ad49d4aadd2e5b375 to nginx-rc
    replicationcontroller "nginx-rc" rolling updated
    

    The old nginx-rc is gradually taken out of service by scaling it down.

  7. In the final steps of the update, the new replication controller is scaled up to five pods to meet the desired state, and it eventually replaces the old one:
    // Take a look at the current replication controller
    // The new label "deployment" remains after the update
    # kubectl get rc nginx-rc
    CONTROLLER   CONTAINER(S)   IMAGE(S)             SELECTOR                                                                     REPLICAS   AGE
    nginx-rc     nginx-rc       <DOCKERHUB_ACCOUNT>/common-nginx   App=Web,State=Testing,User=Amy,deployment=b6610813702bab5ad49d4aadd2e5b375   5          40s
    
  8. Check the service with its ClusterIP and port; all the pods in the replication controller have now been updated:
    # curl 192.168.163.46:8080
    Happy Programming!
    
  9. As the previous demonstration shows, it takes about five minutes to publish a new Docker image. This is because the update period defaults to one minute for each scale-up and scale-down cycle. You can make the update faster or slower with the --update-period flag. The valid time units are ns, us, ms, s, m, and h, for example, --update-period=1m0s:
    // Try on this one!
    # kubectl rolling-update <REPLICATION_CONTROLLER_NAME> --image=<IMAGE_NAME> --update-period=10s
    

How it works…

In this section, we will discuss rolling-update in detail. How does renewing a replication controller with an update period of N seconds work? See the following image:

[Figure: the rolling-update procedure for a replication controller, HAPPY-RC, with three pods and an update period of N seconds]

The previous image shows each step of the updating procedure. We can draw some important ideas from rolling-update:

  • Each pod in both replication controllers gets a new deployment label, with a different value in each controller to tell them apart. The other labels stay the same, so the service can still cover both replication controllers through its selector while updating.
  • Migrating to a new configuration takes roughly (number of pods in the replication controller) x (update period); with five pods and the default one-minute period, this is about five minutes. See the sketch after this list for a way to measure it.
  • For a zero-downtime update, the total number of pods covered by the service should meet the desired state. For example, in the preceding image, there should always be three pods running at a time for the service.
  • The rolling-update procedure does not confirm when a newly created pod, in HAPPY-RC-<HashKey2>, reaches the running state. That's why we need an update period: after a period of time, N seconds in the preceding case, a new pod should be ready to take the place of an old pod, and it is then safe to terminate one old pod.
  • The update period should therefore be set to the worst-case time a new pod needs to go from pulling its image to running.
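
A quick way to check the total migration time on your own cluster is simply to time the command. The following is a minimal sketch using the controller and image from this recipe; the 10-second period is an arbitrary example:

// With 5 pods and a 10s period, expect roughly 5 x 10s = 50s, plus image pull time
# time kubectl rolling-update nginx-rc --image=<DOCKERHUB_ACCOUNT>/common-nginx --update-period=10s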

There's more…

When doing a rolling-update, we specify the image for the new replication controller. Sometimes, however, the new image is not picked up successfully. This is because of the container's image pull policy.

To update to a specific image, it is best to provide an explicit tag so that the version to be pulled is clear and unambiguous. However, when no tag is given, the latest tag is assumed, and a node may treat a locally cached image tagged latest as identical to the one in the registry, since they share the same name. For example, the <DOCKERHUB_ACCOUNT>/common-nginx:latest image will be used in this update:

# kubectl rolling-update nginx-rc --image=<DOCKERHUB_ACCOUNT>/common-nginx --update-period=10s

Still, nodes will skip pulling the latest version of common-nginx if they already hold an image with that tag locally. For this reason, we have to make sure that the specified image is always pulled from the registry.

To change this configuration, the subcommand edit can help:

# kubectl edit rc <REPLICATION_CONTROLLER_NAME>

Then, you can edit the configuration of the replication controller in the YAML format. The image pull policy can be found in the following part of the structure:

apiVersion: v1
kind: ReplicationController
spec:
  template:
    spec:
      containers:
      - name: <CONTAINER_NAME>
        image: <IMAGE_TAG>
        imagePullPolicy: IfNotPresent
:

The value IfNotPresent tells the node to pull the image only when it is not present on the local disk. Changing the policy to Always lets users avoid this update failure, because the specified image is then guaranteed to be pulled from the registry. You can also set this key-value item in the configuration file when you first create the replication controller.
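
Alternatively, a common way to sidestep the latest-tag caching problem entirely is to push every build under an explicit version tag and roll out that exact tag. The following is a minimal sketch; the 1.1 tag is a hypothetical example:

// Build and push under an explicit version tag
$ docker build -t <DOCKERHUB_ACCOUNT>/common-nginx:1.1 . && docker push <DOCKERHUB_ACCOUNT>/common-nginx:1.1
// Roll out that exact version; nodes cannot mistake it for a cached "latest"
# kubectl rolling-update nginx-rc --image=<DOCKERHUB_ACCOUNT>/common-nginx:1.1 --update-period=10s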

See also

The pod is the basic computing unit in the Kubernetes system. You can learn how to use pods even more effectively through the following recipes:

  • Scaling your containers
  • The Moving monolithic to microservices, Integrating with Jenkins, Working with the private Docker registry, and Setting up the Continuous Delivery pipeline recipes in Chapter 5, Building a Continuous Delivery Pipeline