Configuring master

The master node of Kubernetes works as the control center of containers. Its duties include serving as a portal for end users, assigning tasks to nodes, and gathering information. In this recipe, we will see how to set up the Kubernetes master. There are three daemon processes on the master:

  • API Server
  • Scheduler
  • Controller Manager

We can either start them all with the wrapper command hyperkube, or start each one individually as a daemon. Both solutions are covered in this section.

Getting ready

Before deploying the master node, make sure you have the etcd endpoint ready, as it acts as the datastore of Kubernetes. You have to check whether it is accessible and also configured with the overlay network's Classless Inter-Domain Routing (CIDR, https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) range. You can check this with the following command line:

// Check both etcd connection and CIDR setting
$ curl -L <etcd endpoint URL>/v2/keys/coreos.com/network/config

If the connection is successful but the etcd configuration has no expected CIDR value yet, you can push the value through curl as well:

$ curl -L <etcd endpoint URL>/v2/keys/coreos.com/network/config -XPUT -d value='{ "Network": "<CIDR of overlay network>" }'

Tip

Besides this, please record the following items: the URL of the etcd endpoint, the port exposed by the etcd endpoint, and the CIDR of the overlay network. You will need them when configuring the master's services.
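As a convenience, you can keep those recorded values in shell variables and reuse them later; the sketch below also shows how to quote the JSON payload safely (single quotes on the outside, so the inner double quotes reach etcd intact). The endpoint, port, and CIDR values here are placeholders, and the curl command is only echoed as a dry run, not executed:

```shell
# Placeholders -- substitute the values you recorded above.
ETCD_URL="http://10.0.0.1"     # URL of the etcd endpoint (assumption)
ETCD_PORT="4001"               # port exposed by etcd (assumption)
OVERLAY_CIDR="192.168.0.0/16"  # CIDR of the overlay network (assumption)

# Single quotes keep the inner double quotes out of the shell's hands.
PAYLOAD='{ "Network": "'"${OVERLAY_CIDR}"'" }'

# Dry run: print the command instead of running it.
echo curl -L "${ETCD_URL}:${ETCD_PORT}/v2/keys/coreos.com/network/config" \
  -XPUT -d value="$PAYLOAD"
```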

How to do it…

To build up a master, we propose the following steps: installing the source code or packages, starting the daemons, and then verifying them. Follow the procedure and you will eventually have a working master.

Installation

Here, we offer two kinds of installation procedures:

  • One is for RHEL-based operating systems with a package manager; the master daemons are controlled by systemd
  • The other is for other Linux distributions; we build up the master with binary files and service init scripts

CentOS 7 or Red Hat Enterprise Linux 7

  1. RHEL 7, CentOS 7, and later releases have official packages for Kubernetes. You can install them via the yum command:
    // install Kubernetes master package
    # yum install kubernetes-master kubernetes-client
    

    The kubernetes-master package contains the master daemons, while kubernetes-client installs a tool called kubectl, the command-line interface for communicating with the Kubernetes system. Since the master node serves as an endpoint for requests, with kubectl installed, users can easily control container applications and the environment through commands.

    Note

    CentOS 7's RPM of Kubernetes

    There are five Kubernetes RPMs (the .rpm files, https://en.wikipedia.org/wiki/RPM_Package_Manager) for different functionalities: kubernetes, kubernetes-master, kubernetes-client, kubernetes-node, and kubernetes-unit-test.

    The first one, kubernetes, is a meta package that pulls in the following three items: installing it installs kubernetes-master, kubernetes-client, and kubernetes-node at once. The one named kubernetes-node is for node installation. The last one, kubernetes-unit-test, contains not only testing scripts, but also Kubernetes template examples.

  2. Here are the files after yum install:
    // profiles as environment variables for services
    # ls /etc/kubernetes/
    apiserver  config  controller-manager  scheduler
    // systemd files
    # ls /usr/lib/systemd/system/kube-*
    /usr/lib/systemd/system/kube-apiserver.service           /usr/lib/systemd/system/kube-scheduler.service
    /usr/lib/systemd/system/kube-controller-manager.service
    
  3. Next, we will leave the systemd files as they are and modify the values in the configuration files under the directory /etc/kubernetes to build a connection with etcd. The file named config is a shared environment file for several Kubernetes daemon processes. For basic settings, simply change the items in apiserver:
    # cat /etc/kubernetes/apiserver
    ###
    # kubernetes system config
    #
    # The following values are used to configure the kube-apiserver
    #
    
    # The address on the local server to listen to.
    KUBE_API_ADDRESS="--address=0.0.0.0"
    
    # The port on the local server to listen on.
    KUBE_API_PORT="--insecure-port=8080"
    
    # Port nodes listen on
    # KUBELET_PORT="--kubelet_port=10250"
    
    # Comma separated list of nodes in the etcd cluster
    KUBE_ETCD_SERVERS="--etcd_servers=<etcd endpoint URL>:<etcd exposed port>"
    
    # Address range to use for services
    KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=<CIDR of overlay network>"
    
    # default admission control policies
    KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
    
    # Add your own!
    KUBE_API_ARGS="--cluster_name=<your cluster name>"
    
  4. Then, start the daemons kube-apiserver, kube-scheduler, and kube-controller-manager one by one; the command systemctl can help with management. Be aware that kube-apiserver should always start first, since kube-scheduler and kube-controller-manager connect to the Kubernetes API server when they start running:
    // start services
    # systemctl start kube-apiserver
    # systemctl start kube-scheduler
    # systemctl start kube-controller-manager
    // enable services so they start automatically when the server boots up.
    # systemctl enable kube-apiserver
    # systemctl enable kube-scheduler
    # systemctl enable kube-controller-manager
    

Adding daemon dependency

  1. Although systemd does not return error messages when the API server is not running, both kube-scheduler and kube-controller-manager get connection errors and cannot provide regular services:
    $ sudo systemctl status kube-scheduler -l --output=cat
    kube-scheduler.service - Kubernetes Scheduler Plugin
       Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled)
       Active: active (running) since Thu 2015-11-19 07:21:57 UTC; 5min ago
         Docs: https://github.com/GoogleCloudPlatform/kubernetes
     Main PID: 2984 (kube-scheduler)
       CGroup: /system.slice/kube-scheduler.service
                └─2984 /usr/bin/kube-scheduler --logtostderr=true --v=0 --master=127.0.0.1:8080
    E1119 07:27:05.471102    2984 reflector.go:136] Failed to list *api.Node: Get http://127.0.0.1:8080/api/v1/nodes?fieldSelector=spec.unschedulable%3Dfalse: dial tcp 127.0.0.1:8080: connection refused
    
  2. Therefore, to prevent the start order from breaking the services, you can add two settings under the [Unit] section of /usr/lib/systemd/system/kube-scheduler.service and /usr/lib/systemd/system/kube-controller-manager.service:
    [Unit]
    Description=Kubernetes Controller Manager
    Documentation=https://github.com/GoogleCloudPlatform/kubernetes
    After=kube-apiserver.service
    Wants=kube-apiserver.service

    With the preceding settings, we can make sure kube-apiserver is the first started daemon.

  3. Furthermore, if you expect the scheduler and the controller manager to run only alongside a healthy API server (meaning that if kube-apiserver is stopped, kube-scheduler and kube-controller-manager will be stopped as well), you can change the [Unit] item Wants to Requires, as follows:
    Requires=kube-apiserver.service

    Requires imposes a stricter restriction: if the daemon kube-apiserver crashes, kube-scheduler and kube-controller-manager are stopped as well. On the other hand, a configuration with Requires makes debugging the master installation harder, so it is recommended that you enable this parameter once you are sure every setting is correct.
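If you would rather not edit the packaged unit files directly (a later yum update may overwrite them), the same ordering can be expressed as a systemd drop-in file. A minimal sketch: the standard location would be /etc/systemd/system/<unit>.d, but DROPIN_ROOT defaults to a scratch directory here so the commands are safe to try:

```shell
# Create an ordering drop-in for kube-scheduler instead of editing the
# packaged unit file. On a real master, set DROPIN_ROOT=/etc/systemd/system.
DROPIN_ROOT="${DROPIN_ROOT:-$(mktemp -d)}"
mkdir -p "$DROPIN_ROOT/kube-scheduler.service.d"
cat > "$DROPIN_ROOT/kube-scheduler.service.d/10-apiserver.conf" <<'EOF'
[Unit]
After=kube-apiserver.service
Wants=kube-apiserver.service
EOF
# Show the drop-in we just wrote.
cat "$DROPIN_ROOT/kube-scheduler.service.d/10-apiserver.conf"
# Afterwards: systemctl daemon-reload && systemctl restart kube-scheduler
```

The same drop-in, with Requires= in place of Wants=, gives the stricter behavior described above.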

Other Linux options

It is also possible to download a binary tarball for installation. The official page for the latest release is https://github.com/kubernetes/kubernetes/releases:

  1. We are going to install the version tagged as Latest release and start all the daemons with the wrapper command hyperkube:
    // download Kubernetes package
    # curl -L -O https://github.com/GoogleCloudPlatform/kubernetes/releases/download/v1.1.2/kubernetes.tar.gz
    
    // extract the tarball to a specific location; here we put it under /opt, so KUBE_HOME would be /opt/kubernetes
    # tar zxvf kubernetes.tar.gz -C /opt/
    
    // copy all binary files to system directory
    # cp /opt/kubernetes/server/bin/* /usr/local/bin/
    
  2. The next step is to create a startup (init) script, which covers the three master daemons and starts each of them:
    # cat /etc/init.d/kubernetes-master
    #!/bin/bash
    #
    # This shell script takes care of starting and stopping kubernetes master
    
    # Source function library.
    . /etc/init.d/functions
    
    # Source networking configuration.
    . /etc/sysconfig/network
    
    prog=/usr/local/bin/hyperkube
    lockfile=/var/lock/subsys/`basename $prog`
    hostname=`hostname`
    logfile=/var/log/kubernetes.log
    
    CLUSTER_NAME="<your cluster name>"
    ETCD_SERVERS="<etcd endpoint URL>:<etcd exposed port>"
    CLUSTER_IP_RANGE="<CIDR of overlay network>"
    MASTER="127.0.0.1:8080"
    
  3. To manage your Kubernetes settings more easily and clearly, we put the declaration of changeable variables at the beginning of this init script. Please double-check that the etcd URL and overlay network CIDR are the same as in your previous installation:
    start() {
    
      # Start daemon.
      echo $"Starting apiserver: "
      daemon $prog apiserver \
      --service-cluster-ip-range=${CLUSTER_IP_RANGE} \
      --port=8080 \
      --address=0.0.0.0 \
      --etcd_servers=${ETCD_SERVERS} \
      --cluster_name=${CLUSTER_NAME} \
      > ${logfile}_apiserver 2>&1 &
    
      echo $"Starting controller-manager: "
      daemon $prog controller-manager \
      --master=${MASTER} \
      > ${logfile}_controller-manager 2>&1 &
    
      echo $"Starting scheduler: "
      daemon $prog scheduler \
      --master=${MASTER} \
      > ${logfile}_scheduler 2>&1 &
    
      RETVAL=$?
      [ $RETVAL -eq 0 ] && touch $lockfile
      return $RETVAL
    }
    
    stop() {
      [ "$EUID" != "0" ] && exit 4
      echo -n $"Shutting down $prog: "
      killproc $prog
      RETVAL=$?
      echo
      [ $RETVAL -eq 0 ] && rm -f $lockfile
      return $RETVAL
    }
  4. Next, attach the following lines as the last part of the script, for general service usage:
    # See how we were called.
    case "$1" in
      start)
      start
      ;;
      stop)
      stop
      ;;
      status)
      status $prog
      ;;
      restart|force-reload)
      stop
      start
      ;;
      try-restart|condrestart)
      if status $prog > /dev/null; then
          stop
          start
      fi
      ;;
      reload)
      exit 3
      ;;
      *)
      echo $"Usage: $0 {start|stop|status|restart|try-restart|force-reload}"
      exit 2
    esac
  5. Now, it is good to start the service named kubernetes-master:
    $ sudo service kubernetes-master start
    

Note

At the time of writing this book, the latest version of Kubernetes was 1.1.2. So, we will use 1.1.2 in the examples for most of the chapters.

Verification

  1. After starting all three daemons of the master node, you can verify whether they are running properly by checking the service status. Both the systemctl and service commands are able to get the logs:
    # systemctl status <service name>
    
  2. For more detailed historical logs, you can use the command journalctl:
    # journalctl -u <service name> --no-pager --full
    

    Once you find a line showing Started... in the output, you can confirm that the service setup has passed verification.

  3. Additionally, kubectl, the dominant command-line tool in Kubernetes, can begin operation:
    // check Kubernetes version
    # kubectl version
    Client Version: version.Info{Major:"1", Minor:"0.3", GitVersion:"v1.0.3.34+b9a88a7d0e357b", GitCommit:"b9a88a7d0e357be2174011dd2b127038c6ea8929", GitTreeState:"clean"}
    Server Version: version.Info{Major:"1", Minor:"0.3", GitVersion:"v1.0.3.34+b9a88a7d0e357b", GitCommit:"b9a88a7d0e357be2174011dd2b127038c6ea8929", GitTreeState:"clean"}
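If you script your verification, the version strings can also be compared programmatically. A small sketch that extracts the server's GitVersion with sed; the sample text is the kubectl version output captured above, and on a live master you would capture it with out="$(kubectl version)":

```shell
# Sample output from `kubectl version` (as shown above); on a live
# master, use: out="$(kubectl version)"
out='Client Version: version.Info{Major:"1", Minor:"0.3", GitVersion:"v1.0.3.34+b9a88a7d0e357b", GitCommit:"b9a88a7d0e357be2174011dd2b127038c6ea8929", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"0.3", GitVersion:"v1.0.3.34+b9a88a7d0e357b", GitCommit:"b9a88a7d0e357be2174011dd2b127038c6ea8929", GitTreeState:"clean"}'

# Pull the GitVersion field out of the Server Version line.
server_ver="$(echo "$out" | sed -n 's/^Server Version.*GitVersion:"\([^"]*\)".*/\1/p')"
echo "server version: $server_ver"
```

A non-empty result confirms the API server is answering requests end to end, which is exactly what this verification step is after.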
    

See also

From the recipe, you know how to create your own Kubernetes master. You can also check out the following recipes:

  • Exploring architecture
  • Configuring nodes
  • The Building multiple masters recipe in Chapter 4, Building a High Availability Cluster
  • The Building the Kubernetes infrastructure in AWS recipe in Chapter 6, Building Kubernetes on AWS
  • The Authentication and authorization recipe in Chapter 7, Advanced Cluster Administration