Configuring nodes

A node is the worker machine in the Kubernetes cluster. In order to let the master take a node under its supervision, the node runs an agent called kubelet, which registers the node with a specific master. After registration, the kubelet daemon also handles container operations and reports resource utilization and container statuses to the master. The other daemon running on the node is kube-proxy, which forwards TCP/UDP packets between containers. In this section, we will show you how to configure a node.

Getting ready

Since the node is the worker of Kubernetes and its most important duty is running containers, you have to make sure that Docker and flanneld are installed before you begin. Kubernetes relies on Docker to run applications in containers, while flanneld lets pods on separate nodes communicate with each other.
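
If you are not sure whether both daemons are ready, a quick check like the following confirms it; this is only a sketch and assumes a systemd-managed host such as CentOS 7:

// confirm both daemons are running (systemd-based hosts)
# systemctl is-active docker flanneld
active
active
// confirm the binaries are installed and on the PATH
# which docker flanneld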

After you have installed both daemons, the network interface docker0 should be on the same subnet as flannel0, as recorded in the file /run/flannel/subnet.env:

# cat /run/flannel/subnet.env
FLANNEL_SUBNET=192.168.31.1/24
FLANNEL_MTU=8973
FLANNEL_IPMASQ=true

// check that flannel0 and docker0 are in the same subnet
# ifconfig docker0 ; ifconfig flannel0
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.31.1  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 02:42:6e:b9:a7:51  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
flannel0: flags=81<UP,POINTOPOINT,RUNNING>  mtu 8973
        inet 192.168.31.0  netmask 255.255.0.0  destination 192.168.11.0
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 500  (UNSPEC)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

If docker0 is in a different CIDR range, you may take the following service scripts as a reference for a reliable Docker service setup:

# cat /etc/sysconfig/docker
# /etc/sysconfig/docker
#
# Other arguments to pass to the docker daemon process
# These will be parsed by the sysv initscript and appended
# to the arguments list passed to docker -d, or docker daemon where docker version is 1.8 or higher

. /run/flannel/subnet.env 

other_args="--bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}"
DOCKER_CERT_PATH=/etc/docker

Alternatively, if the daemons are managed by systemd, the following configuration handles the dependency between flanneld and Docker natively:

$ cat /etc/systemd/system/docker.service.requires/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/flanneld
EnvironmentFile=-/etc/sysconfig/docker-network
ExecStart=/usr/bin/flanneld -etcd-endpoints=${FLANNEL_ETCD} -etcd-prefix=${FLANNEL_ETCD_KEY} $FLANNEL_OPTIONS
ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker

[Install]
RequiredBy=docker.service

$ cat /run/flannel/docker
DOCKER_OPT_BIP="--bip=192.168.31.1/24"
DOCKER_OPT_MTU="--mtu=8973"
DOCKER_NETWORK_OPTIONS=" --bip=192.168.31.1/24 --mtu=8973 "
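
Note that the generated /run/flannel/docker file only takes effect if the Docker unit actually reads it. On a distribution whose stock docker.service does not reference these variables, a drop-in similar to the following can wire them in; this is only a sketch, and the drop-in path and ExecStart line are assumptions that you should adapt to your own docker.service:

# cat /etc/systemd/system/docker.service.d/flannel.conf
[Service]
# the variable below comes from the file generated by mk-docker-opts.sh
EnvironmentFile=-/run/flannel/docker
# clear the stock ExecStart before redefining it (adapt the daemon command to your Docker version)
ExecStart=
ExecStart=/usr/bin/docker daemon $DOCKER_NETWORK_OPTIONS

// reload unit files so that the drop-in is picked up
# systemctl daemon-reload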

Once you have corrected the Docker service script, stop the Docker service, remove its stale network bridge, and start the service again.
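
A minimal command sequence for this restart, assuming systemd and the iproute2 tools, could look like the following; deleting the old bridge forces Docker to recreate docker0 with the new --bip setting:

// stop Docker, remove the stale bridge, and start Docker again
# systemctl stop docker
# ip link set docker0 down
# ip link delete docker0
# systemctl start docker
// docker0 should now sit inside the flannel subnet
# ifconfig docker0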

For more details on the flanneld setup and Docker integration, please refer to the recipe Creating an overlay network.

You can even configure the master host as a node; just install the necessary node daemons on it, as sketched below.
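
For instance, on an RPM-based master, this could be as simple as the following sketch; 127.0.0.1 works as the master endpoint because the API server runs locally on that host (the package and configuration files are covered in the next section):

// sketch: on the master host, install the node daemons and start them;
// use 127.0.0.1:8080 as the master endpoint in /etc/kubernetes/config and /etc/kubernetes/kubelet
# yum install kubernetes-node
# systemctl start kubelet kube-proxy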

How to do it…

Once you verify that Docker and flanneld are good to go on your node host, continue to install the Kubernetes package for the node. We'll cover both RPM and tarball setup.

Installation

This will be the same as the Kubernetes master installation: a Linux distribution that provides the yum package management utility can install the node package easily. Alternatively, we can install the latest version by downloading a tarball and copying the binary files to the specified system directory, which works on every Linux distribution. You can try either solution for your deployment.

CentOS 7 or Red Hat Enterprise Linux 7

  1. First, we will install the package kubernetes-node, which is what we need for the node:
    // install kubernetes node package
    $ yum install kubernetes-node
    

    The package kubernetes-node includes two daemon processes, kubelet and kube-proxy.

  2. We need to modify two configuration files so that the node daemons can reach the master:
    # cat /etc/kubernetes/config
    ###
    # kubernetes system config
    #
    # The following values are used to configure various aspects of all
    # kubernetes services, including
    #
    #   kube-apiserver.service
    #   kube-controller-manager.service
    #   kube-scheduler.service
    #   kubelet.service
    #   kube-proxy.service
    # logging to stderr means we get it in the systemd journal
    KUBE_LOGTOSTDERR="--logtostderr=true"
    
    # journal message level, 0 is debug
    KUBE_LOG_LEVEL="--v=0"
    
    # Should this cluster be allowed to run privileged docker containers
    KUBE_ALLOW_PRIV="--allow_privileged=false"
    
    # How the controller-manager, scheduler, and proxy find the apiserver
    KUBE_MASTER="--master=<master endpoint>:8080"
    
  3. In the configuration files, change the master location argument to the URL/IP of the machine where you installed the master. If you exposed the API server on a port other than 8080, remember to update that port as well (see the sketch after this list):
    # cat /etc/kubernetes/kubelet
    ###
    # kubernetes kubelet (node) config
    
    # The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
    KUBELET_ADDRESS="--address=0.0.0.0"
    
    # The port for the info server to serve on
    # KUBELET_PORT="--port=10250"
    
    # You may leave this blank to use the actual hostname
    KUBELET_HOSTNAME="--hostname_override=127.0.0.1"
    
    # location of the api-server
    KUBELET_API_SERVER="--api_servers=<master endpoint>:8080"
    
    # Add your own!
    KUBELET_ARGS=""
    

    We open the kubelet address to all interfaces and attach the master location.

  4. Then, we can start the services using systemctl commands. There is no dependency between kubelet and kube-proxy:
    // start services
    # systemctl start kubelet
    # systemctl start kube-proxy
    // enable the services so that they start automatically when the server boots up
    # systemctl enable kubelet
    # systemctl enable kube-proxy
    // check the status of services
    # systemctl status kubelet
    # systemctl status kube-proxy
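
As mentioned in step 3, you can fill in the master endpoint in both configuration files non-interactively. The following one-liner is only a sketch, using 192.168.0.1 as a hypothetical master IP:

// sketch: substitute the placeholder with your master's address (192.168.0.1 here is hypothetical)
# sed -i 's|<master endpoint>|192.168.0.1|g' /etc/kubernetes/config /etc/kubernetes/kubelet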
    

Other Linux options

  1. We can also download the latest Kubernetes binary files and write a customized service init script for the node configuration. The tarballs of the latest Kubernetes releases are published at https://github.com/kubernetes/kubernetes/releases:
    // download Kubernetes package
    # curl -L -O https://github.com/GoogleCloudPlatform/kubernetes/releases/download/v1.1.2/kubernetes.tar.gz
    
    // extract the tarball to a specific location; here we put it under /opt, so KUBE_HOME would be /opt/kubernetes
    # tar zxvf kubernetes.tar.gz -C /opt/
    
    // copy all the binary files to the system directory
    # cp /opt/kubernetes/server/bin/* /usr/local/bin/
    
  2. Next, a file named kubernetes-node is created under /etc/init.d with the following content:
    # cat /etc/init.d/kubernetes-node
    #!/bin/bash
    #
    # kubernetes    This shell script takes care of starting and stopping kubernetes
    
    # Source function library.
    . /etc/init.d/functions
    
    # Source networking configuration.
    . /etc/sysconfig/network
    
    prog=/usr/local/bin/hyperkube
    lockfile=/var/lock/subsys/`basename $prog`
    MASTER_SERVER="<master endpoint>"
    hostname=`hostname`
    logfile=/var/log/kubernetes.log
    
  3. Be sure to provide the master URL/IP for accessing the Kubernetes API server. If you install the node package on the master host as well, which makes the master also work as a node, the API server is reachable on the local host. In that case, you can put localhost or 127.0.0.1 at <master endpoint>:
    start() {
        # Start daemon.
        echo $"Starting kubelet: "
        daemon $prog kubelet \
            --api_servers=http://${MASTER_SERVER}:8080 \
            --v=2 \
            --address=0.0.0.0 \
            --enable_server \
            --hostname_override=${hostname} \
            > ${logfile}_kubelet 2>&1 &
    
        echo $"Starting proxy: "
        daemon $prog proxy \
            --master=http://${MASTER_SERVER}:8080 \
            --v=2 \
            > ${logfile}_proxy 2>&1 &
    
        RETVAL=$?
        [ $RETVAL -eq 0 ] && touch $lockfile
        return $RETVAL
    }
    stop() {
        [ "$EUID" != "0" ] && exit 4
        echo -n $"Shutting down $prog: "
        killproc $prog
        RETVAL=$?
        echo
        [ $RETVAL -eq 0 ] && rm -f $lockfile
        return $RETVAL
    }
  4. The following lines handle general daemon management; attach them to the script to get these functionalities:
    # See how we were called.
    case "$1" in
      start)
        start
        ;;
      stop)
        stop
        ;;
      status)
        status $prog
        ;;
      restart|force-reload)
        stop
        start
        ;;
      try-restart|condrestart)
        if status $prog > /dev/null; then
            stop
            start
        fi
        ;;
      reload)
        exit 3
        ;;
      *)
        echo $"Usage: $0 {start|stop|status|restart|try-restart|force-reload}"
        exit 2
    esac
  5. Now, you can start the service with the name of your init script:
    # service kubernetes-node start
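
Because the init script redirects each daemon's output to its own log file and also implements a status action, you can run a quick local check before the master-side verification in the next section:

// check the daemons locally and inspect their logs
# service kubernetes-node status
# tail /var/log/kubernetes.log_kubelet /var/log/kubernetes.log_proxy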
    

Verification

In order to check whether a node is well configured, the most straightforward way is to check it from the master side:

// run the following command on the master
# kubectl get nodes
NAME                               LABELS                                                    STATUS
ip-10-97-217-56.sdi.trendnet.org   kubernetes.io/hostname=ip-10-97-217-56.sdi.trendnet.org   Ready
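
If you run kubectl from a host other than the master, you can point it at the API server explicitly. This is only a sketch and assumes the API server still listens on the insecure port 8080:

// query the API server remotely; replace <master endpoint> with your master's URL/IP
# kubectl -s http://<master endpoint>:8080 get nodes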

See also

It is also recommended that you read the recipes about the architecture of the cluster and the system environment. Since a Kubernetes node is like a worker that receives tasks and listens to the other components, nodes should be built after the other components are in place. It is good to get more familiar with the whole system before building up nodes. Furthermore, you can also manage the resources in nodes. Please check the following recipes for more information:

  • Exploring architecture
  • Preparing your environment
  • The Setting resource in nodes recipe in Chapter 7, Advanced Cluster Administration