Kubernetes is a container cluster management tool. Currently, it supports Docker and rkt (originally named Rocket). It is an open source project backed by Google, launched in June 2014 at Google I/O. It supports deployment on various cloud providers, such as GCE, Azure, AWS, and vSphere, as well as on bare metal. Kubernetes is lean, portable, extensible, and self-healing.
Kubernetes has various important components, as explained in the following list:
etcd: A distributed key-value store that holds the cluster state and configuration.
API server: The frontend of the master, through which all other components and clients interact via a REST API.
Scheduler: Assigns newly created pods to suitable minions.
Controller manager: Runs control loops such as the replication controller, which keeps the desired number of pod replicas running.
Kubelet: The agent running on each minion, which starts and manages pods and their containers.
Kube-proxy: Runs on each minion and provides the service abstraction by proxying traffic to the correct backend pods.
The following diagram shows the Kubernetes Master/Minion flow:
Let's get started with Kubernetes cluster deployment on AWS, which can be done by using the config file that already exists in the Kubernetes codebase:
The credentials .csv file contains an Access Key ID and a Secret Access Key, which will be used to configure the AWS CLI:
$ sudo pip install awscli
$ aws configure
AWS Access Key ID [None]: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: YYYYYYYYYYYYYYYYYYYYYYYYYYYY
Default region name [None]: us-east-1
Default output format [None]: text
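Under the hood, aws configure writes these values into two plain-text files in the user's home directory. A sketch of what they end up looking like, using the placeholder keys from above:

```ini
# ~/.aws/credentials -- written by `aws configure`
[default]
aws_access_key_id = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
aws_secret_access_key = YYYYYYYYYYYYYYYYYYYYYYYYYYYY

# ~/.aws/config -- the region and output format live here
[default]
region = us-east-1
output = text
```

Named profiles (such as the Kube profile exported later) appear as additional `[profile-name]` sections in the same files.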
$ aws iam create-instance-profile --instance-profile-name Kube
$ aws iam create-role --role-name Test-Role --assume-role-policy-document /root/kubernetes/Test-Role-Trust-Policy.json
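The contents of Test-Role-Trust-Policy.json are not shown in this section; a typical EC2 trust policy of this shape (the exact file contents here are an assumption) would look as follows:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

This trust policy lets EC2 instances assume the role, which is what allows the role to be used through an instance profile.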
A role that has complete access to EC2 and S3 can be attached to the preceding profile, as shown in the following screenshot:
$ aws iam add-role-to-instance-profile --role-name Test-Role --instance-profile-name Kube
$ export AWS_DEFAULT_PROFILE=Kube
$ export KUBERNETES_PROVIDER=aws; wget -q -O - https://get.k8s.io | bash
Downloading kubernetes release v1.1.1 to /home/vkohli/kubernetes.tar.gz
--2015-11-22 10:39:18--  https://storage.googleapis.com/kubernetes-release/release/v1.1.1/kubernetes.tar.gz
Resolving storage.googleapis.com (storage.googleapis.com)... 216.58.220.48, 2404:6800:4007:805::2010
Connecting to storage.googleapis.com (storage.googleapis.com)|216.58.220.48|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 191385739 (183M) [application/x-tar]
Saving to: 'kubernetes.tar.gz'
100%[======================================>] 191,385,739 1002KB/s   in 3m 7s
2015-11-22 10:42:25 (1002 KB/s) - 'kubernetes.tar.gz' saved [191385739/191385739]
Unpacking kubernetes release v1.1.1
Creating a kubernetes on aws...
... Starting cluster using provider: aws
... calling verify-prereqs
... calling kube-up
Starting cluster using os distro: vivid
Uploading to Amazon S3
Creating kubernetes-staging-e458a611546dc9dc0f2a2ff2322e724a
make_bucket: s3://kubernetes-staging-e458a611546dc9dc0f2a2ff2322e724a/
+++ Staging server tars to S3 Storage: kubernetes-staging-e458a611546dc9dc0f2a2ff2322e724a/devel
upload: ../../../tmp/kubernetes.6B8Fmm/s3/kubernetes-salt.tar.gz to s3://kubernetes-staging-e458a611546dc9dc0f2a2ff2322e724a/devel/kubernetes-salt.tar.gz
Completed 1 of 19 part(s) with 1 file(s) remaining
The preceding command calls kube-up.sh and, in turn, utils.sh, using the config-default.sh script, which contains the basic configuration of a K8S cluster with four nodes, as follows:
ZONE=${KUBE_AWS_ZONE:-us-west-2a}
MASTER_SIZE=${MASTER_SIZE:-t2.micro}
MINION_SIZE=${MINION_SIZE:-t2.micro}
NUM_MINIONS=${NUM_MINIONS:-4}
AWS_S3_REGION=${AWS_S3_REGION:-us-east-1}
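These settings use bash's ${VAR:-default} parameter expansion, so any of them can be overridden simply by exporting the variable before running the script. A minimal sketch of how that pattern behaves:

```shell
# ${VAR:-default} falls back to the default only when the
# variable is unset or empty; an exported value takes priority.
unset NUM_MINIONS
echo "${NUM_MINIONS:-4}"    # prints 4: variable unset, default used

NUM_MINIONS=2
echo "${NUM_MINIONS:-4}"    # prints 2: the exported value wins
```

For example, exporting NUM_MINIONS=2 before invoking the installer would deploy a two-minion cluster instead of the default four.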
By default, the cluster deploys instances of the t2.micro type running Ubuntu OS. The process takes 5 to 10 minutes, after which the IP addresses of the master and minions are listed and can be used to access the Kubernetes cluster.
Kubernetes deviates from the default Docker networking model. The objective is for each pod to have an IP address in a flat networking namespace shared across the cluster, with full communication with other physical machines and containers over the network. Allocating an IP address per pod creates a clean, backward-compatible model in which pods can be treated much like VMs or physical hosts from the point of view of port allocation, networking, naming, service discovery, load balancing, application configuration, and migration of pods from one host to another. All containers in all pods can talk to all other containers in all other pods using their addresses. This also helps move traditional applications to a container-oriented approach.
As every pod gets a real IP address, pods can communicate with each other without any need for translation. By using the same configuration of IP addresses and ports both inside and outside the pod, we can create a NAT-less flat address space. This differs from the standard Docker model, where every container gets a private IP address that only allows it to reach containers on the same host. In Kubernetes, all the containers inside a pod behave as if they are on the same host and can reach each other's ports on localhost. This reduced isolation between containers provides simplicity, security, and performance. Port conflicts are one disadvantage of this approach: two different containers inside one pod cannot use the same port.
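To illustrate the shared network namespace, consider a hypothetical pod definition with two containers (the pod name and the pairing of images here are placeholders, not from this chapter). The two containers can reach each other on localhost, but must listen on different ports:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-cache        # hypothetical example pod
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80       # nginx can reach redis at localhost:6379
  - name: redis
    image: redis
    ports:
    - containerPort: 6379     # must differ from 80; the same port would conflict
```

Both containers share one pod IP, so from the rest of the cluster this pod is addressed as a single host.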
In GCE, using IP forwarding and advanced routing rules, each VM in a Kubernetes cluster gets an extra 256 IP addresses in order to route traffic across pods easily.
Routes in GCE allow you to implement more advanced networking functions in the VMs, such as setting up many-to-one NAT. This is leveraged by Kubernetes.
This is in addition to the main network interface of the VM; the extra bridge is termed the container bridge, cbr0, in order to differentiate it from the Docker bridge, docker0. To transfer packets from a pod out of the GCE environment, they undergo SNAT to the VM's IP address, which GCE recognizes and allows.
Other implementations with the primary aim of providing an IP-per-pod model are Open vSwitch, Flannel, and Weave.
In the case of a GCE-like setup of an Open vSwitch bridge for Kubernetes, the same model is followed: the Docker bridge is replaced by kbr0, which provides the extra 256 subnet addresses. In addition, an OVS bridge (ovs0) is added; it attaches a port to the Kubernetes bridge in order to provide GRE tunnels that transfer packets across the different minions and connect pods residing on those hosts. The IP-per-pod model is elaborated further in the upcoming diagram, where the service abstraction concept of Kubernetes is also explained.
A service is another type of abstraction that is widely used and recommended in Kubernetes clusters, as it allows a group of pods (applications) to be accessed via a virtual IP address, with requests proxied to one of the pods backing the service. An application deployed in Kubernetes could be using three replicas of the same pod, each with a different IP address. However, the client can still access the application on the single IP address that is exposed outside, irrespective of which backend pod takes the request. A service acts as a load balancer between the different replica pods and as a single point of communication for clients using the application. kube-proxy, one of the components of Kubernetes, provides this load balancing and uses rules to redirect traffic destined for service IPs to the correct backend pod.
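As a toy illustration of the load-balancing idea only (this is not how kube-proxy is actually implemented, and the backend pod IPs are made up), spreading requests round-robin over replica IPs can be sketched as:

```shell
# Toy round-robin over hypothetical backend pod IPs
backends=("10.244.1.5" "10.244.2.7" "10.244.3.9")

pick_backend() {
  # Request number in, backend IP out: cycle through the replicas
  local req=$1
  echo "${backends[$(( req % ${#backends[@]} ))]}"
}

pick_backend 0   # 10.244.1.5
pick_backend 1   # 10.244.2.7
pick_backend 3   # 10.244.1.5 (wraps around)
```

The client only ever sees the single service IP; which replica answers each request is an internal detail, exactly as in the nginx example that follows.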
Now, in the following example, we will deploy two nginx replication pods (rc-pod) and expose them via a service in order to understand Kubernetes networking. The service proxy takes care of deciding on which virtual IP address the application is exposed and to which replica of the pod (load balancing) each request is proxied. Please refer to the following diagram for more details:
The following are the steps to deploy the Kubernetes pod:
$ mkdir nginx_kube_example
$ cd nginx_kube_example
Create a .yaml file that will be used to deploy the nginx pods:
$ vi nginx_pod.yaml
Copy the following into the file:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
Create the pod using kubectl:
$ kubectl create -f nginx_pod.yaml
$ kubectl get pods
The following is the output generated:
NAME          READY   REASON    RESTARTS   AGE
nginx-karne   1/1     Running   0          14s
nginx-mo5ug   1/1     Running   0          14s
To list replication controllers on a cluster, use the kubectl get
command:
$ kubectl get rc
The following is the output generated:
CONTROLLER   CONTAINER(S)   IMAGE(S)   SELECTOR    REPLICAS
nginx        nginx          nginx      app=nginx   2
$ docker ps
The following is the output generated:
CONTAINER ID   IMAGE          COMMAND                CREATED          STATUS          PORTS   NAMES
1d3f9cedff1d   nginx:latest   "nginx -g 'daemon of   41 seconds ago   Up 40 seconds           k8s_nginx.6171169d_nginx-karne_default_5d5bc813-3166-11e5-8256-ecf4bb2bbd90_886ddf56
0b2b03b05a8d   nginx:latest   "nginx -g 'daemon of   41 seconds ago   Up 40 seconds
Create a .yaml file in order to expose the nginx pod on host port 82:
$ vi nginx_service.yaml
Copy the following into the file:
apiVersion: v1
kind: Service
metadata:
  labels:
    name: nginxservice
  name: nginxservice
spec:
  ports:
  # The port that this service should serve on.
  - port: 82
  # Label keys and values that must match in order
  # to receive traffic for this service.
  selector:
    app: nginx
  type: LoadBalancer
Create the service using the kubectl create command:
$ kubectl create -f nginx_service.yaml
services/nginxservice
$ kubectl get services
The following is the output generated:
NAME           LABELS                                    SELECTOR    IP(S)          PORT(S)
kubernetes     component=apiserver,provider=kubernetes   <none>      192.168.3.1    443/TCP
nginxservice   name=nginxservice                         app=nginx   192.168.3.43   82/TCP
The nginx application can now be accessed via the service at http://192.168.3.43:82.