Chapter 3. Operators at the Kubernetes Interface

Operators extend two key Kubernetes concepts: resources and controllers. The Kubernetes API includes a mechanism, the CRD, for defining new resources. This chapter examines the Kubernetes objects Operators build on to add new capabilities to a cluster. It will help you understand how Operators fit into the Kubernetes architecture and why it is valuable to make an application a Kubernetes-native application.

Standard Scaling: The ReplicaSet Resource

Looking at a standard resource, the ReplicaSet, gives a sense of how resources comprise the application management database at the heart of Kubernetes. Like any other resource in the Kubernetes API, the ReplicaSet is a collection of API objects. The ReplicaSet primarily collects pod objects forming a list of the running replicas of an application. The specification of another object type defines the number of those replicas that should be maintained on the cluster. A third object spec points to a template for creating new pods when there are fewer running than desired. There are more objects collected in a ReplicaSet, but these three types define the basic state of a scalable set of pods running on the cluster. Here, we can see these three key pieces for the staticweb ReplicaSet from Chapter 1 (the Selector, Replicas, and Pod Template fields):

$ kubectl describe replicaset/staticweb-69ccd6d6c
Name:           staticweb-69ccd6d6c
Namespace:      default
Selector:       pod-template-hash=69ccd6d6c,run=staticweb
Labels:         pod-template-hash=69ccd6d6c
                run=staticweb
Controlled By:  Deployment/staticweb
Replicas:       1 current / 1 desired
Pods Status:    1 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  pod-template-hash=69ccd6d6c
           run=staticweb
  Containers:
   staticweb:
    Image:        nginx

A standard Kubernetes control plane component, the ReplicaSet controller, manages ReplicaSets and the pods belonging to them. The ReplicaSet controller continually monitors each ReplicaSet. When the count of running pods doesn’t match the desired number in the Replicas field, the ReplicaSet controller starts or stops pods to make the actual state match the desired state.

The actions the ReplicaSet controller takes are intentionally general and application agnostic. It starts new replicas according to the pod template, or deletes excess pods. It does not, should not, and truly cannot know the particulars of startup and shutdown sequences for every application that might run on a Kubernetes cluster.

An Operator is the application-specific combination of CRs and a custom controller that does know all the details about starting, scaling, recovering, and managing its application. The application, service, or set of resources an Operator manages is called its operand.

Custom Resources

CRs, as extensions of the Kubernetes API, contain one or more fields, like a native resource, but are not part of a default Kubernetes deployment. CRs hold structured data, and the API server provides a mechanism for reading and setting their fields as you would those in a native resource, by using kubectl or another API client. Users define a new kind of CR on a running cluster by providing a custom resource definition (CRD). A CRD is akin to a schema for a CR, defining the CR’s fields and the types of values those fields contain.
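
To make this concrete, here is a minimal sketch of a CRD for a hypothetical StaticSite resource; the group, kind, and schema fields below are illustrative placeholders, not part of any real Operator:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: staticsites.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    kind: StaticSite
    singular: staticsite
    plural: staticsites
  versions:
  - name: v1alpha1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              replicas:
                type: integer
          status:
            type: object
            properties:
              readyReplicas:
                type: integer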

CR or ConfigMap?

Kubernetes provides a standard resource, the ConfigMap, for making configuration data available to applications. ConfigMaps seem to overlap with the possible uses for CRs, but the two abstractions target different cases.

ConfigMaps are best at providing a configuration to a program running in a pod on the cluster—think of an application’s config file, like httpd.conf or MySQL’s mysql.cnf. Applications usually want to read such configuration from within their pod, as a file or the value of an environment variable, rather than from the Kubernetes API.
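
As a quick illustration, a ConfigMap carrying a configuration file can be projected into a pod as a file; the names and mount path below are illustrative:

apiVersion: v1
kind: ConfigMap
metadata:
  name: webserver-config
data:
  httpd.conf: |
    ServerName www.example.com
---
apiVersion: v1
kind: Pod
metadata:
  name: webserver
spec:
  containers:
  - name: httpd
    image: httpd
    volumeMounts:
    - name: config
      mountPath: /usr/local/apache2/conf/httpd.conf
      subPath: httpd.conf
  volumes:
  - name: config
    configMap:
      name: webserver-config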

Kubernetes provides CRs to represent new collections of objects in the API. CRs are created and accessed by standard Kubernetes clients, like kubectl, and they obey Kubernetes conventions, like a resource’s .spec and .status fields. At their most useful, CRs are watched by custom controller code that in turn creates, updates, or deletes other cluster objects or even arbitrary resources outside of the cluster.
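
An instance of the hypothetical StaticSite CR sketched earlier follows those same conventions: a user declares intent in .spec, and the Operator’s controller reports observed state in .status. The readyReplicas field here is, again, only illustrative:

apiVersion: example.com/v1alpha1
kind: StaticSite
metadata:
  name: sample-site
spec:
  replicas: 3
status:
  readyReplicas: 3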

Custom Controllers

CRs are entries in the Kubernetes API database. They can be created, accessed, updated, and deleted with common kubectl commands—but a CR alone is merely a collection of data. To provide a declarative API for a specific application running on a cluster, you also need active code that captures the processes of managing that application.

We’ve looked at a standard Kubernetes controller, the ReplicaSet controller. To make an Operator, providing an API for the active management of an application, you build an instance of the Controller pattern to control your application. This custom controller checks and maintains the application’s desired state, represented in the CR. Every Operator has one or more custom controllers implementing its application-specific management logic.

Operator Scopes

A Kubernetes cluster is divided into namespaces. A namespace is the boundary for cluster object and resource names. Names must be unique within a single namespace, but not between namespaces. This makes it easier for multiple users or teams to share a single cluster. Resource limits and access controls can be applied per namespace. An Operator, in turn, can be limited to a namespace, or it can maintain its operand across an entire cluster.
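
Namespaces are themselves ordinary API objects, created and inspected with standard kubectl commands:

$ kubectl create namespace test
$ kubectl get pods --namespace test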

Note

For details about Kubernetes namespaces, see the Kubernetes namespace documentation.

Namespace Scope

Usually, restricting your Operator to a single namespace makes sense and is more flexible for clusters used by multiple teams. An Operator scoped to a namespace can be upgraded independently of other instances, and this allows for some handy facilities. You can test upgrades in a testing namespace, for example, or serve older API or application versions from different namespaces for compatibility.

Cluster-Scoped Operators

There are some situations where it is desirable for an Operator to watch and manage an application or services throughout a cluster. For example, an Operator that manages a service mesh, such as Istio, or one that issues TLS certificates for application endpoints, like cert-manager, might be most effective when watching and acting on cluster-wide state.

By default, the Operator SDK used in this book creates deployment and authorization templates that limit the Operator to a single namespace. It is possible to change most Operators to run in the cluster scope instead. Doing so requires changes in the Operator’s manifests to specify that it should watch all namespaces in a cluster and that it should run under the auspices of a ClusterRole and ClusterRoleBinding, rather than the namespaced Role and RoleBinding authorization objects. In the next section we give an overview of these concepts.
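
What these manifest changes look like depends on the Operator SDK version, but as a rough sketch, the Operator’s Deployment typically selects the namespaces to watch through an environment variable, with an empty value conventionally meaning all namespaces; the names below are illustrative:

spec:
  containers:
  - name: example-operator
    image: example/example-operator:latest
    env:
    - name: WATCH_NAMESPACE
      value: ""  # an empty value is the conventional signal to watch every namespace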

Authorization

Authorization—the power to do things on the cluster through the API—is defined in Kubernetes by one of a few available access control systems. Role-Based Access Control (RBAC) is the preferred and most tightly integrated of these. RBAC regulates access to system resources according to the role a system user performs. A role is a set of capabilities to take certain actions on particular API resources, such as create, read, update, or delete. The capabilities described by a role are granted, or bound, to a user by a RoleBinding.

Service Accounts

In Kubernetes, regular human user accounts aren’t managed by the cluster, and there are no API resources representing them. The user identifying you on the cluster comes from some outside provider, which can be anything from a list of users in a text file to an OpenID Connect (OIDC) provider proxying authentication through your Google account.

Note

See the “Users in Kubernetes” documentation for more about Kubernetes users and service accounts.

Service accounts, on the other hand, are managed by Kubernetes and can be created and manipulated through the Kubernetes API. A service account is a special type of cluster user for authorizing programs instead of people. An Operator is a program that uses the Kubernetes API, and most Operators should derive their access rights from a service account. Creating a service account is a standard step in deploying an Operator. The service account identifies the Operator, and the account’s role denotes the powers granted to the Operator.
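
A service account is declared with a small manifest, or with kubectl create serviceaccount; the account name below is illustrative:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: example-operator
  namespace: default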

Roles

Kubernetes RBAC denies permissions by default, so a role defines granted rights. A common “Hello World” example of a Kubernetes role looks something like this YAML excerpt:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]  1
  verbs: ["get", "watch", "list"]  2
1

The powers granted by this role are effective only on pods.

2

This list permits specific operations on the allowed resources. The verbs comprising read-only access to pods are available to accounts bound with this role.

RoleBindings

A RoleBinding ties a role to a list of one or more subjects: users, groups, or service accounts. Those subjects are granted the permissions defined in the role referenced in the binding. A RoleBinding can reference only roles in its own namespace. When deploying an Operator restricted to a namespace, a RoleBinding binds an appropriate Role to a service account identifying the Operator.
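
Continuing the pod-reader example, a RoleBinding that grants that Role to the illustrative Operator service account from earlier might look like this:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: ServiceAccount
  name: example-operator
  namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io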

ClusterRoles and ClusterRoleBindings

As discussed earlier, most Operators are confined to a namespace. Roles and RoleBindings are also restricted to a namespace. ClusterRoles and ClusterRoleBindings are their cluster-wide equivalents. A standard, namespaced RoleBinding can reference only Roles in its own namespace, or ClusterRoles defined for the entire cluster. When a RoleBinding references a ClusterRole, the rules declared in the ClusterRole apply only to the specified resources in the binding’s own namespace. In this way, a set of common roles can be defined once as ClusterRoles, but reused and granted to users in just a given namespace.

The ClusterRoleBinding grants capabilities to a user across the entire cluster, in all namespaces. Operators charged with cluster-wide responsibilities will often tie a ClusterRole to an Operator service account with a ClusterRoleBinding.
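
A ClusterRoleBinding has the same shape as a RoleBinding, minus the namespace in its metadata; a sketch granting an illustrative ClusterRole to the same service account across the cluster looks like this:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: example-operator-cluster-access
subjects:
- kind: ServiceAccount
  name: example-operator
  namespace: default
roleRef:
  kind: ClusterRole
  name: example-operator-cluster-role
  apiGroup: rbac.authorization.k8s.io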

Summary

Operators are Kubernetes extensions. We’ve outlined the Kubernetes components used to construct an Operator that knows how to manage the application in its charge. Because Operators build on core Kubernetes concepts, they can make applications meaningfully “Kubernetes native.” Aware of their environment, such applications are able to leverage not just the existing features but the design patterns of the platform in order to be more reliable and less needy. Because Operators politely extend Kubernetes, they can even manage parts and procedures of the platform itself, as seen throughout Red Hat’s OpenShift Kubernetes distribution.
