Real, production-level applications are difficult. Container-based architectures are often made up of multiple services, each with its own configuration and installation process. Maintaining these types of applications, including the individual components and their interactions, is time-consuming and error-prone. Operators are designed to reduce the difficulty of this process.
A simple, one-container “Hello World” application isn’t going to provide enough complexity to fully demonstrate what Operators can do. To really help you understand the capabilities of Operators, we need an example application that requires multiple Kubernetes resources, with configuration values that cut across them.
In this chapter we introduce the Visitors Site application, which we will use as an example in the following chapters that cover writing Operators. We’ll take a look at the application architecture and how to run the site, as well as the process of installing it through traditional Kubernetes manifests. In the chapters that follow, we’ll create Operators to deploy this application using each of the approaches provided by the Operator SDK (Helm, Ansible, and Go), and explore the benefits and drawbacks of each.
The Visitors Site tracks information about each request to its home page. Each time the page is refreshed, an entry is stored with details about the client, backend server, and timestamp. The home page displays a list of the most recent visits (as shown in Figure 5-1).
While the home page itself is fairly simple, the architecture is what makes this an interesting example for exploring Operators. The Visitors Site is a traditional, three-tier application, consisting of:
A web frontend, implemented in React
A REST API, implemented in Python using the Django framework
A database, using MySQL
As shown in Figure 5-2, each of these components is deployed as a separate container. The flow is simple, with users interacting with the web interface, which itself makes calls to the backend REST API. The data submitted to the REST API is persisted in a MySQL database, which also runs as its own container.
Note that the database does not connect to a persistent volume and stores its data ephemerally. While this isn’t a suitable production solution, for the purposes of this example the important aspects are the deployments and interactions between the containers themselves.
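For reference, persisting the data would mean adding a PersistentVolumeClaim and mounting it at MySQL’s data directory. A minimal sketch of such a claim follows; the claim name and storage size are illustrative and not part of the example application:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data        # illustrative name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi        # illustrative size
```

The MySQL deployment would then reference this claim as a volume and mount it at /var/lib/mysql. We deliberately skip this step here to keep the focus on the deployments and their interactions.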
Each component in the Visitors Site requires two Kubernetes resources:

Deployment
Contains the information needed to create the containers, including the image name, exposed ports, and specific configuration for a single deployment.

Service
A network abstraction across all containers in a deployment. If a deployment is scaled up beyond one container, which we will do with the backend, the service sits in front and balances incoming requests across all of the replicas.
A third resource, a Secret, stores the authentication details for the database. The MySQL container uses this Secret when it starts, and the backend containers use it to authenticate against the database when making requests.
Additionally, there are configuration values that must be consistent between components. For example, the backend needs to know the name of the database service to connect to. When deploying applications through manifests, awareness of these relationships is required to ensure that the values line up.
In the following manifests, the provided values will produce a working Visitors Site deployment. Each section will highlight specific instances where user intervention was required.
You can find all of the manifests in the book’s GitHub repository.
The secret must be created before the database is deployed, since it is used during the container startup:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysql-auth
type: Opaque
stringData:
  username: visitors-user
  password: visitors-pass
```
When the database and backend deployments use the secret, it is referred to by this name.
For simplicity in this example, the username and password are defaulted to testing values.
You can find the definition for the secret resource in the database.yaml file in this book’s GitHub repository.
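If you prefer not to keep credentials in a manifest file, an equivalent Secret can be created imperatively. The following command produces the same resource, assuming the same literal values:

```console
$ kubectl create secret generic mysql-auth \
    --from-literal=username=visitors-user \
    --from-literal=password=visitors-pass
secret/mysql-auth created
```

Either way, the Secret must exist under the name mysql-auth before the database deployment is applied.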
Once the secret is in place, use the following manifest to deploy a MySQL instance into Kubernetes:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: visitors
      tier: mysql
  template:
    metadata:
      labels:
        app: visitors
        tier: mysql
    spec:
      containers:
      - name: visitors-mysql
        image: "mysql:5.7"
        imagePullPolicy: Always
        ports:
        - name: mysql
          containerPort: 3306
          protocol: TCP
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
        - name: MYSQL_DATABASE
          value: visitors_db
        - name: MYSQL_USER
          valueFrom:
            secretKeyRef:
              name: mysql-auth
              key: username
        - name: MYSQL_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-auth
              key: password
```
The deployment name must be unique to the namespace in which it is deployed.
The deployment requires the details of the image to deploy, including its name and hosting repository.
Users must be aware of each port that the image exposes, and must explicitly reference them.
The values used to configure the containers for this specific deployment are passed as environment variables.
The secret provides the values for the database authentication credentials.
Keep in mind the value of the container port, as well as each of the environment variables, as other manifests use these values.
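Once the deployment is applied, you can read these values back to confirm them, for example with kubectl’s JSONPath output (the query below assumes the manifest above was applied unchanged):

```console
$ kubectl get deployment mysql \
    -o jsonpath='{.spec.template.spec.containers[0].ports[0].containerPort}'
3306
```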
The deployment causes the creation of the MySQL container; however, it does not provide any ingress configuration on how to access it. For that, we will need a service. The following manifest will create a Kubernetes service that provides access to the MySQL deployment:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
  labels:
    app: visitors
    tier: mysql
spec:
  clusterIP: None
  ports:
  - port: 3306
  selector:
    app: visitors
    tier: mysql
```
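Note that setting clusterIP to None makes this a headless service: Kubernetes does not allocate a virtual IP, and the DNS name mysql-service resolves directly to the pod’s address. You can check that the name resolves from inside the cluster with a throwaway pod (the busybox image is just a convenient example):

```console
$ kubectl run -it --rm dns-test --image=busybox --restart=Never \
    -- nslookup mysql-service
```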
Similar to the MySQL resources, the backend needs both a deployment and a service. However, whereas the database is standalone, the configuration for the backend relies heavily on the values set for the database. While this isn’t an unreasonable requirement, it falls on the user to ensure that the values are consistent across both resources. A single error could result in the backend not being able to communicate with the database. Here’s the manifest to deploy the backend:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: visitors-backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: visitors
      tier: backend
  template:
    metadata:
      labels:
        app: visitors
        tier: backend
    spec:
      containers:
      - name: visitors-backend
        image: "jdob/visitors-service:1.0.0"
        imagePullPolicy: Always
        ports:
        - name: visitors
          containerPort: 8000
        env:
        - name: MYSQL_DATABASE
          value: visitors_db
        - name: MYSQL_SERVICE_HOST
          value: mysql-service
        - name: MYSQL_USERNAME
          valueFrom:
            secretKeyRef:
              name: mysql-auth
              key: username
        - name: MYSQL_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-auth
              key: password
```
Each deployment configuration includes the number of containers it should spawn.
These values must be manually checked to ensure they match up with the values set on the MySQL deployment. Otherwise, the backend will not be able to establish a connection to the database.
This value tells the backend where to find the database and must match the name of the MySQL service created previously.
As with the database deployment, the secret provides the authentication credentials for the database.
One of the major benefits of using containerized applications is the ability they give you to individually scale specific components. In the backend deployment shown here, the `replicas` field can be modified to scale the backend. The example Operators in the following chapters use a custom resource to expose this replica count as a first-class configuration value of the Visitors Site custom resource. Users do not need to manually navigate to the specific backend deployment as they do when using manifests. The Operator knows how to appropriately use the entered value.
The service manifest looks similar to the one you created for the database:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: visitors-backend-service
  labels:
    app: visitors
    tier: backend
spec:
  type: NodePort
  ports:
  - port: 8000
    targetPort: 8000
    nodePort: 30685
    protocol: TCP
  selector:
    app: visitors
    tier: backend
```
As with the database service, the port referenced in the service definition must match up with that exposed by the deployment.
In this example, the backend is configured to run through port 30685 on the same IP as Minikube. The frontend uses this port when making backend calls for data. For simplicity, the frontend defaults to using this value, so it does not need to be specified when the frontend is deployed.
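Once the backend is deployed, you can check that it is reachable on that port by combining it with the Minikube IP. The exact response body depends on the backend’s API, so none is shown here:

```console
$ curl http://$(minikube ip):30685/
```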
The frontend is in a similar position as the backend in the sense that it needs configuration that is consistent with the backend deployment. Once again, it falls on the user to manually verify that these values are consistent in both locations. Here’s the manifest that creates the frontend deployment:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: visitors-frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: visitors
      tier: frontend
  template:
    metadata:
      labels:
        app: visitors
        tier: frontend
    spec:
      containers:
      - name: visitors-frontend
        image: "jdob/visitors-webui:1.0.0"
        imagePullPolicy: Always
        ports:
        - name: visitors
          containerPort: 3000
        env:
        - name: REACT_APP_TITLE
          value: "Visitors Dashboard"
```
To make the Visitors Site application more interesting, you can override the home page title through an environment variable. The CR you’ll learn how to create in the next chapters will expose it as a value of the Visitors Site, shielding end users from having to know in which deployment to specify the value.
Similar to the MySQL and backend deployments, the following manifest creates a service that provides access to the frontend deployment:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: visitors-frontend-service
  labels:
    app: visitors
    tier: frontend
spec:
  type: NodePort
  ports:
  - port: 3000
    targetPort: 3000
    nodePort: 30686
    protocol: TCP
  selector:
    app: visitors
    tier: frontend
```
You can run the Visitors Site for yourself using the `kubectl` command:

```console
$ kubectl apply -f ch05/database.yaml
secret/mysql-auth created
deployment.apps/mysql created
service/mysql-service created

$ kubectl apply -f ch05/backend.yaml
deployment.apps/visitors-backend created
service/visitors-backend-service created

$ kubectl apply -f ch05/frontend.yaml
deployment.apps/visitors-frontend created
service/visitors-frontend-service created
```
Using these manifests, you can find the home page by using the IP address of the Minikube instance and specifying port 30686 in your browser. The `minikube` command provides the IP address to access:

```console
$ minikube ip
192.168.99.100
```
For this Minikube instance, you can access the Visitors Site by opening a browser and going to http://192.168.99.100:30686.
Clicking refresh a few times will populate the table on that page with details of the internal cluster IP and the timestamp of each request, as previously shown in Figure 5-1.
Similar to deploying the manifests, you delete the created resources using the `kubectl` command:

```console
$ kubectl delete -f ch05/frontend.yaml
deployment.apps "visitors-frontend" deleted
service "visitors-frontend-service" deleted

$ kubectl delete -f ch05/backend.yaml
deployment.apps "visitors-backend" deleted
service "visitors-backend-service" deleted

$ kubectl delete -f ch05/database.yaml
secret "mysql-auth" deleted
deployment.apps "mysql" deleted
service "mysql-service" deleted
```
We will use this sample application in the following chapters to demonstrate a variety of technologies on which you can build Operators.
In addition to the Operator implementations, keep in mind the end user experience. In this chapter we demonstrated a manifest-based installation, requiring a number of manual changes and internal references to be made. All of the following Operator implementations create a custom resource definition that acts as the sole API for creating and configuring an instance of the Visitors Site.
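To preview where the following chapters are headed: instead of the seven resources above, an end user will create a single custom resource. A sketch of what such a resource might look like follows; the API group, version, and field names here are illustrative, and each chapter defines its own:

```yaml
apiVersion: example.com/v1
kind: VisitorsApp
metadata:
  name: ex
spec:
  size: 2                    # backend replica count
  title: "My Visitors Site"  # overrides the frontend home page title
```

The Operator watches for resources of this kind and translates them into the deployments, services, and secret shown in this chapter, keeping the cross-resource values consistent on the user’s behalf.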