Hybrid cloud overview and introduction to containers and microservices concepts
In this chapter, we introduce some of the basic concepts of hybrid multi-cloud workloads, especially how IBM Z and IBM LinuxONE play a crucial role in such environments. The basic concepts of microservices and containers and their operational and management frameworks are also discussed.
We also discuss the adoption path of containerization and how it plays a vital role in a hybrid multi-cloud strategy.
This chapter includes the following topics:
1.1, “Executive summary”
1.2, “Architectures and tools that are used in this book”
1.3, “Red Hat OpenShift architecture”
1.4, “Red Hat OpenShift Container Platform introduction”
1.5, “Use cases overview”
1.1 Executive summary
The evolution of the IT environment continues with the transformation of organizations across the globe. This evolution is moving toward a rapid adoption of software solutions in hybrid and multi-cloud environments to achieve competitive advantages and ensure customer satisfaction. 
The application landscape gains flexibility by adopting hybrid application development and the containerization of microservices, and by integrating them with traditional applications, with a focus on the requirements of secured multi-cloud service landscapes.
Many applications are deployed in a private cloud because of security and compliance aspects, data affinity, and performance requirements. The IT organizations that are responsible for operating the private cloud value simplicity, agility, flexibility, security, and cost efficiency. These features reduce their own barriers to innovation as part of their overall hybrid strategy. A significant focus is on IT optimization toward a better use and operation of dynamic transactional workloads and the use of statistical and AI capabilities for the hybrid data.
The IBM Z and IBM LinuxONE platforms are suited for these hybrid multi-cloud environments. They host cloud services that can take advantage of the capability of nondisruptive vertical and horizontal scalability on-demand on the most securable platform by inheriting the reliability, stability, and availability of the mainframe. In colocation with traditional workloads, such as IBM z/OS® data or services, they can enable data gravity in secure computing environments and fulfill the business and IT requirements of today and tomorrow.
IBM positions Red Hat OpenShift Container Platform as the foundational Platform as a Service (PaaS) technology for providing IT as a Service (ITaaS). It enables the rapid provisioning and lifecycle management of containerized applications and associated application and infrastructure services for cloud users, such as software developers, data scientists, and solution architects.
Red Hat OpenShift Container Platform is the only solution that runs across multiple hardware architectures, such as IBM Z, IBM Power, x86, and Arm. It can run as a private cloud implementation or be combined with public clouds, such as Microsoft Azure. It implements the hybrid capability of running mission-critical applications where they fit best and run most effectively.
Red Hat OpenShift Container Platform provides the following key characteristics:
Implements an enterprise Kubernetes platform for container workloads
Enables seamless Kubernetes deployments on any cloud or on-premises environments
Features an integrated and automated installation
Performs seamless platform and application updates
Auto-scales resources and services
Integrates tools for a consistent development experience
Runs enterprise workloads with enterprise CI/CD services, across multiple deployments
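As an illustration of the auto-scaling characteristic, Kubernetes expresses scaling policy as a HorizontalPodAutoscaler object. The following sketch assumes that a Deployment named frontend exists; the name, replica range, and CPU threshold are illustrative examples only:

```yaml
# Hypothetical example: scale the "frontend" Deployment between 2 and 10
# replicas, targeting 75% average CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: frontend-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: frontend
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 75
```

The autoscaler adds or removes pod replicas as observed CPU use crosses the target, which is one way the platform auto-scales resources and services without operator intervention.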
With the capability to run Red Hat OpenShift Container Platform on IBM Z and IBM LinuxONE, you can take full advantage of the following platform capabilities and characteristics:
Nondisruptive growth with vertical and horizontal scalability that helps to accommodate substantial increases of workload on demand and unpredictable peak requests.
Highest scalability for millions of containers and thousands of Linux guests in one physical machine. For more information, see this IBM Newsroom web page.
Containerized workload within a state-of-the-art microservices architecture, while accessing existing transactional z/OS or Linux on IBM Z services and databases.
Highest security and confidential cloud computing, based on the FIPS 140-2 Level 4 certification of the IBM Z cryptographic accelerators. For more information, see this web page.
Multi-tenancy with full LPAR isolation (EAL5+) allows administrators to securely share a single hardware system. Even virtual machines (VMs) on the same hardware offer EAL4 certification.
1.1.1 Why you should use Red Hat OpenShift Container Platform
Uses of Red Hat OpenShift environment on IBM Z and LinuxONE span multiple industries and show the advantages of deploying a private cloud solution with programmable Infrastructure as a Service (IaaS), Containers as a Service (CaaS), and PaaS capabilities.
Major uses include the following examples:
Data gravity
A key way in which Red Hat OpenShift Container Platform can take advantage of the IBM Z and IBM LinuxONE platforms is the colocation of containerized applications with traditional workloads, such as databases, transactional systems, or other traditional workloads that run in Linux on IBM Z or z/OS. In such a scenario, the applications can be placed close to the data to optimize latency, response time, deployment, security, service, and cost.
In recent projects, clients experienced a double digit factor improvement for colocated Red Hat OpenShift environments on IBM Z versus public clouds or distributed environments. The reason is the short communication path to traditional data and services because of the advantage of internal networks in IBM Z, accelerated secure requests, and the effective virtualization for a demanded scalability in the application and service responses. 
Consistent development experience
An effective development and deployment capability is highly valued in enterprises. Red Hat OpenShift Container Platform provides it, along with the ability to automatically deploy containers on multiple platforms simultaneously.
The software components and Red Hat OpenShift Container Platform Add-ons enable a consistent development experience across multiple hardware architectures and platforms. Therefore, the development can be done by using the following Red Hat OpenShift Container Platform internal tools and components that make it easier to build the containers for a solution:
 – Red Hat OpenShift do (odo) is a command-line tool that enables coding, building, and debugging applications.
 – Red Hat OpenShift CodeReady Workspaces provides an integrated development environment with a graphical front end that runs in a browser.
 – Visual Studio Code with the odo plug-in enables development for Red Hat OpenShift within the editor.
In addition to these development tools, you can take advantage of Red Hat OpenShift ServiceMesh to secure communications between your microservices, without changing the code.
Another useful add-on is Red Hat OpenShift Serverless, which allows you to take advantage of serverless concepts for auto-scaling and an economical use of an application's resources.
To automate the process for Continuous Integration (CI) and Continuous Delivery (CD), Red Hat OpenShift Pipelines that are based on the Tekton technology can be used.
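As a sketch of what a Tekton-based pipeline looks like, the following hypothetical Pipeline clones a Git repository and then builds a container image. All names are illustrative, and the git-clone and buildah task references assume that the corresponding Tekton tasks are installed in the cluster:

```yaml
# Hypothetical Tekton Pipeline: fetch source, then build an image.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-and-deploy
spec:
  params:
    - name: git-url
      type: string
  tasks:
    - name: fetch-source
      taskRef:
        name: git-clone      # assumes this catalog task is installed
      params:
        - name: url
          value: $(params.git-url)
    - name: build-image
      taskRef:
        name: buildah        # assumes this catalog task is installed
      runAfter:
        - fetch-source       # enforces ordering: build after fetch
```

A PipelineRun object would then trigger this pipeline with a concrete Git URL, which is how Red Hat OpenShift Pipelines automates the CI portion of the lifecycle.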
The overall capability and experience of a common development for hybrid solutions is highly appreciated by users.         
Consolidation and business resiliency
When you consolidate a Red Hat OpenShift Container Platform environment from x86 platforms to IBM Z and IBM LinuxONE, you can achieve economic and operational advantages. The three-dimensional scalability (vertical, horizontal, and combined) results in high flexibility: no new hardware footprint is needed, capacity scales on demand, and capacity can be shifted granularly as needed. This key advantage bodes well for dynamic workloads and unpredictable growth.
For business resiliency, IBM Z and IBM LinuxONE provide by-design characteristics, such as an internal network, with significantly more reliability that is based on a higher degree of redundancy in the hardware. Because virtualization occurs within a single hardware environment, the networking traffic is more predictable, with less latency compared to x86 environments.
Building a disaster recovery (DR) setup with IBM Z and IBM LinuxONE is much simpler in such an environment because fewer hardware units exist that must be managed. Most of all, you can take advantage of current Extended Disaster Recovery (xDR) solutions for IBM Z and IBM LinuxONE that are based on Geographically Dispersed IBM Parallel Sysplex® (IBM GDPS®).
In summary, the use cases can bring in key benefits to the business by using Red Hat OpenShift Container Platform on IBM Z and IBM LinuxONE.  
Red Hat OpenShift Container Platform is the only offering that runs on multiple different on-premises hardware platforms and public clouds (see Figure 1-1). It unites the capabilities of hybrid multi-cloud environments, managed from a single point of control with its multi-cloud management capabilities.
Figure 1-1 Red Hat OpenShift Container Platform
1.2 Architectures and tools that are used in this book
In this section, we introduce some of the architectures and tools that are used in this book.
1.2.1 Microservices
On their journey of digitalization and hybrid cloud service establishment, enterprises are focusing on building smaller individual software components that enable faster time to market for new services and a more flexible and granular scalability. The architecture of these components is often based on microservices concepts.
Microservices is an application architectural style in which an application is composed of many, individually separate, and distinct network-connected components.
An application that is based on a microservices architecture can scale granularly without the need to scale the entire application. This ability enhances the economics of resources and compute capacity.
The implementation of a microservice can vary from a development language and data usability perspective. It also extends the flexibility per microservice to different run times, used data, and communication capabilities between microservices.  
Figure 1-2 shows a microservices architecture.
Figure 1-2 Microservices application
Microservices architectures are an important software trend, one that can have profound implications on enterprise IT and the digital transformation of an entire business.
Digital transformation requires organizations to adopt accelerated innovation methods that enable the delivery of new digital services to customers. Monolithic applications might be operationally acceptable, but these applications are not suited for building digital services.
Traditional monolithic architecture and software development methods remain a stumbling block for driving digital transformation.
To efficiently drive digital transformation, organizations are exploring new software development methods and architectures (called cloud-based microservices architecture) whereby IT solutions can be organized around granular business capabilities that can be rapidly assembled to create cloud-based digital experience applications.
A comparison of monolithic architectures and microservices is shown in Figure 1-3.
Figure 1-3 Monolithic versus microservices
With LinuxONE's unique attributes that are inherited by solutions that are built with Red Hat OpenShift, enterprises can use the combination of hardware features and containerization to create and manage a new generation of portable distributed applications that are composed of discrete, interoperable containers.
By combining Red Hat OpenShift and IBM LinuxONE, we get enterprise-grade automation, orchestration, management, visibility, and control capabilities for the application development and deployment process. In this process, an application can include software components that are hosted in bare metal, VMs, or containers, all fortified by the reliability, security, and scalability of IBM LinuxONE Systems.
1.2.2 Containers
The most efficient way to implement and manage microservices is to implement them as application containers, which can interact with each other through lightweight protocols and use well-defined APIs.
A container is a layered approach of a file system that is built starting with a Container Base Image. This image typically is delivered by way of a Linux distribution, such as the Red Hat Universal Base Image (UBI). Container tools are used by the developer to install software components in the container that appear as layers in the container.
By committing a container, it becomes a read-only container image, which is similar to a golden image as known from virtualized environments. Container images include the information or libraries that are required during execution on the operating system to interface with the container runtime and the operating system kernel.
A container abstracts application code from the underlying infrastructure and simplifies version management. It also enables portability across various deployment environments and profits from high application isolation.
Containerized applications can be composed of several container images. A container image is a read-only file that often is in a local container image repository or a local or remote container image registry.
Multiple containers can run in a single operating system and scale individually; therefore, they represent an evolution toward flexibility and scalability with fewer resource requirements as compared to a VM.
The concept of containerization is shown in Figure 1-4.
Figure 1-4 Containerization concepts
To build and manage containers, container engines were developed. To run a container, a run time is needed in the operating system where the containers are to run. 
Figure 1-5 shows the container run time.
Figure 1-5 Container environment
Starting with Docker, containers became easy to build and use. Shortly after Docker, many different companies developed tools and built an ecosystem around container development and run times. 
This diversification led to the Open Container Initiative (OCI). The OCI defines the standards for the container image format and the container runtime specification.
The OCI is continuously growing, which demonstrates that containerization is evolving rapidly in IT.
1.2.3 Kubernetes
An application can be composed of multiple containers and software solutions, often reaching hundreds of containers. Therefore, it became obvious that an effective way of managing containerized applications and solutions is to create specialized tools for container orchestration.
The market for container orchestration tools converged on an Open Source tool (Kubernetes). 
Many companies contribute to the functions and enhancement of Kubernetes, which defined the quasi-standard for container orchestration.
Kubernetes is designed for application high availability. It is deployed with three master nodes that form a high availability quorum, and with several worker nodes, depending on the container workload.
The container workload runs in pods, which represent the smallest entity in a Kubernetes environment. One or multiple containers can be in a pod, as shown in Figure 1-6.   
Figure 1-6 A Kubernetes cluster
1.2.4 Containers versus virtualization
In this section, we explain the difference between containers and virtualization.
Containers
Containers make available protected portions of the operating system; that is, they effectively virtualize the operating system. Two containers that are running on the same operating system are unaware that they are sharing resources because each has its own abstracted networking layer, processes, and so on.
A comparison of containerization to virtualization is shown in Figure 1-7.
Figure 1-7 Containers versus virtualization
Containers are a lightweight, efficient, and standard way for applications to move between environments and run independently. Everything that is needed (except for the shared operating system on the server) to run the application is packaged inside the container object: code, run time, system tools, libraries, and dependencies.
Containerization is the packaging of software code with only the operating system libraries and dependencies that are required to run the code to create a single lightweight executable (called a container) that runs consistently on any infrastructure. More portable and resource-efficient than VMs, containers became the de facto compute units of modern cloud-native applications.
Virtualization
Containers virtualize at the operating system level, whereas hypervisor-based solutions virtualize at the hardware level. Both containers and VMs are virtualization tools.
On the VM side, a hypervisor makes siloed slices of the available hardware. In general, two types of hypervisors are used:
Hardware-based virtualization runs directly on the bare metal of the hardware.
Software-based virtualization runs as another layer of software within a host operating system.
With virtualization, much higher levels of workload density can be realized because many more workloads are running on far fewer servers.
IBM LinuxONE provides the best of both virtualization options, providing extreme performance and security, with EAL5+ (hardware-based virtualization) and EAL4+ (software virtualization).
Virtualized workloads and containerized workloads are compared in Figure 1-8.
Figure 1-8 Virtualized workloads versus Containerized workloads
1.2.5 Container orchestration frameworks
Containers include packaged, self-contained, ready-to-deploy parts of applications and, if necessary, middleware and business logic (in binaries and libraries) to run the applications. Tools, such as Docker, are built around container engines, where containers act as a portable means to package applications. Packaging applications in containers results in the need to manage the lifecycle, security, and dependencies between containers in multitier applications. A container orchestration tool can provide these components, their dependencies, and their lifecycle in a streamlined and secure manner.
Container orchestration frameworks, such as Docker Swarm, Kubernetes, Red Hat OpenShift, and Mesos, build upon and extend container run times with more support for deploying and managing a multi-tiered distributed application as a set of containers on a cluster of nodes. Container orchestration frameworks also increasingly are used to run production grade services because they provide the most important PaaS features.
Red Hat OpenShift is an enterprise open source container orchestration platform. It is a software product that includes components of the Kubernetes container management project, but adds productivity and security features that are important to large-scale companies.
The container orchestration framework is shown in Figure 1-9.
Figure 1-9 Container Orchestration framework
The Red Hat OpenShift framework includes the following important features:
Cluster architecture and setup
Container orchestration (CO) system customization
Container
Application configuration and deployment
Resource quota management
Container QoS management
Securing clusters
Securing containers
Application and cluster management
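As an example of the resource quota management feature, quotas are expressed in Kubernetes and Red Hat OpenShift as ResourceQuota objects. The following sketch is illustrative; the project name and all limits are assumed values, not recommendations:

```yaml
# Hypothetical quota: cap what one project (namespace) can consume.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
  namespace: team-a          # assumed project name
spec:
  hard:
    pods: "20"               # at most 20 pods in the project
    requests.cpu: "4"        # total CPU requested across all pods
    requests.memory: 8Gi
    limits.cpu: "8"          # total CPU limit across all pods
    limits.memory: 16Gi
```

With such a quota in place, the cluster rejects new pods that would push the project past its caps, which keeps one tenant from starving others on shared hardware.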
1.3 Red Hat OpenShift architecture
With containerization came the need for more process-driven enhancements because of continuous operation requirements: Continuous Integration (CI) of newly developed functions and features without service interruption, and Continuous Deployment (CD) of new enhancements.
With these CI/CD needs, the lifecycle of an application that is now composed of containers became easier and more efficient to manage. At the same time, the integration of development, security, and operations (DevSecOps) options for the components of those applications came into focus, and the DevSecOps process led to new tools that are integrated, packaged, and made available.
It is because of this integration that new offerings around containerization evolved and one of the most comprehensive offerings is Red Hat OpenShift (see Figure 1-10). It unifies the establishment of an enterprise grade Kubernetes environment with the surrounding tools for DevSecOps and CI/CD. The various tools and capabilities for development of containerized solutions are integrated with the flexibility of deploying the containers in their own standard Kubernetes environment and enabling the high availability of the applications and lifecycle management of the operating system and the containerized workloads. 
Figure 1-10 Red Hat OpenShift Container Platform architecture
Red Hat OpenShift is the only solution that can run on different hardware architectures. By using multi-cluster management, you can manage hybrid Red Hat OpenShift environments from one place.      
Red Hat OpenShift can fully be deployed on IBM Z and IBM LinuxONE in a virtualized environment.  
Red Hat OpenShift is a trusted Kubernetes enterprise platform that supports modern, hybrid-cloud application development. It also provides a consistent foundation for applications anywhere across physical, virtual, private, and public clouds.
The core architectural pattern that underlies the Red Hat OpenShift container cluster is based on the Master-Workers architecture. This architecture features a Master node that ensures that running applications are always in their wanted state by scheduling containers to the Worker nodes and by monitoring the actual runtime state of nodes and containers. Master nodes use a distributed data store for storing the configuration state about all deployed containers and services.
Each Red Hat OpenShift cluster features two parts: a control plane and worker nodes. Containers run in the worker nodes, each of which has its own operating system. The control plane maintains the cluster’s overall state (such as what applications are running and which container images are used), while worker nodes do the computing work. An overview of the high-level Red Hat OpenShift architecture is shown in Figure 1-11.
Figure 1-11 High-level Red Hat OpenShift architecture
Red Hat CoreOS
To cater to the low-overhead requirement of faster spin-up times, Red Hat developed a container-optimized OS build: Red Hat CoreOS. Red Hat CoreOS contains minimalistic subsystems for running containers.
The underlying operating system consists primarily of Red Hat Enterprise Linux components. The same quality, security, and control measures that support Red Hat Enterprise Linux also support Red Hat CoreOS. Although it contains Red Hat Enterprise Linux components, Red Hat CoreOS is managed more tightly than a default Red Hat Enterprise Linux installation. Management is performed remotely from the Red Hat OpenShift Container Platform cluster.
Red Hat CoreOS includes the following key features:
Based on Red Hat Enterprise Linux
Controlled immutability
CRI-O container run time
Set of container tools
Upgrades through rpm-ostree
bootupd firmware and bootloader updater
Updated through the Machine Config Operator
 
Note: For more information about Red Hat CoreOS, see this web page.
1.3.1 Red Hat OpenShift components
Red Hat OpenShift includes everything that you need for hybrid cloud, such as a container run time, networking, monitoring, container registry, authentication, and authorization. Red Hat OpenShift consists of the following important components, and each component has its own responsibilities:
Control plane
Worker nodes
Registry
Storage controller
Control plane
The control plane, which is composed of master nodes, manages the Red Hat OpenShift Container Platform cluster. The control plane machines manage workloads on the compute machines, which are also known as worker machines. The cluster manages all upgrades to the machines by the actions of the Cluster Version Operator, the Machine Config Operator, and a set of individual Operators.
The Control Plane with the Controller Manager, Scheduler, API Server, and etcd are shown in Figure 1-12.
Figure 1-12 Control Plane Master nodes
Controller manager
The controller manager implements governance across the cluster and runs a set of controllers for the running cluster. The key controllers are the replication controller, endpoint controller, namespace controller, and service account controller. The controller manager runs different kinds of controllers to handle nodes, endpoints, and so on.
API server
The API server acts as the front end to the cluster. All external communications to the cluster are by way of the API Server. The API Server is the only component that directly communicates with the distributed storage component, etcd.
The API Server provides the following core functions:
Serves the Kubernetes API that is used cluster-internally by the worker nodes and externally by kubectl
Proxies cluster components, such as the Kubernetes UI
Allows the manipulation of the state of objects; for example, pods and services
Persists the state of objects in a distributed storage (etcd)
etcd
This cluster state database provides a consistent and highly available key-value store, which is used as Kubernetes' backing store for all cluster data. It is a high availability key-value store that can be distributed among multiple nodes. It is accessible only by the Kubernetes API server because it might include sensitive information.
Scheduler
The Scheduler primarily schedules activities to the worker nodes based on events that occur in etcd. It also holds the nodes' resources plan to determine the suitable action for the triggered event. For example, the Scheduler determines which worker node is to host a newly scheduled pod.
Storage
Kubernetes provides permanent storage mechanisms for containers that are based on Kubernetes persistent volumes (PVs). A PV is a resource that applies to the entire cluster and allows users to access data far beyond their pod's total lifespan.
Red Hat OpenShift Container Platform uses the Kubernetes PV framework to allow cluster administrators to provision persistent storage for a cluster. Developers can use persistent volume claims (PVCs) to request PV resources without having specific knowledge of the underlying storage infrastructure.
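As a minimal sketch of this request model, the following hypothetical PVC asks for 10 GiB of storage without referring to any specific backing device. The claim name, size, and storageClassName are assumed example values:

```yaml
# Hypothetical claim: request storage without knowing the backing device.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce          # volume is mounted read-write by one node
  resources:
    requests:
      storage: 10Gi
  storageClassName: managed-nfs   # assumed class; maps to NFS, Spectrum Scale, or OCS
```

The cluster administrator decides which storage class backs the claim, so the developer's manifest stays the same whether the PV is provisioned on NFS, IBM Spectrum Scale, or Red Hat OpenShift Container Storage.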
Red Hat OpenShift deployment on IBM LinuxONE supports the following storage options:
Network File System (NFS)
IBM Spectrum® Scale
Red Hat OpenShift Container Storage (OCS)
 
Note: For more information about the OpenShift-based storage deployment and options, see this web page.
1.3.2 Pods and ReplicaSets
In this section, we describe pods and ReplicaSets.
Pods
A pod is a collection of containers that are running on the Worker node. A pod is the smallest and simplest Kubernetes object that can be defined, deployed, and managed in a cluster. Each pod is allocated its own internal IP address and shares a network and hostname with PID Namespaces.
Kubernetes pods and their containers are shown in Figure 1-13.
Figure 1-13 Kubernetes pods
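A minimal pod definition might look like the following sketch, which runs a single container. All names are illustrative, and the image reference is an assumed UBI-based example, not a required value:

```yaml
# Hypothetical single-container pod.
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
  labels:
    app: web                 # label that selectors can match on
spec:
  containers:
    - name: web
      image: registry.access.redhat.com/ubi8/httpd-24   # assumed UBI-based image
      ports:
        - containerPort: 8080
```

All containers that are listed under spec.containers share the pod's IP address and network namespace, which is what makes the pod the smallest deployable unit rather than the individual container.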
ReplicaSets
ReplicaSets are used to maintain a stable set of replica pods that are running at any time. A ReplicaSet guarantees that a specified number of pod replicas is running at any time (see Figure 1-14).
Figure 1-14 Kubernetes Replicasets
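A ReplicaSet definition couples a replica count with a label selector and a pod template. The following sketch (all names and the image reference are illustrative) keeps three replicas of a web pod running:

```yaml
# Hypothetical ReplicaSet: always keep 3 matching pods alive.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-rs
spec:
  replicas: 3                # wanted number of pods
  selector:
    matchLabels:
      app: web               # must match the template labels below
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.access.redhat.com/ubi8/httpd-24   # assumed image
```

If a pod that matches the selector fails or is deleted, the ReplicaSet controller starts a replacement so that the running count returns to the specified number.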
Service
A small service, the kubelet, is available in each node and is responsible for relaying information to and from the control plane service. It interacts with the etcd store to read configuration details and values.
The service communicates with the master components to receive commands and work. The kubelet process then assumes responsibility for maintaining the state of work and the node server. It also manages network rules, port forwarding, and so on.
The Kubernetes service is shown in Figure 1-15.
Figure 1-15 Kubernetes service
Worker nodes
Worker nodes are the compute nodes in the Red Hat OpenShift cluster where all application containers are deployed to run. Worker nodes advertise their resources and resource utilization so that the scheduler can allocate containers and pods to worker nodes and maintain a reasonable workload distribution. The kubelet service, which uses the CRI-O container runtime, runs on each worker node. This service receives container deployment requests and ensures that they are instantiated and put into operation.
Registry
Red Hat OpenShift inherits the Kubernetes registry. It provides an immediate solution for users to manage the images that run their workloads, and runs on top of the cluster infrastructure. The registry is typically used as a publication target for images that are built on the cluster, and as a source of images for workloads that are running on the cluster.
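In Red Hat OpenShift, images in this internal registry are commonly tracked through ImageStream objects, which builds and deployments can reference by name instead of by registry URL. The following minimal sketch is illustrative; the stream and project names are assumed examples:

```yaml
# Hypothetical image stream: a named, versioned view of related images
# in the internal registry.
apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  name: my-app               # assumed name; builds push tags into this stream
  namespace: my-project      # assumed project
```

When a build pushes a new tag into the stream, deployment configurations that watch the stream can roll out the new image automatically, which ties the registry into the platform's lifecycle management.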
1.4 Red Hat OpenShift Container Platform introduction
The deployment of Red Hat OpenShift Container Platform on IBM Z or IBM LinuxONE provides the following benefits:
A clustered implementation approach that is aligned with Kubernetes.
A virtualized environment with z/VM or KVM on IBM Z.
The ability to use IBM Z attachments and capabilities.
Vertical scalability versus horizontal scalability.
Infrastructure by way of hypervisors, networks, and so on.
Depending on what your enterprise use case is, you can choose from different topologies and deployment options for Red Hat OpenShift Container Platform, including the following examples:
Deploying Red Hat OpenShift Container Platform as the only cloud environment on IBM Z and IBM LinuxONE.
Deploying Red Hat OpenShift Container Platform with another, noncontainerized workload that you also run as a back-end in a Linux environment on IBM Z and IBM LinuxONE hardware.
Deploying Red Hat OpenShift Container Platform in colocation with traditional operating and transactional environments and services, such as IBM z/OS, IBM z/VSE®, or z/TPF on IBM Z hardware.
1.4.1 Red Hat OpenShift Container Platform colocated with traditional workloads
Red Hat OpenShift Container Platform is available on IBM Z and IBM LinuxONE, and takes advantage of the underlying capabilities of each, including the following examples:
Ability to scale to thousands of Linux guests and millions of containers.
Ability to grow nondisruptively, vertically and horizontally, to provide a platform with large scalability.
Ability of cloud-native applications to easily integrate with data and applications on these platforms, which reduces latency by avoiding network delays.
1.4.2 IBM Cloud Paks for Data Overview
IBM Cloud Paks for Data represents a new way of delivering software for a Red Hat OpenShift environment. IBM groups and packages multiple software solutions in different Cloud Paks. They are delivered in this format to enable easy deployment and integrated monitoring, and to provide a common, consistent, and integrated experience for cloud-native workloads.
1.5 Use cases overview
To demonstrate throughout this book how you can build a Red Hat OpenShift environment on IBM Z or IBM LinuxONE, we provide the following use cases:
 
 