Chapter 1. Introduction

If you are looking for an introduction to the world of Istio, the service mesh platform, with detailed examples, this is the book for you. This book is for the hands-on application architect and development team lead focused on cloud native applications based on the microservices architectural style. This book assumes that you have had hands-on experience with Docker, and while Istio is designed to run on multiple Linux container orchestration solutions, this book focuses specifically on Istio on Kubernetes/OpenShift. Throughout this book, we will use the terms Kubernetes and OpenShift interchangeably. (OpenShift is Red Hat’s supported distribution of Kubernetes.)

If you need an introduction to Java microservices covering Spring Boot and Thorntail (formerly known as WildFly Swarm), check out Microservices for Java Developers, by Christian Posta (O’Reilly).

Also, if you are interested in reactive microservices, an excellent place to start is Building Reactive Microservices in Java, by Clement Escoffier (O’Reilly), as it is focused on Vert.x, a reactive toolkit for the Java Virtual Machine.

In addition, this book assumes that you have a comfort level with Kubernetes/OpenShift; if that is not the case, OpenShift for Developers, by Grant Shipley and Graham Dumpleton (O’Reilly), is an excellent ebook on that very topic. We will be deploying, interacting with, and configuring Istio through the lens of OpenShift; however, the commands we’ll use are mostly portable to vanilla Kubernetes as well.

To begin, we discuss the challenges that Istio can help developers solve and then describe Istio’s primary components.

The Challenge of Going Faster

The software development community, in the era of digital transformation, has embarked on a relentless pursuit of better serving customers and users. Today’s digital creators—application programmers—have not only adopted faster development cycles based on Agile principles, but are also pursuing vastly faster deployment times. Although a monolithic code base and its resulting application might be deployable at the rapid clip of once a month or even once a week, it is possible to achieve even greater “to production” velocity by breaking the application into smaller units with smaller team sizes, each with its own independent workflow, governance model, and deployment pipeline. The industry has defined this approach as microservices architecture.

Much has been written about the various challenges associated with microservices, as the style introduces many teams, for the first time, to the fallacies of distributed computing. The number one fallacy is that “the network is reliable.” Microservices communicate with one another primarily over the network—the connections between your microservices. This is a fundamental change to how most enterprise software has been crafted over the past few decades. When you add a network dependency to your application logic, you have invited in a whole host of potential hazards that grow proportionally, if not exponentially, with the number of connections your application depends on.

Understandably, new challenges arise in moving from a single deployment every few months to (potentially) dozens of software deployments every week or even every day.

Some of the big web companies had to develop special frameworks and libraries to help alleviate some of the challenges of an unreliable network, ephemeral cloud hosts, and many code deployments per day. For example, companies like Netflix created projects like Ribbon, Hystrix, and Eureka to solve these types of problems. Others, such as Twitter and Google, ended up doing similar things. These frameworks were very language- and platform-specific and, in some cases, made it difficult to bring in new application services written in programming languages that lacked support from these resilience frameworks. Whenever these frameworks were updated, the applications also needed to be updated to stay in lockstep. Finally, even if these companies had created an implementation of these frameworks for every possible permutation of language runtime, they’d have incurred massive overhead in trying to apply the functionality consistently. At least in the Netflix example, these libraries were created at a time when the virtual machine (VM) was the main deployable unit, and Netflix was able to standardize on a single cloud platform plus a single application runtime, the Java Virtual Machine. Most companies cannot and will not do this.

The advent of the Linux container (e.g., Docker) and Kubernetes/OpenShift has been a fundamental enabler for DevOps teams to achieve vastly higher velocities by focusing on the immutable image that flows quickly through each stage of a well-automated pipeline. How development teams manage their pipeline is now independent of the language or framework that runs inside the container. OpenShift has enabled us to provide better elasticity and overall management of a complex set of distributed, polyglot workloads. OpenShift ensures that developers can easily deploy and manage hundreds, if not thousands, of individual services. Those services are packaged as containers running in Kubernetes pods, complete with their respective language runtime (e.g., Java Virtual Machine, CPython, or V8) and all their necessary dependencies, typically in the form of language-specific frameworks (e.g., Spring or Express) and libraries (e.g., jars or npms). However, OpenShift does not get involved with how the application components, running in their individual pods, interact with one another. This is the crossroads at which we, as architects and developers, find ourselves. The tooling and infrastructure to quickly deploy and manage polyglot services is becoming mature, but similar capabilities are missing when we talk about how those services interact. This is where the capabilities of a service mesh such as Istio allow you, the application developer, to build better software and deliver it faster than ever before.

Meet Istio

Istio is an implementation of a service mesh. A service mesh is the connective tissue between your services that adds additional capabilities like traffic control, service discovery, load balancing, resilience, observability, security, and so on. A service mesh allows applications to offload these capabilities from application-level libraries and allows developers to focus on differentiating business logic. Istio has been designed from the ground up to work across deployment platforms, but it has first-class integration and support for Kubernetes.

Like many complementary open source projects within the Kubernetes ecosystem, Istio takes its name from a Greek nautical term that means “sail”—much as Kubernetes itself is the Greek term for “helmsman” or “ship’s pilot.” With Istio, there has been an explosion of interest in the concept of the service mesh—where Kubernetes/OpenShift leaves off, Istio begins. Istio provides developers and architects with vastly richer, declarative service discovery and routing capabilities. Where Kubernetes/OpenShift itself gives you default round-robin load balancing behind its service construct, Istio allows you to introduce unique and fine-grained routing rules among all services within the mesh. Istio also provides us with greater observability: the ability to drill down deeper into the network topology of various distributed microservices, understand the flows (tracing) between them, and see key metrics immediately.

If the network is in fact not always reliable, the critical links between and among our microservices need to be not only subjected to greater scrutiny but also handled with greater rigor. Istio provides us with network-level resilience capabilities such as retries, timeouts, and circuit breakers.
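As a small taste of what this looks like in practice, the following sketch shows an Istio VirtualService (using the networking.istio.io/v1alpha3 API that later chapters work with) that adds a timeout and automatic retries to calls made to a hypothetical recommendation service; the service name and the values shown are purely illustrative:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: recommendation
spec:
  hosts:
  - recommendation        # hypothetical Kubernetes service name
  http:
  - route:
    - destination:
        host: recommendation
    timeout: 10s          # give up on the overall request after 10 seconds
    retries:
      attempts: 3         # retry a failed call up to 3 times
      perTryTimeout: 2s   # each attempt gets at most 2 seconds

Because the istio-proxy sidecar applies these rules, none of this resilience logic has to live in your application code.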

Istio also gives developers and architects the foundation to delve into a basic exploration of chaos engineering. In Chapter 5, we describe Istio’s ability to drive chaos injection so that you can see how resilient and robust your overall application and its potentially dozens of interdependent microservices actually are.

Before we begin that discussion, we want to ensure that you have a basic understanding of Istio. The following section will provide you with an overview of Istio’s essential components.

Understanding Istio Components

The Istio service mesh is primarily composed of two major areas: the data plane and the control plane, depicted in Figure 1-1.

Figure 1-1. Data plane versus control plane

Data Plane

The data plane is implemented in such a way that it intercepts all inbound (ingress) and outbound (egress) network traffic. Your business logic, your app, your microservice is blissfully unaware of this fact. Your microservice can use simple framework capabilities to invoke a remote HTTP endpoint (e.g., Spring RestTemplate or JAX-RS client) across the network and mostly remain ignorant of the fact that a lot of interesting cross-cutting concerns are now being applied automatically. Figure 1-2 describes your typical microservice before the advent of Istio.

Figure 1-2. Before Istio

The data plane for the Istio service mesh is made up of the istio-proxy running as a sidecar container, as shown in Figure 1-3.

Figure 1-3. With Envoy sidecar (istio-proxy)
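To make that picture a bit more concrete, the following heavily trimmed pod definition sketches the shape of a pod after the sidecar is in place: your application container and the istio-proxy container live side by side. The names and images here are illustrative, and a real injected pod spec contains many more details (init containers, volumes, and proxy arguments):

apiVersion: v1
kind: Pod
metadata:
  name: customer-v1-abc123        # hypothetical pod name
  labels:
    app: customer
spec:
  containers:
  - name: customer                # your business logic container
    image: example/customer:v1    # illustrative image name
    ports:
    - containerPort: 8080
  - name: istio-proxy             # the Envoy-based sidecar
    image: docker.io/istio/proxyv2:1.1.0   # image and tag vary by Istio release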

Let’s explore each concept.

Service proxy

A service proxy augments an application service. The application service calls through the service proxy any time it needs to communicate over the network. The service proxy acts as an intermediary, or interceptor, that can add capabilities like automatic retries, circuit breaking, service discovery, security, and more. The default service proxy for Istio is based on the Envoy proxy.

Envoy is a layer 7 (L7) proxy (see the OSI model on Wikipedia) developed by Lyft, the ridesharing company, which currently uses it in production to handle millions of requests per second. Written in C++, it is battle-tested, highly performant, and lightweight. It provides features like load balancing for HTTP/1.1, HTTP/2, and gRPC. It can collect request-level metrics, report trace spans, provide service discovery, inject faults, and much more. You might notice that some of the capabilities of Istio overlap with Envoy. That is simply because Istio uses Envoy to implement those capabilities.

But how does Istio deploy Envoy as a service proxy? Istio brings the service proxy capabilities as close as possible to the application code through a deployment technique known as the sidecar.

Sidecar

When Kubernetes/OpenShift was born, it did not refer to a Linux container as the runnable/deployable unit as you might expect. Instead, the term pod was coined, and the pod is the primary thing you manage in a Kubernetes/OpenShift world. Why pod? Some think it is an obscure reference to the 1956 film Invasion of the Body Snatchers, but it is actually based on the concept of a family, or group, of whales. The whale was the early image associated with the Docker open source project—the most popular Linux container solution of its era. So, a pod can be a group of Linux containers. The sidecar is yet another Linux container that lives directly alongside your business logic application or microservice container. Unlike a real-world sidecar that bolts onto the side of a motorcycle and is essentially a simple add-on feature, this sidecar can take over the handlebars and throttle.

With Istio, a second Linux container called “istio-proxy” (aka the Envoy service proxy) is manually or automatically injected into the pod that houses your application or microservice. This sidecar is responsible for intercepting all inbound (ingress) and outbound (egress) network traffic from your business logic container, which means policies can be applied that reroute the traffic (in or out), enforce access control lists (ACLs) or rate limits, capture monitoring and tracing data (via Mixer), and even introduce a little chaos such as network delays or HTTP errors.
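With upstream Istio on Kubernetes, automatic injection is typically switched on per namespace: you label the namespace, and Istio’s webhook adds the istio-proxy container to every pod created there. The namespace name below is a hypothetical example:

apiVersion: v1
kind: Namespace
metadata:
  name: tutorial                  # hypothetical project/namespace name
  labels:
    istio-injection: enabled      # tells Istio's webhook to inject istio-proxy into new pods

Alternatively, you can inject the sidecar manually by running a deployment manifest through istioctl kube-inject before applying it to the cluster.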

Control Plane

The control plane is the authoritative source for configuration and policy, and it makes the data plane usable in a cluster that might consist of hundreds of pods scattered across a number of nodes. Istio’s control plane comprises three primary Istio services: Pilot, Mixer, and Citadel.

Pilot

Pilot is responsible for managing the overall fleet—all of your microservices’ sidecars running across your Kubernetes/OpenShift cluster. The Istio Pilot ensures that each of the independent microservices, wrapped as individual Linux containers and running inside their pods, has the current view of the overall topology and an up-to-date “routing table.” Pilot provides capabilities like service discovery as well as support for VirtualService. The VirtualService is what gives you fine-grained request distribution, retries, timeouts, and more. We cover this in more detail in Chapters 3 and 4.
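As a preview, here is a sketch of a VirtualService, plus the companion DestinationRule that defines its subsets, sending 90 percent of traffic to version v1 of a hypothetical recommendation service and 10 percent to v2. The names, labels, and weights are illustrative; Chapters 3 and 4 build up real examples:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: recommendation
spec:
  host: recommendation
  subsets:
  - name: version-v1
    labels:
      version: v1       # matches pods labeled version=v1
  - name: version-v2
    labels:
      version: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: recommendation
spec:
  hosts:
  - recommendation
  http:
  - route:
    - destination:
        host: recommendation
        subset: version-v1
      weight: 90        # 90% of requests go to v1
    - destination:
        host: recommendation
        subset: version-v2
      weight: 10        # 10% go to v2

Pilot translates declarations like these into configuration for every istio-proxy in the mesh.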

Mixer

As the name implies, Mixer is the Istio service that brings things together. Each of the distributed istio-proxies delivers its telemetry back to Mixer. Furthermore, Mixer maintains the canonical model of the usage and access policies for the overall suite of microservices. With Mixer, you can create policies, apply rate-limiting rules, and even capture custom metrics. Mixer has a pluggable backend architecture that is rapidly evolving with new plug-ins and partners that are extending Mixer’s default capabilities in many new and interesting ways. Many of the capabilities of Mixer fall beyond the scope of this introductory book, but we do address observability in Chapter 6 and security in Chapter 7.

Citadel

The Istio Citadel component, formerly known as Istio CA or Auth, is responsible for certificate signing, certificate issuance, and revocation/rotation. Istio issues X.509 certificates to all your microservices, allowing for mutual Transport Layer Security (mTLS) between those services, encrypting all their traffic transparently. It uses the identity built into the underlying deployment platform and embeds that identity into the certificates. This identity allows you to enforce policy. An example of setting up mTLS is discussed in Chapter 7.
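As a preview of Chapter 7, the following sketch shows roughly what enabling mTLS for a single namespace looked like in the Istio release generation this book targets: an authentication Policy requiring mTLS on the server side, paired with a DestinationRule telling clients to use Istio’s mutual TLS. The namespace name is a hypothetical example:

apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: default                  # the namespace-wide policy must be named "default"
  namespace: tutorial            # hypothetical namespace
spec:
  peers:
  - mtls: {}                     # require mutual TLS for inbound calls
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: enable-mtls
  namespace: tutorial
spec:
  host: "*.tutorial.svc.cluster.local"   # apply to all services in the namespace
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL         # clients present the Citadel-issued certificates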
