Chapter 3. Traffic Control

As we’ve seen in previous chapters, Istio consists of a control plane and a data plane. The data plane is made up of proxies that live in the application architecture. We’ve been looking at a proxy-deployment pattern known as the sidecar, which means each application instance has its own dedicated proxy through which all network traffic travels before it gets to the application. These sidecar proxies can be individually configured to route, filter, and augment network traffic as needed. In this chapter, we look at a handful of traffic control patterns that you can take advantage of via Istio. You might recognize these patterns as some of those practiced by big internet companies like Netflix, Amazon, or Facebook.

Smarter Canaries

The concept of the canary deployment has become fairly popular in the last few years. The name comes from the “canary in the coal mine” concept. Miners used to take an actual bird, a canary in a cage, into the mines to detect whether there were dangerous gases present because canaries are more susceptible to these gases than humans. The canary would not only provide nice musical songs to entertain the miners, but if at any point it collapsed off its perch, the miners knew to get out of the mine quickly.

The canary deployment has similar semantics. With a canary deployment, you deploy a new version of your code to production but allow only a subset of traffic to reach it. Perhaps only beta customers, perhaps only internal employees of your organization, perhaps only iOS users, and so on. After the canary is out there, you can monitor it for exceptions, bad behavior, changes in service-level agreement (SLA), and so forth. If the canary deployment/pod exhibits no bad behavior, you can begin to slowly increase end-user traffic to it. If it exhibits bad behavior, you can easily pull it from production. The canary deployment allows you to deploy faster but with minimal disruption should a "bad" code change make it through your automated QA tests in your continuous deployment pipeline.

By default, Kubernetes offers out-of-the-box round-robin load balancing of all the pods behind a Kubernetes Service. If you want only 10% of all end-user traffic to hit your newest pod, you must have at least a 10-to-1 ratio of old pods to the new pod. With Istio, you can be much more fine-grained: you can specify that only 2% of traffic be routed to the latest version, even if you're running only three pods. Istio will also let you gradually increase overall traffic to the new version until all end users have been migrated over and the older versions of the app logic/code can be removed from the production environment.

Traffic Routing

With Istio, you can specify routing rules that control the traffic to a set of pods. Specifically, Istio uses DestinationRule and VirtualService resources to describe these rules. The following is an example of a DestinationRule that establishes which pods make up a specific subset:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: recommendation
  namespace: tutorial
spec:
  host: recommendation
  subsets:
  - labels:
      version: v1
    name: version-v1
  - labels:
      version: v2
    name: version-v2

And an example VirtualService that directs traffic using a subset and a weighting factor:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: recommendation
  namespace: tutorial
spec:
  hosts:
  - recommendation
  http:
  - route:
    - destination:
        host: recommendation
        subset: version-v1
      weight: 100

This VirtualService definition allows you to configure a percentage of traffic and direct it to a specific version of the recommendation service. In this case, 100% of traffic for the recommendation service will always go to pods matching the label version: v1. The selection of pods here is very similar to the Kubernetes selector model for matching based on labels. So, any service within the service mesh that tries to communicate with the recommendation service will always be routed to v1 of the recommendation service.
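For reference, here is a sketch of how the v1 Deployment's pod template might be labeled so that the DestinationRule subset above can select it. The exact Deployment files live in the tutorial repository, so treat the details (replica count, image tag, port) as assumptions; the important part is the version: v1 label on the pod template:

# Sketch: the subset "version-v1" matches pods carrying version: v1.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: recommendation-v1
  namespace: tutorial
spec:
  replicas: 1
  selector:
    matchLabels:
      app: recommendation
      version: v1
  template:
    metadata:
      labels:
        app: recommendation
        version: v1   # the label the DestinationRule subset keys on
    spec:
      containers:
      - name: recommendation
        image: example/recommendation:v1   # assumed image tag
        ports:
        - containerPort: 8080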

The routing behavior just described is not just for ingress traffic; that is, traffic coming into the mesh. This is for all inter-service communication within the mesh. As we’ve illustrated in the example, these routing rules apply to services potentially deep within a service call graph. If you have a service deployed to Kubernetes that’s not part of the service mesh, it will not see these rules and will adhere to the default Kubernetes load-balancing rules.

Routing to Specific Versions of a Deployment

To illustrate more complex routing, and ultimately what a canary rollout would look like, let’s deploy v2 of our recommendation service. First, you need to make some changes to the source code for the recommendation service. Change the RESPONSE_STRING_FORMAT in the com.redhat.developer.demos.recommendation.RecommendationVerticle to include “v2”:

private static final String RESPONSE_STRING_FORMAT =
    "recommendation v2 from '%s': %d\n";

Now do a build and package of this code as v2:

cd recommendation/java/vertx

mvn clean package

docker build -t example/recommendation:v2 .

You can quickly test your code change by executing your fat jar:

java -jar target/recommendation.jar

And then in another terminal curl your endpoint:

curl localhost:8080
recommendation v2 from 'unknown': 1

Finally, inject the Istio sidecar proxy and deploy this into Kubernetes:

oc apply -f <(istioctl kube-inject -f \
  ../../kubernetes/Deployment-v2.yml) -n tutorial

You can run oc get pods to see the pods as they all come up; it should look like this when all the pods are running successfully:

NAME                           READY   STATUS    RESTARTS   AGE
customer-3600192384-fpljb      2/2     Running   0          17m
preference-243057078-8c5hz     2/2     Running   0          15m
recommendation-v1-60483540     2/2     Running   0          12m
recommendation-v2-99634814     2/2     Running   0          15s

At this point, if you curl the customer endpoint, you should see traffic load balanced across both versions of the recommendation service. You can generate a steady stream of requests with a simple loop:

#!/bin/bash

while true
do curl customer-tutorial.$(minishift ip).nip.io
sleep .1
done

The output should alternate between v1 and v2:

customer => preference => recommendation v1 from '60483540': 29
customer => preference => recommendation v2 from '99634814': 1
customer => preference => recommendation v1 from '60483540': 30
customer => preference => recommendation v2 from '99634814': 2
customer => preference => recommendation v1 from '60483540': 31
customer => preference => recommendation v2 from '99634814': 3

Now you can create your first DestinationRule and VirtualService to route all traffic to only v1 of the recommendation service. Navigate to the root of the source code you cloned (the main istio-tutorial directory) and run the following commands:

oc -n tutorial create -f \
  istiofiles/destination-rule-recommendation-v1-v2.yml

oc -n tutorial create -f \
  istiofiles/virtual-service-recommendation-v1.yml

Now if you try to query the customer service, you should see all traffic routed to v1 of the service:

customer => preference => recommendation v1 from '60483540': 32
customer => preference => recommendation v1 from '60483540': 33
customer => preference => recommendation v1 from '60483540': 34

The VirtualService creates routes to a subset defined by the DestinationRule, one that includes only the recommendation pods with the label version: v1.

Canary release of recommendation v2

Now that all traffic is going to v1 of your recommendation service, you can initiate a canary release using Istio. The canary release should take 10% of the incoming live traffic. To do this, specify a weighted routing rule in a VirtualService that looks like this:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: recommendation
  namespace: tutorial
spec:
  hosts:
  - recommendation
  http:
  - route:
    - destination:
        host: recommendation
        subset: version-v1
      weight: 90
    - destination:
        host: recommendation
        subset: version-v2
      weight: 10

As you can see, you’re sending 90% of the traffic to v1 and 10% of the traffic to v2 with this VirtualService. Try replacing the previous recommendation VirtualService and see what happens when you put load on the service:

oc -n tutorial replace -f \
  istiofiles/virtual-service-recommendation-v1_and_v2.yml

If you start sending load against the customer service like in the previous steps, you should see that only a fraction of traffic actually makes it to v2. This is a canary release. Monitor your logs, metrics, and tracing systems to see whether this new release has introduced any negative or unexpected behaviors into your system.
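If you want to eyeball the split rather than scan the scrolling output, a quick sketch like the following (assuming the same customer endpoint as before) sends 100 requests and counts how many landed on v2; with a 90/10 weighting, it should print a number near 10:

for i in $(seq 1 100); do
  curl -s customer-tutorial.$(minishift ip).nip.io
done | grep -c 'recommendation v2'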

Continue rollout of recommendation v2

At this point, if no bad behaviors have surfaced, you should have a bit more confidence in v2 of your recommendation service. You might then want to increase the traffic to v2; do this by replacing the VirtualService definition with one that looks like the following:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: recommendation
  namespace: tutorial
spec:
  hosts:
  - recommendation
  http:
  - route:
    - destination:
        host: recommendation
        subset: version-v1
      weight: 50
    - destination:
        host: recommendation
        subset: version-v2
      weight: 50

With this VirtualService we’re going to open the traffic up to 50% to v1, and 50% to v2. When you create this rule using oc, you should use the replace command:

oc -n tutorial replace -f \
  istiofiles/virtual-service-recommendation-v1_and_v2_50_50.yml

Now you should see traffic behavior change in real time, with approximately half the traffic going to v1 of the recommendation service and half to v2. It should look something like the following:

customer => ... => recommendation v1 from '60483540': 192
customer => ... => recommendation v2 from '99634814': 37
customer => ... => recommendation v2 from '99634814': 38
customer => ... => recommendation v1 from '60483540': 193
customer => ... => recommendation v2 from '99634814': 39
customer => ... => recommendation v2 from '99634814': 40
customer => ... => recommendation v2 from '99634814': 41
customer => ... => recommendation v1 from '60483540': 194

Finally, if everything continues to look good with this release, you can switch all the traffic to go to v2 of the recommendation service. You need to install the VirtualService that routes all traffic to v2:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: recommendation
  namespace: tutorial
spec:
  hosts:
  - recommendation
  http:
  - route:
    - destination:
        host: recommendation
        subset: version-v2
      weight: 100

You can replace the rule like this:

oc -n tutorial replace -f \
  istiofiles/virtual-service-recommendation-v2.yml

Now you should see all traffic going to v2 of the recommendation service:

customer => preference => recommendation v2 from '99634814': 43
customer => preference => recommendation v2 from '99634814': 44
customer => preference => recommendation v2 from '99634814': 45
customer => preference => recommendation v2 from '99634814': 46
customer => preference => recommendation v2 from '99634814': 47
customer => preference => recommendation v2 from '99634814': 48

Restore to default behavior

To clean up this section, delete the VirtualService for recommendation; traffic then returns to the default Kubernetes behavior of round-robin load balancing, which works out to roughly 50/50 here because v1 and v2 each have one pod:

oc delete virtualservice/recommendation -n tutorial
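The 50/50 behavior follows from how the plain Kubernetes Service is defined. As a sketch (the exact file lives in the tutorial repository, so treat the details below as assumptions), its selector matches only the app label, not the version label, so the v1 and v2 pods are both endpoints and kube-proxy balances across them:

apiVersion: v1
kind: Service
metadata:
  name: recommendation
  namespace: tutorial
spec:
  selector:
    app: recommendation   # no version label: both v1 and v2 pods match
  ports:
  - port: 8080
    protocol: TCP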

Routing Based on Headers

You’ve seen how you can use Istio to do fine-grained routing based on service metadata. You can also use Istio to do routing based on request-level metadata, using matching predicates to set up route rules that apply only to requests matching a specified set of criteria. For example, you might want to split traffic to a particular service based on geography, mobile device, or browser. Let’s see how to do that with Istio.

With Istio, you can use a match clause in the VirtualService to specify a predicate. For example, take a look at the following VirtualService:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: recommendation
  namespace: tutorial
spec:
  hosts:
  - recommendation
  http:
  - match:
    - headers:
        baggage-user-agent:
          regex: .*Safari.*
    route:
    - destination:
        host: recommendation
        subset: version-v2
  - route:
    - destination:
        host: recommendation
        subset: version-v1

This rule uses a request header–based matching clause that will match only if the request includes “Safari” as part of the user-agent header. If the request matches the predicate, it will be routed to v2 of the recommendation service.

Install the rule:

oc -n tutorial create -f \
  istiofiles/virtual-service-safari-recommendation-v2.yml

And let’s try it out:

curl customer-tutorial.$(minishift ip).nip.io

customer => preference => recommendation v1 from '60483540': 465

If you pass in a user-agent header of Safari, you should be routed to v2:

curl -H 'User-Agent: Safari' \
  customer-tutorial.$(minishift ip).nip.io

customer => preference => recommendation v2 from '99634814': 318

Also try Firefox:

curl -A Firefox customer-tutorial.$(minishift ip).nip.io

customer => preference => recommendation v1 from '60483540': 465
Note

If you test with real browsers, just be aware that Chrome on macOS reports itself as Safari.
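If you need to distinguish the two in a real-browser test, one workaround (our own sketch, not part of the tutorial files) relies on the fact that genuine Safari user agents contain a Version/x.y token while Chrome's do not, so the match clause could be tightened like this (only the changed fragment of the http: section is shown):

# Matches real Safari but not Chrome, which omits the "Version/" token
- match:
  - headers:
      baggage-user-agent:
        regex: .*Version.*Safari.*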

Istio’s DestinationRule and VirtualService objects (or Kinds) are declared as CustomResourceDefinitions; therefore, you can interact with them like any built-in object type (e.g., Deployment, Pod, Service, ReplicaSet). Try the following commands to explore the Istio objects related to recommendation:

oc get crd | grep virtualservice

kubectl describe destinationrule recommendation -n tutorial

oc get virtualservice recommendation -o yaml -n tutorial
Note

oc and kubectl can mostly be used interchangeably: oc is a superset of kubectl, with additional features related to login, project, and new-app that provide shortcuts for typical Kubernetes/OpenShift operations.

Cleaning up rules

Before moving on, clean up all of the Istio objects you’ve installed in this section:

oc delete virtualservice recommendation -n tutorial
oc delete destinationrule recommendation -n tutorial

Dark Launch

Dark launch can mean different things to different people. In essence, a dark launch is a deployment to production that is invisible to customers. In this case, Istio allows you to duplicate or mirror traffic to a new version of your application and see how it behaves compared to the live application pod. This way you’re able to put production quality requests into your new service without affecting any live traffic.

For example, you could say recommendation v1 takes the live traffic and recommendation v2 will be your new deployment. You can use Istio to mirror traffic that goes to v1 into the v2 pod. When Istio mirrors traffic, it does so in a fire-and-forget manner: the mirroring happens asynchronously, off the critical path of the live traffic, and Istio sends the mirrored request to the test pod without waiting for or caring about a response. Let’s try this out.

The first thing you should do is make sure that no DestinationRule or VirtualService is currently being used:

oc get destinationrules -n tutorial
No resources found.
oc get virtualservices -n tutorial
No resources found.

Let’s take a look at a VirtualService that configures mirroring:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: recommendation
  namespace: tutorial
spec:
  hosts:
  - recommendation
  http:
  - route:
    - destination:
        host: recommendation
        subset: version-v1
    mirror:
      host: recommendation
      subset: version-v2

You can see that this directs all traffic to v1 of recommendation, and in the mirror clause you specify the host and subset that should receive the mirrored traffic.
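Note that with this rule, every live request to v1 is mirrored. Newer Istio releases (roughly 1.5 and later; check the documentation for your version) also let you mirror only a fraction of the traffic. A sketch of what that looks like:

  http:
  - route:
    - destination:
        host: recommendation
        subset: version-v1
    mirror:
      host: recommendation
      subset: version-v2
    mirrorPercentage:
      value: 10.0   # mirror roughly 10% of live requests to v2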

Next, verify you’re in the root directory of the source files you cloned from the Istio Tutorial and run the following commands:

oc -n tutorial create -f \
  istiofiles/destination-rule-recommendation-v1-v2.yml

oc -n tutorial create -f \
  istiofiles/virtual-service-recommendation-v1-mirror-v2.yml

In one terminal, tail the logs for the recommendation v2 service:

oc -n tutorial logs -f \
  `oc get pods | grep recommendation-v2 | awk '{ print $1 }'` \
  -c recommendation

You can also use stern as another way to see logs of both recommendation v1 and v2:

stern recommendation

In another window, you can send in a request:

curl customer-tutorial.$(minishift ip).nip.io

customer => preference => recommendation v1 from '60483540': 466

You can see from the response that you’re hitting v1 of the recommendation service, as expected. If you watch the tailed v2 logs, you’ll also see new entries appear as v2 processes the mirrored traffic.

Mirrored traffic can be used for powerful pre-release testing, but it does come with challenges. For example, a new version of a service might still need to communicate with a database or other collaborator services. For dealing with data in a microservices world, take a look at Edson Yanaga’s book Migrating to Microservice Databases (O’Reilly). For a more detailed treatment on advanced mirroring techniques, see Christian’s blog post, “Advanced Traffic-Shadowing Patterns for Microservices with Istio Service Mesh”.

Make sure to clean up your DestinationRule and VirtualService before moving along in these samples:

oc delete virtualservice recommendation -n tutorial
oc delete destinationrule recommendation -n tutorial

Egress

By default, Istio directs all traffic originating in a service through the Istio proxy that’s deployed alongside the service. This proxy evaluates its routing rules and decides how best to deliver the request. One nice thing about the Istio service mesh is that by default it blocks all outbound (outside of the cluster) traffic unless you explicitly create rules to allow traffic out. From a security standpoint, this is crucial. You can use Istio in both zero-trust networking architectures and traditional perimeter-based security. In both cases, Istio helps protect against a nefarious agent gaining access to a single service and calling back out to a command-and-control system, which would give an attacker full access to the network. By blocking outgoing access by default and letting routing rules control not only internal traffic but any and all outgoing traffic, you make your security posture more resilient to outside attacks, irrespective of where they originate.
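This block-by-default behavior is governed by the mesh-wide outbound traffic policy. In the Istio version used here the default is REGISTRY_ONLY (block any outbound host that has no ServiceEntry); be aware that later Istio releases flipped the default to ALLOW_ANY, in which case you would re-enable blocking explicitly. A sketch of the relevant setting, which lives under the mesh key of the istio ConfigMap in the istio-system namespace (field names assume a newer Istio; consult your version's docs):

# REGISTRY_ONLY blocks outbound hosts without a ServiceEntry;
# ALLOW_ANY (the default in later releases) lets all egress through.
outboundTrafficPolicy:
  mode: REGISTRY_ONLY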

You can test this concept by shelling into one of your pods and simply running a curl command:

oc get pods -n tutorial
NAME                            READY  STATUS    RESTARTS  AGE
customer-6564ff969f-jqkkr       2/2    Running   0         19m
preference-v1-5485dc6f49-hrlxm  2/2    Running   0         19m
recommendation-v1-60483540      2/2    Running   0         20m
recommendation-v2-99634814      2/2    Running   0         7m
oc exec -it recommendation-v2-99634814 /bin/bash

[jboss@recommendation-v2-99634814 ~]$ curl -v now.httpbin.org
* About to connect() to now.httpbin.org port 80 (#0)
*   Trying 54.174.228.92...
* Connected to now.httpbin.org (54.174.228.92) port 80 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: now.httpbin.org
> Accept: */*
>
< HTTP/1.1 404 Not Found
< date: Sun, 02 Dec 2018 20:01:43 GMT
< server: envoy
< content-length: 0
<
* Connection #0 to host now.httpbin.org left intact
[jboss@recommendation-v2-99634814 ~]$ exit

You will receive a 404 Not Found. Note the server: envoy response header: the 404 comes from the sidecar proxy itself, which is blocking the outbound request, not from now.httpbin.org.

To address this issue, you need to allow egress to now.httpbin.org with a ServiceEntry. Here is the one we are about to apply:

apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: httpbin-egress-rule
  namespace: tutorial
spec:
  hosts:
  - now.httpbin.org
  ports:
  - name: http-80
    number: 80
    protocol: http

But first make sure to have your DestinationRule set up and then apply the ServiceEntry:

oc -n tutorial create -f \
  istiofiles/destination-rule-recommendation-v1-v2.yml

oc -n tutorial create -f \
  istiofiles/service-entry-egress-httpbin.yml

Now when you shell into your pod and run curl, you get back a correct 200 response:

oc exec -it recommendation-v2-99634814 /bin/bash

[jboss@recommendation-v2-99634814 ~]$ curl now.httpbin.org
{"now": {"epoch": 1543782418.7876487...

You can list the egress rules like this:

oc get serviceentry -n tutorial
SERVICE-ENTRY NAME    HOSTS            PORTS     NAMESPACE  AGE
httpbin-egress-rule   now.httpbin.org  http/80   tutorial   5m

For more, visit the constantly updated tutorial that accompanies this book at Istio Tutorial for Java Microservices. There we have a Java-based example where we modify the recommendation service so that it attempts a call to now.httpbin.org, so you can see the same behavior from a programming perspective.
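To give a flavor of what that looks like, here is a minimal sketch (not the tutorial’s exact code) of calling the external service from a Vert.x application with the WebClient; without the ServiceEntry above, the handler would see the 404 that Envoy returns for blocked egress:

import io.vertx.core.Vertx;
import io.vertx.ext.web.client.WebClient;

public class EgressCheck {
  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();
    // Requires the io.vertx:vertx-web-client dependency
    WebClient client = WebClient.create(vertx);

    // Call the external service; with the ServiceEntry in place this
    // prints a 200 and the JSON payload, without it Envoy answers 404.
    client.get(80, "now.httpbin.org", "/")
      .send(ar -> {
        if (ar.succeeded()) {
          System.out.println("status: " + ar.result().statusCode());
          System.out.println(ar.result().bodyAsString());
        } else {
          System.out.println("call failed: " + ar.cause());
        }
        vertx.close();
      });
  }
}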

Finally, clean up your Istio artifacts and return to normal Kubernetes behavior by simply ensuring there are no DestinationRule, VirtualService, or ServiceEntry objects deployed:

oc delete serviceentry httpbin-egress-rule -n tutorial
oc delete virtualservice recommendation -n tutorial
oc delete destinationrule recommendation -n tutorial

The relatively simple examples provided in this chapter are a solid starting point for your own exploration of Istio’s Traffic Management capabilities.
