Chapter 13. Example of a Microservices-Based Architecture

This chapter provides an example implementation of a microservices-based architecture. It aims to demonstrate concrete technologies and lay the foundation for your own experiments. The example application has a very simple domain architecture that contains a few compromises. Section 13.1 deals with this topic in detail.

For a real system with as little domain complexity as the example application, an approach without microservices would be better suited. However, the low complexity makes the example application easy to understand and simple to extend. Some aspects of a microservice environment, such as security, documentation, monitoring, or logging, are not illustrated in the example application, but these aspects can be addressed relatively easily with some experiments.

Section 13.2 explains the technology stack of the example application. The build tools are described in section 13.3. Section 13.4 deals with Docker as a technology for deployment. Docker needs to run in a Linux environment; section 13.5 describes Vagrant as a tool for generating such environments. Section 13.6 introduces Docker Machine as an alternative tool for the generation of a Docker environment, which can be combined with Docker Compose for the coordination of several Docker containers (section 13.7). The implementation of Service Discovery is discussed in section 13.8. The communication between the microservices and the user interface is the main topic of section 13.9. Thanks to resilience, other microservices are not affected if a single microservice fails. In the example application resilience is implemented with Hystrix (section 13.10). Closely related is Load Balancing (section 13.11), which can distribute the load onto several instances of a microservice. Possibilities for the integration of non-Java technologies are detailed in section 13.12, and testing is discussed in section 13.13.

The code of the example application can be found at https://github.com/ewolff/microservice. It is Apache-licensed, and can, accordingly, be used and extended freely for any purpose.

13.1 Domain Architecture

The example application has a simple web interface, with which users can submit orders. There are three microservices (see Figure 13.1):

“Catalog” keeps track of products. Items can be added or deleted.

“Customer” performs the same task in regards to customers: It can register new customers or delete existing ones.

“Order” can not only show orders but also create new orders.

Image

Figure 13.1 Architecture of the Example Application

For the orders the microservice “Order” needs access to the two other microservices, “Customer” and “Catalog.” The communication is achieved via REST. However, this interface is only meant for the internal communication between the microservices. The user can interact with all three microservices via the HTML/HTTP interface.

Separate Data Storages

The data storages of the three microservices are completely separate. Only the respective microservice knows the information about the business objects. The microservice “Order” saves only the primary keys of the items and customers, which are necessary for the access via the REST interface. A real system should use artificial keys as the internal primary keys so that they do not become visible to the outside. These are internal details of the data storage that should be hidden. To expose the primary keys, the class SpringRestDataConfig within the microservices configures Spring Data REST accordingly.
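A minimal sketch of what such a configuration class might look like, here for the “Customer” microservice; the entity class Customer and the exact content of the class are assumptions, not copied from the example code:

import org.springframework.context.annotation.Configuration;
import org.springframework.data.rest.core.config.RepositoryRestConfiguration;
import org.springframework.data.rest.webmvc.config.RepositoryRestConfigurerAdapter;

@Configuration
public class SpringRestDataConfig extends RepositoryRestConfigurerAdapter {

  @Override
  public void configureRepositoryRestConfiguration(RepositoryRestConfiguration config) {
    // Spring Data REST hides primary keys by default; expose them so
    // that "Order" can reference customers via the REST interface.
    config.exposeIdsFor(Customer.class);
  }
}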

Lots of Communication

Whenever an order needs to be shown, the microservice “Customer” is called for the customer data, and the microservice “Catalog” is called for each line of the order in order to determine the price of the item. This can have a negative influence on the response times of the application, as the display of the order cannot take place before all requests have been answered by the other microservices. As the requests to the other services take place synchronously and sequentially, the latencies add up. This problem can be solved by using asynchronous parallel requests.
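As an illustration, the following sketch, which is not taken from the example application, shows how the two calls could be issued in parallel with Java’s CompletableFuture; the client classes and their methods are assumptions:

// customerClient, catalogClient, order, and render() are hypothetical.
CompletableFuture<Customer> customer = CompletableFuture
    .supplyAsync(() -> customerClient.findById(order.getCustomerId()));
CompletableFuture<List<Item>> items = CompletableFuture
    .supplyAsync(() -> catalogClient.findByIds(order.getItemIds()));
// The overall latency is now roughly the maximum of the two calls,
// not their sum.
customer.thenCombine(items, (c, i) -> render(order, c, i)).join();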

In addition, a lot of computing power is needed to marshal the data for sending and receiving. This is acceptable in case of such a small example application. When such an application is supposed to run in production, alternatives have to be considered.

This problem can, for instance, be solved by caching. This is relatively easy, as customer data will not change frequently. Items can change more often, but still not so fast that caching would pose a problem. Only the sheer amount of data can make this approach difficult. The use of microservices has the advantage that such a cache can be implemented relatively simply at the interface of the microservices, or even at the level of HTTP if this protocol is used. An HTTP cache, like the one used for websites, can be added to REST services transparently and without much programming effort.
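For an HTTP cache to work, the service only has to mark its responses as cacheable. A hedged sketch of a Spring MVC controller method that does this; the repository, the entity class Item, and the URL are assumptions:

@RequestMapping("/catalog/{id}")
public ResponseEntity<Item> item(@PathVariable("id") long id) {
  return ResponseEntity.ok()
      // Any HTTP cache in front of the service may now reuse this
      // response for up to an hour without contacting "Catalog".
      .cacheControl(CacheControl.maxAge(1, TimeUnit.HOURS))
      .body(itemRepository.findOne(id));
}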

Bounded Context

Caching will solve the problem of too long response times technically. However, very long response times can also be a sign of a fundamental problem. Section 3.3 argued that a microservice should contain a Bounded Context. A specific domain model is only valid in a Bounded Context. The modularization into microservices in this example contradicts this idea: The domain model is used to modularize the system into the microservices “Order” for orders, “Catalog” for items, and “Customer” for customers. In principle the data of these entities should be modularized in different Bounded Contexts.

The described modularization implements, in spite of the low domain complexity, a system consisting of three microservices. In this manner the example application is easy to understand while still comprising several microservices and demonstrating the communication between them. In a real system the microservice “Order” could also handle the information about the items that is relevant for the order process, such as the price. If necessary, the service can replicate the data from another microservice into its own database in order to access it efficiently. This is an alternative to the aforementioned caching. There are different possibilities for how the domain models can be modularized into the Bounded Contexts “Order,” “Customer,” and “Catalog.”

This design can cause errors: when an order has been put into the system and the price of the item is changed afterwards, the price of the order changes as well, which should not happen. If the item is deleted, displaying the order even causes an error. In principle the information concerning the item and the customer should become part of the order. In that case the historical data of the orders, including customer and item data, would be transferred into the service “Order.”

Don’t Modularize Microservices by Data!

It is important to understand the problem inherent in architecting a microservices system by domain model. Often the task of a global architecture is misunderstood: The team designs a domain model, which comprises, for instance, objects such as customers, orders, and items. Based on this model microservices are defined. That is how the modularization into microservices could have come about in the example application, resulting in a huge amount of communication. A modularization based on processes such as ordering, customer registration, and product search might be more advantageous. Each process could be a Bounded Context that has its own domain model for the most important domain objects. For product search the categories of items might be the most relevant, while for the ordering process, data like weight and size might matter more.

The modularization by data can also be advantageous in a real system. When the microservice “Order” gets too big in combination with the handling of customer and product data, it is sensible to modularize data handling. In addition, the data can be used by other microservices. When devising the architecture for a system, there is rarely a single right way of doing things. The best approach depends on the system and the properties the system should have.

13.2 Basic Technologies

Microservices in the example application are implemented with Java. Basic functionalities for the example application are provided by the Spring Framework.1 This framework offers not only dependency injection, but also a web framework, which enables the implementation of REST-based services.

1. http://projects.spring.io/spring-framework/

HSQL Database

The database HSQLDB stores the data. It is an in-memory database written in Java. It keeps the data only in RAM, so all data is lost upon restarting the application. Consequently, this database is not really suited for production use, even though it can also write data to disk. On the other hand, no additional database server has to be installed, which keeps the example application simple. The database runs within the respective Java application.

Spring Data REST

The microservices use Spring Data REST2 in order to provide the domain objects via REST with little effort and to write them into the database. Handing objects out directly means that the internal data representation leaks into the interface between the services; changing the data structures then becomes difficult, as the clients need to be adjusted as well. However, Spring Data REST can hide certain data elements and can be configured flexibly, so the tight coupling between the internal model and the interface can be loosened if necessary.

2. http://projects.spring.io/spring-data-rest/

Spring Boot

Spring Boot3 simplifies the use of Spring further. Spring Boot makes the generation of a Spring system very easy: Spring Boot starters are predefined packages that contain everything necessary for a certain type of application. Spring Boot can generate WAR files, which can be installed on a Java application or web server. In addition, it is possible to run the application without an application or web server. In that case the result of the build is a JAR file, which can be run with a Java Runtime Environment (JRE). The JAR file contains everything for running the application, including the necessary code to deal with HTTP requests. This approach is far less demanding and simpler than the use of an application server (https://jaxenter.com/java-application-servers-dead-112186.html).

3. http://projects.spring.io/spring-boot/

A simple example for a Spring Boot application is shown in Listing 13.1. The main method hands control over to Spring Boot. The class is passed in as a parameter so that the application can be called. The annotation @SpringBootApplication makes sure that Spring Boot generates a suitable environment. For example, a web server is started, and an environment for a Spring web application is generated, as the application is a web application. Because of @RestController the Spring Framework instantiates the class and calls methods for the processing of REST requests. @RequestMapping shows which method is supposed to handle which request. Upon a request to the URL “/” the method hello() is called, which returns the string “hello” in the HTTP body. In a @RequestMapping annotation, URL templates such as “/customer/{id}” can be used. A URL like “/customer/42” is then cut into separate parts, and the 42 is bound to a parameter annotated with @PathVariable (see the sketch after Listing 13.1). As its only dependency the application uses spring-boot-starter-web, which pulls along all libraries necessary for the application, for instance, the web server, the Spring Framework, and additional dependent classes. Section 13.3 will discuss this topic in more detail.

Listing 13.1 A simple Spring Boot REST Service

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@SpringBootApplication
public class ControllerAndMain {

  @RequestMapping("/")
  public String hello() {
    return "hello";
  }

  public static void main(String[] args) {
    SpringApplication.run(ControllerAndMain.class, args);
  }

}
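The URL-template variant mentioned above is not part of Listing 13.1; a minimal sketch of what such a method can look like:

@RequestMapping("/customer/{id}")
public String customer(@PathVariable("id") long id) {
  // For a request to "/customer/42", Spring binds 42 to the id parameter.
  return "customer " + id;
}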

Spring Cloud

Finally, the example application uses Spring Cloud4 to gain easy access to the Netflix Stack. Figure 13.2 shows an overview.

4. http://projects.spring.io/spring-cloud/

Image

Figure 13.2 Overview of Spring Cloud

Via the Spring Cloud Connectors, Spring Cloud offers access to the PaaS (platform as a service) environments Heroku and Cloud Foundry. Spring Cloud for Amazon Web Services offers an interface to services from the Amazon Cloud. This part of Spring Cloud is responsible for the name of the project but is not helpful for the implementation of microservices.

However, the other sub-projects of Spring Cloud provide a very good basis for the implementation of microservices:

• Spring Cloud Security supports the implementation of security mechanisms as typically required for microservices, among them single sign-on into a microservices environment. That way a user can use each of the microservices without having to log in anew every time. In addition, the user token is automatically transferred with all calls to other REST services to ensure that those calls also work with the correct user rights.

• Spring Cloud Config can be used to centralize and dynamically adjust the configuration of microservices. Section 11.4 already presented technologies that configure microservices during deployment. To be able to reproduce the state of a server at any time, a new server with a new microservice instance should be started in case of a configuration change instead of dynamically adjusting an existing server. If a server is dynamically adjusted, there is no guarantee that new servers are generated with the right configuration, as they are configured in a different way. Because of these disadvantages the example application refrains from using this technology.

• Spring Cloud Bus can send dynamic configuration changes for Spring Cloud Config. Moreover, the microservices can communicate via Spring Cloud Bus. The example application does not use this technology because Spring Cloud Config is not used and the microservices communicate via REST.

• Spring Cloud Sleuth enables distributed tracing with tools like Zipkin or HTrace. It can also use central log storage with ELK (see section 11.2).

• Spring Cloud Zookeeper supports Apache Zookeeper (see section 7.10). This technology can be used to coordinate and configure distributed services.

• Spring Cloud Consul facilitates Service Discovery using Consul (see section 7.11).

• Spring Cloud Cluster implements leader election and stateful patterns using technologies like Zookeeper or Consul. It can also use the NoSQL data store Redis or the Hazelcast cache.

• Spring Cloud for Cloud Foundry provides support for the Cloud Foundry PaaS. For example, single sign-on (SSO) and OAuth2-protected resources are supported, as well as creating managed services for the Cloud Foundry service broker.

• Spring Cloud Connectors support access to services provided by PaaS environments like Heroku or Cloud Foundry.

• Spring Cloud Data Flow helps with the implementation of applications and microservices for Big Data analysis.

• Spring Cloud Task provides features for short-lived microservices.

• Finally, Spring Cloud Stream supports messaging using Redis, Rabbit, or Kafka.

Spring Cloud Netflix

Spring Cloud Netflix offers simple access to the Netflix Stack, which has been designed especially for the implementation of microservices. The following technologies are part of this stack:

• Zuul can implement the routing of requests to different services.

• Ribbon serves as a load balancer.

• Hystrix assists with implementing resilience in microservices.

• Turbine can consolidate monitoring data from different Hystrix servers.

• Feign is an option for an easier implementation of REST clients. It is not limited to microservices and is not used in the example application.

• Eureka can be used for Service Discovery.

These technologies are the ones that influence the implementation of the example application most.


Try and Experiment

For an introduction into Spring it is worthwhile to check out the Spring Guides at https://spring.io/guides/. They show in detail how Spring can be used to implement REST services or to realize messaging solutions via JMS. An introduction into Spring Boot can be found at https://spring.io/guides/gs/spring-boot/. Working your way through these guides provides you with the necessary know-how for understanding the additional examples in this chapter.


13.3 Build

The example project is built with the tool Maven.5 The installation of the tool is described at https://maven.apache.org/download.cgi. The command mvn package in the directory microservice/microservice-demo can be used to download all dependent libraries from the Internet and to compile the application.

5. http://maven.apache.org/

The configuration of the projects for Maven is saved in files named pom.xml. The example project has a Parent-POM in the directory microservice-demo. It contains the universal settings for all modules and in addition a list of the example project modules. Each microservice is such a module, and some infrastructure servers are modules as well. The individual modules have their own pom.xml, which contains the module name among other information. In addition, they contain the dependencies, i.e., the Java libraries they use.

Listing 13.2 Part of pom.xml Including Dependencies

...
<dependencies>

  <dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-eureka</artifactId>
  </dependency>

  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
  </dependency>
...

Listing 13.2 shows a part of a pom.xml, which lists the dependencies of the module. Depending on the Spring Cloud features the project uses, additional entries have to be added in this part of the pom.xml, usually with the groupId org.springframework.cloud.

The build process results in one JAR file per microservice, which contains the compiled code, the configuration, and all necessary libraries. Java can directly start such JAR files. Although the microservices can be accessed via HTTP, they do not have to be deployed on an application or web server. This part of the infrastructure is also contained in the JAR file.

As the projects are built with Maven, they can be imported into all usual Java IDEs (integrated development environment) for further development. IDEs simplify code changes tremendously.


Try and Experiment

• Download and compile the example:

Download the example provided at https://github.com/ewolff/microservice. Install Maven; see https://maven.apache.org/download.cgi. In the subdirectory microservice-demo execute the command mvn package. This will build the complete project.

• Create a continuous integration server for the project:

https://github.com/ewolff/user-registration-V2 is an example project for continuous delivery. Its subdirectory ci-setup contains a setup for a continuous integration server (Jenkins) with static code analysis (SonarQube) and Artifactory for the handling of binary artifacts. Integrate the microservices project into this infrastructure so that a new build is triggered upon each change.

Section 13.5 will discuss Vagrant in more detail. This tool is used for the continuous integration servers. It simplifies the generation of test environments greatly.


13.4 Deployment Using Docker

Deploying microservices is very easy:

• Java has to be installed on the server.

• The JAR file, which resulted from the build, has to be copied to the server.

• A separate configuration file application.properties can be created for further settings. It is automatically read by Spring Boot. An application.properties containing default values is included in the JAR file.

• Finally, a Java process has to start the application out of the JAR file.

Each microservice starts within its own Docker container. As discussed in section 11.7, Docker uses Linux containers. In this manner the microservice cannot interfere with processes in other Docker containers and has a completely independent file system. The Docker image is the basis for this file system. However, all Docker containers share the Linux kernel. This saves resources. In comparison to an operating system process a Docker container has virtually no additional overhead.

Listing 13.3 Dockerfile for a Microservice Used in the Example Application

FROM java
CMD /usr/bin/java -Xmx400m -Xms400m \
    -jar /microservice-demo/microservice-demo-catalog/target/microservice-demo-catalog-0.0.1-SNAPSHOT.jar
EXPOSE 8080

A file called Dockerfile defines the composition of a Docker container. Listing 13.3 shows a Dockerfile for a microservice used in the example application:

• FROM determines the base image used by the Docker container. A Dockerfile for the image java is contained in the example project; it generates a minimal Docker image with only a JVM installed.

• CMD defines the command executed at the start of the Docker container. In this example it is a simple command line that starts the Java application from the JAR file generated by the build.

• Docker containers are able to communicate with the outside via network ports. EXPOSE determines which ports are accessible from the outside. In the example the application receives HTTP requests via port 8080.

13.5 Vagrant

Docker runs exclusively under Linux because it uses Linux containers. However, there are solutions for other operating systems that start a virtual Linux machine and thus enable the use of Docker. This is largely transparent, so the usage is practically identical to the usage under Linux. In addition, all Docker containers need to be built and started.

To make installing and handling Docker as easy as possible, the example application uses Vagrant. Figure 13.3 shows how Vagrant works:

Image

Figure 13.3 How Vagrant Works

To configure Vagrant a single file is necessary, the Vagrantfile. Listing 13.4 shows the Vagrantfile of the example application:

Listing 13.4 Vagrantfile from the Example Application

Vagrant.configure("2") do |config|
  config.vm.box = " ubuntu/trusty64"
  config.vm.synced_folder ."./microservice-demo",
    "/microservice-demo", create: true
   config.vm.network "forwarded_port",
     guest: 8080, host: 18080
   config.vm.network "forwarded_port",
     guest: 8761, host: 18761
   config.vm.network "forwarded_port",
         guest: 8989, host: 18989

config.vm.provision "docker" do |d|
  d.build_image "--tag=java /vagrant/java"
  d.build_image "--tag=eureka /vagrant/eureka"
  d.build_image
        "--tag=customer-app /vagrant/customer-app"
  d.build_image "
        "--tag=catalog-app /vagrant/catalog-app"
  d.build_image "--tag=order-app /vagrant/order-app"
  d.build_image "--tag=turbine /vagrant/turbine"
  d.build_image "--tag=zuul /vagrant/zuul"
end
config.vm.provision "docker", run: "always" do |d|
  d.run "eureka",
    args: "-p 8761:8761"+
         "-v /microservice-demo:/microservice-demo"
  d.run "customer-app",
    args: "-v /microservice-demo:/microservice-demo"+
         "--link eureka:eureka"
  d.run "catalog-app",
    args: "-v /microservice-demo:/microservice-demo"+
         "--link eureka:eureka"
  d.run "order-app",
    args: "-v /microservice-demo:/microservice-demo"+
         "--link eureka:eureka"
  d.run "zuul",
    args: "-v /microservice-demo:/microservice-demo"+
         " -p 8080:8080 --link eureka:eureka"
  d.run "turbine",
   args: "-v /microservice-demo:/microservice-demo"+
         " --link eureka:eureka"
  end

end

• config.vm.box selects a base image, in this case an Ubuntu 14.04 Linux installation (Trusty Tahr).

• config.vm.synced_folder mounts the directory containing the results of the Maven build into the virtual machine. In this manner the Docker containers can directly make use of the build results.

• The ports of the virtual machine can be linked to the ports of the computer running the virtual machine. The config.vm.network settings can be used for that. In this manner applications in the Vagrant virtual machine become accessible as if they were running directly on the computer.

• config.vm.provision starts the part of the configuration that deals with the software provisioning within the virtual machine. Docker serves as the provisioning tool and is automatically installed within the virtual machine.

• d.build_image generates the Docker images using Dockerfiles. First the base image java is created. Images for the three microservices customer-app, catalog-app, and order-app follow. The images for the Netflix technology servers belong to the infrastructure: Eureka for Service Discovery, Turbine for monitoring, and Zuul for routing client requests.

• Vagrant starts the individual images using d.run. This step is performed not only when provisioning the virtual machine but also whenever the system is started anew (run: "always"). The option -v mounts the directory /microservice-demo into each Docker container so that the containers can directly execute the compiled code. -p links a port of a Docker container to a port of the virtual machine. The --link option makes the Eureka Docker container accessible under the host name eureka from within the other Docker containers.

In the Vagrant setup the JAR files containing the application code are not contained in the Docker images. The directory /microservice-demo does not belong to the Docker containers; it resides on the host running the Docker containers, that is, the Vagrant VM. It would also be possible to copy these files into the Docker images. Afterwards the resulting images could be copied to a repository server and downloaded from there. Then each Docker container would contain all files necessary to run its microservice, and a deployment in production would only need to start the Docker images on a production server. This approach is used in the Docker Machine setup (see section 13.6).

Networking in the Example Application

Figure 13.4 shows how the individual microservices of the example application communicate via the network. All Docker containers are accessible in the network via IP addresses from the 172.17.0.0/16 range. Docker generates such a network automatically and connects all Docker containers to it. Within the network all ports that are defined in the Dockerfiles using EXPOSE are accessible. The Vagrant virtual machine is also connected to this network. Via the Docker links (see Listing 13.4) all Docker containers know the Eureka container and can access it under the host name eureka. The other microservices have to be looked up via Eureka. All further communication takes place via IP addresses.

Image

Figure 13.4 Network and Ports of the Example Application

In addition, the -p option in the d.run entries for the Docker containers in Listing 13.4 connects their ports to the Vagrant virtual machine, so these containers can be accessed via the respective ports of the Vagrant virtual machine. To reach them also from the computer running the Vagrant virtual machine, there is a port mapping that links the ports to the local computer; this is accomplished via the config.vm.network entries in the Vagrantfile. Port 8080 of the Docker container “zuul” can, for instance, be accessed via port 8080 of the Vagrant virtual machine, which in turn can be reached from the local computer via port 18080. So the URL http://localhost:18080/ accesses this Docker container.


Try and Experiment

Run the Example Application

The example application does not need much effort to make it run. A running example application lays the foundation for the experiments described later in this chapter.

One remark: The Vagrantfile defines how much RAM and how many CPUs the virtual machine gets. The settings v.memory and v.cpus, which are not shown in the listing, deal with this. If your computer has plenty of RAM or many CPUs, these values should be increased in order to speed up the application.

The installation of Vagrant is described at https://www.vagrantup.com/docs/installation/index.html. Vagrant needs a virtualization solution like VirtualBox. The installation of VirtualBox is explained at https://www.virtualbox.org/wiki/Downloads. Both tools are free.

The example can only be started once the code has been compiled. Instructions on how to compile the code can be found in the experiment described in section 13.3. Afterwards you can change into the directory docker-vagrant and start the example using the command vagrant up.

To interact with the different Docker containers, you have to log into the virtual machine via the command vagrant ssh. This command has to be executed within the subdirectory docker-vagrant. For this to work, an ssh client has to be installed on the computer. On Linux and Mac OS X such a client is usually already present. On Windows, installing git brings an ssh client along, as described at http://git-scm.com/download/win. Afterwards vagrant ssh should work.

Investigate Docker Containers

Docker provides several useful commands:

• docker ps provides an overview of the running Docker containers.

• docker logs "name of Docker container" shows the logs.

• docker logs -f "name of Docker container" continuously streams the up-to-date log output of the container.

• docker kill "name of Docker container" terminates a Docker container.

• docker rm "name of Docker container" deletes all data of a container. For that, the container first needs to be stopped.

After starting the application, the log files of the individual Docker containers can be looked at.

Update Docker Containers

A Docker container can be terminated (docker kill), and the data of the container can be deleted (docker rm). These commands have to be executed inside the Vagrant virtual machine. vagrant provision starts the missing Docker containers again; this command has to be executed on the host running Vagrant. If you want to change a Docker container, simply delete it, compile the code again, and generate the system anew using vagrant provision. Additional Vagrant commands include the following:

• vagrant halt terminates the virtual machine.

• vagrant up starts it again.

• vagrant destroy destroys the virtual machine and all saved data.

Store Data on Disk

Right now the Docker containers do not save their data permanently, so it is lost upon restarting. The HSQLDB database used can also save the data to a file. To achieve that, a suitable HSQLDB URL has to be used; see http://hsqldb.org/doc/guide/dbproperties-chapt.html#dpc_connection_url. Spring Boot can read the JDBC URL from the application.properties file; see http://docs.spring.io/spring-boot/docs/current/reference/html/boot-features-sql.html#boot-features-connect-to-production-database. Then the container can be restarted without data loss. But what happens if the Docker container has to be generated anew? Docker can also store data outside of the container itself; compare https://docs.docker.com/userguide/dockervolumes/. These options provide a good basis for further experiments. A database other than HSQLDB, such as MySQL, can also be used. For that purpose another Docker container has to be installed that contains the database. In addition to adjusting the JDBC URL, a JDBC driver has to be added to the project.
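As a starting point, a single entry in application.properties could switch HSQLDB to a file-backed URL; the file path here is just an assumption:

spring.datasource.url=jdbc:hsqldb:file:/microservice-demo/customerdb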

How is the Java Docker Image Built?

The Dockerfile for the Java base image is more complex than the ones discussed here. https://docs.docker.com/reference/builder/ demonstrates which commands are available in Dockerfiles. Try to understand the structure of the Dockerfiles.


13.6 Docker Machine

Vagrant serves to install environments on a developer laptop. In addition to Docker, Vagrant can use simple shell scripts for deployment. However, for production environments this solution is unsuitable. Docker Machine6 is specialized in Docker. It supports many more virtualization solutions as well as some cloud providers.

6. https://docs.docker.com/machine/

Figure 13.5 demonstrates how Docker Machine builds a Docker environment: First, using a virtualization solution like VirtualBox, a virtual machine is installed. This virtual machine is based on boot2docker, a very lightweight version of Linux designed specifically as a running environment for Docker containers. On it Docker Machine installs a current version of Docker. A command like docker-machine create --driver virtualbox dev generates, for instance, a new environment with the name dev in a VirtualBox virtual machine.

Image

Figure 13.5 Docker Machine

The Docker command line tool can now communicate with this environment. The tool uses a REST interface to communicate with the Docker server; accordingly, it just has to be configured so that it communicates with the right server. On Linux or Mac OS X, the command eval "$(docker-machine env dev)" is sufficient to configure the Docker tool appropriately. For Windows PowerShell the command docker-machine.exe env --shell powershell dev must be used, and for the Windows cmd shell docker-machine.exe env --shell cmd dev.

Docker Machine thus renders it very easy to install one or several Docker environments. All the environments can be handled by Docker Machine and accessed by the Docker command line tool. As Docker Machine also supports technologies like Amazon Cloud or VMware vSphere, it can be used to generate production environments.


Try and Experiment

The example application can also run in an environment created by Docker Machine.

The installation of Docker Machine is described at https://docs.docker.com/machine/#installation. Docker Machine requires a virtualization solution like VirtualBox. How to install VirtualBox can be found at https://www.virtualbox.org/wiki/Downloads. Using docker-machine create --virtualbox-memory "4096" --driver virtualbox dev, a Docker environment called dev can now be created in VirtualBox. Without this setting the virtual machine would get only 1 GB of RAM, which is not sufficient for a larger number of Java Virtual Machines.

docker-machine without parameters displays a help text, and docker-machine create shows the options for the generation of a new environment. https://docs.docker.com/machine/get-started-cloud/ demonstrates how Docker Machine can be used in a Cloud. This means that the example application can also easily be started in a cloud environment.

At the end of your experiments, docker-machine rm dev deletes the environment.


13.7 Docker Compose

A microservices-based system typically comprises several Docker containers. These have to be generated together and put into production simultaneously.

This can be achieved with Docker Compose.7 It enables the definition of Docker containers, each of which houses one service. YAML serves as the configuration format.

7. http://docs.docker.com/compose/

Listing 13.5 Docker Compose Configuration for the Example Application

version: '2'
services:
  eureka:
    build: ../microservice-demo/microservice-demo-eureka-server
    ports:
      - "8761:8761"
  customer:
    build: ../microservice-demo/microservice-demo-customer
    links:
      - eureka
  catalog:
    build: ../microservice-demo/microservice-demo-catalog
    links:
      - eureka
  order:
    build: ../microservice-demo/microservice-demo-order
    links:
      - eureka
  zuul:
    build: ../microservice-demo/microservice-demo-zuul-server
    links:
      - eureka
    ports:
      - "8080:8080"
  turbine:
    build: ../microservice-demo/microservice-demo-turbine-server
    links:
      - eureka
    ports:
      - "8989:8989"

Listing 13.5 shows the configuration of the example application. It consists of the different services. build references the directory containing the Dockerfile that is used to generate the image for the service. links defines which other Docker containers the respective container can access: all containers can access the Eureka container under the name eureka. In contrast to the Vagrant configuration there is no separate Java base image containing only a Java installation, because Docker Compose supports only containers that really offer a service. Therefore, the Java base image has to be downloaded from the Internet. Besides, in the Docker Compose setup the JAR files are copied into the Docker images, so the images contain everything needed to start the microservices.
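A Dockerfile for the Docker Compose setup might therefore look like the following sketch, analogous to Listing 13.3 but with the JAR file copied into the image; the exact file names and paths are assumptions:

FROM java
# Copy the build result into the image so that the image is self-contained.
COPY target/microservice-demo-customer-0.0.1-SNAPSHOT.jar /app.jar
CMD /usr/bin/java -Xmx400m -Xms400m -jar /app.jar
EXPOSE 8080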

The resulting system is very similar to the Vagrant system (see Figure 13.6). The Docker containers are linked via their own private network. From the outside, only Zuul can be accessed for the processing of requests and Eureka for the dashboard. They are running directly on a host that then can be accessed from the outside.

Image

Figure 13.6 Network and Ports of the Example Application

Using docker-compose build, the system is created based on the Docker Compose configuration; this generates the suitable images. docker-compose up then starts the system. Docker Compose uses the same settings as the Docker command line tool, so it can also work together with Docker Machine. It is thus transparent whether the system is generated in a local virtual machine or somewhere in the cloud.


Try and Experiment

Run the Example with Docker Compose

The example application possesses a suitable Docker Compose configuration. Once an environment has been generated with Docker Machine, Docker Compose can be used to create the Docker containers. README.md in the directory docker describes the necessary procedure.

Scale the Application

Have a look at the docker-compose scale command, which can scale the environment. Services can also be restarted, their logs can be analyzed, and finally they can be stopped. Once you have started the application, you can test these functionalities.

Cluster Environments for Docker

Mesos (http://mesos.apache.org/) together with Mesosphere (http://mesosphere.com/), Kubernetes (http://kubernetes.io/), and CoreOS (http://coreos.com/) offer options similar to Docker Compose and Docker Machine. However, they are meant for servers and server clusters. The Docker Compose and Docker Machine configurations can provide a good basis for running the application on these platforms.


13.8 Service Discovery

Section 7.11 introduced the general principles of Service Discovery. The example application uses Eureka8 for Service Discovery.

8. https://github.com/Netflix/Eureka

Eureka is a REST-based server that enables services to register themselves so that other services can request their location in the network. In essence, each service can register a URL under its name. Other services can find this URL by the name of the service and then use it to send REST messages to the registered service.
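With Spring Cloud such a lookup can be done via the DiscoveryClient abstraction. The following sketch is not taken from the example application; the surrounding class and the field names are assumptions, while the service name CATALOG corresponds to the registration described later in this section:

// DiscoveryClient is Spring Cloud's abstraction over Eureka.
@Autowired
private DiscoveryClient discoveryClient;

public String catalogUrl() {
  // All instances currently registered under the name CATALOG
  List<ServiceInstance> instances = discoveryClient.getInstances("CATALOG");
  ServiceInstance instance = instances.get(0);
  return "http://" + instance.getHost() + ":"
      + instance.getPort() + "/catalog/";
}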

Eureka supports replication onto several servers and caches on the client. This makes the system fail-safe against the failure of individual Eureka servers and enables requests to be answered rapidly. Changes to the data have to be replicated to all servers; accordingly, it can take some time until they are really updated everywhere. During this time the data is inconsistent: each server has a different version of the data.

In addition, Eureka supports Amazon Web Services because Netflix uses it in this environment. Eureka can, for instance, quite easily be combined with Amazon’s scaling.

Eureka monitors the registered services and removes them from the server list if they cannot be reached anymore by the Eureka server.

Eureka is the basis for many other services of the Netflix Stack and for Spring Cloud. Through a uniform Service Discovery, other aspects such as routing can easily be implemented.

Eureka Client

For a Spring Boot application to be able to register with a Eureka server and to find other microservices, the application has to be annotated with @EnableDiscoveryClient or @EnableEurekaClient. In addition, a dependency on spring-cloud-starter-eureka has to be included in the file pom.xml. The application then registers automatically with the Eureka server and can access other microservices. The example application accesses other microservices via a load balancer; this is described in detail in section 13.11.
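A minimal main class for such a microservice can look like the following sketch; the class name is an assumption:

@SpringBootApplication
@EnableDiscoveryClient
public class CustomerApplication {
  public static void main(String[] args) {
    SpringApplication.run(CustomerApplication.class, args);
  }
}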

Configuration

Configuring the application is necessary, for instance, to define the Eureka server to be used. The file application.properties (see Listing 13.6) is used for that. Spring Boot reads it automatically in order to configure the application. This mechanism can also be used to configure one’s own code. In the example application the values configure the Eureka client:

• The first line defines the Eureka server. The example application uses the Docker link, which provides the Eureka server under the host name “eureka.”

• leaseRenewalIntervalInSeconds determines how often data is updated between client and server. As the data has to be held locally in a cache on each client, a new service first needs to create its own cache and replicate it onto the server. Afterwards the data is replicated onto the clients. In a test environment it is important to track system changes rapidly, so the example application uses five seconds instead of the default value of 30 seconds. In production with many clients this value should be increased; otherwise updating information that remains essentially unchanged will use a lot of resources.

• spring.application.name serves as the name for the service during the registration at Eureka. During registration the name is converted into capital letters, so this service would be known to Eureka under the name “CATALOG.”

• There can be several instances of each service to achieve fail-over and load balancing. The instanceId has to be unique for each instance of a service. Because of that it contains a random number, which ensures uniqueness.

• preferIpAddress makes sure that microservices register with their IP addresses and not with their host names. Unfortunately, in a Docker environment host names are not easily resolvable by other hosts. This problem is circumvented by the use of IP addresses.

Listing 13.6 Part of application.properties with Eureka Configuration

eureka.client.serviceUrl.defaultZone=http://eureka:8761/eureka/
eureka.instance.leaseRenewalIntervalInSeconds=5
spring.application.name=catalog
eureka.instance.metadataMap.instanceId=catalog:${random.value}
eureka.instance.preferIpAddress=true

Eureka Server

The Eureka server (Listing 13.7) is a simple Spring Boot application, which turns into a Eureka server via the @EnableEurekaServer annotation. In addition, the server needs a dependency on spring-cloud-starter-eureka-server.

Listing 13.7 Eureka Server

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

@EnableEurekaServer
@EnableAutoConfiguration
public class EurekaApplication {
  public static void main(String[] args) {
    SpringApplication.run(EurekaApplication.class, args);
  }
}

The Eureka server offers a dashboard that shows the registered services. In the example application it can be found at http://localhost:18761/ (Vagrant) or on the Docker host under port 8761 (Docker Compose). Figure 13.7 shows a screenshot of the Eureka dashboard for the example application. The three microservices and the Zuul proxy, which is discussed in the next section, are present on the dashboard.

Image

Figure 13.7 Eureka Dashboard

13.9 Communication

Chapter 8, “Integration and Communication,” explains how microservices can communicate with each other and be integrated. The example application uses REST for the internal communication. The REST endpoints can be contacted from outside; however, the web interface the system offers is of far greater importance. The REST implementation uses HATEOAS: the list containing all orders, for instance, contains links to the individual orders. This is automatically implemented by Spring Data REST. However, there are no links to the customer and the items of an order.

Using HATEOAS can go further: the JSON can contain a link to an HTML document for each order—and vice versa. In this way a JSON-REST-based service can generate links to HTML pages to display or modify data. Such HTML code can, for instance, present an item in an order. As the “Catalog” team provides the HTML code for the item, the catalog team itself can introduce changes to the presentation—even if the items are displayed in another module.

REST is also of use here: HTML and JSON are really only representations of the same resource that can be addressed by a URL. Via Content Negotiation the right resource representation as JSON or HTML can be selected (see section 8.2).
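A hedged sketch of what content negotiation for an order resource can look like in Spring MVC; this controller is not taken from the example application, and the view name and repository are assumptions:

// Returns JSON when the Accept header asks for application/json.
@RequestMapping(value = "/order/{id}",
    produces = MediaType.APPLICATION_JSON_VALUE)
@ResponseBody
public Order orderAsJson(@PathVariable("id") long id) {
  return orderRepository.findOne(id);
}

// Returns HTML for the same resource when text/html is requested.
@RequestMapping(value = "/order/{id}",
    produces = MediaType.TEXT_HTML_VALUE)
public ModelAndView orderAsHtml(@PathVariable("id") long id) {
  return new ModelAndView("order", "order", orderRepository.findOne(id));
}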

Zuul: Routing

The Zuul9 proxy transfers incoming requests to the respective microservices. The Zuul proxy is a separate Java process. To the outside only one URL is visible; internally, the calls are processed by different microservices. This enables the system to change the structure of the microservices internally while still offering a single URL to the outside. In addition, Zuul can provide web resources; in the example in Figure 13.8, Zuul provides the first HTML page viewed by the user.

9. https://github.com/Netflix/zuul

Image

Figure 13.8 Zuul Proxy in the Example Application

Zuul needs to know which requests to transfer to which microservice. Without additional configuration, Zuul forwards a request to a URL starting with “/customer” to the microservice registered in Eureka under the name CUSTOMER. This renders the internal microservice names visible to the outside; however, the routing can also be configured differently. Moreover, Zuul filters can change the requests in order to implement general aspects of the system. There is, for instance, an integration with Spring Cloud Security to pass security tokens on to the microservices. Such filters can also be used to direct certain requests to specific servers; this makes it possible, for instance, to transfer requests to servers that have additional analysis options for investigating error situations. In addition, a part of a microservice’s functionality can be replaced by another microservice.

Implementing the Zuul proxy server with Spring Cloud is very easy and analogous to the Eureka server presented in Listing 13.7. Instead of @EnableEurekaServer, the annotation @EnableZuulProxy activates the Zuul proxy. As an additional dependency, spring-cloud-starter-zuul has to be added to the application, for instance, in the Maven build configuration; it integrates the remaining Zuul dependencies into the application.
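A minimal Zuul proxy main class can therefore look like the following sketch; the class name is an assumption:

@SpringBootApplication
@EnableZuulProxy
public class ZuulApplication {
  public static void main(String[] args) {
    SpringApplication.run(ZuulApplication.class, args);
  }
}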

A Zuul server represents an alternative to a Zuul proxy. It does not have routing built in, but uses filters instead. A Zuul server is activated by @EnableZuulServer.


Try and Experiment

Add Links to Customer and Items

Extend the application so that an order also contains links to the customer and to the items, and thus implements HATEOAS better. Supplement the JSON documents for customers, items, and orders with links to the forms.

Use the “Catalog” Service to Show Items in Orders

Change the order presentation so that HTML from the “Catalog” service is used for items. To do so, you have to insert suitable JavaScript code into the order component, which loads HTML code from the “Catalog.”

Implement Zuul Filters

Implement your own Zuul filter (see https://github.com/Netflix/zuul/wiki/Writing-Filters). The filter can, for instance, simply pass the requests through. Introduce an additional route to an external URL; for instance, /google could redirect to http://google.com/. Compare the Spring Cloud reference documentation.10

10. http://projects.spring.io/spring-cloud/docs/1.0.3/spring-cloud.html

Authentication and Authorization

Add authentication and authorization with Spring Cloud Security. Compare http://cloud.spring.io/spring-cloud-security/.


13.10 Resilience

Resilience means that microservices can deal with the failure of other microservices. Even if a called microservice is not available, they will still work. Section 9.5 presented this topic.

The example application implements resilience with Hystrix.11 This library protects calls so that no problems arise when a called system fails. A call protected by Hystrix is executed in a thread different from the calling thread. This thread is taken from a distinct thread pool, which makes it comparatively easy to implement a timeout for the call.

11. https://github.com/Netflix/Hystrix/

Circuit Breaker

In addition, Hystrix implements a Circuit Breaker: if a call causes an error, the Circuit Breaker opens after a certain number of errors. In that case subsequent calls are no longer directed to the called system but generate an error immediately. After a sleep window the Circuit Breaker closes so that calls are directed to the actual system again. The exact behavior can be configured.12 The configuration can define the error threshold percentage, that is, the percentage of calls within a time window that have to cause an error for the Circuit Breaker to open, as well as the sleep window during which the Circuit Breaker stays open and does not send calls to the system.

12. https://github.com/Netflix/Hystrix/wiki/Configuration

Hystrix with Annotations

Spring Cloud uses Java annotations from the project hystrix-javanica for the configuration of Hystrix. This project is part of hystrix-contrib.13 The annotated methods are protected according to the setting in the annotation. Without this approach Hystrix commands would have to be written, which is a lot more effort than just adding some annotations to a Java method.

13. https://github.com/Netflix/Hystrix/tree/master/hystrix-contrib

To be able to use Hystrix within a Spring Cloud application, the application has to be annotated with @EnableCircuitBreaker or @EnableHystrix. Moreover, the project needs a dependency on spring-cloud-starter-hystrix.

Listing 13.8 shows a section from the class CatalogClient of the “Order” microservice from the example application. The method findAll() is annotated with @HystrixCommand. This activates the processing in a different thread and the Circuit Breaker. The Circuit Breaker can be configured; in the example the number of calls that have to cause an error in order to open the Circuit Breaker is set to 2. In addition, the example defines a fallbackMethod. Hystrix calls this method if the original method generates an error. The logic in findAll() saves the last result in a cache, which the fallbackMethod returns without calling the real system. In this way a reply can still be returned when the called microservice fails; however, this reply might no longer be up to date.

Listing 13.8 Example for a Method Protected by Hystrix

@HystrixCommand(
    fallbackMethod = "getItemsCache",
    commandProperties = {
        @HystrixProperty(
            name = "circuitBreaker.requestVolumeThreshold",
            value = "2") })
public Collection<Item> findAll() {
  this.itemsCache = ...
  ...
  return pagedResources.getContent();
}

private Collection<Item> getItemsCache() {
  return itemsCache;
}

Monitoring with the Hystrix Dashboard

Whether a Circuit Breaker is currently open or closed gives an indication of how well a system is running. Hystrix offers data to monitor this. A Hystrix system provides such data as a stream of JSON documents via HTTP. The Hystrix Dashboard can visualize the data in a web interface. The dashboard presents all Circuit Breakers along with the number of requests and their state (open/closed) (see Figure 13.9). In addition, it displays the state of the thread pools.

Image

Figure 13.9 Example for a Hystrix Dashboard

A Spring Boot application needs the annotation @EnableHystrixDashboard and a dependency on spring-cloud-starter-hystrix-dashboard in order to display a Hystrix Dashboard. In this way any Spring Boot application can additionally show a Hystrix Dashboard, or the dashboard can be implemented as a standalone application.

Turbine

In a complex microservices environment it is not useful for each instance of a microservice to visualize the state of its own Hystrix Circuit Breakers. The state of all Circuit Breakers in the entire system should be summarized on a single dashboard. To visualize the data of the different Hystrix systems on one dashboard, there is the Turbine project. Figure 13.10 illustrates the approach Turbine takes: the different streams of the Hystrix-enabled microservices are provided at URLs like http://<host:port>/hystrix.stream. The Turbine server requests them and provides them in a consolidated manner at the URL http://<host:port>/turbine.stream. This URL can be used by the dashboard to display the information of all Circuit Breakers of the different microservice instances.

Image

Figure 13.10 Turbine Consolidates Hystrix Monitoring Data

Turbine runs in a separate process. With Spring Boot the Turbine server is a simple application, which is annotated with @EnableTurbine and @EnableEurekaClient. In the example application it has the additional annotation @EnableHystrixDashboard so that it also displays the Hystrix Dashboard. It also needs a dependency on spring-cloud-starter-turbine.

Which data is consolidated by the Turbine server is determined by the configuration of the application. Listing 13.9 shows the configuration of the Turbine server of the example project. It serves as a configuration for a Spring Boot application just like an application.properties file but is written in YAML. The configuration sets the value ORDER for turbine.aggregator.clusterConfig. This is the application name in Eureka. turbine.appConfig is the name of the data stream in the Turbine server. In the Hystrix Dashboard a URL like http://172.17.0.10:8989/turbine.stream?cluster=ORDER has to be used to visualize the data stream. Part of the URL is the IP address of the Turbine server, which can be found in the Eureka Dashboard. The dashboard accesses the Turbine server via the network between the Docker containers.

Listing 13.9 Configuration application.yml

turbine:
  aggregator:
    clusterConfig: ORDER
  appConfig: order


Try and Experiment

Terminate Microservices

Using the example application, generate a number of orders. Find the name of the “Catalog” Docker container using docker ps. Stop the “Catalog” Docker container with docker kill. The use of this microservice is protected by Hystrix.

What happens? What happens if the “Customer” Docker container is terminated as well? The use of this microservice is not protected by Hystrix.

Add Hystrix to “Customer” Microservice

Protect the use of the “Customer” Docker container with Hystrix as well. To do so, change the class CustomerClient in the “Order” project. CatalogClient can serve as a template.

Change Hystrix Configuration

Change the configuration of Hystrix for the “Catalog” microservice. There are several configuration options.14 Listing 13.8 (CatalogClient from the “Order” project) shows the use of the Hystrix annotations. Different time intervals for the opening and closing of the Circuit Breakers are, for instance, a possible change.

14. https://github.com/Netflix/Hystrix/wiki/Configuration


13.11 Load Balancing

For Load Balancing the example application uses Ribbon.15 Many load balancers are proxy based: the clients send all calls to a load balancer, which runs as a distinct server and forwards each request to a web server, often depending on the current load of the web servers.

15. https://github.com/Netflix/ribbon/wiki

Ribbon implements a different model called client-side load balancing: the client has all the information needed to communicate with the right server. It calls the server directly and distributes the load across the different servers by itself. There is no bottleneck in this architecture, as there is no central server that all calls would have to pass through. In conjunction with data replication by Eureka, Ribbon is quite resilient: as long as the client runs, it can send requests. With a proxy-based load balancer, a failure of the proxy would stop all calls to the servers.

Dynamic scaling is very simple within this system: a new instance is started and registers itself with Eureka, and the Ribbon clients then redirect load to the new instance.

As already discussed in the section dealing with Eureka (section 13.8), the data can be inconsistent across the different servers. Because of outdated data, servers might still be contacted that should really have been excluded by the load balancing.

Ribbon with Spring Cloud

Spring Cloud simplifies the use of Ribbon. The application has to be annotated with @RibbonClient; at the same time a name for the application can be defined. In addition, the application needs a dependency on spring-cloud-starter-ribbon. An instance of a microservice can then be determined with code like that in Listing 13.10. For that purpose, the code uses the Eureka name of the microservice.

Listing 13.10 Determining a Server with Ribbon Load Balancing

// loadBalancer is an injected Spring Cloud LoadBalancerClient
ServiceInstance instance
    = loadBalancer.choose("CATALOG");
String url = "http://" + instance.getHost() + ":"
    + instance.getPort() + "/catalog/";

The use of Ribbon can also be largely transparent. To illustrate this, Listing 13.11 shows the use of RestTemplate with Ribbon. RestTemplate is a Spring class that can be used to call REST services. In the listing the RestTemplate is injected into the object because it is annotated with @Autowired. The call in callMicroservice() looks as if it contacts a server called “stores.” In reality this name is used to look up a server in Eureka, and the REST call is sent to that server. This is done via Ribbon, so the load is also distributed across the available servers.

Listing 13.11 Using Ribbon with RestTemplate

@RibbonClient(name = "ribbonApp")
... // Left out other Spring Cloud / Boot Annotations
public class RibbonApp {

  @Autowired
  private RestTemplate restTemplate;

  public void callMicroservice() {
    Store store = restTemplate
        .getForObject("http://stores/store/1", Store.class);
  }
}


Try and Experiment

Load Balance to an Additional Service Instance

The “Order” microservice distributes the load onto several instances of the “Customer” and “Catalog” microservices, if several instances exist. Without further measures, only a single instance of each is started. The “Order” microservice shows in the log which “Catalog” or “Customer” instance it contacts. Initiate an order and observe which services are contacted.

Afterwards, start an additional “Catalog” microservice. In Vagrant you can do that using the command docker run -v /microservice-demo:/microservice-demo --link eureka:eureka catalog-app. For Docker Compose, docker-compose scale catalog=2 should be enough. Verify that the container is running and observe the log output.

For reference: “Try and Experiment” in section 13.4 shows the main commands for using Docker. Section 13.7 shows how to use Docker Compose.

Create Data

Create a new dataset with a new item. Is the item always displayed in the selection of items? Hint: The database runs within the process of the microservice—that is, each microservice instance possesses its own database.


13.12 Integrating Other Technologies

Spring Cloud and the entire Netflix stack are based on Java. Thus, at first glance it seems impossible for other programming languages and platforms to use this infrastructure. However, there is a solution: the application can be supplied with a sidecar. The sidecar is written in Java and uses Java libraries to integrate into a Netflix-based infrastructure. It takes care, for instance, of registering with Eureka and of finding other microservices there. For this purpose, Netflix offers the Prana project.16 The Spring Cloud solution is explained in the documentation.17 The sidecar runs in a distinct process and serves as an interface between the microservice itself and the microservice infrastructure. In this manner other programming languages and platforms can be integrated easily into a Netflix or Spring Cloud environment. A minimal sketch of a Spring Cloud sidecar is shown below.

16. http://github.com/Netflix/Prana/

17. http://cloud.spring.io/spring-cloud-static/Brixton.SR5/#_polyglot_support_with_sidecar
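For illustration, here is a minimal sketch of the Spring Cloud variant, assuming a dependency on spring-cloud-netflix-sidecar; the class name is an assumption for the sketch.

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.sidecar.EnableSidecar;

// Sketch: a separate Spring Boot process that registers the non-Java
// microservice with Eureka and forwards health checks to it.
@SpringBootApplication
@EnableSidecar
public class SidecarApplication {

  public static void main(String[] args) {
    SpringApplication.run(SidecarApplication.class, args);
  }
}

In the configuration, sidecar.port points to the port of the non-Java microservice and sidecar.health-uri to its health check, for instance http://localhost:8000/health.json.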

13.13 Tests

The example application contains test applications for the developers of microservices. These do not need a microservice infrastructure or additional microservices—in contrast to the production system. This enables developers to run each microservice without a complex infrastructure.

The class OrderTestApp in the “Order” project contains such a test application. The applications have their own configuration file application-test.properties with specific settings in the directory src/test/resources. The settings prevent the applications from registering with the Service Discovery Eureka. In addition, they contain different URLs for the dependent microservices. This configuration is used automatically by the test application because it activates a Spring profile called “test.” All JUnit tests use these settings as well, so they can run without dependent services.
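To give an impression, such a configuration could look like the following sketch. eureka.client.enabled is a standard Spring Cloud property; the URL properties are assumptions about how the example names them, not the actual file.

# Sketch of src/test/resources/application-test.properties
# Do not register with Eureka during tests ...
eureka.client.enabled=false
# ... and call local stubs instead of the real microservices.
# The property names are illustrative assumptions.
catalog.url=http://localhost:8080/catalog
customer.url=http://localhost:8080/customer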

Stubs

The URLs for the dependent microservices in the test application and the JUnit tests point to stubs. These are simplified microservices that offer only part of the functionality. They run within the same Java process as the microservice under development or the JUnit tests. Therefore, only a single Java process has to be started for the development of a microservice, analogous to the usual way of developing with Java. The stubs can also be implemented differently, for instance using a different programming language or even a web server that returns static documents representing the test data (see section 10.6). Such approaches might be better suited for real-life applications.
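As an illustration, a stub can be as simple as the following sketch, which uses Spring Boot and Spring MVC. The path and the test data are assumptions and not the actual stub code of the example.

import java.util.HashMap;
import java.util.Map;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

// Sketch of a "Catalog" stub: a minimal application that serves
// fixed test data instead of the real catalog.
@SpringBootApplication
@RestController
public class CatalogStubApplication {

  // Returns the same static item for every id - enough for development.
  @RequestMapping("/catalog/{id}")
  public Map<String, Object> item(@PathVariable("id") long id) {
    Map<String, Object> item = new HashMap<>();
    item.put("name", "iPod");
    item.put("price", 42.0);
    return item;
  }

  public static void main(String[] args) {
    SpringApplication.run(CatalogStubApplication.class, args);
  }
}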

Stubs facilitate development. If every developer needed a complete environment with all microservices during development, this would require a tremendous amount of hardware resources and considerable effort to keep the environment continuously up to date. The stubs circumvent this problem because no dependent microservices are needed during development. Thanks to the stubs, the effort to start a microservice is hardly bigger than that for a regular Java application.

In a real project the teams can implement the stubs together with the real microservices. The “Customer” team, for instance, can implement a stub for the “Customer” microservice in addition to the real service; the other teams then use this stub for development. This ensures that the stub largely resembles the microservice and is updated when the original service changes. The stub can be maintained in a separate Maven project, which the other teams can use.

Consumer-Driven Contract Test

It has to be ensured that the stubs behave like the microservices they simulate. In addition, a microservice has to define its expectations regarding the interface of another microservice. Both are achieved by consumer-driven contract tests (see section 10.7). These are written by the team that uses the microservice; in the example this is the team responsible for the “Order” microservice. In the “Order” project the consumer-driven contract tests are found in the classes CatalogConsumerDrivenContractTest and CustomerConsumerDrivenContractTest. There they test the stubs of the “Customer” and “Catalog” microservices for correctness.
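Such a test could look roughly like the following sketch. It encodes the expectation of “Order” that an item in the “Catalog” interface exposes certain fields; the URL and the field names are illustrative assumptions, not the actual test code.

import static org.junit.Assert.assertTrue;

import java.util.Map;

import org.junit.Test;
import org.springframework.web.client.RestTemplate;

// Sketch: the same test runs against the stub (in the "Order" project)
// and against the real microservice (in the "Catalog" project).
public class CatalogContractTestSketch {

  private final RestTemplate restTemplate = new RestTemplate();

  @Test
  public void itemExposesNameAndPrice() {
    @SuppressWarnings("unchecked")
    Map<String, Object> item = restTemplate.getForObject(
        "http://localhost:8080/catalog/1", Map.class);
    assertTrue(item.containsKey("name"));
    assertTrue(item.containsKey("price"));
  }
}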

Even more important than the correct functioning of the stubs is the correct functioning of the microservices themselves. For that reason, the consumer-driven contract tests are also contained in the “Customer” and “Catalog” projects. There they run against the implemented microservices. This ensures that the stubs as well as the real microservices are in line with the specification. In case the interface is supposed to change, these tests can be used to confirm that the change does not break the calling microservice. It is up to the used microservices, “Customer” and “Catalog” in the example, to comply with these tests. In this manner the requirements of the “Order” microservice with regard to the “Customer” and “Catalog” microservices can be formally defined and tested. In the end the consumer-driven contract tests serve as a formal definition of the agreed interface.

In the example application the consumer-driven contract tests are part of the “Customer” and “Catalog” projects in order to verify that the interfaces are correctly implemented. In addition, they are part of the “Order” project for verifying the correct functioning of the stubs. In a real project, copying the tests should be avoided. The consumer-driven contract tests can be located in one project together with the tested microservice; then all teams need access to the microservice projects in order to alter the tests. Alternatively, they are located in the projects of the different teams that use the microservice; in that case the tested microservice has to collect the tests from the other projects and execute them.

In a real project it is not necessarily sensible to safeguard the stubs themselves with consumer-driven contract tests, especially as the purpose of the stubs is to offer a simpler implementation than the real microservices. Their functionality will therefore differ and can conflict with the consumer-driven contract tests.


Try and Experiment

• Insert a field into “Catalog” or “Customer” data. Is the system still working? Why?

• Delete a field in the implementation of the server for “Catalog” or “Customer.” Where is the problem noticed? Why?

• Replace the home-grown stubs with stubs that use a tool from section 10.6.

• Replace the consumer-driven contract tests with tests that use a tool from section 10.7.


13.14 Experiences with JVM-Based Microservices in the Amazon Cloud (Sascha Möllering)

By Sascha Möllering, zanox AG

During the last months zanox has implemented a lightweight microservices architecture in Amazon Web Services (AWS) that runs in several AWS regions. Regions divide the Amazon Cloud into sections like US-East or EU-West, each of which has its own data centers. They work completely independently of each other and do not exchange any data directly. Different AWS regions are used because latency is very important for this type of application and is minimized by latency-based routing. In addition, it was a fundamental aim to design the architecture in an event-driven manner. Furthermore, the individual services were intended not to communicate directly but to be decoupled by message queues or bus systems. An Apache Kafka cluster serving as message bus in the zanox data center is the central point of synchronization for the different regions. Each service is implemented as a stateless application. The state is stored in external systems like the bus systems, Amazon ElastiCache (based on the NoSQL database Redis), the data stream processing technology Amazon Kinesis, and the NoSQL database Amazon DynamoDB. The JVM serves as the basis for the implementation of the individual services. We chose Vert.x and the embedded web server Jetty as frameworks. We developed all applications as self-contained services, so the build process generates a Fat JAR that can easily be started via java -jar.

There is no need to install any additional components or an application server. Vert.x serves as the basic framework for the HTTP part of the architecture. Within the application, work is performed almost completely asynchronously to achieve high performance. For the remaining components we use Jetty as the framework: these components either act as Kafka/Kinesis consumers or update the Redis cache for the HTTP layer. All applications are delivered in Docker containers. This enables a uniform deployment mechanism independent of the technology used. To be able to deliver the services independently in the different regions, a dedicated Docker Registry that stores the Docker images in an S3 bucket was set up in each region. S3 is a service for storing large files on Amazon servers.

If you intend to use cloud services, you have to address the question of whether you want to use the managed services of a cloud provider or develop and run the infrastructure yourself. zanox decided to use the managed services of a cloud provider because building and administrating proprietary infrastructure modules does not provide any business value. The EC2 computers of the Amazon portfolio are pure infrastructure. IAM, on the other hand, offers comprehensive security mechanisms. The deployed services use the AWS Java SDK, which, in combination with IAM roles for EC2,18 makes it possible to build applications that access the managed services of AWS without explicit credentials. During initial bootstrapping an IAM role containing the necessary permissions is assigned to an EC2 instance. Via the Metadata Service19 the AWS SDK obtains the necessary credentials. This enables the application to access the managed services defined in the role. Thus, an application can be built that sends metrics to the monitoring system Amazon CloudWatch and events to the data stream processing solution Amazon Kinesis without explicit credentials having to be rolled out together with the application.

18. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html

19. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html
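With the AWS Java SDK, sending a custom metric can look roughly like the following sketch. The namespace and the metric name are illustrative assumptions; the point is that the default client obtains its credentials from the IAM role of the EC2 instance, so none have to be rolled out with the application.

import com.amazonaws.services.cloudwatch.AmazonCloudWatch;
import com.amazonaws.services.cloudwatch.AmazonCloudWatchClientBuilder;
import com.amazonaws.services.cloudwatch.model.MetricDatum;
import com.amazonaws.services.cloudwatch.model.PutMetricDataRequest;
import com.amazonaws.services.cloudwatch.model.StandardUnit;

public class CloudWatchMetricSketch {

  public static void main(String[] args) {
    // The default client picks up credentials from the EC2 IAM role
    // via the Metadata Service; no explicit credentials are needed.
    AmazonCloudWatch cloudWatch =
        AmazonCloudWatchClientBuilder.defaultClient();

    MetricDatum datum = new MetricDatum()
        .withMetricName("OrdersProcessed") // illustrative metric name
        .withUnit(StandardUnit.Count)
        .withValue(1.0);

    cloudWatch.putMetricData(new PutMetricDataRequest()
        .withNamespace("example/microservice") // illustrative namespace
        .withMetricData(datum));
  }
}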

All applications are equipped with REST interfaces for heartbeats and health checks so that the application itself as well as the infrastructure necessary for its availability can be monitored at all times: each application uses health checks to monitor the infrastructure components it depends on. Application scaling is implemented via Elastic Load Balancing (ELB) and AutoScaling20 in order to scale the application in a fine-grained manner depending on the concrete load. AutoScaling starts additional EC2 instances if needed; ELB distributes the load between the instances. The AWS ELB service is suitable not only for web applications using the HTTP protocol but for all types of applications: a health check can also be implemented on plain TCP without HTTP, which is even simpler than an HTTP health check.

20. https://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/as-add-elb-healthcheck.html

Still, the development team decided to implement the ELB health checks via HTTP for all services so that they all behave exactly the same, independent of the implemented logic, the frameworks used, and the language. It is also quite possible that in the future applications that do not run on the JVM, but use, for instance, Go or Python, will be deployed in AWS.

For the ELB health check zanox uses the heartbeat URL of the application. As a result, traffic is only directed to the application, and any necessary infrastructure scaling operations are only performed, once the EC2 instance with the application has started properly and the heartbeat has been checked successfully.
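Such a heartbeat endpoint can be very small. The following sketch uses Spring MVC purely for illustration; the zanox services are built on Vert.x and Jetty, and the path is an assumption.

import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

// Sketch: a 200 OK response signals to the ELB health check that this
// instance has started properly and may receive traffic.
@RestController
public class HeartbeatController {

  @RequestMapping("/heartbeat")
  public ResponseEntity<String> heartbeat() {
    return ResponseEntity.ok("OK");
  }
}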

For application monitoring Amazon CloudWatch is a good choice, as CloudWatch alarms can be used to define scaling events for the AutoScaling policies; that is, the infrastructure scales automatically based on metrics. For this purpose, basic EC2 metrics like CPU utilization can be used, for instance. Alternatively, it is possible to send your own metrics to CloudWatch. For this the project uses a fork of jmxtrans-agent,21 which uses the CloudWatch API to send JMX metrics to the monitoring system. JMX (Java Management Extensions) is the standard for monitoring and metrics in the Java world. In addition, metrics are sent from within the application (i.e., from within the business logic) using the Coda Hale Metrics library22 and a module for CloudWatch integration by Blacklocus.23

21. https://github.com/SaschaMoellering/jmxtrans-agent

22. https://dropwizard.github.io/metrics/

23. https://github.com/blacklocus/metrics-cloudwatch

A slightly different approach was chosen for logging: in a cloud environment it can never be ruled out that a server instance is terminated abruptly. This often causes the sudden loss of data stored on the server; log files are an example. For this reason, a logstash-forwarder24 runs on the server in parallel to the core application and sends the log entries to our ELK service running in our own data center. This stack consists of Elasticsearch for storage, Logstash for parsing the log data, and Kibana for UI-based analysis; ELK is an acronym for Elasticsearch, Logstash, and Kibana. In addition, a UUID is calculated for each request or event in our HTTP layer so that log entries can still be assigned to events after EC2 instances have ceased to exist.

24. https://github.com/elastic/logstash-forwarder

Conclusion

Microservices architectures fit well with the dynamic approach of the Amazon Cloud if the architecture is well designed and implemented. The clear advantage over an implementation in your own data center is the flexibility of the infrastructure. This makes it possible to implement a nearly endlessly scalable architecture that is, in addition, very cost efficient.

13.15 Conclusion

The technologies used in the example provide a very good foundation for implementing a microservices architecture with Java. Essentially, the example is based on the Netflix stack, which has proven itself for years in one of the largest websites.

The example demonstrates the interplay of different technologies for Service Discovery, Load Balancing, and resilience, as well as an approach for testing microservices and for running them in Docker containers. The example is not meant to be directly usable in a production context but is first of all designed to be very easy to set up and run. This entails a number of compromises. However, the example serves very well as a foundation for further experiments and for trying out ideas.

In addition, the example demonstrates a Docker-based application deployment, which is a good foundation for microservices.

Essential Points

• Spring, Spring Boot, Spring Cloud, and the Netflix stack offer a well-integrated stack for Java-based microservices. These technologies solve the typical challenges posed by the development of microservices.

• Docker-based deployment is easy to implement, and in conjunction with Docker Machine and Docker Compose, can be used for deployment in the Cloud, too.

• The example application shows how to test microservices using consumer-driven contract tests and stubs without special tools. However, for real-life projects tools might be more useful.


Try and Experiment

Add Log Analysis

The analysis of all log files is important for running a microservice system. An example project is provided at https://github.com/ewolff/user-registration-V2. The subdirectory log-analysis contains a setup for a log analysis based on the ELK (Elasticsearch, Logstash, and Kibana) stack. Use this approach to add log analysis to the microservice example.

Add Monitoring

In addition, the example project from the continuous delivery book contains an installation of Graphite for monitoring in the subdirectory graphite. Adapt this installation for the microservice example.

Rewrite a Service

Rewrite one of the services in a different programming language. Use the consumer-driven contract tests (see sections 13.13 and 10.7) to protect the implementation. Make use of a sidecar for the integration into the technology stack (see section 13.12).

