© Kasun Indrasiri and Prabath Siriwardena 2018
Kasun Indrasiri and Prabath Siriwardena, Microservices for the Enterprise, https://doi.org/10.1007/978-1-4842-3858-5_4

4. Developing Services

Kasun Indrasiri1  and Prabath Siriwardena1
(1)
San Jose, CA, USA
 

Netflix has taken three steps to build an anti-fragile organization1 . The first step is to treat every developer as an operator of the corresponding service. The second is to treat each failure as an opportunity to learn, and the third is to foster a blameless culture. These three steps have helped Netflix become a leading organization in the microservices world. Many look to Netflix to learn how things are done and for best practices. Netflix, in fact, optimized everything for speed of delivery. As we already discussed in this book, at the core of any microservices design, time to production, scalability, complexity localization, and resiliency are key elements. Developer experience is one of the most important factors in reaching these goals. The experience of a developer in a microservices environment is quite different from that of engineering a monolithic application. As rightly identified by Netflix, the developer’s task does not end after pushing the code into a source repository, expecting a DevOps engineer to build and push the changes to production, or a support engineer to take on any issues that happen in the production environment. A microservices development environment demands a broader skillset from the developer.

Note

All Netflix developers are given SSH keys to access their production servers, whereas in most traditional application deployments, developers have no access from staging onward.

Over the past few years there have been many tools and frameworks developed to assist and speed up microservices development. Most of the popular tools and frameworks are open source. In this chapter, we discuss different options available for a microservices developer and see how to build a microservice from the ground up.

Developer Tooling and Frameworks

There are multiple tools and frameworks available for microservice developers under different categories. There are a few key elements we believe one should take into consideration before picking the right microservices framework. One of the most basic requirements is good support for RESTful services. In the Java world, most developers look for the level of support for JAX-RS2. Most Java-based frameworks support JAX-RS annotations, and to extend that functionality, they have introduced their own.

The seventh factor of the 12-factor app (which we discussed in Chapter 2, “Designing Microservices”) says that your application has to do the port binding and expose itself as a service, without relying upon a third-party application server. This is another common requirement for any microservices framework. Without relying on any heavyweight application servers, which eat resources and take seconds (or minutes) to boot up, one should be able to spin up a microservice in a self-contained manner in just a few milliseconds.

Most microservice deployments follow the service-per-host model. In other words, only one microservice is deployed in one host, where in most cases the host is a container. Being container friendly is therefore a key aspect everyone looks for in a microservice framework. What container friendly (or container native) means is discussed in detail in Chapter 8, “Deploying and Running Microservices”. The level of security support the framework provides is another key differentiator in picking a microservice framework. There are multiple ways to secure a microservice and the service-to-service communication between microservices. We discuss microservices security in Chapter 11, “Microservices Security Fundamentals”.

First-class support for telemetry and observability is another important aspect of a microservice framework. It’s extremely useful for tracking the health of production servers and identifying issues. We discuss observability in Chapter 13, “Observability”. Two other important aspects of a microservice framework are its support for transactions and asynchronous messaging. Asynchronous messaging was discussed in Chapter 3, “Inter-Service Communication,” and in Chapter 5, “Data Management,” we discuss how transactions are handled in a microservices environment.

The following sections explain the most popular microservice frameworks and tools. Later in the chapter, we go through a set of examples developed with Spring Boot, which is the most popular microservice development framework for Java developers.

Netflix OSS

As we already discussed in this chapter, Netflix is playing a leading role in the microservices domain, and its influence on making microservices mainstream and widely adopted is significant. The beauty of Netflix is its commitment toward open source. The Netflix Open Source Software3 (OSS) initiative has around 40 open source projects under different categories (see Figure 4-1). The following sections explain some of the common tools under Netflix OSS that are related to microservices development.

Nebula

Nebula4 is a collection of Gradle plugins built for Netflix engineers to eliminate boilerplate build logic and provide sane conventions. The goal of Nebula is to simplify common build, release, testing, and packaging tasks needed for projects at Netflix. They picked Gradle over Maven as the build tool, as they believe it’s the best out there for Java applications.

Spinnaker

Spinnaker5 is a multi-cloud continuous delivery platform for releasing software changes with high velocity and confidence. It combines a powerful and flexible pipeline management system with integration with the major cloud providers, which include AWS EC2, Kubernetes, Google Compute Engine, Google Kubernetes Engine, Google App Engine, Microsoft Azure, and OpenStack.

Eureka

Eureka6 is a RESTful service that is primarily used in the AWS cloud for locating services for the purpose of load balancing and failover of middle-tier servers. In a microservices deployment, Eureka can be used as a service registry to discover endpoints.

Archaius

Archaius7 includes a set of configuration management APIs used by Netflix. It allows configurations to change dynamically at runtime, which enables production systems to get configuration changes without having to restart. To provide fast and thread-safe access to the properties, Archaius adds a cache layer that contains desired properties on top of the configuration. It also creates a hierarchy of configurations and determines the final property value in a simple, fast, and thread-safe manner.
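To make the idea concrete, here is a minimal sketch of reading a dynamic property with Archaius. The property name and default value are hypothetical, invented for this illustration.

```java
import com.netflix.config.DynamicPropertyFactory;
import com.netflix.config.DynamicStringProperty;

public class InventoryConfig {

    // A dynamic property: Archaius re-reads it from the underlying
    // configuration sources at runtime, so value changes take effect
    // without a restart. The name and default here are hypothetical.
    private static final DynamicStringProperty DB_URL =
            DynamicPropertyFactory.getInstance().getStringProperty(
                    "inventory.db.url",
                    "jdbc:mysql://localhost:3306/inventory");

    public static String dbUrl() {
        // get() reads from Archaius' thread-safe cache layer, so it is
        // cheap enough to call on every request.
        return DB_URL.get();
    }
}
```

Because `get()` is called on each access rather than once at startup, a configuration change pushed to the backing store is picked up by the running service.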
../images/461146_1_En_4_Chapter/461146_1_En_4_Fig1_HTML.jpg
Figure 4-1

Netflix OSS deployment

Ribbon

Ribbon8 is an Inter-Process Communication (remote procedure calls) library with a built-in software load balancer (which is mostly used for client-side load balancing). Primarily it is used for REST calls with various serialization scheme support.

Hystrix

Hystrix9 is a latency and fault-tolerance library designed to isolate points of access to remote systems, services, and third-party libraries, stop cascading failure, and enable resilience in complex distributed systems where failure is inevitable. Hystrix implements many resiliency patterns discussed in Chapter 2. It uses the bulkhead pattern to isolate dependencies from each other and to limit concurrent access to any one of them, and the circuit breaker pattern to proactively avoid talking to faulty endpoints. Hystrix reports successes, failures, rejections, and timeouts to the circuit breaker, which maintains a rolling set of counters that calculate statistics. It uses these stats to determine when the circuit should “trip,” at which point it short-circuits any subsequent requests until a recovery period elapses, upon which it closes the circuit again after doing certain health checks10.
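The following sketch shows the shape of a Hystrix command wrapping a remote call. The service it protects is hypothetical; the `run()` body here simply fails so the fallback path is visible.

```java
import com.netflix.hystrix.HystrixCommand;
import com.netflix.hystrix.HystrixCommandGroupKey;

// Wraps a call to a (hypothetical) Inventory service. Commands in the
// same group share a thread pool, which is the bulkhead isolating this
// dependency from others.
public class InventoryStatusCommand extends HystrixCommand<String> {

    private final String itemCode;

    public InventoryStatusCommand(String itemCode) {
        super(HystrixCommandGroupKey.Factory.asKey("InventoryGroup"));
        this.itemCode = itemCode;
    }

    @Override
    protected String run() throws Exception {
        // The real remote call would go here. Failures, timeouts, and
        // rejections recorded from this method feed the circuit
        // breaker's rolling counters.
        throw new RuntimeException("inventory service unreachable");
    }

    @Override
    protected String getFallback() {
        // Invoked when run() fails, times out, or the circuit is open.
        return "UNKNOWN";
    }
}
```

Calling `new InventoryStatusCommand("101").execute()` returns the fallback value `"UNKNOWN"` here, since `run()` always throws; with enough consecutive failures, Hystrix trips the circuit and routes subsequent calls straight to the fallback.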

Zuul

Zuul11 is a gateway service that provides dynamic routing, monitoring, resiliency, security, and more. It acts as the front door to Netflix’s server infrastructure, handling traffic from all Netflix users around the world. It also routes requests, supports developer testing and debugging, provides deep insight into Netflix’s overall service health, protects it from attacks, and channels traffic to other cloud regions when an AWS region is in trouble.

Spring Boot

Spring Boot12 is the most popular microservices development framework for Java developers. To be precise, Spring Boot offers an opinionated13 runtime for Spring, which takes out a lot of the complexity. Even though Spring Boot is opinionated, it also allows developers to override many of its default picks. Because many Java developers are familiar with Spring, and ease of development is a key success factor in the microservices world, many have adopted Spring Boot. Even for Java developers who are not using Spring, it is a household name. If you have worked with Spring, you surely know how painful it was to deal with large, chunky XML configuration files. Unlike Spring, Spring Boot thoroughly believes in convention over configuration—no more XML hell!

Note

Convention over configuration (also known as coding by convention) is a software design paradigm used by software frameworks that attempts to decrease the number of decisions that a developer using the framework is required to make without necessarily losing flexibility.14

Spring Cloud15 came after Spring Boot in March 2015. It provides tools for developers to quickly build some of the common patterns in distributed systems. Spring Cloud, along with Spring Boot, provides a great development environment for microservices developers. Another feature of Spring Cloud is that it provides Netflix OSS integrations for Spring Boot apps through auto-configuration and binding to the Spring environment and other Spring programming model idioms. With a few simple annotations, you can enable and configure the common patterns inside your application and build large distributed systems with Netflix OSS components. The patterns provided include service discovery with Eureka, circuit breaker with Hystrix, intelligent routing with Zuul, and client-side load balancing with Ribbon. We’ll use Spring Boot and Spring Cloud for the code examples we discuss later in this chapter.
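As a minimal sketch of those “few simple annotations,” the class below enables service discovery and circuit breakers in a Spring Boot application. The class name is hypothetical, and a Eureka server and the matching Spring Cloud Netflix starters on the classpath are assumed.

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.circuitbreaker.EnableCircuitBreaker;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;

// @EnableDiscoveryClient registers the application with a discovery
// server such as Eureka; @EnableCircuitBreaker activates Hystrix for
// methods annotated with @HystrixCommand elsewhere in the app.
@SpringBootApplication
@EnableDiscoveryClient
@EnableCircuitBreaker
public class OrderApp {

    public static void main(String[] args) {
        SpringApplication.run(OrderApp.class, args);
    }
}
```

With these two annotations in place, auto-configuration binds the Netflix OSS components to the Spring environment; no further wiring code is needed.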

Note

Convention over configuration was first introduced by David Heinemeier Hansson to describe the philosophy of the Ruby on Rails web framework. Apache Maven, the popular build automation tool, follows the same philosophy.

Istio

Istio is an open platform that provides a uniform way to connect, manage, and secure microservices. It supports managing traffic flows between microservices, enforcing access policies, and aggregating telemetry data, all without requiring changes to the microservice code. With strong backing from Google, IBM, Lyft, and many others, Istio is one of the leading service mesh products in the microservices domain. We’ll discuss service mesh in detail in Chapter 9, “Service Mesh”. For the time being, think of a service mesh as a component in a microservice architecture that facilitates service-to-service communication along with routing rules, adhering to the resiliency patterns we discussed in Chapter 2, such as retries, timeouts, circuit breakers, and bulkheads. It also does performance monitoring and tracing. In most cases, the service mesh acts as a sidecar (Chapter 2), which takes over the responsibility of handling crosscutting features from the core microservice implementation. More details about Istio are covered in Chapter 9.

Dropwizard

Dropwizard is a widely used framework for developing microservices in Java. It’s well known as a little bit of opinionated glue code, which bangs together a set of libraries. Dropwizard includes Jetty to serve HTTP requests. Jetty16 provides a web server and javax.servlet container, plus support for HTTP/2, WebSockets, OSGi, JMX, JNDI, JAAS,17 and many other integrations. Support for REST and JAX-RS (JSR 311 and JSR 339) in Dropwizard is brought in with Jersey. Jersey18 is the JAX-RS reference implementation. Dropwizard also integrates Jackson19 for JSON parsing and building, and Logback20 for logging. Logback is a successor to the popular log4j project.
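A Dropwizard resource is a plain Jersey (JAX-RS) class; the sketch below mirrors the order-status example used later in this chapter. The class and path are hypothetical, and in a real Dropwizard application the resource would be registered via `environment.jersey().register(...)` in the `Application` subclass.

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// Standard JAX-RS annotations: Jersey maps GET /order/{id} to the
// checkOrderStatus method and binds the path segment to orderId.
@Path("/order")
@Produces(MediaType.APPLICATION_JSON)
public class OrderResource {

    @GET
    @Path("/{id}")
    public String checkOrderStatus(@PathParam("id") String orderId) {
        // Jackson would normally serialize a domain object; a literal
        // JSON string keeps this sketch self-contained.
        return "{\"status\" : \"shipped\"}";
    }
}
```

Because this is plain JAX-RS, the same resource class would work on any JAX-RS runtime, not only Dropwizard.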

Support for metrics is a key feature in any microservice framework, and Dropwizard embeds the Metrics21 Java library to gather telemetric data about a running application. Dropwizard also embeds Liquibase22, which is an open source, database-independent library for tracking, managing, and applying database schema changes. It allows easier tracking of database changes, especially in an Agile software development environment. Database integration with Dropwizard is done with Jdbi and Hibernate. Jdbi23 is built on top of JDBC to improve JDBC’s rough interface, providing a more natural Java database interface that is easy to bind to your domain data types. Unlike an ORM (Object Relational Mapping) framework, it does not aim to provide complete object relational mapping—instead of that hidden complexity, it provides building blocks to construct the mapping between relations and objects as appropriate for your application. Hibernate24 is the most popular ORM framework used by Java developers.

Dropwizard has a longer history than Spring Boot; in fact, Spring Boot was motivated by the success of Dropwizard. Dropwizard was first released by Coda Hale25 in late 2011 while he was at Yammer. However, with a better developer experience, strong community support, and the backing of Pivotal, Spring Boot is now a better option than Dropwizard.

Vert.x

Vert.x26 is a toolkit for building reactive27 applications on the JVM (Java Virtual Machine), which supports multiple languages, including Java, JavaScript, Groovy, Ruby, Ceylon, Scala, and Kotlin. It was started as an open source project under the Eclipse Foundation in 2012 by Tim Fox. Even before microservices became mainstream, Vert.x had a powerful stack for building them. Unlike Dropwizard, Vert.x is an unopinionated toolkit. In other words, it is not a restrictive framework or container that preaches to developers a specific way to write applications. Instead, Vert.x provides a lot of useful bricks and lets developers create their own apps the way they want to.

Like Dropwizard, Vert.x supports integrating Metrics to gather telemetric data about a running application. Further, it also supports integrating with Hawkular28, which is a set of open source projects designed to be a generic solution for common monitoring problems. For service discovery, Vert.x integrates with HashiCorp Consul29. Consul makes it simple for services to register themselves and to discover other services via a DNS or HTTP interface. Vert.x supports integrating with multiple message brokers—for example, Apache Kafka and RabbitMQ—and multiple messaging protocols—for example, AMQP, MQTT, JMS, and STOMP.
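To show the toolkit style, here is a minimal Vert.x verticle serving HTTP. The class name and response payload are hypothetical; the calls (`AbstractVerticle`, `createHttpServer`, `requestHandler`, `listen`) are Vert.x core API.

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.core.Vertx;

// A verticle is Vert.x's unit of deployment. The HTTP server below is
// non-blocking and event-driven, in line with Vert.x's reactive model.
public class OrderVerticle extends AbstractVerticle {

    @Override
    public void start() {
        vertx.createHttpServer()
             .requestHandler(req ->
                 req.response()
                    .putHeader("content-type", "application/json")
                    .end("{\"status\" : \"shipped\"}"))
             .listen(8080);
    }

    public static void main(String[] args) {
        // Deploy the verticle on a fresh Vert.x instance.
        Vertx.vertx().deployVerticle(new OrderVerticle());
    }
}
```

Note there is no framework scaffolding beyond the verticle itself: Vert.x hands you the building blocks and leaves the application structure to you.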

Overall, Vert.x has a powerful ecosystem for building microservices, yet Spring Boot, with strong support from the Spring community, has the edge.

Lagom

Lagom30 is an open source opinionated framework for building microservice systems in Java or Scala based on reactive principles31. Lagom is a Swedish word meaning just right or sufficient. It is built on top of the Akka32 and Play33 frameworks. Akka is a toolkit for building highly concurrent, distributed, and resilient message-driven applications for Java and Scala. Play is a high-productivity Java and Scala web application framework that integrates the components and APIs one needs for modern web application development. In Lagom, microservices are based on the following:
  • Akka Actors—providing isolation through a shared nothing architecture based on the Actor Model

  • Akka Cluster—providing resilience, sharding, replication, scalability, and load-balancing of the groups of individual isolated service instances making up a microservice

  • ConductR—providing isolation down to the metal and runtime management of the microservice instances34.

Lagom is based on three design principles: message-driven and asynchronous communication, distributed persistence, and developer productivity. It makes asynchronous communication the default, and its default persistence model uses event sourcing and CQRS (discussed in Chapter 10, “APIs, Events, and Streams”)—using Akka Persistence and Cassandra, which is very scalable, easy to replicate, and fully resilient.
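In Lagom's Java DSL, a microservice is described by a service interface with a descriptor mapping paths to methods. The sketch below is a hypothetical order service, assuming the `lagom-javadsl-api` dependency; a separate implementation class would provide the `ServiceCall` bodies.

```java
import akka.NotUsed;
import com.lightbend.lagom.javadsl.api.Descriptor;
import com.lightbend.lagom.javadsl.api.Service;
import com.lightbend.lagom.javadsl.api.ServiceCall;

import static com.lightbend.lagom.javadsl.api.Service.named;
import static com.lightbend.lagom.javadsl.api.Service.pathCall;

// The service interface is the contract: Lagom generates the HTTP
// routing and client bindings from this descriptor.
public interface OrderService extends Service {

    // ServiceCall<Request, Response>: NotUsed marks a call with no
    // request body; the response here is a plain status string.
    ServiceCall<NotUsed, String> status(String id);

    @Override
    default Descriptor descriptor() {
        return named("order").withCalls(
                pathCall("/api/order/:id/status", this::status));
    }
}
```

All calls are asynchronous by design; `ServiceCall` returns a `CompletionStage`, which is how Lagom enforces its message-driven principle at the API level.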

Lagom is a promising, but relatively new, framework for microservices.

Getting Started with Spring Boot

In this section, we see how to develop microservices using Spring Boot from scratch, and how to implement some of the design concepts we have learned so far as we go. To run the examples, you will need Java 835 or later, Maven 3.236 or later, and a Git client. Once you have successfully installed them, run the following two commands in the command line to make sure everything is working. If you need help setting up Java or Maven, there are plenty of online resources available.
>java -version
java version "1.8.0_121"
Java(TM) SE Runtime Environment (build 1.8.0_121-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)
>mvn -version
Apache Maven 3.5.0 (ff8f5e7444045639af65f6095c62210b5713f426; 2017-04-03T12:39:06-07:00)
Maven home: /usr/local/Cellar/maven/3.5.0/libexec
Java version: 1.8.0_121, vendor: Oracle Corporation
Java home: /Library/Java/JavaVirtualMachines/jdk1.8.0_121.jdk/Contents/Home/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "mac os x", version: "10.12.6", arch: "x86_64", family: "mac"
All the samples used in this book are available at the https://github.com/microservices-for-enterprise/samples.git Git repository. Use the following git command to clone it. All the samples related to this chapter are inside the ch04 directory.
> git clone https://github.com/microservices-for-enterprise/samples.git
> cd samples/ch04

For anyone who loves Maven, the best way to get started with a Spring Boot project would be with a Maven archetype. Unfortunately, it is no longer supported. One option we have is to create a template project via https://start.spring.io/ , which is known as the Spring Initializr. There you can pick which type of project you want to create, choose project dependencies, give it a name, and download a Maven project as a ZIP file. The other option is to use the Spring Tool Suite (STS)37, an IDE (integrated development environment) built on top of the Eclipse platform with many useful plugins for creating Spring projects. However, in this book, we provide you with fully coded samples in the Git38 repository.

Note

If you find any issues in building or running the samples given in this book, please refer to the README file under the corresponding chapter in the Git repository: https://github.com/microservices-for-enterprise/samples.git . We will update the samples and the corresponding README files in the Git repository to reflect any changes or updates related to the tools, libraries, and frameworks used in this book.

Hello World!

This is the simplest microservice ever. You can find the code inside the ch04/sample01 directory. To build the project with Maven, use the following command:
> cd sample01
> mvn clean install

Before we delve deep into the code, let’s look at some of the notable Maven dependencies and plugins added to ch04/sample01/pom.xml.

Spring Boot comes with different starter dependencies to integrate with different Spring modules. The spring-boot-starter-web dependency brings in Tomcat and Spring MVC and does all the wiring between the components, keeping the developer’s work to a minimum. The spring-boot-starter-actuator dependency brings in production-ready features to help you monitor and manage your application.
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
In the pom.xml file, we also have the spring-boot-maven-plugin plugin, which lets you start the Spring Boot service from Maven.
<plugin>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
Now let’s look at the checkOrderStatus method in the class file src/main/java/com/apress/ch04/sample01/service/OrderProcessing.java. This method accepts an order ID and returns the status of the order. There are three notable annotations used in the following code. The @RestController is a class-level annotation that marks the corresponding class as a REST endpoint, which accepts and produces JSON payloads. The @RequestMapping annotation can be defined both at the class level and the method level. The value attribute of the class-level annotation defines the path under which the corresponding endpoint is registered; the same attribute at the method level is appended to the class-level path. Anything defined in curly braces is a placeholder for a variable value in the path. For example, a GET request on /order/101 or /order/102 (where 101 and 102 are order IDs) hits the checkOrderStatus method. In fact, the value of the value attribute is a URI template39. The @PathVariable annotation extracts the provided variable from the URI template defined under the value attribute of the @RequestMapping annotation and binds it to the variable defined in the method signature.
@RestController
@RequestMapping(value = "/order")
public class OrderProcessing {
    @RequestMapping(value = "/{id}", method = RequestMethod.GET)
    public ResponseEntity<?> checkOrderStatus
            (@PathVariable("id") String orderId) {
        return ResponseEntity.ok("{'status' : 'shipped'}");
    }
}
There is another important class file at src/main/java/com/apress/ch04/sample01/OrderProcessingApp.java worth looking at. This is the class that spins up our microservice in its own application server, in this case the embedded Tomcat. By default, it starts on port 8080; you can change the port by adding, for example, server.port = 9000 to the src/main/resources/application.properties file, which sets the server port to 9000. The following code snippet from the OrderProcessingApp class spins up our microservice. The @SpringBootApplication annotation, which is defined at the class level, is a shortcut for three other annotations defined in Spring: @Configuration, @EnableAutoConfiguration, and @ComponentScan.
@SpringBootApplication
public class OrderProcessingApp {
    public static void main(String[] args) {
        SpringApplication.run(OrderProcessingApp.class, args);
    }
}
Now, let’s see how to run our microservice and talk to it with a cURL client. The following command executed from the ch04/sample01 directory shows how to start our Spring Boot application with Maven.
> mvn spring-boot:run
To test the microservice with a cURL client, use the following command from a different command console. It will print the output shown after the command.
> curl http://localhost:8080/order/11
{'status' : 'shipped'}

Spring Boot Actuator

Gathering telemetric data about a running microservice is extremely important. This is discussed in detail in Chapter 13. In this section, we explore some of the monitoring capabilities provided in Spring Boot out-of-the-box via the actuator endpoint40. There are multiple services running behind the actuator endpoint, and most of them are enabled by default. As discussed in the previous section, we only need to add a dependency to spring-boot-starter-actuator to enable it. Let’s keep the Spring Boot application from the previous example up and running and execute a set of cURL commands.

The following cURL command, which does a GET to the actuator/health endpoint, returns the server status.
> curl http://localhost:8080/actuator/health
{"status":"UP"}
Spring Boot exposes telemetric data over HTTP and JMX. For security reasons, not all endpoints are exposed via HTTP by default—only the health and info services. At our own risk, let’s see how to enable the httptrace endpoint over HTTP. You need to add the following line to the src/main/resources/application.properties file and restart the Spring Boot application.
management.endpoints.web.exposure.include = health,info,httptrace
Now let’s hit our microservice a few times.
> curl http://localhost:8080/order/11
> curl http://localhost:8080/order/11
> curl http://localhost:8080/order/11
The following cURL command will invoke the httptrace endpoint , which will return the HTTP trace information.
> curl http://localhost:8080/actuator/httptrace
{
   "traces":[
      {
         "timestamp":"2018-03-29T16:42:46.235Z",
         "principal":null,
         "session":null,
         "request":{
            "method":"GET",
            "uri":"http://localhost:8080/order/11",
            "headers":{
               "host":[
                  "localhost:8080"
               ],
               "user-agent":[
                  "curl/7.54.0"
               ],
               "accept":[
                  "*/*"
               ]
            },
            "remoteAddress":null
         },
         "response":{
            "status":200,
            "headers":{
               "Content-Type":[
                  "text/plain;charset=UTF-8"
               ],
               "Content-Length":[
                  "7"
               ],
               "Date":[
                  "Thu, 29 Mar 2018 16:42:46 GMT"
               ]
            }
         },
         "timeTaken":14
      }
   ]
}

Configuration Server

In Chapter 2, under the 12-factor app, we discussed that the configuration factor emphasizes the need to decouple environment-specific settings from the code. For example, the connection URL of the LDAP or database server is an environment-specific parameter, as are certificates. Spring Boot provides a way to share configuration between microservices via a configuration server. Multiple microservice instances can connect to this server and load the configuration over HTTP. The configuration server can either maintain the configuration in its own local filesystem or load it from Git; loading from Git is the ideal approach. The configuration server itself is another Spring Boot application. You can find the code for the configuration server inside ch04/sample02.

Let’s look at some of the additional notable Maven dependencies added to ch04/sample02/pom.xml. The spring-cloud-config-server dependency brings all the components to turn a Spring Boot application into a configuration server.
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-config-server</artifactId>
</dependency>
To turn a Spring Boot application into a configuration server, you only need to add one class-level annotation to src/main/java/com/apress/ch04/sample02/ConfigServerApp.java, as shown in the following code snippet. The @EnableConfigServer annotation does all the internal wiring between the Spring modules to expose the Spring Boot application as a configuration server. This is the only code we need.
@SpringBootApplication
@EnableConfigServer
public class ConfigServerApp {
    public static void main(String[] args) {
        SpringApplication.run(ConfigServerApp.class, args);
    }
}
In this example, the configuration server loads the configuration from the local filesystem. You will find that the following two properties are added to the src/main/resources/application.properties file to change the server port to 9000 and to use the native profile for the configuration server. When the native profile is used, the configurations are loaded from the local filesystem, not from Git.
server.port=9000
spring.profiles.active=native
Now we can create property files inside the src/main/resources directory for each microservice. You will find the following content in the src/main/resources/sample01.properties file, which can be used to define all the configuration parameters required for a given microservice.
database.connection = jdbc:mysql://localhost:3306/sample01
Now, let’s start our configuration server with the following commands. First we build the project, and then launch the server via the Maven plugin.
> cd sample02
> mvn clean install
> mvn spring-boot:run
Use the following cURL command to load all the configurations related to the sample01 microservice.
> curl http://localhost:9000/sample01/default
{
   "name":"sample01",
   "profiles":[
      "default"
   ],
   "label":null,
   "version":null,
   "state":null,
   "propertySources":[
      {
         "name":"classpath:/sample01.properties",
         "source":{
           "database.connection":"jdbc:mysql://localhost:3306/sample01"
         }
      }
   ]
}

Consuming Configurations

In this section, we see how to consume a property loaded from an external configuration within another microservice. You can find the code for this microservice inside ch04/sample03. It is in fact a slightly modified version of sample01. Let’s start by looking at ch04/sample03/pom.xml for the additional notable Maven dependencies. The spring-cloud-starter-config dependency brings all the components required to bind the property values read from the remote configuration server to local variables.
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-config</artifactId>
</dependency>
The following code snippet shows the modified sample03/src/main/java/com/apress/ch04/sample03/service/OrderProcessing.java class, where the dbUrl variable is bound to the database.connection property (via @Value annotation) and is read from the configuration server. The properties are loaded from the configuration server at the startup, so we need to make sure the configuration server is up and running before we start this microservice.
@RestController
@RequestMapping(value = "/order")
public class OrderProcessing {
    @Value("${database.connection}")
    String dbUrl;
    @RequestMapping(value = "/{id}", method = RequestMethod.GET)
    public ResponseEntity<?> checkOrderStatus
            (@PathVariable("id") String orderId) {
        // print the value of the dbUrl
        // loaded from configuration server
        System.out.println("DB Url: " + dbUrl);
        return ResponseEntity.ok("{'status' : 'shipped'}");
    }
}
To set a value for the configuration server URL, we need to add the following property to src/main/resources/bootstrap.properties.
spring.cloud.config.uri=http://localhost:9000
We also need to add the spring.application.name property to src/main/resources/application.properties with a value corresponding to a property filename in the configuration server.
spring.application.name=sample03
Now, let’s start our configuration client with the following commands. First we build the project, and then we launch the server via the Maven plugin. Also, we need to make sure that we already have the configuration service running, which we discussed in the previous section.
> cd sample03
> mvn clean install
> mvn spring-boot:run
Use the following cURL command to invoke the sample03 microservice.
> curl http://localhost:8080/order/11
If it all works fine, you will find the following output in the command console, which runs the Spring Boot service (sample03).
DB Url: jdbc:mysql://localhost:3306/sample03

Service-to-Service Communication

In this section, we see how one microservice directly talks to another microservice over HTTP. We extend our domain-driven design example from Chapter 2. As per Figure 4-2, the Order Processing microservice (sample01) talks directly to the Inventory microservice (sample04) over HTTP to update the inventory.
../images/461146_1_En_4_Chapter/461146_1_En_4_Fig2_HTML.jpg
Figure 4-2

Communication between microservices

First, let’s get the Inventory microservice up and running. You can find the code for this microservice inside ch04/sample04. It’s another Spring Boot application, and there isn’t anything new from what we have discussed so far. Let’s look at the updateItems method in the src/main/java/com/apress/ch04/sample04/service/Inventory.java class. This method simply accepts an array of items , iterates through them, and prints the item code. Later in this section, the Order Processing microservice (sample01) will invoke this method to update the inventory.
@RestController
@RequestMapping(value = "/inventory")
public class Inventory {
    @RequestMapping(method = RequestMethod.PUT)
    public ResponseEntity<?> updateItems(@RequestBody Item[] items) {
        if (items == null || items.length == 0) {
            return ResponseEntity.badRequest().build();
        }
        for (Item item : items) {
            if (item != null) {
                System.out.println(item.getCode());
            }
        }
        return ResponseEntity.noContent().build();
    }
}
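The Item type that updateItems binds the JSON payload to is a plain Java bean. The actual class ships with the sample04 sources; a minimal sketch consistent with the JSON used later in this section (fields code and qty) would look like the following. Since the field names match the JSON keys, Jackson maps the incoming array to Item[] without any annotations.

```java
// Minimal sketch of the Item model bean (the real class is in the sample04 sources).
// The field names match the JSON keys ("code", "qty"), so no Jackson annotations are needed.
public class Item {
    private String code;
    private int qty;

    public String getCode() { return code; }
    public void setCode(String code) { this.code = code; }

    public int getQty() { return qty; }
    public void setQty(int qty) { this.qty = qty; }
}
```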
Let’s spin up the sample04 microservice with the following Maven command. It will start on port 9000 (make sure to stop any of the microservices we started before, and ensure that no other service is running on port 9000).
> cd sample04
> mvn clean install
> mvn spring-boot:run
Now let’s revisit our sample01 code, which is at ch04/sample01. There we have the createOrder method, which accepts an order, and then talks to the Inventory microservice (sample04). You can find the corresponding code inside the src/main/java/com/apress/ch04/sample01/service/OrderProcessing.java class.
@RequestMapping(method = RequestMethod.POST)
public ResponseEntity<?> createOrder(@RequestBody Order order) {
    if (order != null) {
        RestTemplate restTemplate = new RestTemplate();
        URI uri = URI.create("http://localhost:9000/inventory");
        restTemplate.put(uri, order.getItems());
        order.setOrderId(UUID.randomUUID().toString());
        URI location = ServletUriComponentsBuilder
            .fromCurrentRequest().path("/{id}")
            .buildAndExpand(order.getOrderId())
            .toUri();
        return ResponseEntity.created(location).build();
    }
    return ResponseEntity.status(HttpStatus.BAD_REQUEST).build();
}
Let’s spin up the sample01 microservice with the following Maven command. It will start on port 8080.
> cd sample01
> mvn clean install
> mvn spring-boot:run
To test the completed end-to-end flow, let's use the following command.
> curl -v -H "Content-Type: application/json" -d '{"customer_id":"101021","payment_method":{"card_type":"VISA","expiration":"01/22","name":"John Doe","billing_address":"201, 1st Street, San Jose, CA"},"items":[{"code":"101","qty":1},{"code":"103","qty":5}],"shipping_address":"201, 1st Street, San Jose, CA"}' http://localhost:8080/order
HTTP/1.1 201
Location: http://localhost:8080/order/b3a28d20-c086-4469-aab8-befcf2ba3345

You will also notice that the item codes are printed on the console where the sample04 microservice is running.

Getting Started with gRPC

In Chapter 3, we discussed the fundamentals of gRPC. In this section, we see how one microservice communicates with another via gRPC. In the previous section, we saw how the Order Processing microservice (sample01) talks directly to the Inventory microservice (sample04) over HTTP to update the inventory. Here we have modified the sample01 and sample04 microservices to build two new microservices, sample05 and sample06. The Inventory microservice (sample05) acts as the gRPC server, and the Order Processing microservice (sample06) acts as the gRPC client. The source code for these two microservices is available under ch04/sample05 and ch04/sample06. Figure 4-3 shows the setup for this exercise.
../images/461146_1_En_4_Chapter/461146_1_En_4_Fig3_HTML.jpg
Figure 4-3

Communication between microservices via gRPC

Building the gRPC Service

First we need to create an IDL (Interface Definition Language) file. You can find it under sample05/src/main/proto/InventoryService.proto. Later we will use a Maven plugin to build the Java classes from this IDL file, and by default the plugin looks in sample05/src/main/proto/. Unless you want to change the plugin configuration, make sure the IDL file is in this location. The following code shows the contents of the IDL file.
syntax = "proto3";
option java_multiple_files = true;
package com.apress.ch04.sample05.service;
message Item {
    string code = 1;
    int32 qty = 2;
}
message ItemList {
    repeated Item item = 1;
}
message UpdateItemsResp {
        string code = 1;
}
service InventoryService {
    rpc updateItems(ItemList) returns (UpdateItemsResp);
}
Now let’s see what is new in our Maven pom.xml file . There are multiple dependencies added, but the only notable dependency is the grpc-spring-boot-starter. This takes care of the @GRpcService annotation in the sample05/src/main/java/com/apress/ch04/sample05/service/Inventory.java class, which spins up the Spring Boot application and exposes our microservice over gRPC.
<dependency>
    <groupId>org.lognet</groupId>
    <artifactId>grpc-spring-boot-starter</artifactId>
    <version>0.0.6</version>
</dependency>
We also need to add one new extension and a new Maven plugin to the pom.xml file. The os-maven-plugin extension will determine the operating system of the current setup and pass that information to the protobuf-maven-plugin plugin. The protobuf-maven-plugin plugin is used to generate the Java classes from the IDL.
………………………………
<extension>
    <groupId>kr.motd.maven</groupId>
    <artifactId>os-maven-plugin</artifactId>
    <version>1.4.1.Final</version>
</extension>
………………………………
<plugin>
    <groupId>org.xolstice.maven.plugins</groupId>
    <artifactId>protobuf-maven-plugin</artifactId>
    <version>0.5.0</version>
      ………………………………
</plugin>
The following Maven command can be used to create the Java classes from the IDL. By default they are created under the target/generated-sources/protobuf/grpc-java and target/generated-sources/protobuf/java directories.
> cd sample05
> mvn package
Now let’s look at our service code. The Inventory (sample05/src/main/java/com/apress/ch04/sample05/services/Inventory.java) class extends InventoryServiceImplBase, which is a class generated by the Maven plugin. Once the Order Processing microservice does a POST to the Inventory microservice, it will simply print item codes and return.
@GRpcService
public class Inventory extends InventoryServiceImplBase{
    @Override
    public void updateItems(ItemList request,
                            StreamObserver<UpdateItemsResp>
                            responseObserver)
    {
        List<Item> items = request.getItemList();
        for (Item item : items) {
            System.out.println(item.getCode());
        }
        responseObserver.onNext(UpdateItemsResp.newBuilder()
                .setCode("success").build());
        responseObserver.onCompleted();
    }
}
To spin up the Inventory microservice and expose it over gRPC, use the following Maven command.
> cd sample05
> mvn spring-boot:run
Once the server starts, look for the following line in the console. By default the gRPC server starts on port 6565; you can change this by adding the grpc.port property to the application.properties file and setting its value to the port number you need. In our case, we have set it to 7000 (grpc.port=7000).
gRPC Server started, listening on port 7000
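As described above, the port is controlled via the grpc.port property. The relevant entry in sample05's src/main/resources/application.properties looks like this.

```properties
grpc.port=7000
```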

Building the gRPC Client

Now let’s see how to build a gRPC client application. In fact, in our case, the gRPC client we are building is another microservice. Just like in the gRPC service, first we need to create an IDL (Interface Definition Language). This is the same IDL we used for the gRPC service. You can find it under sample06/src/main/proto/InventoryService.proto. Later we will use a Maven plugin to build the Java classes from this IDL file, and it will by default look for the location sample06/src/main/proto/.

The only two additions to the client-side pom.xml are the os-maven-plugin extension and the protobuf-maven-plugin plugin. The protobuf-maven-plugin plugin is used to generate the Java classes from the IDL. Also, note that we do not need to add a dependency to grpc-spring-boot-starter. The following Maven command can be used to create the Java classes from the IDL. By default, they are created under the target/generated-sources/protobuf/grpc-java and target/generated-sources/protobuf/java directories.
> cd sample06
> mvn package
Now let’s look at our client code. The Order Processing (sample06/src/main/java/com/apress/ch04/sample06/service/OrderProcessing.java) microservice calls the Inventory microservice, via the InventoryClient (com/apress/ch04/sample06/InventoryClient.java) class. The following shows the source code of the InventoryClient. It uses the InventoryServiceBlockingStub, which is a class generated from the IDL, to talk to the gRPC service.
public class InventoryClient {
    ManagedChannel managedChannel;
    InventoryServiceBlockingStub stub;
    public void updateItems
            (com.apress.ch04.sample06.model.Item[] items) {
        // Build the gRPC ItemList message from the model items.
        ItemList.Builder itemListBuilder = ItemList.newBuilder();
        for (com.apress.ch04.sample06.model.Item modelItem : items) {
            itemListBuilder.addItem(Item.newBuilder()
                    .setCode(modelItem.getCode())
                    .setQty(modelItem.getQty())
                    .build());
        }
        ItemList itemList = itemListBuilder.build();
        // Open a plaintext channel to the gRPC server on port 7000.
        managedChannel = ManagedChannelBuilder
                        .forAddress("localhost", 7000)
                        .usePlaintext(true).build();
        stub = InventoryServiceGrpc
                        .newBlockingStub(managedChannel);
        stub.updateItems(itemList);
        // Release the channel once the call completes.
        managedChannel.shutdown();
    }
}
To spin up the Order Processing microservice, which is also our gRPC client, use the following Maven command.
> cd sample06
> mvn spring-boot:run
To test the completed end-to-end flow, let’s use the following command to create an order in the Order Processing microservice.
> curl -v -H "Content-Type: application/json" -d '{"customer_id":"101021","payment_method":{"card_type":"VISA","expiration":"01/22","name":"John Doe","billing_address":"201, 1st Street, San Jose, CA"},"items":[{"code":"101","qty":1},{"code":"103","qty":5}],"shipping_address":"201, 1st Street, San Jose, CA"}' http://localhost:8080/order
HTTP/1.1 201
Location: http://localhost:8080/order/17ff6fda-13b3-419f-9134-8abfec140e47

You will also notice that the item codes are printed on the console where the sample05 microservice is running.

Event-Driven Microservices with Kafka

In Chapter 3, we discussed the fundamentals of event-driven microservices and Kafka. In this section, we see how one microservice communicates with another via asynchronous messaging. Going by the design we presented in Chapter 2, the Order Processing microservice (sample07) publishes an event to Kafka once it completes processing an order, and the Billing microservice (sample08) listens for that event, consumes the message, and, once billing is completed, publishes another event to Kafka. The Order Processing microservice (sample07) acts as the event publisher (the event source) and the Billing microservice (sample08) acts as the event consumer (the event sink). The source code of these two microservices is available under ch04/sample07 and ch04/sample08. Figure 4-4 shows the interactions between the microservices and the message broker (Kafka).
../images/461146_1_En_4_Chapter/461146_1_En_4_Fig4_HTML.jpg
Figure 4-4

Event-driven microservices with Kafka

Setting Up a Kafka Message Broker

In this section we discuss how to set up a Kafka message broker. For recommendations on a production deployment, always refer to the official Kafka documentation.

First we need to download the latest Kafka distribution. For the samples in this book, we are using Kafka 1.1.0 (the kafka_2.11-1.1.0 distribution, built for Scala 2.11). Once the Kafka distribution is downloaded and unzipped, you can run the following command to start ZooKeeper. Kafka uses ZooKeeper, so you need to first start a ZooKeeper server if you don't already have one. You can use the convenience script packaged with Kafka to get a quick-and-dirty single-node ZooKeeper instance.
> cd kafka_2.11-1.1.0
> bin/zookeeper-server-start.sh config/zookeeper.properties
Now we can use the following command in a different console to start the Kafka server.
> cd kafka_2.11-1.1.0
> bin/kafka-server-start.sh config/server.properties
For our use case, we need two topics: ORDER_PROCESSING_COMPLETED and PAYMENT_PROCESSING_COMPLETED. Let’s create the topics using the following commands.
> cd kafka_2.11-1.1.0
> bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic ORDER_PROCESSING_COMPLETED
Created topic "ORDER_PROCESSING_COMPLETED"
> bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic PAYMENT_PROCESSING_COMPLETED
Created topic "PAYMENT_PROCESSING_COMPLETED"
You can use the following command to view all the topics available in the message broker.
> cd kafka_2.11-1.1.0
> bin/kafka-topics.sh --list --zookeeper localhost:2181
ORDER_PROCESSING_COMPLETED
PAYMENT_PROCESSING_COMPLETED

Building Publisher (Event Source)

In this section, we discuss how to build the Order Processing microservice to publish events to the ORDER_PROCESSING_COMPLETED topic. Let's look at the notable dependencies added to the pom.xml file inside sample07. The following two dependencies take care of the @EnableBinding annotation in the sample07/src/main/java/com/apress/ch04/sample07/OrderProcessingApp.java class and pull in everything related to Kafka.
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-stream</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-stream-kafka</artifactId>
</dependency>
Now let’s look at the code, which publishes the event to the message broker. When the Order Processing (src/main/java/com/apress/ch04/sample07/service/OrderProcessing.java) microservice gets an order update, it talks to the OrderPublisher (src/main/java/com/apress/ch04/sample07/OrderPublisher.java) class to publish the event.
@Service
public class OrderPublisher {
    @Autowired
    private Source source;
    public void publish(Order order) {
      source.output().send(MessageBuilder.withPayload(order).build());
    }
}
The name of the topic and the message broker information are picked from the /src/main/resources/application.properties file.
spring.cloud.stream.bindings.output.destination:ORDER_PROCESSING_COMPLETED
spring.cloud.stream.bindings.output.content-type:application/json
spring.cloud.stream.kafka.binder.zkNodes:localhost
spring.cloud.stream.kafka.binder.brokers: localhost
Use the following Maven command to spin up the publisher microservice.
> cd sample07
> mvn spring-boot:run

Building Consumer (Event Sink)

In this section, we discuss how to build the Inventory microservice (sample08) to consume the messages published to the ORDER_PROCESSING_COMPLETED topic. The Maven dependencies used in this example are the same as in the previous one, and they serve the same purpose. Let's go straight to the code that consumes messages from the topic. The consumeOderUpdates method of the InventoryApp class (sample08/src/main/java/com/apress/ch04/sample08/InventoryApp.java) simply reads from the topic and prints the order ID.
@SpringBootApplication
@EnableBinding(Sink.class)
public class InventoryApp {
    public static void main(String[] args) {
        SpringApplication.run(InventoryApp.class, args);
    }
    @StreamListener(Sink.INPUT)
    public void consumeOderUpdates(Order order) {
        System.out.println(order.getOrderId());
    }
}
The name of the topic and the message broker information are picked from the sample08/src/main/resources/application.properties file.
server.port=9000
spring.cloud.stream.bindings.input.destination:ORDER_PROCESSING_COMPLETED
spring.cloud.stream.bindings.input.content-type:application/json
spring.cloud.stream.kafka.binder.zkNodes:localhost
spring.cloud.stream.kafka.binder.brokers: localhost
Use the following Maven command to spin up the consumer microservice.
> cd sample08
> mvn spring-boot:run
To test the complete end-to-end flow, let’s use the following command to create an order via the Order Processing microservice.
> curl -v -H "Content-Type: application/json" -d '{"customer_id":"101021","payment_method":{"card_type":"VISA","expiration":"01/22","name":"John Doe","billing_address":"201, 1st Street, San Jose, CA"},"items":[{"code":"101","qty":1},{"code":"103","qty":5}],"shipping_address":"201, 1st Street, San Jose, CA"}' http://localhost:8080/order
HTTP/1.1 201
Location: http://localhost:8080/order/5f8ecb9c-8146-4021-aaad-b6b1d69fb80f

Now if you look at the console running the Inventory microservice (sample08), you find that the order ID is printed there.

You can also view the messages published to the ORDER_PROCESSING_COMPLETED topic with the following command.
> cd kafka_2.11-1.1.0
> bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --from-beginning --topic ORDER_PROCESSING_COMPLETED
{"customer_id":"101021","order_id":"d203e371-2a8a-4a4c-a286-11e5b723f3d7","payment_method":{"card_type":"VISA","expiration":"01/22","name":"John Doe","billing_address":"201, 1st Street, San Jose, CA"},"items":[{"code":"101","qty":1},{"code":"103","qty":5}],"shipping_address":"201, 1st Street, San Jose, CA"}

Building GraphQL Services

In Chapter 3, we discussed how GraphQL is used in some synchronous messaging scenarios in which the RESTful service architecture won’t fit. Let’s try to build a GraphQL-based service using Spring Boot. You can find the complete code for this available under ch04/sample09.

By adding the graphql-spring-boot-starter dependency to the project, you can get a GraphQL server running. Along with the GraphQL Java Tools library, you only need to write the code necessary for your service. First, we need to make sure that we have the following dependencies in our project.
<dependency>
    <groupId>com.graphql-java</groupId>
    <artifactId>graphql-spring-boot-starter</artifactId>
    <version>3.6.0</version>
</dependency>
<dependency>
    <groupId>com.graphql-java</groupId>
    <artifactId>graphql-java-tools</artifactId>
    <version>3.2.0</version>
</dependency>

Spring Boot will pick up the corresponding dependencies and set up the appropriate handlers to work automatically. This will expose the GraphQL service at the /graphql endpoint by default (configured in the /src/main/resources/application.properties file under sample09) and will accept POST requests containing GraphQL payloads. The GraphQL Tools library works by processing GraphQL schema files to build the correct structure and then wires special beans to this structure. The Spring Boot GraphQL Starter (graphql-spring-boot-starter) automatically finds these schema files. Those files need to be saved with the extension .graphqls and can be present anywhere in the classpath.

One mandatory requirement here is that there must be exactly one root query and at most one root mutation. The root query needs special beans defined in the Spring context to handle its various fields. The main requirements are that the beans implement the GraphQLQueryResolver interface and that every field in the root query from the schema has a method in one of those classes with the same name.
public class Query implements GraphQLQueryResolver {
    private BookDao bookDao;
    public List<Book> getRecentBooks(int count, int offset) {
        return bookDao.getRecentBooks(count, offset);
    }
}
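For illustration, a schema file matching the Query class above might look like the following. This is a hypothetical sketch — the field and type names are chosen to line up with the getRecentBooks method and the Book bean; the actual .graphqls file ships with sample09.

```graphql
type Query {
    recentBooks(count: Int!, offset: Int!): [Book]
}

type Book {
    id: ID
    title: String
    category: String
    authorId: String
}
```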

The method signature must have arguments corresponding to each parameter in the GraphQL schema. It should also return the correct return type for the type in the GraphQL schema. Any simple types—String, Int, List, etc.—can be used with the equivalent Java types, and the system just maps them automatically.

The getRecentBooks method in the previous code snippet handles any GraphQL queries for the recentBooks field in the schema defined earlier.
public class Book {
    private String id;
    private String title;
    private String category;
    private String authorId;
    // Getters and setters are omitted here for brevity.
}

A Java bean represents every complex type in the GraphQL server. Fields inside the Java bean map directly to the fields in the GraphQL response, based on the field names. GraphQL also has a companion tool called GraphiQL, a user interface (UI) that can communicate with any GraphQL server and execute queries and mutations against it. The instructions on how to run sample09 are available in the README file under the ch04/sample09 directory, in the samples Git repository.

Summary

In this chapter, we discussed how to build microservices with Spring Boot. The chapter started with a discussion of the different tools and developer frameworks available for microservice developers. Then we delved deep into building microservices, with four different communication protocols—REST, gRPC, Kafka, and GraphQL. In the next chapter, we focus on the data management aspect of microservices.
