The microservices architecture fosters building a software application as a suite of independent services. To realize a given business use case, we often need communication and coordination between multiple microservices. Therefore, integrating microservices and building inter-service communication has become one of the most challenging tasks in realizing the microservices architecture.
In most of the existing books and other resources, the concepts of microservice integration are barely discussed or are explained only in a very abstract manner. Therefore, in this chapter, we delve deep into the key challenges of integrating microservices, the patterns for integrating microservices, and the frameworks and languages that we can leverage.
Why We Have to Integrate Microservices
Microservices are designed to address specific fine-grained business functionality. Therefore, when you are building a software application using microservices, you have to build a communication structure between these services. As discussed in Chapter 1, “The Case for Microservices,” when you are using SOA, you build a set of services (web services) and integrate them using a central bus known as an Enterprise Service Bus (ESB).
When you do service integration using an ESB, you will have a set of virtual services that are tightly coupled into the centralized ESB runtime. As the ESB layer grows with business and network communication logic, it becomes one gigantic monolithic application in most enterprises. This means it has all the limitations of a monolithic application that we discussed in Chapter 1.
Microservice composition: Creating a composite service out of the existing microservices and exposing that as a business functionality to the consumers is one of the most common use cases in microservices architecture. The composition can be built using synchronous communication (active) or using asynchronous (reactive) communication patterns.
Building resilient inter-service communication: All microservice calls take place on the network and are prone to failures. Therefore, we need to implement the stability and resiliency patterns when we make inter-service calls.
Granular services and APIs: Most microservices are too fine-grained to be published as a business functionality/API for the consumers.
Microservices in brownfield enterprises: Microservices in enterprise applications need to integrate with existing legacy systems, proprietary systems (e.g., ERP systems), databases, and web APIs (e.g., Salesforce).
A microservices architecture favors an alternative approach to using a centralized ESB, known as smart endpoints and dumb pipes. All the requirements that we discussed above need to be implemented for microservices too. Let’s have a closer look at the concept of smart endpoints and dumb pipes.
Smart Endpoints and Dumb Pipes
Although this approach looks far more elegant than the conventional centralized ESB, there are several complexities that developers have to deal with. First and foremost, we should clearly understand that this approach doesn’t remove any of the business or network communication complexities of the ESB approach. This means that you need to implement all the capabilities that your service integration logic requires as part of your service. For instance, microservice-1 should contain the composition of multiple data types from the Order Processing and Shopping Cart microservices, as well as resilient communication to invoke those services (such as circuit breakers, failovers, etc.). It must also include any other cross-cutting capabilities that you require (such as security and observability). Also, if you are using polyglot microservices technologies, it’s likely that you will have to repeat the same implementation of commodity features, such as resilient communication, across multiple technologies.
It’s crucial to take these requirements of microservice integration into consideration when picking technologies for implementation. We’ll dive into the specifics of the requirements and the technologies that fit those requirements in the latter half of this chapter. But before that, it’s important to discuss some of the common anti-patterns related to microservices integration that you should avoid.
Anti-Patterns of Microservice Integration
There are several anti-patterns in integrating microservices that we should be aware of. Most of these anti-patterns have emerged from the complexity of integrating microservices and from attempts to replicate, within the microservices architecture, the same set of features that a centralized ESB offers.
Monolithic API Gateway for Integrating Microservices
The Netflix API is the “front door” to the Netflix ecosystem of microservices. As requests come from devices, the API provides the logic of composing calls to all services that are required to construct a response. It gathers whatever information it needs from the backend services, in whatever order needed, formats and filters the data as necessary, and returns the response. So, at its core, the Netflix API is an orchestration service that exposes coarse-grained APIs by composing fine-grained functionality provided by the microservices.
You can clearly observe that the orchestration layer, which is a monolithic component, contains a significant portion of the business logic in this scenario. This leads to many of the trade-offs associated with monolithic applications that we discussed in earlier chapters (such as no failure isolation, inability to scale independently, ownership issues, etc.).
This approach is pretty much the same as introducing an integration service that is not part of a monolithic runtime, while enforcing the API gateway-related functionality as part of the service runtime. Netflix also tried another alternative: keeping the API gateway as dumb as possible and introducing a composite service, where the API gateway simply acts as a pass-through runtime.
The key takeaway from this use case is that you shouldn’t use an API gateway as a monolithic runtime that hosts business logic. The service integration or composition logic must instead be part of another microservice (either at the API gateway layer or at the services layer).
Integrating Microservices with an ESB
There are some microservices implementations that bring the ESB back into a microservices architecture by using it as a runtime to implement the service integration. In most cases, the ESB is deployed in a container to serve the service integration of a specific use case. However, ESBs have inherent limitations: they are too bulky to run as containers, and they are not developer friendly because of their configuration-based integration style. In fact, some ESB vendors try to promote this pattern, but it is something that you should avoid when integrating microservices. (There are also container-friendly and lightweight versions of ESBs that can be used to independently integrate microservices, which is far better than using a central ESB.)
Using Homogeneous Technologies to Build all Your Microservices
We’ve discussed earlier that smart endpoints and dumb pipes means that all the useful features that we get out-of-the-box with ESBs now have to be implemented as part of our service logic. When we develop microservices, we need to consider that not all microservices are similar. Certain services focus more on business logic and computations, while other services are more about inter-service communication and network calls. If we stick to a single homogeneous set of technologies to build all these microservices, then we will have to put more effort into building the core components for integrating microservices than into the business logic of the service. For example, service integration often requires service discovery and resilient communication (such as circuit breakers). Some frameworks or programming languages offer these capabilities out-of-the-box while some don’t. Therefore, your architecture should be flexible enough to pick the right technology for the job.
Organizing Microservices
Identifying different types of microservices based on their interactions and using the most appropriate technologies to build them is the key to building a successful microservices architecture. If we take a closer look at a typical microservices implementation, we can categorize the services into a few different types. Based on service functionality and granularity, we can identify the following service categories.
Core Services
There are microservices that are fine-grained, self-contained (with no external service dependencies) and mostly consist of the business logic with little or no network communication logic. Given that these services do not have significant network communication functionalities, you are free to select any service implementation technology that can fulfill the service’s business logic. Also, these services may have their own private databases that are leveraged to build the business logic. Such microservices can be categorized as core or atomic microservices.
Integration Services
Core microservices often cannot be directly mapped to business functionalities, as they are too fine-grained. And any realistic business capability would require the interactions or composition of multiple microservices. Such interactions or compositions are implemented as an integration service or a composite service. These services often have to support a significant portion of ESB functionalities such as routing, transformations, orchestration, resiliency and stability patterns etc., at the service level itself.
Integration services serve a composite business functionality, are independent from each other, and contain business logic (routing, what services to call, how to do data type mapping, etc.) and network communication logic (inter-service communication through various protocols and resiliency behaviors such as circuit breakers). Also, they may or may not have private databases associated with the business functionality of the service. These services can bridge the other legacy and proprietary systems (e.g., ERP systems), external web APIs (e.g., Salesforce), shared databases, etc. (often known as the anti-corruption layer).
It’s very important to select the appropriate service development technology for building integration microservices. Since network communication is a critical part of integration services, you should select the most suitable technology for implementing these services. In the latter part of this chapter, we discuss the technologies and frameworks that are suitable for building these services.
API Services
You will expose a selected set of your composite services or even some core services as managed APIs using API services or edge services. These services are a special type of integration services, which apply basic routing capabilities, versioning of APIs, API security, throttling, monetization, API compositions, etc.
In most of these microservices implementations, API services are implemented as part of a monolithic API gateway runtime, which violates the core microservices architectural concepts. However, most API gateway solutions are now moving toward a micro-gateway capability, in which you can deploy your API services on an independent and lightweight runtime while managing them centrally. When it comes to implementation, the requirements are quite similar to those of integration services, with some additional features required. We discuss API services and API management more broadly in Chapter 10, “APIs, Events, and Streams”.
Now that we have a good understanding of the different types of microservices, let’s discuss some of the microservices integration patterns that are commonly used.
Microservices Integration Patterns
Now that we have identified the different microservices categories, it’s time to see how they can be used in real-world applications. When it comes to integrating microservices, we can identify several integration patterns. Let’s discuss them in detail, examine their pros and cons, and see when to use each pattern.
Active Composition or Orchestration
The key concept here is that, with an active composition, we create an integration service that depends on some other set of microservices. If you consider the theoretical aspects that we discussed in the first few chapters, this seems like a violation of the microservices principles. But it’s virtually impossible to build anything useful without depending on other services and systems. What is really important here is to understand the boundaries between these services and clearly define their capabilities.
Active compositions are commonly used when we need to control the service integration at a centralized service and when communication between dependent services is synchronous. Once you clearly define the business capability of an integration service, its business logic resides in a single service. That makes management and maintenance considerably easier.
Note
Synchronous communication doesn’t mean that the implementation is based on a blocking communication model. We can build synchronous communication on top of a fully non-blocking implementation where threads are not blocked on a given request-response interaction. We can leverage non-blocking programming models to implement such synchronous communication patterns.
This approach may not be the best fit if you have asynchronous or event-driven use cases. The dependencies between services can also be an issue for certain business use cases. Even if you use non-blocking techniques to implement the synchronous communication, the request is bound to the latency of all the dependent services. For example, when a given integration service is invoked and it calls its dependencies sequentially, the response time is bound to the sum of the latencies incurred by all the dependent services.
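When the dependent calls are independent of each other, a non-blocking implementation can invoke them in parallel, so the composite latency approaches the slowest dependency rather than the sum. The sketch below illustrates this with plain Java CompletableFutures; fetchOrder and fetchCustomer are hypothetical stand-ins for real downstream service calls.

```java
import java.util.concurrent.CompletableFuture;

// Sketch of an active composition: an integration service invokes two
// downstream microservices in parallel using non-blocking futures, then
// combines both responses into a single coarse-grained result.
public class ActiveComposition {

    // Hypothetical downstream calls; real ones would go over HTTP or gRPC.
    static CompletableFuture<String> fetchOrder(String orderId) {
        return CompletableFuture.supplyAsync(() -> "order:" + orderId);
    }

    static CompletableFuture<String> fetchCustomer(String customerId) {
        return CompletableFuture.supplyAsync(() -> "customer:" + customerId);
    }

    // Both calls run concurrently; thenCombine merges the two results.
    public static String composeOrderView(String orderId, String customerId) {
        return fetchOrder(orderId)
                .thenCombine(fetchCustomer(customerId),
                        (order, customer) -> order + "|" + customer)
                .join();
    }
}
```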
Reactive Composition or Choreography
As discussed in Chapter 3, “Inter-Service Communication,” the communication can be either queue-based (a single consumer) or pub-sub (multiple consumers). Based on your requirements, you can use Kafka, RabbitMQ, ActiveMQ, or a similar technology as the event bus.
Reactive composition makes microservices inherently autonomous. Since there is no service that contains centralized composition logic, these microservices are not dependent on each other. Each service becomes active only when a given event occurs; it then processes the message and completes its work by publishing the result to the event bus.
Note
Event stream processing or complex event processing can be considered a more powerful way to process a stream of events. Here we only discussed the event-based messaging. We discuss event stream processing in detail in Chapter 10.
The main tradeoffs of this approach are the complexity of the communication and the absence of business logic in a centralized service, both of which can make the system hard to understand. Since we are using an event bus/message bus, every service that we write must be able to publish and subscribe to the event bus. Also, without comprehensive observability across all the services, it’s difficult to understand the interactions and business logic that the reactive compositions implement.
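To make the choreography idea concrete, here is a minimal in-memory pub-sub sketch in Java. The topic names and handlers are illustrative; a real deployment would publish to a broker such as Kafka or RabbitMQ rather than an in-process bus.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// Minimal pub-sub event bus: services subscribe to topics and react when
// events are published, with no central orchestrator holding the
// composition logic.
public class EventBus {
    private final Map<String, List<Consumer<String>>> subscribers =
            new ConcurrentHashMap<>();

    public void subscribe(String topic, Consumer<String> handler) {
        subscribers.computeIfAbsent(topic, t -> new CopyOnWriteArrayList<>())
                   .add(handler);
    }

    public void publish(String topic, String event) {
        // Deliver the event to every subscriber of the topic.
        subscribers.getOrDefault(topic, List.of())
                   .forEach(handler -> handler.accept(event));
    }
}
```

For example, an order service can publish an order.placed event, and the shipping and billing services each react to it independently.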
Hybrid of Active and Reactive Composition
Often there are services that you expose to consumers as fully synchronous APIs. Such API calls will result in several other microservice calls, where some invocations can (or should) be done using a reactive approach. As discussed in Chapter 3, scenarios such as order placing and processing in a retail business use case are much more elegant when implemented in a reactive style. Hence, you have to pick and choose the style you want to use, depending on your business use cases.
A hybrid composition is usually more pragmatic for most enterprise microservice integrations.
Anti-Corruption Layer
You can introduce a microservices architecture into an enterprise while some of its subsystems are still based on a monolithic architecture. We can build a façade or an adapter layer between the microservices and the monolithic subsystems. In Chapter 2, “Designing Microservices,” we discussed the anti-corruption layer, which allows you to independently develop either the microservices components or the existing monolithic applications. The applications built on top of these two different architectural styles can interact with each other via the anti-corruption layer. The technologies and standards used for the monolithic portion and the microservices portion may be drastically different. That’s why building an anti-corruption layer is often required when it comes to building microservices integrations.
For instance, in our hybrid composition use case that we discussed in Figure 7-7, microservice-3 integrates the microservices subsystem with proprietary, legacy, and external web APIs, which are all part of the monolithic subsystem. That service is part of the anti-corruption layer. Typically, the technologies that you would use for building integration microservices can also be used for building services at the anti-corruption layer.
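As a sketch of what a service in the anti-corruption layer does, the following Java adapter translates a record from a hypothetical legacy ERP into the domain model used by the microservices; the field names and status convention are invented for illustration.

```java
// Anti-corruption layer sketch: translate a legacy ERP record into the
// domain model the microservices use, so the legacy representation never
// leaks into the new subsystem.
public class ErpCustomerAdapter {

    // Legacy shape with cryptic field names (hypothetical).
    public static class LegacyErpRecord {
        public final String custNo, nm, st;
        public LegacyErpRecord(String custNo, String nm, String st) {
            this.custNo = custNo; this.nm = nm; this.st = st;
        }
    }

    // Clean domain shape used by the microservices.
    public static class Customer {
        public final String id, name;
        public final boolean active;
        public Customer(String id, String name, boolean active) {
            this.id = id; this.name = name; this.active = active;
        }
    }

    public static Customer toDomain(LegacyErpRecord rec) {
        // In this hypothetical ERP, "A" in the status field means active.
        return new Customer("CUST-" + rec.custNo, rec.nm.trim(), "A".equals(rec.st));
    }
}
```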
Strangler Façade
In the context of microservices in enterprises, you will often have to deal with the existing non-microservice subsystems. By introducing a microservices architecture, you will be gradually replacing most of the existing subsystems. However, this is not something that will happen overnight. The strangler pattern proposes an approach that helps you incrementally migrate a non-microservice subsystem by gradually replacing specific pieces of functionality with new microservices. If you are introducing microservices to an enterprise, you will build a strangler façade that will selectively route the traffic between the modern and legacy systems. Over time, you will entirely replace the legacy system with the new microservices and will get rid of the strangler layer.
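A strangler façade can be sketched as a simple router that sends requests for already-migrated paths to new microservices and everything else to the legacy system. The path prefixes and handlers below are hypothetical.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Strangler facade sketch: migrated path prefixes route to new
// microservices; anything else falls through to the legacy handler.
// Prefixes are added incrementally until the legacy system can be retired.
public class StranglerFacade {
    private final Map<String, Function<String, String>> migrated =
            new ConcurrentHashMap<>();
    private final Function<String, String> legacyHandler;

    public StranglerFacade(Function<String, String> legacyHandler) {
        this.legacyHandler = legacyHandler;
    }

    public void migrate(String pathPrefix, Function<String, String> newService) {
        migrated.put(pathPrefix, newService);
    }

    public String route(String path, String request) {
        return migrated.entrySet().stream()
                .filter(e -> path.startsWith(e.getKey()))
                .findFirst()
                .map(e -> e.getValue().apply(request))
                .orElseGet(() -> legacyHandler.apply(request));
    }
}
```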
Key Requirements of Integration Services
By now you have a good understanding of the importance of integration microservices and microservice integration patterns. Let’s delve deep into the technologies that we can leverage to implement those patterns. However, before we do that, it will be beneficial to have a clear understanding of the specific requirements for building integration microservices.
Network Communication Abstractions
As we discussed in detail in Chapter 3, inter-microservice communication is absolutely essential to building a microservices-based application. Services are autonomous and the only way they interact and formulate business functionality is through inter-service communication. Therefore, for integration services, we must support different communication patterns such as synchronous and asynchronous communication and the associated network protocols.
In practice, for synchronous communication, RESTful services are heavily used, and native support for RESTful services and HTTP/1.1 is crucial. Also, many service implementation frameworks now leverage HTTP/2 as the default communication protocol to benefit from all the new capabilities introduced in HTTP/2.
In the context of synchronous communication, gRPC services are proliferating, and many microservice implementations use gRPC as the de-facto standard for internal microservices communication. Given that gRPC and protocol buffers cater to polyglot microservices implementations, they inherently address most of the inter-service communication requirements of microservices built with different languages.
Asynchronous service integrations are primarily built around queue-based communication (single receiver) and technologies such as AMQP are quite commonly used in practice. For pub-sub (event-driven multiple receiver communication), Kafka has become the de-facto standard for inter-service communication.
The things that you developed at the centralized ESB now must be implemented in your integration microservices. Microservices implementation technologies should cater to all of the capabilities that an ESB offers.
In addition to the primitive network communication protocols, microservices often need to integrate with web APIs such as Twitter, Salesforce, Google Docs, PayPal, and Twilio. While these SaaS applications offer network-accessible APIs, most integration products such as ESBs provide high-level abstractions that allow you to integrate with these systems with minimal effort. Ultimately, the integration microservices implementation technologies need to have a certain set of abstractions to integrate with such web APIs (for example, libraries or connectors to access web APIs such as the Twitter API, the PayPal API, etc.).
Resiliency Patterns
Now and forever, networks will always be unreliable.—Michael Nygard, “Release It”
Michael Nygard discusses several patterns related to inter-application communication over unreliable networks in his book, Release It. In this chapter, we take a closer look at the behavior of those patterns, try to understand them using real-world use cases, and look at some of the implementation details.
Timeout
When we are using synchronous inter-service communication, one service (the caller) sends a request and waits for a response in a timely manner. A timeout is all about deciding when to stop waiting for a response at the caller service level. When you call another service or a system using a specific protocol (e.g., HTTP), you can specify a timeout, and if that timeout is reached, you can define specific logic to handle such events.
It’s important to keep in mind that a timeout is an application-level concept and shouldn’t be confused with any similar implementations at the protocol level. For instance, when a given integration microservice calls two microservices, A and B, the integration microservice can define a timeout value for service A and service B separately. Timeouts help services isolate failures. A misbehavior or an anomaly of another system or service does not have to become your service’s problem. When you place a timeout on a call to an external service and have specific logic to handle timeout events, it becomes easier to isolate failures as well as handle them gracefully.
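A minimal sketch of an application-level timeout with a fallback, using plain Java CompletableFutures (falling back to a default response is an assumption; real services might retry or return a cached result instead):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

// Per-call application-level timeout: the caller stops waiting after a
// fixed deadline and returns a fallback instead of letting a slow
// dependency stall the whole request.
public class TimeoutCall {

    public static String callWithTimeout(CompletableFuture<String> serviceCall,
                                         long millis, String fallback) {
        try {
            return serviceCall.orTimeout(millis, TimeUnit.MILLISECONDS).join();
        } catch (Exception e) {
            // join() throws a CompletionException wrapping the TimeoutException
            // (or any remote failure); handle it locally so the remote anomaly
            // stays isolated from our caller.
            return fallback;
        }
    }
}
```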
Circuit Breaker
When there is an invocation failure, the circuit breaker records it and updates a failure count; based on the failure count or the frequency of failures, it opens the circuit. When the circuit is open, the real invocation of the external service is prevented, and the circuit breaker generates an error and returns immediately.
When the reset timeout is reached, the circuit changes to the half-open state, in which the circuit breaker allows a trial invocation of the external service for the next request that comes to the microservice. If that invocation succeeds, the circuit breaker changes back to the closed state; otherwise, it returns to the open state.
By design, a circuit breaker is a mechanism to gracefully degrade the functionality of a system when it is not performing well or is failing (rather than failing the entire system). It prevents further damage to the system and cascading failures. We can tune the circuit breaker with various backoff mechanisms, timeouts, reset intervals, error codes on which it must go directly into the open state, error codes that it should ignore, and so on. These behaviors are implemented at different levels and with different complexities by various circuit breaker implementations. It is important to keep in mind that circuit breaker behavior is closely related to the business requirements of a particular microservices-based application.
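The state transitions described above can be sketched as a small state machine in plain Java. This is a simplified illustration; production code would typically use a library such as Resilience4j rather than a hand-rolled breaker.

```java
// Minimal circuit breaker: after a configurable number of consecutive
// failures the circuit opens and calls fail immediately; once the reset
// timeout elapses a trial call is allowed (half-open), closing the circuit
// on success and re-opening it on failure.
public class CircuitBreaker {
    public enum State { CLOSED, OPEN, HALF_OPEN }

    private final int failureThreshold;
    private final long resetTimeoutMillis;
    private int consecutiveFailures = 0;
    private long openedAt = 0;
    private State state = State.CLOSED;

    public CircuitBreaker(int failureThreshold, long resetTimeoutMillis) {
        this.failureThreshold = failureThreshold;
        this.resetTimeoutMillis = resetTimeoutMillis;
    }

    public synchronized State state() {
        if (state == State.OPEN
                && System.currentTimeMillis() - openedAt >= resetTimeoutMillis) {
            state = State.HALF_OPEN; // allow a trial invocation
        }
        return state;
    }

    public synchronized boolean allowRequest() {
        return state() != State.OPEN;
    }

    public synchronized void recordSuccess() {
        consecutiveFailures = 0;
        state = State.CLOSED;
    }

    public synchronized void recordFailure() {
        consecutiveFailures++;
        if (state == State.HALF_OPEN || consecutiveFailures >= failureThreshold) {
            state = State.OPEN;
            openedAt = System.currentTimeMillis();
        }
    }
}
```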
Fail Fast
In the fail fast pattern, the key objective is to detect a failure as quickly as possible. It is built around the concept that a fast failure response is much better than a slow one. Hence, detecting a failure at an early stage is an important factor in inter-service communication. We can detect failures at different stages of the inter-service communication. In certain situations, just by looking at the contents of the request/message, we can decide that the request is not valid. In other cases, we can check system resources (such as thread pools, connections, socket limits, and databases) and the state of the downstream components in the request lifecycle.
Fail fast, together with timeouts, will help us develop microservices based applications that are stable and responsive.
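As a small illustration, the following hypothetical order-submission handler fails fast both on invalid input and on an exhausted capacity limit, before any expensive downstream work is attempted:

```java
// Fail-fast sketch: reject invalid or doomed requests immediately rather
// than letting them fail slowly somewhere downstream. The limit and the
// order fields are illustrative.
public class OrderValidator {
    private static final int MAX_IN_FLIGHT = 100;
    private int inFlight = 0;

    public synchronized String submit(String orderId, int quantity) {
        // Fail fast on bad input: no downstream calls are ever made.
        if (orderId == null || orderId.isEmpty() || quantity <= 0) {
            throw new IllegalArgumentException("invalid order request");
        }
        // Fail fast on exhausted resources instead of queueing indefinitely.
        if (inFlight >= MAX_IN_FLIGHT) {
            throw new IllegalStateException("capacity exhausted");
        }
        inFlight++;
        return "accepted:" + orderId;
    }
}
```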
Bulkhead
Bulkhead is a mechanism to partition your application so that an error that occurs in a partition is localized to that partition only. It won’t bring the entire system to an unstable state; only that partition will fail. The bulkhead pattern is heavily used in the core design principles of microservices. When we design microservices, we deliberately group similar operations into a single microservice, while independent business functionalities are implemented as separate microservices. Hence, microservices are deployed independently on different runtimes (VMs or containers), which means a failure of a given functionality does not affect the other functionalities.
However, if for some reason you have to implement two or more business functionalities inside a single service, you need to take precautions to partition your service so that the failure of a certain set of business operations will not affect the rest of the operations. In general, it is recommended to identify such independent operations and convert them to microservices if possible. However, if you can’t split them into services, there are certain techniques to implement bulkheads within a single service/application. For example, we can have a dedicated resource (such as a thread pool, storage, or database) to handle each partition of a service.
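A common way to implement a bulkhead within a single service is to give each partition its own bounded thread pool, as in this Java sketch (the pool sizes and operation names are illustrative):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Bulkhead sketch: each group of operations gets its own bounded thread
// pool, so exhausting one pool (for example, a slow reporting backend)
// cannot starve the threads that serve the other partition.
public class Bulkheads {
    private final ExecutorService ordersPool = Executors.newFixedThreadPool(4);
    private final ExecutorService reportsPool = Executors.newFixedThreadPool(2);

    public CompletableFuture<String> handleOrder(String id) {
        // Runs on the orders pool only; report traffic cannot exhaust it.
        return CompletableFuture.supplyAsync(() -> "order-done:" + id, ordersPool);
    }

    public CompletableFuture<String> runReport(String id) {
        return CompletableFuture.supplyAsync(() -> "report-done:" + id, reportsPool);
    }

    public void shutdown() {
        ordersPool.shutdown();
        reportsPool.shutdown();
    }
}
```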
Load Balancing and Failover
The key ideas behind load balancing and failover are quite simple. Load balancing is used to distribute the load across multiple microservice instances, while failover is used to reroute requests to alternate services if a given service fails. In conventional middleware implementations such as ESBs, these functionalities are implemented as part of the service logic. However, with the advancement of containers and container management systems such as Kubernetes, most of these functionalities are now built into the deployment ecosystem itself. Also, most cloud infrastructure vendors, such as Amazon Web Services (AWS), Google Cloud, and Azure, offer these capabilities as part of their infrastructure as a service (IaaS) offerings. We discuss containers and Kubernetes in detail in Chapter 8, “Deploying and Running Microservices”.
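For cases where you do implement these capabilities in service code, the following sketch shows client-side round-robin load balancing with failover to the next instance on error (the instance handlers are hypothetical stand-ins for remote calls):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Function;

// Round-robin load balancing with failover: requests rotate across service
// instances, and when the chosen instance throws, the call fails over to
// the next one. In Kubernetes or a cloud IaaS this is usually handled by
// the platform rather than in service code.
public class LoadBalancer {
    private final List<Function<String, String>> instances;
    private final AtomicInteger next = new AtomicInteger();

    public LoadBalancer(List<Function<String, String>> instances) {
        this.instances = instances;
    }

    public String call(String request) {
        int start = next.getAndIncrement();
        RuntimeException last = null;
        for (int i = 0; i < instances.size(); i++) {
            Function<String, String> target =
                    instances.get(Math.floorMod(start + i, instances.size()));
            try {
                return target.apply(request);
            } catch (RuntimeException e) {
                last = e; // failover: try the next instance
            }
        }
        throw last; // all instances failed
    }
}
```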
Active or Reactive Composition
As we discussed in the section on microservices integration patterns, building active or reactive service compositions is absolutely vital to any real-world microservices implementation. Therefore, microservice integration technologies should support building active and reactive compositions. At the implementation level this means the ability to invoke services through different protocols, use supporting components such as circuit breakers, and create composite business logic. For active compositions, support for synchronous service invocations (implemented on top of non-blocking threads with callbacks) is quite important. For reactive compositions, support for messaging styles such as pub-sub and queue-based messaging is required along with seamless integration with messaging backbones such as Kafka or RabbitMQ. Also, different message exchange patterns need to be implemented at the service level—the ability to mix and match such patterns is required. For example, the inbound request may be an asynchronous message, while the external (outbound) service invocations are synchronous. Therefore, we should be able to mix and match these message-exchange patterns.
Data Formats
When we are building compositions, we must create compositions out of different data formats. For instance, a given microservice may be exposed via a given data format (for inbound requests), while it invokes other services (outbound) that use different data formats. When we create compositions of these services, we must do type matching between these data formats and implement our service in a type-safe manner. Therefore, the service implementation technologies that we use should support all the different data formats and provide a convenient way to handle them. Data formats such as JSON, Avro, CSV, XML, and protocol buffers are widely used in practice.
Container-Native and DevOps Ready
Microservices development technologies that we use should be cloud-native and container-native. The same applies to integration services. When you are building an integration service, the development technologies that you use have to be cloud- and container-native. When we were discussing the anti-patterns for microservices integration, using ESB for integrating microservices was heavily discouraged. The primary reason behind that is that almost all the ESB technologies are NOT cloud- or container-native.
For a technology to be cloud- or container-native, the runtime should start within a few seconds (or less), and its memory footprint, CPU consumption, and storage requirements must be low. Therefore, when selecting a technology for microservice integration, we should consider all these aspects.
In addition to the container-native aspects of the runtime, the integration microservice development technologies should support native integration with containers and container management systems such as Kubernetes. What this means is how easily you can create a container out of the applications or services that you develop. Having support for configuring and creating the container-related artifacts with your service development technology will vastly improve the agility of your microservices development process. We cover the details of containers, Docker, and Kubernetes in Chapter 8.
Governance of Integration Services
In Figure 7-10, you can clearly see all the interactions between microservices and get a clear idea of the business use cases. However, just think about how these interactions look at the operational level. We will not have this kind of view if we don’t have a proper observability mechanism in place. The integration service development technology should therefore provide a seamless way to integrate with existing observability tools to get metrics, tracing, logging, service visualization, and alerting for your integration services.
Stateless, Stateful, or Long-Running Services
Microservices design favors stateless, immutable services, and most use cases can be realized with such stateless services. However, there are many use cases, such as business processes and workflows, that require stateful and long-running services. In conventional integration middleware, such requirements are implemented and supported at the ESB or business process solution level. With microservices, these requirements have to be built from the ground up, so having native support for such capabilities at the integration microservice level is useful.
The ability to build workflows, business processes, and distributed transactions with SAGAs (which is discussed in Chapter 5, “Data Management”) is a key requirement of stateful and long-running services.
Technologies for Building Integration Services
From what we have discussed so far in this chapter, it should be clear that there is no silver-bullet technology that we can use to build microservices. There are different types of microservices and each microservice addresses a drastically different set of requirements. Hence, to realize such microservices, we need to leverage polyglot microservice development technologies. In this section, we discuss some of the most commonly used microservices development technologies; ones that are more suitable for building microservice compositions or integration microservices.
There are microservices frameworks that are built on top of generic programming languages such as Java and provide abstractions via different technologies to cater to microservice composition. Integration frameworks, on the other hand, are not solely targeted to build microservices (rather they are built to address the generic enterprise integration requirements), but still they can be used to integrate microservices. Also, there are certain programming languages that cater to such microservice integration needs out-of-the-box.
Let’s take a closer look at some of the microservice development frameworks, integration frameworks, and generic programming languages that are commonly used in practice.
Note
If you find any issues in building or running the examples given in this book, refer to the README file under the corresponding chapter in the Git repository: https://github.com/microservices-for-enterprise/samples.git. We will update the examples and the corresponding README files in the Git repository to reflect any changes related to the tools, libraries, and frameworks used in this book.
Spring Boot
Spring Boot is one of the most popular frameworks for building microservices on Java. Its key features are to:
- Create stand-alone Spring applications.
- Embed Tomcat, Jetty, or Undertow directly (no need to deploy WAR files).
- Provide opinionated starter POMs to simplify your Maven configuration.
- Automatically configure Spring whenever possible.
- Provide production-ready features such as metrics, health checks, and externalized configuration.
- Require absolutely no code generation and no XML configuration.
Let’s discuss some of the key features offered by Spring Boot that are essential for integrating microservices.
RESTful Services
You can build composition microservices using Spring Boot’s service development features. For example, you can build a simple RESTful service using Spring Boot as follows.
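The following is a minimal sketch of such a service, assuming a Spring Boot project with spring-boot-starter-web on the classpath; the Greeting POJO and the /hello path are illustrative.

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
public class HelloServiceApplication {

    // A simple POJO; Spring converts it to JSON in the response body.
    static class Greeting {
        private final String message;
        Greeting(String message) { this.message = message; }
        public String getMessage() { return message; }
    }

    @GetMapping("/hello")
    public Greeting hello() {
        return new Greeting("Hello from Spring Boot");
    }

    public static void main(String[] args) {
        SpringApplication.run(HelloServiceApplication.class, args);
    }
}
```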
Since the response is a POJO, it is automatically converted to JSON. If you want to control the media type explicitly, you can do so with @GetMapping(path = "/hello", produces = MediaType.APPLICATION_JSON_VALUE) at the request-mapping level.
You can try this example from our examples in ch07/sample01.
Network Communication Abstractions
Consuming and producing data over the network is supported in Spring Boot via numerous abstractions, as discussed next.
HTTP
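Spring’s RestTemplate is the conventional abstraction for calling HTTP services. The following is a minimal sketch of an HTTP GET call; the URL and the String response binding are illustrative.

```java
import org.springframework.web.client.RestTemplate;

public class CatalogClient {
    public static void main(String[] args) {
        RestTemplate restTemplate = new RestTemplate();
        // GET a resource and bind the response body to a String
        String product = restTemplate.getForObject(
                "http://localhost:8080/products/101", String.class);
        System.out.println(product);
    }
}
```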
Similarly, RestTemplate also supports other HTTP verbs such as POST, PUT, and DELETE. When it comes to creating a service that exposes a RESTful interface, you can use Spring’s support for embedding the Tomcat servlet container as the HTTP runtime, instead of deploying to an external instance. You can try this example from our examples in ch07/sample02.
JMS
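Spring Boot supports JMS producers and consumers with very little code. The sketch below assumes a Spring Boot project with a JMS starter (for example, spring-boot-starter-activemq) on the classpath; the 'orders' destination name is illustrative.

```java
import org.springframework.jms.annotation.JmsListener;
import org.springframework.jms.core.JmsTemplate;
import org.springframework.stereotype.Component;

@Component
public class OrderMessaging {

    private final JmsTemplate jmsTemplate;

    public OrderMessaging(JmsTemplate jmsTemplate) {
        this.jmsTemplate = jmsTemplate;
    }

    // Publish a message to the 'orders' destination
    public void placeOrder(String payload) {
        jmsTemplate.convertAndSend("orders", payload);
    }

    // Consume messages from the same destination
    @JmsListener(destination = "orders")
    public void onOrder(String payload) {
        System.out.println("Received order: " + payload);
    }
}
```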
The JmsListener annotation defines the name of the Destination that this method should listen to and the reference to the JmsListenerContainerFactory is used to create the underlying message listener container. Passing a value to the containerFactory attribute is not necessary unless you need to customize the way the container is built, as Spring Boot registers a default factory if necessary.
The JmsTemplate contains many convenience methods to send a message. There are send methods that specify the destination using a javax.jms.Destination object and those that specify the destination using a string for use in a JNDI lookup. You can try this example from our examples in ch07/sample03.
Databases/JDBC
You can try this example from our examples in ch07/sample04. In addition to what we have mentioned, Spring provides the ability to integrate with numerous other network protocols.
Web APIs: Twitter
As you can see, Spring Boot offers one of the most comprehensive sets of capabilities for integrating your microservices with other systems and APIs.
Resiliency Patterns
We’ve applied @HystrixCommand to our original readingList() method. We also have a new method here, called reliable(). The @HystrixCommand annotation has reliable as its fallbackMethod, so if, for some reason, Hystrix opens the circuit on readingList(), we’ll have a default result to be shown. You can try this example from our examples in ch07/sample06.
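The fallback behavior that Hystrix provides can be illustrated with a minimal plain-Java sketch; this is not the Hystrix implementation, and all names are hypothetical. After a threshold of consecutive failures, the breaker opens and subsequent calls are short-circuited straight to the fallback.

```java
import java.util.function.Supplier;

// Minimal circuit breaker: opens after a threshold of consecutive failures,
// after which calls go straight to the fallback.
public class SimpleCircuitBreaker {
    private final int threshold;
    private int consecutiveFailures = 0;

    public SimpleCircuitBreaker(int threshold) { this.threshold = threshold; }

    public String call(Supplier<String> primary, Supplier<String> fallback) {
        if (consecutiveFailures >= threshold) {
            return fallback.get(); // circuit open: short-circuit
        }
        try {
            String result = primary.get();
            consecutiveFailures = 0; // success closes the circuit
            return result;
        } catch (RuntimeException e) {
            consecutiveFailures++;
            return fallback.get();
        }
    }

    public static void main(String[] args) {
        SimpleCircuitBreaker breaker = new SimpleCircuitBreaker(2);
        Supplier<String> failing = () -> { throw new RuntimeException("remote call failed"); };
        Supplier<String> fallback = () -> "default reading list";
        System.out.println(breaker.call(failing, fallback)); // failure 1 -> fallback
        System.out.println(breaker.call(failing, fallback)); // failure 2 -> fallback, circuit opens
        System.out.println(breaker.call(() -> "live data", fallback)); // still open -> fallback
    }
}
```

Production-grade breakers such as Hystrix additionally support timeouts, half-open probing to close the circuit again, and metrics.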
Data Formats
Jackson provides a comprehensive set of data processing capabilities for a diverse set of data types, including the flagship streaming JSON parser/generator library, the matching data-binding library (POJOs to and from JSON), and additional data format modules to process data encoded in Avro, BSON, CBOR, CSV, Smile, (Java) Properties, Protobuf, XML, or YAML. It even provides a large set of modules to support the data types of widely used libraries such as Guava, Joda, and PCollections, among many others. You can try this example from our examples in ch07/sample07.
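As a small sketch of Jackson’s data-binding API (the Product POJO here is illustrative, and a Jackson dependency is assumed on the classpath):

```java
import com.fasterxml.jackson.databind.ObjectMapper;

public class JacksonBindingSketch {
    // A simple POJO used for JSON data binding (illustrative)
    public static class Product {
        public String name;
        public double price;
        public Product() {}
        public Product(String name, double price) { this.name = name; this.price = price; }
    }

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        // POJO -> JSON
        String json = mapper.writeValueAsString(new Product("book", 29.99));
        // JSON -> POJO
        Product p = mapper.readValue(json, Product.class);
        System.out.println(json + " -> " + p.name);
    }
}
```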
Observability
You can enable metrics, logging, and distributed tracing for your Spring Boot microservices applications. It requires minimal changes in your microservices application to make your Spring Boot integration microservices observable. When we delve deep into observability concepts in Chapter 13, “Observability,” we discuss these capabilities in detail.
Dropwizard
Dropwizard is another popular microservice development framework. The main objective of Dropwizard is to provide performant, reliable implementations of everything a production-ready web application needs. Because this functionality is extracted into a reusable library, your application remains lean and focused, reducing both time-to-market and maintenance burdens.
Dropwizard uses the Jetty HTTP library to embed a tuned HTTP server directly into your project. Jersey is used as the RESTful web application development engine, while Jackson handles the data formats. Several other libraries are bundled with Dropwizard by default. However, unlike Spring Boot, it offers only a limited set of out-of-the-box features for microservice integrations that require working with multiple network protocols and web APIs.
Apache Camel and Spring Integration
Apache Camel is a conventional integration framework designed to address centralized integration/ESB needs. The key objective of the Apache Camel integration framework is to provide an easy-to-use mechanism for implementing Enterprise Integration Patterns (EIPs), such as content-based routing, transformations, protocol switching, and scatter-gather, in a trivial way with a small footprint and overhead, embeddable in your existing microservices.
Also, Apache Camel offers seamless integration with Spring Boot, which makes a powerful combination to facilitate microservice integration. You can try this example from our examples in ch07/sample08.
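As an illustration, a Camel route in the Java DSL might implement content-based routing as follows; the endpoint URIs and the header name are illustrative, not from a published sample.

```java
import org.apache.camel.builder.RouteBuilder;
import org.springframework.stereotype.Component;

@Component
public class OrderRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Content-based routing EIP: dispatch orders based on a header value
        from("direct:orders")
            .choice()
                .when(header("type").isEqualTo("priority"))
                    .to("jms:queue:priorityOrders")
                .otherwise()
                    .to("jms:queue:standardOrders");
    }
}
```

With the camel-spring-boot starter, such RouteBuilder beans are picked up and started automatically alongside the rest of the Spring Boot application.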
Spring Integration is quite similar to Apache Camel and it extends the Spring programming model to support well-known EIPs. Spring Integration enables lightweight messaging within Spring-based applications and supports integration with external systems via declarative adapters. Those adapters provide a higher level of abstraction over Spring's support for remoting, messaging, and scheduling. Spring Integration's primary goal is to provide a simple model for building enterprise integration solutions while maintaining the separation of concerns that is essential for producing maintainable, testable code.
If you compare and contrast Camel and Spring Integration, you may find that Spring Integration DSL exposes the lower-level EIPs (e.g., channels, gateways, etc.), whereas the Camel DSL focuses more on the high-level integration abstractions.
With either Camel or Spring Integration, you can build your microservices integration based on a well-defined DSL. However, keep in mind that you are constrained by this DSL, and you will have to do a lot of tweaking when you build real programming logic on top of it.
Also, both these DSLs can become pretty clunky in substantially complex integration scenarios. One could argue that for microservices integration we can completely omit the use of EIPs and instead implement them as part of the service code from scratch. However, if your use case needs most of the existing EIPs and connectors to various systems, then Camel or Spring Integration is a good choice.
Vert.x
Eclipse Vert.x is an event-driven, non-blocking, reactive, and polyglot software development toolkit that you can use to build microservices and integrate them. Vert.x is not a restrictive framework (it is an unopinionated toolkit) and it doesn’t force you to write an application in a certain way. You can use Vert.x with multiple languages, including Java, JavaScript, Groovy, Ruby, Ceylon, Scala, and Kotlin.
Vert.x integration capabilities also include gRPC, Kafka, AMQP (based on RabbitMQ), MQTT, STOMP, authentication and authorization, service discovery, and so on. In addition to the functional components, all the ecosystem-related capabilities, such as testing, clustering, DevOps, integration with Docker, and observability with metrics and health checks, make Vert.x one of the most comprehensive microservices and integration frameworks out there.
Akka
Akka is a set of open source libraries for designing scalable, resilient systems that span processor cores and networks. Akka is fully based on the actor model, a mathematical model of concurrent computation that treats actors as the universal primitives of concurrent computation. In response to a message that it receives, an actor can make local decisions, create more actors, send more messages, and determine how to respond to the next message received. Actors may modify their own private state but can only affect each other through messages, which avoids the need for locks.
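The actor idea can be sketched in plain Java (all names here are hypothetical, and this is in no way the Akka implementation): an actor owns a mailbox and private state, and a single dedicated thread drains the mailbox, so no locks are needed to protect the state.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// A tiny actor: private state, a mailbox, and a single thread draining it.
public class CounterActor {
    private int count = 0; // private state, touched only by the actor's thread
    private final BlockingQueue<String> mailbox = new LinkedBlockingQueue<>();
    private final Thread worker;

    public CounterActor() {
        worker = new Thread(() -> {
            try {
                while (true) {
                    String msg = mailbox.take(); // process one message at a time
                    if (msg.equals("stop")) return;
                    if (msg.equals("increment")) count++;
                }
            } catch (InterruptedException ignored) { }
        });
        worker.start();
    }

    public void tell(String msg) { mailbox.offer(msg); } // send a message

    public int stopAndGet() throws InterruptedException {
        mailbox.offer("stop");
        worker.join(); // join() makes reading count safe afterwards
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        CounterActor actor = new CounterActor();
        actor.tell("increment");
        actor.tell("increment");
        System.out.println(actor.stopAndGet()); // prints 2
    }
}
```

Akka goes far beyond this sketch: actors are cheap (not one thread each), supervised for fault tolerance, and location-transparent across the network.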
Akka aims to provide your microservices with multi-threaded behavior without the use of low-level concurrency constructs like atomics or locks (relieving you from even thinking about memory visibility issues), transparent remote communication between systems and their components, and a clustered, high-availability architecture that is elastic and scales in or out on demand.
You can leverage the Akka HTTP modules to implement HTTP-based services; Akka HTTP provides a full server- and client-side HTTP stack on top of akka-actor and akka-stream. It is not a web framework but rather a more general toolkit for providing and consuming HTTP-based services.
On top of the Akka HTTP, Akka provides a DSL to describe HTTP routes and how they should be handled. Each route is composed of one or more levels of directives that narrow down to handling one specific type of request.
For example, one route might start by matching the path of the request only if it finds a match to /order, then narrow it down to handle only HTTP GET requests, and then complete those with a string literal, which will be sent back as an HTTP 200 OK with the string as the response body. The route created using the Route DSL is then bound to a port to start serving HTTP requests. JSON support is available in Akka HTTP via Jackson.
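That example can be sketched in Akka HTTP’s Java routing DSL as follows; the /order path and the response text are illustrative, and an akka-http dependency is assumed.

```java
import static akka.http.javadsl.server.Directives.complete;
import static akka.http.javadsl.server.Directives.get;
import static akka.http.javadsl.server.Directives.path;

import akka.http.javadsl.server.Route;

public class OrderRoutes {
    // Matches GET /order and completes with a plain string body (HTTP 200 OK)
    public static Route orderRoute() {
        return path("order", () ->
                get(() ->
                        complete("Order service is up")));
    }
}
```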
Akka caters to the specific integration requirements of microservices and other types of integrations via the Alpakka initiative. Alpakka provides Akka Streams connectors, integration patterns, and data transformations for integration use cases. It offers numerous connectors, such as HTTP, Kafka, File, AMQP, JMS, CSV, web APIs (AWS S3, GCP Pub/Sub, and Slack), MongoDB, etc.
Here, AmqpSink is a collection of factory methods that facilitates the creation of sinks that publish messages to AMQP, while the corresponding sources allow you to fetch messages from AMQP.
Node, Go, Rust, and Python
Node.js is an open source, cross-platform JavaScript runtime environment that executes JavaScript code server-side. Node.js supports building RESTful services out-of-the-box and you can build your service on a full non-blocking I/O model, which leverages the event loop. (When Node.js starts, it initializes the event loop, processes the provided input script—which may make async API calls, schedules timers, or calls process.nextTick()—and then begins processing the event loop.)
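The event-loop model is not specific to JavaScript; a stripped-down, purely illustrative sketch in Java shows the core idea: callbacks are queued, and a single thread drains the queue, so handlers never run concurrently.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// A single-threaded event loop: tasks never run concurrently,
// so handlers need no locks -- the core idea behind Node.js.
public class EventLoopSketch {
    private final Deque<Runnable> queue = new ArrayDeque<>();

    public void schedule(Runnable task) { queue.addLast(task); }

    // Drain the queue; tasks may schedule further tasks.
    public void run() {
        while (!queue.isEmpty()) {
            queue.pollFirst().run();
        }
    }

    public static void main(String[] args) {
        EventLoopSketch loop = new EventLoopSketch();
        List<String> order = new ArrayList<>();
        loop.schedule(() -> {
            order.add("request received");
            // simulate an async callback: runs only after the current task returns
            loop.schedule(() -> order.add("response sent"));
        });
        loop.schedule(() -> order.add("timer fired"));
        loop.run();
        System.out.println(order); // [request received, timer fired, response sent]
    }
}
```

Node.js adds what this sketch omits: the queue is fed by non-blocking I/O completions, timers, and process.nextTick(), in well-defined phases.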
In addition to the standard set of features, Node.js has a diverse ecosystem that allows you to integrate a microservice based on Node.js with almost any of the other network protocols, databases, web APIs, and other systems.
There are multiple frameworks built on top of Node.js, such as Restify, a web services framework optimized for building semantically correct RESTful web services ready for production use at scale. Similarly, numerous libraries and packages are available for Node.js via npm (the Node.js package manager). For example, you can find npm packages for Kafka integration (kafka-node), AMQP (node-amqp), circuit breakers, and instrumentation libraries for most of the popular observability tools.
Go is also quite commonly used for microservices development and offers a rich set of packages for network communication.
Similarly, other programming languages, such as Rust and Python, offer quite a few out-of-the-box capabilities to build and integrate microservices. For Rust, we have Rocket, a web framework that makes it simple to write fast web applications without sacrificing flexibility or type safety. The Rust ecosystem components address most of the integration of Rust applications with other network protocols, data, web APIs, and other systems. However, some developers claim that Rust is too low-level to be a microservices development language, so we recommend trying it with some use cases before fully adopting it.
Similarly, Python has a broad ecosystem of production-ready frameworks, such as Flask, for microservice development and integration.
Ballerina
Ballerina is an emerging integration technology built as a programming language. It aims to fill the gap between integration products and general-purpose programming languages by making it easy to write programs that integrate and orchestrate across distributed microservices and endpoints in a type-safe and resilient manner.
At the time this book was written, Ballerina was at version 0.981. Most of the programming constructs are final, but some are subject to change, and the language is yet to be widely adopted across the microservices community.
Disclaimer
The authors of this book have contributed to the design and development of Ballerina. As we strive to keep the contents of this book technology- and vendor-neutral, we will not compare and contrast Ballerina with other similar technologies. We highly encourage readers to select the most appropriate technology after a thorough evaluation of their use cases and the potential technologies to realize them.
Both the code and the graphical syntax in Ballerina are inspired by how independent parties communicate via interactions in a sequence diagram.
In the code, remote systems are interfaced via endpoints that offer type-safe actions, and a worker’s logic is written as sequential code inside a resource or a function. You can define a service and bind it to a server endpoint (for example, an HTTP server endpoint can listen on a given HTTP port). Each service contains one or more resources in which we write the sequential code related to a worker; each resource is run by a dedicated worker thread.
Network-Aware Abstractions
Resilient and Safe Integration
The integration microservices you write using Ballerina are inherently resilient. You can make invocations of an external endpoint in a resilient and type-safe manner.
By design, the Ballerina code you write does not require specific tools for checking vulnerabilities or best practices. For example, a common issue in building distributed systems is that data coming over the wire cannot be trusted to be free of injection attacks. Ballerina assumes that all data coming over the wire is tainted, and compile-time checks prevent code that requires untainted data from accessing tainted data. Ballerina offers such capabilities as built-in constructs of the language, so the programmer is compelled to write secure code.
Data Formats
Ballerina has a structural type system with primitive, record, object, tuple, and union types. This type-safe model incorporates type inference at assignment and provides numerous compile-time integrity checks for connectors, logic, and network-bound payloads.
The code that integrates services and systems often has to deal with complex distributed errors. Ballerina has error-handling capabilities based on union types. Union types explicitly capture the semantics without requiring developers to create unnecessary wrapper types. When you decide to pass the errors back to the caller, you can use the check operator.
Observability
Monitoring, logging, and distributed tracing are the key methods that reveal the internal state of Ballerina code to provide observability. Ballerina provides out-of-the-box capabilities to work with observability tools such as Prometheus, Grafana, Jaeger, and Elastic Stack, all with minimal configuration.
Workflow Engine Solutions
We conclude our discussion of microservice integration technologies with technologies that are specifically designed for microservices that require workflows (i.e., long-running stateful processes that may also need some human interactions). Building workflows in a microservices architecture is a special case of integration requirements. There are quite a few new and existing solutions morphing into the microservices workflow domain. Zeebe, Netflix Conductor, Apache Nifi, AWS Step Functions, Spring Cloud Data Flow, and Microsoft Logic Apps are good examples.
Zeebe supports the stateful orchestration of workers and microservices using visual workflows (developed by Camunda, which is a popular open source Business Process Model and Notation—BPMN—solution). It allows users to define orchestration flows visually using BPMN 2.0 or YAML. Zeebe ensures that, once started, flows are always carried out fully, retrying steps in case of failures. Along the way, Zeebe maintains a complete audit log, so that the progress of flows can be monitored and tracked. Zeebe is a big data system and scales seamlessly with growing transaction volumes. (You can try a Zeebe example from our examples in ch07/sample17.)
Netflix Conductor is an open source workflow engine that uses a JSON DSL to define a workflow. Conductor allows creating complex process/business flows in which an individual task is implemented by a microservice.
Apache NiFi is a conventional integration framework that supports powerful and scalable directed graphs of data routing, transformation, and system mediation logic.
Inception of the Service Mesh
What we have seen so far in this chapter is that microservices have to work with other microservices, data, web APIs, and other systems. Since we don’t use a centralized ESB as a bus to connect all these services and systems, inter-service communication is now the responsibility of the service developer. Although many microservices frameworks address most of these needs, it is still a daunting task for the service developer to take care of all the requirements of integrating microservices.
To overcome this problem, architects identified that some of the inter-service communication functionality can be treated as commodity features and the service code can be made independent of them. The core concept of a Service Mesh is to identify commodity network communication functionality, such as circuit breakers, timeouts, basic routing, service discovery, secure communication, and observability, and implement it in a component called a sidecar, which runs alongside each microservice you develop. These sidecars are controlled by a central management layer called the control plane.
With the advent of the Service Mesh, the microservice developers get more freedom when it comes to polyglot service development technologies and they focus less on the inter-service communication. They can focus more on the business capabilities of a service that they develop.
We discuss the Service Mesh in detail in Chapter 9, “Service Mesh,” and delve deep into some of the existing service mesh implementations.
Summary
In this chapter, we took an in-depth look at the microservices integration challenges. With the omission of the ESB, we need to practice the smart endpoints and dumb pipes philosophy when we develop services. With this approach, most of the ESB capabilities now need to be supported by the microservices that we develop. We identified some of the commonly used microservice integration patterns: active compositions/orchestration, reactive compositions/choreography, and a hybrid approach. It is most pragmatic to stick to a hybrid approach to integrating microservices and select the integration patterns based on your use cases.
In order to facilitate microservices integration, microservices development frameworks need a unique set of capabilities, such as built-in network communication abstractions, support for resiliency patterns, native support for data types, the ability to govern integration microservices, and a cloud- and container-native nature. With respect to those requirements, we had a detailed discussion of some of the key microservice implementation technologies. With the inception of the Service Mesh, some of the microservice integration requirements can be offloaded to a distributed network communication abstraction, which is executed as a sidecar alongside each service and controlled by a centralized control plane. In the next chapter, we discuss how to deploy and run microservices.