© Kasun Indrasiri and Prabath Siriwardena 2018
Kasun Indrasiri and Prabath Siriwardena, Microservices for the Enterprise, https://doi.org/10.1007/978-1-4842-3858-5_7

7. Integrating Microservices

Kasun Indrasiri1  and Prabath Siriwardena1
(1)
San Jose, CA, USA
 

The microservices architecture fosters building a software application as a suite of independent services. To realize a given business use case, we often need communication and coordination between multiple microservices. Therefore, integrating microservices and building inter-service communication has become one of the most challenging tasks in realizing the microservices architecture.

In most existing books and other resources, the concepts of microservice integration are barely discussed or are explained in a very abstract manner. Therefore, in this chapter, we delve deep into the key challenges of integrating microservices, the patterns for integrating them, and the frameworks and languages that we can leverage.

Why We Have to Integrate Microservices

Microservices are designed to address specific fine-grained business functionality. Therefore, when you are building a software application using microservices, you have to build a communication structure between these services. As discussed in Chapter 1, “The Case for Microservices,” when you are using SOA, you build a set of services (web services) and integrate them using a central bus known as an Enterprise Service Bus (ESB).

As shown in Figure 7-1, with the ESB-based approach, you build business logic as part of this centralized bus as well as various network communication functions such as resilient communication (circuit breaker and timeouts) and quality of service aspects (we discuss these capabilities in detail in the latter part of the chapter).
Figure 7-1

Integrating services with ESB in SOA

When you do service integration using an ESB, you will have a set of virtual services that are tightly coupled into the centralized ESB runtime. As the ESB layer grows with business and network communication logic, it becomes one gigantic monolithic application in most enterprises. This means it has all the limitations of a monolithic application that we discussed in Chapter 1.

When you move into a microservices architecture, you will still need to integrate your microservices to build any meaningful business use case. There are several important requirements in a microservices architecture that make microservice integration quite critical.
  • Microservice composition: Creating a composite service out of the existing microservices and exposing that as a business functionality to the consumers is one of the most common use cases in microservices architecture. The composition can be built using synchronous communication (active) or using asynchronous (reactive) communication patterns.

  • Building resilient inter-service communication: All microservice calls take place on the network and are prone to failures. Therefore, we need to implement the stability and resiliency patterns when we make inter-service calls.

  • Granular services and APIs: Most microservices are too fine-grained to be published as a business functionality/API for the consumers.

  • Microservices in brownfield enterprises: Microservices in enterprise applications need to integrate with existing legacy systems, proprietary systems (e.g., ERP systems), databases, and web APIs (e.g., Salesforce).

A microservices architecture favors an alternative to the centralized ESB, known as smart endpoints and dumb pipes. All the requirements that we discussed above need to be implemented for microservices too. Let's take a closer look at the concept of smart endpoints and dumb pipes.

Smart Endpoints and Dumb Pipes

With the smart endpoints and dumb pipes approach, when we have to integrate microservices, we shouldn't use a centralized monolithic ESB architecture but a fully decentralized approach of services communicating via a dumb messaging infrastructure. All the smarts live in the endpoints (services and consumers), and the intermediate messaging channels have no business or network communication logic. Therefore, as shown in Figure 7-2, the integration of microservices is taken care of by another set of microservices. They are responsible for the integration logic as well as the network communication to invoke the downstream services. These services may be built using different technologies and, unlike with an ESB, each integration service is autonomous.
Figure 7-2

Smart endpoints and dumb pipes: all the smarts live at the endpoint (service) level while the endpoints communicate via a dumb messaging infrastructure

Although this approach looks far more elegant than the conventional centralized ESB, there are several complexities that developers have to deal with. First and foremost, we should clearly understand that this approach doesn't remove any of the business or network communication complexities of the ESB approach. This means you need to implement all the capabilities that your service integration logic requires as part of your service. For instance, microservice-1 should contain the composition of multiple data types from the Order Processing and Shopping Cart microservices, as well as the resilient communication needed to invoke those services (such as circuit breakers, failovers, etc.). It must also include any other cross-cutting capabilities that you require (such as security and observability). Also, if you are using polyglot microservices technologies, it's likely that you will have to repeat the same implementation of commodity features, such as resilient communication, in multiple technologies.

It’s crucial to take these requirements of microservice integration into consideration when picking technologies for implementation. We’ll dive into the specifics of the requirements and the technologies that fit those requirements in the latter half of this chapter. But before that, it’s important to discuss some of the common anti-patterns related to microservices integration that you should avoid.

Anti-Patterns of Microservice Integration

There are several anti-patterns in integrating microservices that we should be aware of. Most of these patterns have emerged because of the complexity of integrating microservices and trying to replicate the same set of features that a centralized ESB offers with the microservices architecture.

Monolithic API Gateway for Integrating Microservices

One common anti-pattern is to use an API gateway as the service integration (or composition) layer to expose business services to consumers. For example, suppose you have developed several microservices, and the business functionality that you want to expose requires some collaboration (or orchestration) between multiple services. What you need to build is a composite microservice that talks to a couple of downstream services and exposes the composite functionality. In many microservices implementations, however, the integration logic is developed as part of the API gateway, which is more or less a monolithic component. There are many real-world examples of this from existing microservices implementations. For example, Figure 7-3 illustrates how the Netflix API gateway1 was initially implemented.
Figure 7-3

Netflix API gateway: service integration is done at the API gateway level and multiple APIs are part of the monolithic API gateway layer

Netflix is probably the most popular and successful microservices implementation out there. Netflix exposes their internal services through the Netflix API layer. They explain the functionality of the Netflix API as follows:

The Netflix API is the “front door” to the Netflix ecosystem of microservices. As requests come from devices, the API provides the logic of composing calls to all services that are required to construct a response. It gathers whatever information it needs from the backend services, in whatever order needed, formats and filters the data as necessary, and returns the response. So, at its core, the Netflix API is an orchestration service that exposes coarse-grained APIs by composing fine-grained functionality provided by the microservices.

You can clearly observe that the orchestration layer, which is a monolithic component, contains a significant portion of the business logic in this scenario. This leads to the numerous trade-offs associated with monolithic applications that we discussed in earlier chapters (such as no failure isolation, the inability to scale independently, ownership issues, etc.).

Netflix identified the drawbacks of this approach and introduced a new architecture for the very same scenario with a segregated API gateway layer, which is no longer monolithic. As shown in Figure 7-4, at the API gateway layer, each composition service is implemented as an independent entity.
Figure 7-4

Netflix API gateway with independent APIs that integrate microservices

This approach is pretty much the same as introducing an integration service that is not part of a monolithic runtime, with the API gateway-related functionality enforced as part of the service runtime. Netflix also tried another alternative: keeping the API gateway as dumb as possible and introducing a composite service, where the API gateway simply acts as a pass-through runtime.

The key takeaway from this use case is that you shouldn't use an API gateway as a monolithic runtime in which to put business logic. The service integration or composition logic must be part of another microservice (either at the API gateway layer or at the services layer).

Integrating Microservices with an ESB

Some microservices implementations bring the ESB back into a microservices architecture by using it as a runtime to implement the service integration. In most cases, the ESB is deployed in a container to serve the service integration of a specific use case. However, ESBs have inherent limitations: they are too bulky to run as containers, they are not developer friendly because of their configuration-based integration style, and so on. In fact, some ESB vendors try to promote this pattern, but it is something you should avoid when integrating microservices. (There are also container-friendly and lightweight versions of ESBs that can be used to independently integrate microservices, which is far better than using a central ESB.)

Using Homogeneous Technologies to Build All Your Microservices

We discussed earlier that smart endpoints and dumb pipes literally means that all the cool features we get out-of-the-box with ESBs now have to be implemented as part of our service logic. When we develop microservices, we need to consider that not all microservices are similar. Certain services focus more on business logic and computation, while others are more about inter-service communication and network calls. If we stick to a single homogeneous set of technologies to build all these microservices, then we will have to put more effort into building the core components for integrating microservices than into the business logic of the services. For example, service integration often requires service discovery and resilient communication (such as circuit breakers). Some frameworks and programming languages offer these capabilities out-of-the-box, while others don't. Therefore, your architecture should be flexible enough to pick the right technology for the job.

Organizing Microservices

Identifying different types of microservices based on their interactions and using the most appropriate technologies to build them is the key to building a successful microservices architecture. If we take a closer look at a microservices implementation, we can identify a few distinct categories of services. Based on service functionality and granularity, we can identify the following service categories.

Core Services

There are microservices that are fine-grained, self-contained (with no external service dependencies) and mostly consist of the business logic with little or no network communication logic. Given that these services do not have significant network communication functionalities, you are free to select any service implementation technology that can fulfill the service’s business logic. Also, these services may have their own private databases that are leveraged to build the business logic. Such microservices can be categorized as core or atomic microservices.

Integration Services

Core microservices often cannot be directly mapped to business functionalities, as they are too fine-grained, and any realistic business capability requires the interaction or composition of multiple microservices. Such interactions or compositions are implemented as integration services (or composite services). These services often have to support a significant portion of ESB functionality, such as routing, transformations, orchestration, and resiliency and stability patterns, at the service level itself.

Integration services serve a composite business functionality, are independent from each other, and contain business logic (routing, what services to call, how to do data type mapping, etc.) and network communication logic (inter-service communication through various protocols and resiliency behaviors such as circuit breakers). Also, they may or may not have private databases associated with the business functionality of the service. These services can bridge the other legacy and proprietary systems (e.g., ERP systems), external web APIs (e.g., Salesforce), shared databases, etc. (often known as the anti-corruption layer).

It’s very important to select the appropriate service development technology for building integration microservices. Since network communication is a critical part of integration services, you should select the most suitable technology for implementing these services. In the latter part of this chapter, we discuss the technologies and frameworks that are suitable for building these services.

API Services

You will expose a selected set of your composite services or even some core services as managed APIs using API services or edge services. These services are a special type of integration services, which apply basic routing capabilities, versioning of APIs, API security, throttling, monetization, API compositions, etc.

In most microservices implementations, API services are implemented as part of a monolithic API gateway runtime, which violates the core microservices architectural concepts. However, most API gateway solutions are now moving toward a micro-gateway capability, in which you deploy your API services on an independent and lightweight runtime while managing them centrally. When it comes to implementation, the requirements are pretty similar to those of the integration services, with some additional features required. We discuss API services and API management more broadly in Chapter 10, “APIs, Events, and Streams”.

Now that we have a good understanding of the different types of microservices, let's discuss some of the microservices integration patterns that are commonly used.

Microservices Integration Patterns

Having identified the different microservice categories, it's time to see how they can be used in real-world applications. When it comes to integrating microservices, we can identify a couple of integration patterns. Let's discuss them in detail and look at their pros and cons, along with when to use each pattern.

Active Composition or Orchestration

Microservice integration can be implemented in such a way that a given (integration) microservice actively calls several other services (can be core or composite service). The business logic and the network communication are built as part of the integration service. The integration microservice should formulate business functionality out of the composition that it does. For example, as illustrated in Figure 7-5, microservice-1 calls microservice-4 and microservice-5 synchronously. The business capability that microservice-1 offers is a composition of the capabilities of microservice-4 and microservice-5. Also, the integration service that we develop can be exposed as an API via the API gateway layer.
Figure 7-5

Active composition of microservices: a given integration microservice calls multiple microservices and formulates a business capability

The key concept here is that, with an active composition, we create an integration service that's dependent on some other set of microservices. If you consider the theoretical aspects that we discussed in the first few chapters, this seems like a violation of the microservices principles. But it's virtually impossible to build anything useful without depending on other services and systems. What is really important here is to understand the boundaries between these services and clearly define their capabilities.

Active compositions are commonly used when we need to control the service integration at a centralized service and when communication between dependent services is synchronous. Once you clearly define the business capability for an integration service, its business logic resides in a single service. That makes management and maintenance much easier.

Note

Synchronous communication doesn’t mean that the implementation is based on a blocking communication model. We can build synchronous communication on top of a fully non-blocking implementation where threads are not blocked on a given request-response interaction. We can leverage non-blocking programming models to implement such synchronous communication patterns.

This approach may not be the best fit if you have asynchronous or event-driven use cases. The dependencies between services can also be an issue for certain business use cases. Even if you use non-blocking techniques to implement the synchronous communication, the request is bound to the latency of all the dependent services. For example, when a given integration service is invoked, its latency is bound to the sum of the latencies incurred by all the dependent services.
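The active composition pattern can be sketched in a few lines. The following is a minimal, illustrative sketch, not a production implementation: the downstream services are stubbed as in-process functions (all names here are hypothetical), where a real integration service would make HTTP or gRPC calls guarded by timeouts and circuit breakers.

```python
# Active composition sketch: an integration service synchronously calls two
# downstream services and formulates a composite business capability.

def order_processing_service(order_id):
    # Stand-in for a network call to the Order Processing microservice.
    return {"order_id": order_id, "status": "CONFIRMED"}

def shopping_cart_service(customer_id):
    # Stand-in for a network call to the Shopping Cart microservice.
    return {"customer_id": customer_id, "total": 42.50}

def checkout_summary(order_id, customer_id):
    """Integration service: composes both downstream responses into a
    single business-level payload exposed to consumers."""
    order = order_processing_service(order_id)
    cart = shopping_cart_service(customer_id)
    return {
        "order_id": order["order_id"],
        "status": order["status"],
        "total": cart["total"],
    }
```

Note that `checkout_summary` owns the composition logic entirely; the downstream services remain unaware of each other, which is what keeps the endpoints smart and the pipes dumb.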

Reactive Composition or Choreography

With the reactive communication style, we don’t have a service that synchronously calls other services. Instead, all the interactions between services are implemented using the asynchronous event-driven communication style. For example, as shown in Figure 7-6, the communication between the microservices and the consumer application is done via event-driven asynchronous messaging. Therefore, we need to use an event bus as the messaging backbone. The event bus is a dumb messaging infrastructure and all the logic resides at the service level.
Figure 7-6

Reactive compositions with asynchronous event-driven communication

As discussed in Chapter 3, “Inter-Service Communication,” the communication can be either queue-based (a single consumer) or pub-sub (multiple consumers). Based on your requirements, you can use Kafka, RabbitMQ, ActiveMQ, etc., as the event bus.

Reactive composition makes microservices inherently autonomous. Since there is no service containing centralized composition logic, these microservices are not dependent on each other. A given service only becomes active when an event occurs; it then processes the message and completes its work by publishing the result to the event bus.
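The interaction style above can be sketched with a toy in-memory event bus. This is an assumption-laden illustration (topic names and payloads are invented): a real deployment would use Kafka, RabbitMQ, or similar, with the services as separate processes, but the shape of the logic is the same: the bus only delivers events, and all the smarts live in the subscribers.

```python
class EventBus:
    """Dumb pipe: delivers events to subscribers; holds no business logic."""

    def __init__(self):
        self._subscribers = {}

    def subscribe(self, topic, handler):
        self._subscribers.setdefault(topic, []).append(handler)

    def publish(self, topic, event):
        for handler in self._subscribers.get(topic, []):
            handler(event)

bus = EventBus()
processed = []

# Each "service" reacts to events; services never call each other directly.
# The order service reacts to a placed order by requesting payment.
bus.subscribe("order.placed", lambda e: bus.publish(
    "payment.requested", {"order_id": e["order_id"], "amount": e["amount"]}))
# The payment service reacts to the payment request.
bus.subscribe("payment.requested", lambda e: processed.append(e["order_id"]))

# A consumer application publishes the initial event.
bus.publish("order.placed", {"order_id": "o-100", "amount": 25.0})
```

Notice that removing the payment subscriber would not break the order service; the services are coupled only to event topics, not to each other.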

Note

Event stream processing or complex event processing can be considered a more powerful way to process a stream of events. Here we only discussed the event-based messaging. We discuss event stream processing in detail in Chapter 10.

The main tradeoffs of this approach are the complexity of the communication and the absence of business logic in a centralized service, which can make the system hard to understand. Since we are using an event bus/message bus, every service that we write must be able to publish and subscribe to the event bus. Also, without comprehensive observability across all the services, it's difficult to understand the interactions and business logic that the reactive compositions implement.

Hybrid of Active and Reactive Composition

Both active and reactive composition styles have their own pros and cons. What we have seen in most pragmatic microservices implementations is that there are certain scenarios in which active composition is the best fit, while in others reactive composition is essential. Our recommendation is to use a hybrid of these approaches, guided by your microservices integration use cases. For instance, Figure 7-7 illustrates a hybrid of these two styles.
Figure 7-7

Hybrid of an active and reactive microservices integration

Often there are services that you would expose to consumers as fully synchronous APIs. Such API calls will result in several other microservice calls, where some invocations can (or should) be done using a reactive approach. As discussed in Chapter 3, scenarios such as order placing and processing in a retail business use case are much more elegant when implemented in a reactive style. Hence, you have to pick and choose the style that you want to use, depending on your business use cases.

A hybrid composition is usually more pragmatic for most enterprise microservice integrations.

Anti-Corruption Layer

You can introduce a microservices architecture into an enterprise while some of its subsystems are still based on a monolithic architecture. We can build a façade or an adapter layer between the microservices and the monolithic subsystems. In Chapter 2, “Designing Microservices,” we discussed the anti-corruption layer, which allows you to independently develop either the microservices components or the existing monolithic applications. The applications built on top of these two different architectural styles can interact with each other via the anti-corruption layer. The technologies and standards used for the monolithic portion and the microservices portion may be drastically different. That's why building an anti-corruption layer is often required when building microservices integrations.

For instance, in the hybrid composition use case shown in Figure 7-7, microservice-3 integrates the microservices subsystem with proprietary, legacy, and external web APIs, which are all part of the monolithic subsystem. That service is part of the anti-corruption layer. Typically, the technologies that you would use for building integration microservices can also be used for building services at the anti-corruption layer.

Strangler Façade

In the context of microservices in enterprises, you will often have to deal with the existing non-microservice subsystems. By introducing a microservices architecture, you will be gradually replacing most of the existing subsystems. However, this is not something that will happen overnight. The strangler pattern proposes an approach that helps you incrementally migrate a non-microservice subsystem by gradually replacing specific pieces of functionality with new microservices. If you are introducing microservices to an enterprise, you will build a strangler façade that will selectively route the traffic between the modern and legacy systems. Over time, you will entirely replace the legacy system with the new microservices and will get rid of the strangler layer.
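The routing decision at the heart of the strangler façade can be sketched as follows. This is a simplified illustration (path prefixes and handler names are hypothetical); in practice the façade is usually a reverse proxy or API gateway rule set, and the migrated-path list grows as functionality moves to microservices.

```python
def legacy_handler(path):
    # Stand-in for forwarding the request to the legacy monolith.
    return "legacy:" + path

def microservice_handler(path):
    # Stand-in for forwarding the request to a new microservice.
    return "new:" + path

# Paths whose functionality has already been migrated to microservices.
MIGRATED_PREFIXES = ["/orders", "/cart"]

def strangler_facade(path):
    """Selectively route traffic: migrated paths go to the new
    microservices, everything else still goes to the legacy system."""
    if any(path.startswith(prefix) for prefix in MIGRATED_PREFIXES):
        return microservice_handler(path)
    return legacy_handler(path)
```

As migration progresses, prefixes move into `MIGRATED_PREFIXES` one by one; once every path is migrated, the façade (and the legacy branch) can be deleted.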

Key Requirements of Integration Services

By now you have a good understanding of the importance of integration microservices and microservice integration patterns. Let’s delve deep into the technologies that we can leverage to implement those patterns. However, before we do that, it will be beneficial to have a clear understanding of the specific requirements for building integration microservices.

Network Communication Abstractions

As we discussed in detail in Chapter 3, inter-microservice communication is absolutely essential to building a microservices-based application. Services are autonomous and the only way they interact and formulate business functionality is through inter-service communication. Therefore, for integration services, we must support different communication patterns such as synchronous and asynchronous communication and the associated network protocols.

In practice, for synchronous communication, RESTful services are heavily in use and native support for RESTful services and HTTP 1.1 is crucial. Also, many service implementation frameworks now leverage HTTP2 as the default communication protocol to benefit from all the new capabilities introduced in HTTP2.

In the context of synchronous communication, gRPC services are proliferating, and many microservice implementations use gRPC as the de-facto standard for internal microservices communication. Given that gRPC and protocol buffers2 cater to polyglot microservices implementations, they inherently cater to most of the inter-service communication requirements of microservices built with different languages.

Asynchronous service integrations are primarily built around queue-based communication (single receiver) and technologies such as AMQP are quite commonly used in practice. For pub-sub (event-driven multiple receiver communication), Kafka has become the de-facto standard for inter-service communication.

What we have discussed so far covers the standards and the latest technologies commonly used for inter-microservice communication. But how do we communicate with legacy or proprietary systems in our enterprise microservices ecosystem? Microservices implementation technologies must cater to those legacy and proprietary integration use cases too. For instance, if you are using an ERP system in your enterprise, you can't build a useful application without interacting with it. Hence, if required, a microservice must be able to communicate with such legacy or proprietary systems. This leads us to the conclusion that microservices implementation technologies should be able to handle any of the network communication protocols that an ESB supports.

The things that you developed at the centralized ESB now must be implemented in your integration microservices. Microservices implementation technologies should cater to all of the capabilities that an ESB offers.

In addition to the primitive network communication protocols, microservices often need to integrate with web APIs such as Twitter, Salesforce, Google Docs, PayPal, Twilio, etc. While these SaaS applications offer network-accessible APIs, most integration products, such as ESBs, have high-level abstractions that allow you to integrate with these systems with minimal effort. Ultimately, the integration microservices implementation technologies need to have a certain set of abstractions to integrate with such web APIs (for example, libraries or connectors to access web APIs such as the Twitter API, PayPal API, etc.).

Resiliency Patterns

As discussed in Chapter 2, one of the key fallacies of distributed computing is that the network is reliable. Inter-microservice communication and integration microservices must always account for microservices communication over unreliable networks.

Now and forever, networks will always be unreliable.—Michael Nygard, “Release It!”

Michael Nygard discusses several patterns related to inter-application communication over unreliable networks in his book, Release It!. In this chapter, we take a closer look at the behavior of those patterns, in order to understand them using real-world use cases and look at some of the implementation details.

Timeout

When we are using synchronous inter-service communication, one service (the caller) sends a request and waits for a response in a timely manner. Timeout is all about deciding when to stop waiting for a response at the caller service level. When you call another service or system using a specific protocol (e.g., HTTP), you can specify a timeout, and if that timeout is reached, you can define specific logic to handle such events.

It's important to keep in mind that a timeout is an application-level concern and shouldn't be confused with similar implementations at the protocol level. For instance, when a given integration microservice calls two microservices, A and B, it can define a separate timeout value for each. Timeouts help services isolate failures: a misbehavior or an anomaly in another system or service does not have to become your service's problem. When you place a timeout on a call to an external service and define specific logic to handle timeout events, it becomes easier to isolate failures as well as handle them gracefully.
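The idea can be sketched with a small timeout wrapper. This is a simplified, thread-based illustration (the function names are invented, and the slow service is simulated with a sleep); production frameworks typically provide per-call timeouts natively, but the behavior is the same: give up after the deadline and run fallback logic instead of stalling the caller.

```python
import concurrent.futures
import time

def call_with_timeout(fn, timeout_seconds, fallback):
    """Wrap a service call with an application-level timeout. If the call
    does not return in time, stop waiting and handle the failure locally
    (here, by returning a fallback value)."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(fn)
        try:
            return future.result(timeout=timeout_seconds)
        except concurrent.futures.TimeoutError:
            # The dependency's slowness stays the dependency's problem.
            return fallback

def fast_service():
    return "ok"

def slow_service():
    time.sleep(0.5)   # simulates a hung downstream call
    return "too late"
```

Separate timeout values can be chosen per dependency, so a slow service B does not inherit the tight deadline appropriate for a fast service A.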

Circuit Breaker

When you are invoking external services or systems, they may fail due to various errors. In such cases, you might want to wrap the invocation with an object that monitors it and prevents further damage to the system. Circuit breakers are such wrapper objects, and we can use them when invoking external services and systems. The main idea behind using a circuit breaker is that if the service invocation fails and the failures reach a certain threshold, then the circuit breaker prevents any further invocation of the external service; rather, it immediately returns an error. Figure 7-8 shows the behavior of the circuit breaker in the closed and open states.
Figure 7-8

Behavior of circuit breaker. When the circuit is closed, the circuit breaker wrapper object allows the service calls to go through to the external service. When the circuit is open, it prevents the invocations of the external service and returns immediately.

When there is an invocation failure, the circuit breaker keeps that state and updates the failure count; based on the failure count or the frequency of failures, it opens the circuit. When the circuit is open, the real invocation of the external service is prevented, and the circuit breaker generates an error and returns immediately.

When the circuit has been in the open state for a certain period of time, we can apply a self-resetting behavior: try the service invocation again (for a new request) after a suitable interval, and reset the breaker if it succeeds. This time interval is known as the circuit reset timeout. With this behavior, we can identify three different states of the circuit breaker, which are depicted in Figure 7-9.
Figure 7-9

Circuit breaker’s states

When the reset timeout is reached, the circuit changes to the half-open state, in which the circuit breaker allows one invocation of the external service for a new request that comes to the microservice. If it succeeds, the circuit breaker changes to the closed state again; otherwise, it goes back to the open state.

By design, the circuit breaker is a mechanism to gracefully degrade the functionality of a system when it is not performing well or is failing (rather than failing the entire system). It prevents further damage to the system and cascading failures. We can tune the circuit breaker with various backoff mechanisms, timeouts, reset intervals, error codes on which it must go directly into the open state, error codes that it should ignore, etc. These behaviors are implemented at different levels and with different complexities by the various circuit breaker implementations. It is important to keep in mind that the circuit breaker behavior is closely related to the business requirements of a particular microservices-based application.
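The three-state behavior described above can be captured in a minimal sketch. This is an illustrative implementation, not any particular library's API; the clock is injectable purely so the reset-timeout transition can be exercised deterministically, and real implementations add backoff, failure-rate windows, and per-error-code policies.

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker sketch: closed -> open after a failure
    threshold, open -> half-open after the reset timeout, half-open ->
    closed on success (or back to open on failure)."""

    def __init__(self, failure_threshold=3, reset_timeout=30.0,
                 clock=time.monotonic):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.clock = clock            # injectable for deterministic testing
        self.failures = 0
        self.state = "closed"
        self.opened_at = None

    def call(self, fn):
        if self.state == "open":
            if self.clock() - self.opened_at >= self.reset_timeout:
                self.state = "half-open"   # allow one trial invocation
            else:
                # Fail fast: don't touch the struggling dependency.
                raise RuntimeError("circuit open: failing fast")
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.state == "half-open" or self.failures >= self.failure_threshold:
                self.state = "open"
                self.opened_at = self.clock()
            raise
        # Any success closes the circuit and clears the failure count.
        self.state = "closed"
        self.failures = 0
        return result
```

The wrapper keeps all breaker state on the caller's side, so a failing dependency is shielded from further traffic while it recovers.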

Fail Fast

In the fail fast pattern, the key objective is to detect a failure as quickly as possible. It is built around the idea that a fast failure response is much better than a slow one. Hence, detecting failures at an early stage is an important factor in inter-service communication. We can detect failures at different stages of an inter-service call. In certain situations, just by looking at the contents of the request/message, we can decide that the request is not valid. In other cases, we can check system resources (such as thread pools, connections, socket limits, and databases) and the state of the downstream components in the request lifecycle.

Fail fast, together with timeouts, helps us develop microservices-based applications that are stable and responsive.
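Both failure-detection stages mentioned above can be sketched in a few lines: reject an invalid request before doing any work, and reject immediately when the worker pool is saturated instead of letting the request queue up and time out later. The class name, pool sizes, and the orderId parameter are illustrative assumptions.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class FailFastHandler {
    // Bounded queue + AbortPolicy: a saturated pool rejects immediately
    // instead of buffering work indefinitely.
    private final ThreadPoolExecutor pool = new ThreadPoolExecutor(
            2, 2, 0L, TimeUnit.MILLISECONDS,
            new ArrayBlockingQueue<>(10),
            new ThreadPoolExecutor.AbortPolicy());

    public String handle(String orderId) {
        // 1. Fail fast on the request content itself.
        if (orderId == null || orderId.isEmpty()) {
            throw new IllegalArgumentException("orderId is required");
        }
        // 2. Fail fast on resource exhaustion, and bound the downstream
        //    call with a timeout so a slow dependency cannot hang us.
        try {
            return pool.submit(() -> "processed " + orderId).get(2, TimeUnit.SECONDS);
        } catch (RejectedExecutionException e) {
            throw new IllegalStateException("server busy - try again later", e);
        } catch (ExecutionException | InterruptedException | TimeoutException e) {
            throw new IllegalStateException("downstream call failed", e);
        }
    }

    public void shutdown() {
        pool.shutdown();
    }
}
```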

Bulkhead

Bulkhead is a mechanism to partition your application so that an error that occurs in one partition is localized to that partition only. It won't bring the entire system to an unstable state; only that partition will fail. The bulkhead pattern is inherent in the core design principles of microservices: when we design microservices, we deliberately group similar operations into one microservice and implement independent business functionalities as separate microservices. Hence, microservices are deployed independently on different runtimes (VMs or containers), which means a failure of a given functionality does not affect the other functionalities.

However, if for some reason you have to implement two or more business functionalities inside a single service, you need to take precautions to partition your service so that failure of one set of business operations does not affect the rest. In general, it is recommended to identify such independent operations and convert them to microservices if possible. If you can't split them into separate services, there are certain techniques to implement bulkheads within a single service/application. For example, we can dedicate a resource (such as a thread pool, storage, or a database) to each partition of the service.
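The dedicated-thread-pool technique can be sketched as follows: each group of operations runs on its own fixed-size pool, so exhausting one pool cannot starve the other. The class name, pool sizes, and operation names are illustrative assumptions.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class BulkheadedService {
    // One bulkhead (thread pool) per group of operations.
    private final ExecutorService orderPool  = Executors.newFixedThreadPool(4);
    private final ExecutorService reportPool = Executors.newFixedThreadPool(2);

    public Future<String> placeOrder(String orderId) {
        // Order processing runs only on its own partition's threads.
        return orderPool.submit(() -> "order " + orderId + " accepted");
    }

    public Future<String> generateReport(String reportId) {
        // A slow or failing report job ties up reportPool only; placeOrder
        // calls still find free threads in orderPool.
        return reportPool.submit(() -> "report " + reportId + " ready");
    }

    public void shutdown() {
        orderPool.shutdown();
        reportPool.shutdown();
    }
}
```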

Load Balancing and Failover

The key ideas behind load balancing and failover are quite simple. Load balancing is used to distribute the load across multiple microservice instances, while failover is used to reroute requests to alternate services if a given service fails. In conventional middleware implementations such as ESBs, these functionalities are also implemented as part of the service logic. However, with the advancement of containers and container management systems such as Kubernetes, most of these functionalities are now built into the deployment ecosystem itself. Also, most cloud infrastructure vendors, such as Amazon Web Services (AWS), Google Cloud, and Azure, offer these capabilities as part of their infrastructure as a service (IaaS) offerings. We discuss containers and Kubernetes in detail in Chapter 8, “Deploying and Running Microservices”.
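For cases where the deployment platform does not provide it, the combination of the two ideas can be sketched as a client-side round-robin invoker that fails over to the next instance. The class name, instance identifiers, and call function are illustrative assumptions, not part of any platform API.

```java
import java.util.List;
import java.util.function.Function;

public class RoundRobinInvoker {
    private final List<String> instances;
    private int next = 0;

    public RoundRobinInvoker(List<String> instances) {
        this.instances = instances;
    }

    public synchronized String invoke(Function<String, String> call) {
        RuntimeException last = null;
        // Failover: try each instance at most once, starting from the cursor.
        for (int i = 0; i < instances.size(); i++) {
            String target = instances.get(next);
            next = (next + 1) % instances.size();   // round-robin rotation
            try {
                return call.apply(target);
            } catch (RuntimeException e) {
                last = e;                            // this instance failed; try the next
            }
        }
        throw new IllegalStateException("all instances failed", last);
    }
}
```

In practice, platforms such as Kubernetes perform this distribution at the service/DNS level, so your service code never needs this logic.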

Active or Reactive Composition

As we discussed in the section on microservices integration patterns, building active or reactive service compositions is absolutely vital to any real-world microservices implementation. Therefore, microservice integration technologies should support building both active and reactive compositions. At the implementation level, this means the ability to invoke services through different protocols, use supporting components such as circuit breakers, and create composite business logic. For active compositions, support for synchronous service invocations (implemented on top of non-blocking threads with callbacks) is quite important. For reactive compositions, support for messaging styles such as pub-sub and queue-based messaging is required, along with seamless integration with messaging backbones such as Kafka or RabbitMQ. Also, different message-exchange patterns need to be implemented at the service level, and we should be able to mix and match such patterns. For example, the inbound request may be an asynchronous message, while the external (outbound) service invocations are synchronous.
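The mix-and-match case in the last example can be sketched in plain Java: the inbound request is accepted asynchronously (as if consumed from a queue), while the outbound call to the downstream service is synchronous. The class name and the inventoryLookup method are purely illustrative stand-ins for a real message listener and downstream invocation.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class HybridComposition {
    private final ExecutorService worker = Executors.newSingleThreadExecutor();

    // Synchronous outbound invocation, e.g. a blocking HTTP GET to another service.
    private String inventoryLookup(String item) {
        return item + ":in-stock";
    }

    // Asynchronous inbound handling: the caller gets a future immediately,
    // and the composition logic runs on a worker thread.
    public CompletableFuture<String> onOrderMessage(String item) {
        return CompletableFuture.supplyAsync(() -> inventoryLookup(item), worker);
    }

    public void shutdown() {
        worker.shutdown();
    }
}
```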

Data Formats

When we build compositions, we must create them out of different data formats. For instance, a given microservice may be exposed via one data format (for inbound requests), while the other services it invokes (outbound) use different data formats. When we create compositions of these services, we must do type matching between these data formats and implement our service in a type-safe manner. Therefore, the service implementation technology that we use should take care of the different data formats and provide a convenient way to handle them. Data formats such as JSON, Avro, CSV, XML, and protocol buffers are widely used in practice.

Container-Native and DevOps Ready

Microservices development technologies should be cloud-native and container-native, and the same applies to integration services. When you are building an integration service, the development technologies that you use have to be cloud- and container-native. When we discussed the anti-patterns of microservices integration, we heavily discouraged using an ESB for integrating microservices. The primary reason is that almost all ESB technologies are NOT cloud- or container-native.

For a technology to be cloud- or container-native, the runtime should start within a few seconds (or less), and the memory footprint, CPU consumption, and storage required must be extremely low. Therefore, when selecting a technology for microservice integration, we should consider all these aspects.

In addition to the container-native aspects of the runtime, integration microservice development technologies have to support native integration with containers and container management systems such as Kubernetes. What this means is how easily you can create a container out of the applications or services that you develop. Having support for configuring and creating the container-related artifacts with your service development technology will vastly improve the agility of your microservices development process. We cover the details of containers, Docker, and Kubernetes in Chapter 8.

Governance of Integration Services

We covered governance aspects of microservices in Chapter 6, “Microservices Governance”. Some governance aspects, such as observability, are extremely crucial when we build microservice integrations. For instance, let’s revisit the hybrid composition scenario that we discussed earlier (see Figure 7-10).
../images/461146_1_En_7_Chapter/461146_1_En_7_Fig10_HTML.jpg
Figure 7-10

Observability is mandatory with complex microservice integrations

Figure 7-10 shows that you can clearly see all the interactions between microservices and get a clear idea about all the business use cases. However, think about how these interactions look at the operational level: we will not have this kind of view if we don't have a proper observability mechanism in place. The integration service development technology therefore needs a seamless way to integrate with existing observability tools to get metrics, tracing, logging, service visualization, and alerting for your integration services.

Stateless, Stateful, or Long-Running Services

Microservices design favors stateless immutable services, and most use cases can be realized with such stateless services. However, there are many use cases, such as business processes and workflows, that require stateful and long-running services. In conventional integration middleware, such requirements are implemented and supported at the ESB or business process solution level. With microservices, these requirements have to be built from the ground up, so having native support for such capabilities at the integration microservice level is useful.

The ability to build workflows, business processes, and distributed transactions with SAGAs (which is discussed in Chapter 5, “Data Management”) is a key requirement of stateful and long-running services.

Technologies for Building Integration Services

From what we have discussed so far in this chapter, it should be clear that there is no silver-bullet technology that we can use to build microservices. There are different types of microservices, and each addresses a drastically different set of requirements. Hence, to realize such microservices, we need to leverage polyglot microservice development technologies. In this section, we discuss some of the most commonly used microservices development technologies, particularly the ones that are more suitable for building microservice compositions or integration microservices.

There are microservices frameworks that are built on top of generic programming languages such as Java and that provide abstractions, via different technologies, catering to microservice composition. Integration frameworks, on the other hand, are not solely targeted at building microservices (rather, they are built to address generic enterprise integration requirements), but they can still be used to integrate microservices. Also, certain programming languages cater to such microservice integration needs out of the box.

Let’s take a closer look at some of the microservice development frameworks, integration frameworks, and generic programming languages that are commonly used in practice.

Note

If you find any issues in building or running the examples given in this book, refer to the README file under the corresponding chapter in the Git repository: https://github.com/microservices-for-enterprise/samples.git . We will update the examples and the corresponding README files in the Git repository to reflect any changes related to the tools, libraries, and frameworks used in this book.

Spring Boot

Spring Boot is a microservice framework built on top of the Spring platform; it takes an opinionated view of the Spring platform and third-party libraries so that you can get started with minimum fuss. Spring Boot makes it easy to create standalone, production-grade Spring-based applications that you can just run. You can write most Spring Boot microservices with no or minimal Spring configuration. Spring Boot tries to make microservices implementation trivial by offering the following capabilities:
  • Create stand-alone Spring applications.

  • Embed Tomcat, Jetty, or Undertow directly (no need to deploy WAR files).

  • Provide opinionated starter POMs to simplify your Maven configuration.

  • Automatically configure Spring whenever possible.

  • Provide production-ready features such as metrics, health checks, and externalized configuration.

  • Absolutely no code generation and no requirement for XML configuration.

Let’s discuss some of the key features that are essential for integrating microservices offered by Spring Boot.

RESTful Services

You can build composition microservices using Spring Boot’s service development features. For example, you can build a simple RESTful service using Spring Boot as follows.

Suppose that you need to build an HTTP RESTful service that handles GET requests for a /greeting endpoint, optionally with a name parameter in the query string. The GET request should return a 200 OK response with a JSON payload in the body that represents a greeting. As the first step, to model the greeting, you need to define a representation for the greeting as follows.
package com.apress.ch07;
public class Greeting {
    private final long id;
    private final String content;
    public Greeting(long id, String content) {
        this.id = id;
        this.content = content;
    }
    public long getId() {
        return id;
    }
    public String getContent() {
        return content;
    }
}
Then you can create the service and resource that will serve the greeting requests. In Spring’s approach to building RESTful web services, we have a controller to handle HTTP requests. These components are easily identified by the @RestController annotation, and the GreetingController class handles the GET requests for the /greeting endpoint (context) by returning a new instance of the Greeting class:
package com.apress.ch07;
import java.util.concurrent.atomic.AtomicLong;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
@RestController
public class GreetingController {
    private static final String template = "Hello, %s!";
    private final AtomicLong counter = new AtomicLong();
    @RequestMapping("/greeting")
    public Greeting greeting(@RequestParam(value="name",
                 defaultValue="World") String name) {
        return new Greeting(counter.incrementAndGet(),
                            String.format(template, name));
    }
}

Since the return value is a POJO, it is implicitly converted to JSON. If you want to control this, you can do so with @GetMapping(path = "/hello", produces = MediaType.APPLICATION_JSON_VALUE) at the request-mapping level.

You can try this example from our examples in ch07/sample01.

Network Communication Abstractions

Consuming and producing data over the network is supported in Spring Boot via numerous abstractions, as discussed next.

HTTP
You can consume a RESTful service with Spring's RestTemplate. Here we specify the POJO to which the response should be converted. The getForObject method retrieves a representation by doing a GET on the URL; the response (if any) is converted and returned.
RestTemplate restTemplate = new RestTemplate();
Quote quote = restTemplate.getForObject("http://gturnquist-quoters.cfapps.io/api/random", Quote.class);
log.info(quote.toString());

Similarly, RestTemplate also supports other HTTP verbs such as POST, PUT, and DELETE. When it comes to creating a service that exposes a RESTful endpoint, you can use Spring's support for embedding the Tomcat servlet container as the HTTP runtime, instead of deploying to an external instance. You can try this example from our examples in ch07/sample02.

JMS
Spring Boot also provides abstractions to integrate with other network communication protocols and systems. For example, you can create a service that consumes messages via JMS as follows:
@JmsListener(destination = "mailbox", containerFactory = "myFactory")
public void receiveMessage(Email email) {
  System.out.println("Received <" + email + ">");
}

The @JmsListener annotation defines the name of the Destination that this method should listen to and a reference to the JmsListenerContainerFactory used to create the underlying message listener container. Passing a value to the containerFactory attribute is not necessary unless you need to customize the way the container is built, as Spring Boot registers a default factory if necessary.
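For reference, the myFactory bean referenced by the listener above is typically declared along these lines. This is a sketch based on Spring Boot's JMS support; the broker ConnectionFactory itself comes from Spring Boot auto-configuration.

```java
import javax.jms.ConnectionFactory;

import org.springframework.boot.autoconfigure.jms.DefaultJmsListenerContainerFactoryConfigurer;
import org.springframework.context.annotation.Bean;
import org.springframework.jms.config.DefaultJmsListenerContainerFactory;
import org.springframework.jms.config.JmsListenerContainerFactory;

public class JmsConfig {

    @Bean
    public JmsListenerContainerFactory<?> myFactory(
            ConnectionFactory connectionFactory,
            DefaultJmsListenerContainerFactoryConfigurer configurer) {
        DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        // Apply Spring Boot's defaults first; customize the factory here if needed.
        configurer.configure(factory, connectionFactory);
        return factory;
    }
}
```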

Messages can be produced using a JmsTemplate.
public class JmsQueueSender {
    private JmsTemplate jmsTemplate;
    private Queue queue;
    public void setConnectionFactory(ConnectionFactory cf) {
        this.jmsTemplate = new JmsTemplate(cf);
    }
    public void setQueue(Queue queue) {
        this.queue = queue;
    }
    public void simpleSend() {
        this.jmsTemplate.send(this.queue, new MessageCreator() {
            public Message createMessage(Session session) throws JMSException {
                return session.createTextMessage("hello queue world");
            }
        });
    }
}

The JmsTemplate contains many convenience methods to send a message. There are send methods that specify the destination using a javax.jms.Destination object and those that specify the destination using a string for use in a JNDI lookup. You can try this example from our examples in ch07/sample03.

Databases/JDBC
For integrating your microservices with databases via JDBC, Spring provides a template class called JdbcTemplate that makes it easy to work with SQL relational databases and JDBC. Most generic JDBC code is full of resource acquisition, connection management, exception handling, and general error checking that is wholly unrelated to what the code is meant to achieve. The JdbcTemplate takes care of all of that for you.
jdbcTemplate.query(
    "SELECT id, first_name, last_name FROM customers WHERE first_name = ?",
    new Object[] { "Josh" },
    (rs, rowNum) -> new Customer(rs.getLong("id"), rs.getString("first_name"), rs.getString("last_name"))
).forEach(customer -> log.info(customer.toString()));

You can try this example from our examples in ch07/sample04. In addition to what we have mentioned, Spring provides the ability to integrate with numerous other network protocols.

Web APIs: Twitter
Integrating with web APIs such as Twitter is supported via Spring Boot libraries dedicated to each web API. For instance, connecting and tweeting from your Spring Boot microservice is pretty straightforward: you just need to instantiate a TwitterTemplate and call the required operations on it. You can try this example from our examples in ch07/sample05.
Twitter twitter = new TwitterTemplate(consumerKey, consumerSecret);
twitter.timelineOperations().updateStatus("Microservices for Enterprise.!");

As you can see, Spring Boot offers one of the most comprehensive set of capabilities to integrate your microservices with other systems and APIs.

Resiliency Patterns

Spring Boot leverages libraries such as Netflix Hystrix to enable resilient inter-microservice communication. The Spring Cloud Netflix Hystrix implementation looks for any method annotated with @HystrixCommand and wraps it in a proxy connected to a circuit breaker so that Hystrix can monitor it. For example, in the following code snippet, the method that invokes the external RESTful service is annotated with @HystrixCommand.
package hello;
import com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;
import java.net.URI;
@Service
public class BookService {
  private final RestTemplate restTemplate;
  public BookService(RestTemplate rest) {
    this.restTemplate = rest;
  }
  @HystrixCommand(fallbackMethod = "reliable")
  public String readingList() {
    URI uri = URI.create("http://localhost:8090/recommended");
    return this.restTemplate.getForObject(uri, String.class);
  }
  public String reliable() {
    return "Microservices for Enterprise (APress)";
  }
}

We’ve applied @HystrixCommand to our original readingList() method. We also have a new method here, called reliable(). The @HystrixCommand annotation has reliable as its fallbackMethod, so if, for some reason, Hystrix opens the circuit on readingList(), we’ll have a default result to be shown. You can try this example from our examples in ch07/sample06.

Data Formats

Spring Boot allows you to write microservices that produce, consume, and transform multiple data formats, primarily using the Jackson data processing tools.
// Java Object to JSON
ObjectMapper objectMapper = new ObjectMapper();
Car car = new Car("yellow", "renault");
objectMapper.writeValue(new File("target/car.json"), car);
// JSON to Java Object
String json = "{ \"color\" : \"Black\", \"type\" : \"BMW\" }";
Car anotherCar = objectMapper.readValue(json, Car.class);

Jackson provides a comprehensive set of data processing capabilities for a diverse set of data types, including the flagship streaming JSON parser/generator library, the matching data-binding library (POJOs to and from JSON), and additional data format modules to process data encoded in Avro, BSON, CBOR, CSV, Smile, (Java) Properties, Protobuf, XML, or YAML. It even provides datatype modules to support widely used types from libraries such as Guava, Joda, and PCollections, among many others. You can try this example from our examples in ch07/sample07.

Observability

You can enable metrics, logging, and distributed tracing for your Spring Boot microservices with only minimal changes to the application. We discuss these capabilities in detail when we delve into observability concepts in Chapter 13, “Observability”.

Dropwizard

Dropwizard is another popular microservice development framework. The main objective of Dropwizard is to provide performant, reliable implementations of everything a production-ready web application needs. Because this functionality is extracted into a reusable library, your application remains lean and focused, reducing both time-to-market and maintenance burdens.

Dropwizard uses the Jetty HTTP library to embed a tuned HTTP server directly into your project. Jersey is used as the RESTful web application development engine, while Jackson handles the data formats. Several other libraries are bundled with Dropwizard by default. However, unlike Spring Boot, for certain microservice integrations that require integration with multiple network protocols and web APIs, Dropwizard offers a limited set of features out of the box.

Apache Camel and Spring Integration

Apache Camel is a conventional integration framework designed to address centralized integration/ESB needs. The key objective of the Apache Camel integration framework is to provide an easy-to-use mechanism for implementing Enterprise Integration Patterns (EIPs), such as content-based routing, transformations, protocol switching, and scatter-gather, in a trivial way with a small footprint and overhead, embeddable in your existing microservices.

Given its Domain Specific Language (DSL) support for multiple languages such as Java and Scala, and its lightweight nature as an integration framework, there are quite a few microservices use cases that Apache Camel can address. Apache Camel has components that can consume and produce messages via almost all the popular network protocols, web APIs, and systems. You can build a Camel-based self-contained runtime that can run in a container. (In fact, there are ongoing efforts to build a container-native runtime that runs natively on Kubernetes, called Camel K.) For example, the following Camel DSL implements an integration use case that involves multiple EIPs.
public void configure() {
  from("direct:cafe")
    .split().method("orderSplitter")
    .to("direct:drink");
  from("direct:drink").recipientList().method("drinkRouter");
  from("seda:coldDrinks?concurrentConsumers=2")
    .to("bean:barista?method=prepareColdDrink")
    .to("direct:deliveries");
  from("seda:hotDrinks?concurrentConsumers=3")
    .to("bean:barista?method=prepareHotDrink")
    .to("direct:deliveries");
  from("direct:deliveries")
    .aggregate(new CafeAggregationStrategy())
      .method("waiter", "checkOrder").completionTimeout(5 * 1000L)
    .to("bean:waiter?method=prepareDelivery")
    .to("bean:waiter?method=deliverCafes");
}

Also, Apache Camel offers seamless integration with Spring Boot, which makes a powerful combination to facilitate microservice integration. You can try this example from our examples in ch07/sample08.

Spring Integration is quite similar to Apache Camel and it extends the Spring programming model to support well-known EIPs. Spring Integration enables lightweight messaging within Spring-based applications and supports integration with external systems via declarative adapters. Those adapters provide a higher level of abstraction over Spring's support for remoting, messaging, and scheduling. Spring Integration's primary goal is to provide a simple model for building enterprise integration solutions while maintaining the separation of concerns that is essential for producing maintainable, testable code.

The following code is an example DSL of a Spring Integration-based use case, which is similar to what we have seen with Camel.
  @MessagingGateway
  public interface Cafe {
      @Gateway(requestChannel = "orders.input")
      void placeOrder(Order order);
  }
  private AtomicInteger hotDrinkCounter = new AtomicInteger();
  private AtomicInteger coldDrinkCounter = new AtomicInteger();
  @Bean(name = PollerMetadata.DEFAULT_POLLER)
  public PollerMetadata poller() {
      return Pollers.fixedDelay(1000).get();
  }
  @Bean
  public IntegrationFlow orders() {
      return f -> f
        .split(Order.class, Order::getItems)
        .channel(c -> c.executor(Executors.newCachedThreadPool()))
        .<OrderItem, Boolean>route(OrderItem::isIced, mapping -> mapping
          .subFlowMapping("true", sf -> sf
            .channel(c -> c.queue(10))
            .publishSubscribeChannel(c -> c
              .subscribe(s ->
                s.handle(m -> sleepUninterruptibly(1, TimeUnit.SECONDS)))
...

If you compare and contrast Camel and Spring Integration, you may find that the Spring Integration DSL exposes the lower-level EIPs (e.g., channels, gateways, etc.), whereas the Camel DSL focuses more on high-level integration abstractions.

With either Camel or Spring Integration, you can build your microservices integration on a well-defined DSL. However, keep in mind that you are constrained by this DSL and will have to do a lot of tweaking when building real programming logic on top of it.

Also, both these DSLs can become pretty clunky in substantially complex integration scenarios. One could argue that for microservices integration we can completely omit the use of EIPs and instead implement them as part of the service code from scratch. So, if your use case needs most of the existing EIPs and connectors to various systems, Camel or Spring Integration is a good choice.

Vert.x

Eclipse Vert.x is an event-driven, non-blocking, reactive, and polyglot software development toolkit that you can use to build microservices and integrate them. Vert.x is not a restrictive framework (it is an unopinionated toolkit) and it doesn't force you to write an application a certain way. You can use Vert.x with multiple languages, including Java, JavaScript, Groovy, Ruby, Ceylon, Scala, and Kotlin.

Vert.x offers a rich set of capabilities for microservices integration. It has several key components, each addressing a specific set of requirements. Vert.x core provides a fairly low-level set of functionalities for handling HTTP, and for some applications that will be sufficient. However, for microservices that leverage RESTful service concepts in depth, you will need the Vert.x web component. Vert.x-Web builds on top of Vert.x core to provide a richer set of functionalities for building web applications more easily.
HttpServer server = vertx.createHttpServer();
Router router = Router.router(vertx);
router.route().handler(routingContext -> {
  // This handler will be called for every request
  HttpServerResponse response = routingContext.response();
  response.putHeader("content-type", "text/plain");
  // Write to the response and end it
  response.end("Hello World from Vert.x-Web!");
});
server.requestHandler(router::accept).listen(8080);
We create an HTTP server and a router. Once we've done that, we create a simple route with no matching criteria, so it will match all requests that arrive at the server. We then specify a handler for that route; that handler will be called for all requests that arrive at the server. You can further add routing logic that captures path parameters, HTTP methods, etc. (You can try this example from our examples in ch07/sample09.)
Route route = router.route(HttpMethod.POST, "/catalogue/products/:producttype/:productid/");
route.handler(routingContext -> {
  String productType = routingContext.request().getParam("producttype");
  String productID = routingContext.request().getParam("productid");
  // Do something with them...
});
The client-side code is also trivial, as Vert.x provides quite a few abstractions for interacting with services as a client. (You can try this example from our examples in ch07/sample10.)
WebClient client = WebClient.create(vertx);
client
  .post(8080, "myserver.mycompany.com", "/some-uri")
  .sendJsonObject(new JsonObject()
    .put("firstName", "Dale")
    .put("lastName", "Cooper"), ar -> {
    if (ar.succeeded()) {
      // Ok
    }
  });
Vert.x-Web API Contract brings two features to help you develop your APIs: HTTP request validation and OpenAPI 3 support with automatic request validation. Vert.x also provides different asynchronous clients for accessing various datastores from your microservice. (You can try this example from our examples in ch07/sample11.)
SQLClient client = JDBCClient.createNonShared(vertx, config);
client.getConnection(res -> {
  if (res.succeeded()) {
    SQLConnection connection = res.result();
    connection.query("SELECT * FROM some_table", res2 -> {
      if (res2.succeeded()) {
        ResultSet rs = res2.result();
        // Do something with results
        connection.close();
      }
    });
  } else {
    // Failed to get connection - deal with it
  }
});
Similarly, you can also connect your services to Redis, MongoDB, MySQL, and many more using Vert.x. For microservices integration, resilient inter-service communication capabilities such as the circuit breaker are also included as part of Vert.x. (You can try this example from our examples in ch07/sample12.)
CircuitBreaker breaker = CircuitBreaker.create("my-circuit-breaker", vertx,
    new CircuitBreakerOptions().setMaxFailures(5).setTimeout(2000)
);
breaker.<String>execute(future -> {
  vertx.createHttpClient().getNow(8080, "localhost", "/", response -> {
    if (response.statusCode() != 200) {
      future.fail("HTTP error");
    } else {
      response
          .exceptionHandler(future::fail)
          .bodyHandler(buffer -> {
            future.complete(buffer.toString());
          });
    }
  });
}).setHandler(ar -> {
  // Do something with the result
});

Vert.x integration capabilities also include gRPC, Kafka, AMQP (based on RabbitMQ), MQTT, STOMP, authentication and authorization, service discovery, and so on. In addition to the functional components, all the ecosystem-related capabilities (such as testing, clustering, DevOps, integration with Docker, and observability with metrics and health checks) make Vert.x one of the most comprehensive microservices and integration frameworks out there.

Akka

Akka is a set of open source libraries for designing scalable, resilient systems that span processor cores and networks. Akka is based on the actor model, a mathematical model of concurrent computation that treats actors as the universal primitives of concurrent computation. In response to a message it receives, an actor can make local decisions, create more actors, send more messages, and determine how to respond to the next message received. Actors may modify their own private state but can only affect each other through messages, which avoids the need for any locks.

Akka aims to provide your microservices with multi-threaded behavior without the use of low-level concurrency constructs like atomics or locks (relieving you from even thinking about memory visibility issues), transparent remote communication between systems and their components, and a clustered, high-availability architecture that is elastic and scales in or out on demand.

You can leverage the Akka HTTP modules to implement HTTP-based services; they provide a full server- and client-side HTTP stack on top of Akka-actor and Akka-stream. Akka HTTP is not a web framework, but rather a more general toolkit for providing and consuming HTTP-based services.

On top of the Akka HTTP, Akka provides a DSL to describe HTTP routes and how they should be handled. Each route is composed of one or more levels of directives that narrow down to handling one specific type of request.

For example, one route might start by matching the request path only if it is /order, then narrow it down to handle only HTTP GET requests, and then complete those with a string literal, which is sent back as an HTTP 200 OK with the string as the response body. The route created using the Route DSL is then bound to a port to start serving HTTP requests. JSON support is available in Akka HTTP through Jackson.

In the following use case, we have two separate Akka routes. The first route queries an asynchronous database and marshals the CompletionStage&lt;Optional&lt;Item&gt;&gt; result into a JSON response. The second unmarshals an Order from the incoming request, saves it to the database, and replies with an OK when done. (You can try this example from our examples in ch07/sample13.)
import akka.Done;
import akka.NotUsed;
import akka.actor.ActorSystem;
import akka.http.javadsl.ConnectHttp;
import akka.http.javadsl.Http;
import akka.http.javadsl.ServerBinding;
import akka.http.javadsl.marshallers.jackson.Jackson;
import akka.http.javadsl.model.HttpRequest;
import akka.http.javadsl.model.HttpResponse;
import akka.http.javadsl.model.StatusCodes;
import akka.http.javadsl.server.AllDirectives;
import akka.http.javadsl.server.Route;
import akka.stream.ActorMaterializer;
import akka.stream.javadsl.Flow;
import java.util.Optional;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import static akka.http.javadsl.server.PathMatchers.longSegment;
// Item and Order are simple POJO classes defined in the full sample (ch07/sample13)
public class JacksonExampleTest extends AllDirectives {
  public static void main(String[] args) throws Exception {
    ActorSystem system = ActorSystem.create("routes");
    final Http http = Http.get(system);
    final ActorMaterializer materializer = ActorMaterializer.create(system);
    JacksonExampleTest app = new JacksonExampleTest();
    final Flow<HttpRequest, HttpResponse, NotUsed> routeFlow = app.createRoute().flow(system, materializer);
    final CompletionStage<ServerBinding> binding = http.bindAndHandle(routeFlow,
      ConnectHttp.toHost("localhost", 8080), materializer);
    binding
      .thenCompose(ServerBinding::unbind) // trigger unbinding from the port
      .thenAccept(unbound -> system.terminate()); // and shutdown when done
  }
  private CompletionStage<Optional<Item>> fetchItem(long itemId) {
    return CompletableFuture.completedFuture(Optional.of(new Item("foo", itemId)));
  }
  private CompletionStage<Done> saveOrder(final Order order) {
    return CompletableFuture.completedFuture(Done.getInstance());
  }
  private Route createRoute() {
    return route(
      get(() ->
        pathPrefix("item", () ->
          path(longSegment(), (Long id) -> {
            final CompletionStage<Optional<Item>> futureMaybeItem = fetchItem(id);
            return onSuccess(futureMaybeItem, maybeItem ->
              maybeItem.map(item -> completeOK(item, Jackson.marshaller()))
                .orElseGet(() -> complete(StatusCodes.NOT_FOUND, "Not Found"))
            );
          }))),
      post(() ->
        path("create-order", () ->
          entity(Jackson.unmarshaller(Order.class), order -> {
            CompletionStage<Done> futureSaved = saveOrder(order);
            return onSuccess(futureSaved, done ->
              complete("order created")
            );
          })))
    );
  }
}

Akka caters to the specific integration requirements of microservices and other types of integrations via the Alpakka initiative. Alpakka provides a collection of Akka Streams connectors, integration patterns, and data transformations for integration use cases. It offers numerous connectors, such as HTTP, Kafka, File, AMQP, JMS, CSV, web APIs (AWS S3, GCP Pub/Sub, and Slack), MongoDB, etc.

In the following example, you can find a sample AMQP producer and consumer written using Alpakka. (You can try this example from our examples in ch07/sample14.)
// AMQP Producer
final Sink<ByteString, CompletionStage<Done>> amqpSink = AmqpSink.createSimple(
    AmqpSinkSettings.create(connectionProvider)
        .withRoutingKey(queueName)
        .withDeclarations(queueDeclaration)
);
// AMQP Consumer
final Integer bufferSize = 10;
final Source<IncomingMessage, NotUsed> amqpSource = AmqpSource.atMostOnceSource(
    NamedQueueSourceSettings.create(
        connectionProvider,
        queueName
    ).withDeclarations(queueDeclaration),
    bufferSize
);

Here, AmqpSink is a collection of factory methods that create sinks for publishing messages to AMQP, while AmqpSource creates sources that allow you to consume messages from AMQP.

Node, Go, Rust, and Python

Node.js is an open source, cross-platform JavaScript runtime environment that executes JavaScript code server-side. Node.js supports building RESTful services out-of-the-box and you can build your service on a full non-blocking I/O model, which leverages the event loop. (When Node.js starts, it initializes the event loop, processes the provided input script—which may make async API calls, schedules timers, or calls process.nextTick()—and then begins processing the event loop.)
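The ordering implied by this model can be demonstrated with a small script (this is our own illustration of standard Node.js behavior, not from the book's samples): synchronous code runs first, then the process.nextTick queue and promise microtasks are drained, and only after that do timer callbacks fire.

```javascript
const order = [];

order.push('sync');                                   // runs immediately
setTimeout(() => order.push('timeout'), 0);           // timers phase of the event loop
process.nextTick(() => order.push('nextTick'));       // drained right after the script ends
Promise.resolve().then(() => order.push('promise'));  // microtask, runs after the nextTick queue

// A later timer reports the observed ordering.
setTimeout(() => console.log(order.join(' -> ')), 5);
// prints: sync -> nextTick -> promise -> timeout
```

This is why a fully non-blocking service never waits on I/O: callbacks for completed operations are queued and dispatched by the event loop while the single JavaScript thread keeps processing other work.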

The following code shows a simple Echo service built with Node.js. (You can try this example from our examples in ch07/sample15.)
const http = require('http');
http.createServer((request, response) => {
  if (request.method === 'POST' && request.url === '/echo') {
    let body = [];
    request.on('data', (chunk) => {
      body.push(chunk);
    }).on('end', () => {
      body = Buffer.concat(body).toString();
      response.end(body);
    });
  } else {
    response.statusCode = 404;
    response.end();
  }
}).listen(8080);

In addition to the standard set of features, Node.js has a diverse ecosystem that allows you to integrate a microservice based on Node.js with almost any of the other network protocols, databases, web APIs, and other systems.

There are multiple frameworks built on top of Node.js, such as Restify, which is a web services framework optimized for building semantically correct RESTful web services ready for production use at scale. Similarly, there are numerous libraries and packages available for Node.js on NPM (NPM is a package manager for Node.js packages.). For example, you can find NPM packages for Kafka integration (kafka-node), AMQP (node-amqp), circuit breaker, and instrumentation libraries for most of the popular observability tools.

Go4 is also quite commonly used for microservices development and offers a rich set of packages for network communication.

Similarly, other programming languages, such as Rust and Python, offer quite a few out-of-the-box capabilities to build and integrate microservices. For Rust, there is Rocket, a web framework that makes it simple to write fast web applications without sacrificing flexibility or type safety. The Rust ecosystem addresses most of the integration of Rust applications with other network protocols, data, web APIs, and other systems. However, some developers claim that Rust is too low level to be a microservices development language, so we recommend trying it out with some use cases prior to fully adopting it.

Similarly, Python has a broad ecosystem of production-ready frameworks, such as Flask, for microservice development and integration.

Ballerina

Ballerina5 is an emerging integration technology built as a programming language. It aims to fill the gap between integration products and general-purpose programming languages by making it easy to write programs that integrate and orchestrate across distributed microservices and endpoints in a type-safe and resilient manner.

At the time this book was written, Ballerina was at version 0.981. Most of the programming constructs are final, but some are subject to change, and the language is yet to be widely adopted across the microservices communities.

Disclaimer

The authors of this book have contributed to the design and development of Ballerina. As we strive to keep the contents of this book technology and vendor neutral, we will not compare and contrast Ballerina with other similar technologies. We highly encourage readers to select the most appropriate technology after a thorough evaluation of their use cases and the candidate technologies to realize them.

Both the code and the graphical syntax in Ballerina are inspired by how independent parties communicate via interactions in a sequence diagram.

In the graphical syntax, Ballerina represents clients, workers, and remote systems as different actors in the sequence diagram. For example, as shown in Figure 7-11, the interaction between the client/caller, service, worker, and other external endpoints can be represented using a sequence diagram. Each endpoint is represented as an actor in the sequence diagram, and actions are represented as interactions between those actors.
../images/461146_1_En_7_Chapter/461146_1_En_7_Fig11_HTML.jpg
Figure 7-11

Ballerina graphical syntax and source syntax is based on a sequence diagram metaphor

In the code, remote endpoints are interfaced via endpoints that offer type-safe actions, and a worker's logic is written as sequential code inside a resource or a function. You can define a service and bind it to a server endpoint (for example, an HTTP server endpoint that listens on a given HTTP port). Each service contains one or more resources, in which we write the sequential code of a worker; each resource is run by a dedicated worker thread.

The following code snippet shows a simple HTTP service that accepts an HTTP GET request, and then invokes another external service to retrieve some information and send it back to the client. (You can try this from our examples in ch07/sample16.)
import ballerina/http;
import ballerina/io;
endpoint http:Listener listener {
    port:9090
};
@http:ServiceConfig {
    basePath:"/time"
}
service<http:Service> timeInfo bind listener {
    @http:ResourceConfig {
        methods:["GET"],
        path:"/"
    }
    getTime (endpoint caller, http:Request req) {
        endpoint http:Client timeServiceEP {
            url:"http://localhost:9095"
        };
        http:Response response = check
                     timeServiceEP -> get("/localtime");
        json time = check response.getJsonPayload();
        json payload = {
                           source: "Ballerina",
                           time: time
                       };
        response.setJsonPayload(untaint payload);
        _ = caller -> respond(response);
    }
}

Network-Aware Abstractions

Ballerina is designed for integration of disparate services, systems, and data. Hence, Ballerina provides native network-aware constructs that provide abstraction for interaction with endpoints via disparate network protocols. Ballerina offers out-of-the-box support for most of the standard network communication protocols.
endpoint http:Client timeServiceEP {
           url:"http://localhost:9095"
};
...
http:Response response = check
                    timeServiceEP -> get("/localtime");
endpoint mysql:Client testDB {
    host: "localhost",
    port: 3306,
    name: "testdb",
    username: "root",
    password: "root",
    poolOptions: { maximumPoolSize: 5 },
    dbOptions: { useSSL: false }
};
...
var selectRet = testDB->select("SELECT * FROM student", ());
// Kafka producer endpoint
endpoint kafka:SimpleProducer kafkaProducer {
    bootstrapServers: "localhost:9092",
    clientID:"basic-producer",
    acks:"all",
    noRetries:3
};
...
 // Produce the message and publish it to the Kafka topic
kafkaProducer->send(serializedMsg, "product-price", partition = 0);
As in the previous case, you can leverage server connectors to receive messages via those protocols and bind them to the service that intends to consume those messages. Most of the implementation details of consuming messages via a given protocol are transparent to the developer.
// Server endpoint configuration.
endpoint grpc:Listener ep {
   host:"localhost",
   port:9090
};
// The gRPC service that binds to the server endpoint.
service SamplegRPCService bind ep {
   // A resource that accepts a string message.
   receiveMessage (endpoint caller, string name) {
       // Print the received message.
       io:println("Received message: " + name);
   }
}
// A Kafka consumer resource iterating over the received records.
foreach record in records {
    blob serializedMsg = record.value;
    // Convert the serialized message to a string message
    string msg = serializedMsg.toString("UTF-8");
    log:printInfo("New message received from the product admin");
...
endpoint jms:SimpleQueueReceiver consumer {
    initialContextFactory:"bmbInitialContextFactory",
    providerUrl:"amqp://admin:admin@carbon/carbon"
                + "?brokerlist='tcp://localhost:5672'",
    acknowledgementMode:"AUTO_ACKNOWLEDGE",
    queueName:"MyQueue"
};
service<jms:Consumer> jmsListener bind consumer {
    onMessage(endpoint consumer, jms:Message message) {
        match (message.getTextMessageContent()) {
            string messageText => log:printInfo("Message : " + messageText);
            error e => log:printInfo("Error occurred while reading message");
        }
    }
}

Resilient and Safe Integration

The integration microservices you write using Ballerina are inherently resilient. You can make invocations of an external endpoint in a resilient and type-safe manner.

For example, when you invoke an external endpoint that may be unreliable, you can guard such interactions with a circuit breaker for the specific protocol you are using. This is as trivial as passing a few additional parameters to your client endpoint code.
// circuit breaker example
endpoint http:Client backendClientEP {
    circuitBreaker: {
        rollingWindow: { // failure calculation window
            timeWindowMillis:10000,
            bucketSizeMillis:2000
        },
        failureThreshold:0.2, // failure percentage threshold to open the circuit
        resetTimeMillis:10000, // time it takes to bring the circuit from open to half-close state
        statusCodes:[400, 404, 500] // HTTP status codes that are considered as failures
    },
    url: "http://localhost:8080",
    timeoutMillis: 2000
};
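To make the circuit breaker semantics concrete, the following minimal JavaScript sketch (our own illustration of the general pattern, not Ballerina's implementation) shows the closed/open/half-open state transitions. The names failureThreshold and resetTimeMillis deliberately mirror the configuration keys above; for simplicity, it computes the failure ratio over all calls instead of over a rolling window.

```javascript
// A minimal circuit breaker: fails fast while OPEN, lets a single trial
// request through after resetTimeMillis (HALF_OPEN), and closes on success.
class CircuitBreaker {
  constructor({ failureThreshold = 0.2, resetTimeMillis = 10000 } = {}) {
    this.failureThreshold = failureThreshold;
    this.resetTimeMillis = resetTimeMillis;
    this.state = 'CLOSED';
    this.calls = 0;
    this.failures = 0;
    this.openedAt = 0;
  }
  async invoke(fn) {
    if (this.state === 'OPEN') {
      if (Date.now() - this.openedAt < this.resetTimeMillis) {
        throw new Error('circuit open: failing fast');
      }
      this.state = 'HALF_OPEN'; // allow one trial request through
    }
    try {
      const result = await fn();
      this.onSuccess();
      return result;
    } catch (err) {
      this.onFailure();
      throw err;
    }
  }
  onSuccess() {
    this.calls++;
    if (this.state === 'HALF_OPEN') this.state = 'CLOSED';
  }
  onFailure() {
    this.calls++;
    this.failures++;
    // A failed trial request, or too high a failure ratio, opens the circuit.
    if (this.state === 'HALF_OPEN' ||
        this.failures / this.calls >= this.failureThreshold) {
      this.state = 'OPEN';
      this.openedAt = Date.now();
    }
  }
}

// Usage sketch: wrap an unreliable backend call.
// const cb = new CircuitBreaker({ failureThreshold: 0.5 });
// cb.invoke(() => callBackend()).catch(handleError);
```

A production-grade implementation, like the Ballerina configuration above, would evaluate the failure percentage over a bucketed rolling window (timeWindowMillis/bucketSizeMillis) rather than over the lifetime of the breaker.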

By design, the Ballerina code you write does not require external tools to check for vulnerabilities or best practices. For example, a common issue in building distributed systems is that data coming over the wire cannot be trusted not to include injection attacks. Ballerina assumes that all data coming over the wire is tainted, and compile-time checks prevent code that requires untainted data from accessing tainted data. Ballerina offers such capabilities as built-in constructs of the language, so that the programmer is compelled to write secure code.

Data Formats

Ballerina has a structural type system with primitive, record, object, tuple, and union types. This type-safe model incorporates type inference at assignment and provides numerous compile-time integrity checks for connectors, logic, and network-bound payloads.

The code that integrates services and systems often has to deal with complex distributed errors. Ballerina has error-handling capabilities based on union types. Union types explicitly capture the semantics without requiring developers to create unnecessary wrapper types. When you decide to pass the errors back to the caller, you can use the check operator.

For example, when you receive JSON data in a message, you can cast it to a type that you have defined as part of your logic. You can then safely convert between the two types by handling the possible error in a match clause written against that union type.
// this is a simple structured object definition in Ballerina
// it can be automatically mapped into JSON and back again
type Payment {
    string name,
    string cardnumber,
    int month,
    int year,
    int cvc;
};
...
json payload = check request.getJsonPayload();
        // The next line shows typesafe parsing of JSON into an object
        Payment|error p = <Payment>payload;
        match p {
            Payment x => {
                io:println(x);
                res.statusCode = 200;
                // return the JSON that has been created
                res.setJsonPayload(check <json>x);
            }
            error e => {
                res.statusCode = 400;
                // return the error message if the JSON failed to parse
                res.setStringPayload(e.message);
            }
        }
        _ = caller -> respond(res);

Observability

Monitoring, logging, and distributed tracing are the key methods that reveal the internal state of Ballerina code to provide observability. Ballerina provides out-of-the-box integration with observability tools, such as Prometheus, Grafana, Jaeger, and the Elastic Stack, all with minimal configuration.

Workflow Engine Solutions

We conclude our discussion of microservice integration technologies with solutions that are specifically designed for microservices that require workflows (i.e., long-running stateful processes that may also need human interaction). Building workflows in a microservices architecture is a special case of the integration requirements. Quite a few new and existing solutions are morphing into the microservices workflow domain; Zeebe, Netflix Conductor, Apache NiFi, AWS Step Functions, Spring Cloud Data Flow, and Microsoft Logic Apps are good examples.

Zeebe6 supports the stateful orchestration of workers and microservices using visual workflows (it is developed by Camunda, a popular open source Business Process Model and Notation, or BPMN, solution). It allows users to define orchestration flows visually using BPMN 2.0 or YAML. Zeebe ensures that, once started, flows are always carried out fully, retrying steps in case of failures. Along the way, Zeebe maintains a complete audit log, so that the progress of flows can be monitored and tracked. Zeebe is designed for high-throughput workloads and scales horizontally with growing transaction volumes. (You can try a Zeebe example from our examples in ch07/sample17.)

Netflix Conductor7 is an open source workflow engine that uses a JSON DSL to define a workflow. Conductor allows creating complex process/business flows in which an individual task is implemented by a microservice.

Apache NiFi8 is a conventional integration framework that supports powerful and scalable directed graphs of data routing, transformation, and system mediation logic.

Inception of the Service Mesh

What we have seen so far in this chapter is that microservices have to work with other microservices, data, web APIs, and other systems. Since we don't use a centralized ESB as a bus to connect all these services and systems, inter-service communication logic is now the responsibility of the service developer. Although many microservices frameworks address most of these needs, it is still a daunting task for the service developer to take care of all the requirements of integrating microservices.

To overcome this problem, architects identified that some of the inter-service communication functionalities can be treated as commodity features and the service code can be made independent of them. The core concept of a Service Mesh is to identify commodity network communication functionalities, such as circuit breakers, timeouts, basic routing, service discovery, secure communication, and observability, and implement them in a component called a sidecar, which runs alongside each microservice you develop. These sidecars are controlled by a central management layer called the control plane.

With the advent of the Service Mesh, microservice developers gain more freedom in choosing polyglot service development technologies and need to pay less attention to inter-service communication; they can focus more on the business capabilities of the services they develop.

We discuss the Service Mesh in detail in Chapter 9, “Service Mesh,” and delve deep into some of the existing service mesh implementations.

Summary

In this chapter, we took an in-depth look at the microservices integration challenges. With the omission of the ESB, we need to practice the smart endpoints and dumb pipes philosophy when we develop services. With this approach, most of the ESB capabilities now need to be supported by the microservices that we develop. We identified some of the commonly used microservice integration patterns: active compositions/orchestration, reactive compositions/choreography, and a hybrid approach. It is most pragmatic to stick to a hybrid approach to integrating microservices and select the integration patterns based on your use cases.

In order to facilitate microservices integration, microservices development frameworks need a unique set of capabilities, such as built-in network communication abstractions, support for resiliency patterns, native support for data types, the ability to govern integration microservices, and a cloud- and container-native nature. With respect to those requirements, we had a detailed discussion of some of the key microservice implementation technologies. With the inception of the Service Mesh, some of the microservice integration requirements can be offloaded to a distributed network communication abstraction, which is executed as a sidecar alongside each service and controlled by a centralized control plane. In the next chapter, we discuss how to deploy and run microservices.
