3
Architecture of a microservice application

This chapter covers

  • The big picture view of a microservice application
  • The four tiers of microservice architecture: platform, service, boundary, and client
  • Patterns for service communication
  • Designing API gateways and consumer-driven façades as application boundaries

In chapter 2, we designed a new feature for SimpleBank as a set of microservices and discovered that deep understanding of the application domain is one of the keys to a successful implementation. In this chapter, we’ll look at the bigger picture and consider the design and architecture of an entire application made up of microservices. We can’t give you a deep understanding of the domain your own application lives in, but we can show you how having such an understanding will help you build a system that’s flexible enough to grow and evolve over time.

You’ll see how a microservice application is typically designed to have four tiers — platform, service, boundary, and client — and you’ll learn what they are and how they combine to deliver customer-facing applications. We’ll also highlight the role of an event backbone in building a large-scale microservice application and discuss different patterns for building application boundaries, such as API gateways. Lastly, we’ll touch on recent trends in building user interfaces for microservice applications, such as micro-frontends and frontend composition.

3.1 Architecture as a whole

As a software designer, you want to build software that’s amenable to change. Many forces put pressure on your software: new requirements, defects, market demands, new customers, growth, and so on. Ideally, you can respond to these pressures at a steady pace and with confidence. For you to be able to do that, your development approach should reduce friction and minimize risk.

Your engineering organization will want to remove any roadblocks to development as time goes by and the system evolves. You want to be able to quickly and seamlessly replace any system’s component that becomes obsolete. You want to have teams in place that can become completely autonomous and responsible for portions of a larger system. And you want those teams to coexist without the need for constant synchronization and without blocking other teams. For that, you need to think about architecture: your plan for building an application.

3.1.1 From monolith to microservices

With a monolithic application, your primary deliverable is a single application. That application is split horizontally into different technical layers — in a typical three-tier application, they’d be data, logic, and presentation (figure 3.1) — and vertically into different business domains. Patterns like MVC and frameworks like Rails and Django reflect the three-tier model. Each tier provides services to the tier above: the data tier provides persistent state; the logic tier executes useful work; and the presentation layer presents the results back to the end user.

An individual microservice is similar to a monolith: it stores data, performs some business logic, and returns data and outcomes to consumers through APIs. Each microservice owns a business or technical capability of the application and interacts with other microservices to execute work. Figure 3.2 illustrates the high-level architecture of an individual service.

In a monolithic application, your architecture is limited to the boundaries of the application itself. In a microservice application, you’re planning for something that’ll keep evolving in both size and breadth. Think of it like a city: building a monolith is like constructing a skyscraper, whereas building a microservice application is like developing a neighborhood. You need to build infrastructure (plumbing, roads, cables) and plan for growth (zoning for small businesses versus houses).

This analogy highlights the importance of considering not only the components themselves, but also the way they connect, where they’re placed, and how you can build them concurrently. You want your plan to encourage growth along good lines, rather than dictate or enforce a certain structure on your overall application.

c03_01.png

Figure 3.1 The architecture of a typical three-tier monolithic application

c03_02.png

Figure 3.2 The high-level architecture of an individual microservice

Most importantly, you don’t run microservices in isolation; each microservice lives in an environment that enables you to build, deploy, and run it in concert with other microservices. Your application architecture should encompass that whole environment.

3.1.2 The role of an architect

Where do software architects fit in? Many enterprises employ software architects, although the effectiveness of, and the approach to, this role vary widely.

Microservice applications enable rapid change: they evolve over time as teams build new services, decommission existing services, refactor existing functionality, and so on. As an architect or technical lead, your job is to enable evolution, rather than dictate design. If the microservice application is a city, then you’re a planner for the city council.

An architect’s role is to make sure the technical foundations of the application support a fast pace and fluidity. An architect should have a global perspective and make sure the global needs of the application are met, guiding its evolution so that

  • The application is aligned to the wider strategic goals of the organization.
  • Teams share a common set of technical values and expectations.
  • Cross-cutting concerns — such as observability, deployment, and interservice communication — meet the needs of multiple teams.
  • The whole application is flexible and malleable in the face of change.

To achieve these things, an architect should guide development in two ways:

  • Principles — Guidelines that the team should follow to achieve higher level technical or organizational goals
  • Conceptual models — High-level models of system relationships and application-level patterns

3.1.3 Architectural principles

Principles are guidelines (or sometimes rules) that teams should follow to achieve higher level goals. They inform team practice. Figure 3.3 illustrates this model. For example, if your product goal is to sell to privacy- and security-sensitive enterprises, you might set the following principles:

  • Development practices must comply with recognized external standards (for example, ISO 27001).
  • All data must be portable and stored with retention limits in mind.
  • Personal information must be clearly tracked and traceable through the application.

Principles are flexible. They can and should change to reflect the priorities of the business and the technical evolution of your application. For example, early development might prioritize validating product-market fit, whereas a more mature application might require a focus on performance and scalability.

c03_03.png

Figure 3.3 An architectural approach based on technical principles

3.1.4 The four tiers of a microservice application

Architecture should reflect a clear high-level conceptual model. A model is a useful tool for reasoning about an application’s technical structure. A multi-tiered model, like the three-tier model outlined in figure 3.1, is a common approach to application structure, reflecting layers of abstraction and responsibility within an overall system.

In the rest of this chapter, we’ll explore a four-tier model for a microservice application:

  • Platform — A microservice platform provides tooling, infrastructure, and high-level primitives to support the rapid development, operation, and deployment of microservices. A mature platform layer enables engineers to focus on building features, not plumbing.
  • Services — In this tier, the services that you build interact with each other to provide business and technical capabilities, supported by the underlying platform.
  • Boundary — Clients will interact with your application through a defined boundary that exposes underlying functionality to meet the needs of outside consumers.
  • Client — Client applications, such as websites and mobile applications, interact with your microservice backend.

Figure 3.4 illustrates these architectural layers. You should be able to apply them to any microservice application, regardless of underlying technology choices.

c03_04.png

Figure 3.4 A four-tiered model of microservice application architecture

Each layer is built on the capabilities of the layers below; for example, individual services take advantage of deployment pipelines, infrastructure, and communication mechanisms that the underlying microservice platform provides. A well-designed microservice application requires sophistication and investment at all layers.

Great! So now you have a model you can work with. In the next five sections, we’ll walk through each layer in this architectural model and discuss how it contributes to building sustainable, flexible, and evolutionary microservice applications.

3.2 A microservice platform

Microservices don’t live in isolation. A microservice is supported by infrastructure:

  • A deployment target where services are run, including infrastructure primitives, such as load balancers and virtual machines
  • Logging and monitoring aggregation to observe service operation
  • Consistent and repeatable deployment pipelines to test and release new services and versions
  • Support for secure operation, such as network controls, secret management, and application hardening
  • Communication channels and service discovery to support service interaction

Figure 3.5 illustrates these capabilities and how they relate to the service layer of the application. If each microservice is a house, then the platform provides roads, water, electricity, and telephone cables.

c03_05.png

Figure 3.5 The capabilities of a microservice platform

A robust platform layer decreases overall implementation cost, increases overall stability, and enables rapid service development. Without this platform, product developers would need to repeatedly write plumbing code themselves, taking energy away from delivering new features and business impact. The average developer shouldn’t need to be an expert in the intricacies of every layer of the application. Ultimately, a semi-independent, specialist team can develop the platform layer to meet the needs of multiple teams working in the service layer of the application.

3.2.1 Mapping your runtime platform

A microservice platform gives you confidence that the services your team writes can serve production workloads and will be resilient, transparent, and scalable. Figure 3.6 maps out a runtime platform for a microservice.

A runtime platform (or deployment target) — for example, a cloud environment like AWS or a platform as a service (PaaS) like Heroku — provides infrastructure primitives necessary to run multiple service instances and route requests between them. In addition, it provides mechanisms for providing configuration — secrets and environment-specific variables — to service instances.

You build the other elements of a microservice platform on top of this foundation. Observability tools collect and correlate data from services and underlying infrastructure. Deployment pipelines manage the upgrade (or rollback) of this stack.

c03_06.png

Figure 3.6 A deployment configuration for a microservice running in a typical cloud environment

3.3 Services

The service layer has perhaps the most self-explanatory name — this is where your services live. At this tier, services interact to perform useful work, relying on the underlying platform abstractions for reliable operation and communication and exposing their work through the boundary layer to application clients. We also consider components that are logically internal to a service, such as data stores, to be part of this tier.

The structure of your service tier will differ widely depending on the nature of your business. In this section, we’ll discuss some of the common patterns you’ll encounter:

  • Business and technical capabilities
  • Aggregation and higher order services
  • Services on critical and noncritical paths

3.3.1 Capabilities

The services you write will implement different capabilities:

  • A business capability is something that an organization does to generate value and meet business goals. Microservices that you scope to business capabilities directly reflect business goals.
c03_07.png

Figure 3.7 Microservices implementing business or technical capabilities

  • A technical capability supports other services by implementing a shared technical feature.

Figure 3.7 compares these two types of capability. SimpleBank’s orders service exposes a capability for managing order execution — this is a business capability. The market service provides a technical capability: a gateway to a third party that other capabilities, such as exposing market information or settling trades, can reuse.

3.3.2 Aggregation and higher order services

In the early days of a microservice application, your services are likely to be flat; each service is likely to have a similar level of responsibility. For example, the services in chapter 2 — orders, fees, transactions, and accounts — are scoped at a roughly equivalent level of abstraction.

As the application grows, you’ll encounter two pressures on the growth of services:

  • Aggregating data from multiple services to serve client requests for denormalized data (for example, returning orders and fees together)
  • Providing specialized business logic that takes advantage of underlying capabilities (for example, placing a specific type of order)

Over time, these two pressures will lead to a hierarchy of services. Services that are closer to the system boundary will interact with several services to aggregate their output — let’s call those aggregators (figure 3.8). In addition, specialized services may act as coordinators for the work of multiple lower order services.

c03_08.png

Figure 3.8 An aggregator serves queries by joining data from underlying services, and a coordinator orchestrates behavior by issuing commands to downstream services.

The challenge you’ll face is determining when new data requirements or new application behavior call for a new service, rather than changes to an existing one. Creating a new service increases overall complexity and may result in tight coupling, but adding functionality to an existing service may make it less cohesive and more difficult to replace, which bends a fundamental microservice principle: services should be independently replaceable.
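To make the aggregator role concrete, here’s a minimal sketch in Python. The service names and payload shapes are hypothetical, and the two `get_*` functions stand in for what would really be RPC or HTTP calls to the orders and fees services:

```python
def get_orders(account_id):
    # Stand-in for a call to the orders service; in a real aggregator
    # this would be an HTTP or RPC request.
    return [{"order_id": "o-1", "asset": "ACME"}]

def get_fees(order_ids):
    # Stand-in for a call to the fees service, keyed by order ID.
    return {"o-1": {"amount": 9.5}}

def orders_with_fees(account_id):
    """Aggregator: joins data from two underlying services so clients
    receive a denormalized view in a single request."""
    orders = get_orders(account_id)
    fees = get_fees([o["order_id"] for o in orders])
    return [{**o, "fee": fees.get(o["order_id"])} for o in orders]
```

A coordinator looks similar in shape, but issues commands to downstream services rather than joining their query results.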

3.3.3 Critical and noncritical paths

As your system evolves, some functions will naturally become more critical to your customer needs — and the successful operation of your business — than others. For example, at SimpleBank, the orders service is on the critical path for order placement. Without this service operating correctly, you can’t execute customer orders. Conversely, other services are less important; if the customer profile service is unavailable, it’s less likely to affect a critical, revenue-generating component of your offering. Figure 3.9 illustrates example paths at SimpleBank.

This is a double-edged sword. The more services on a critical path, the more likely that path is to fail: because no service is 100% reliable, the cumulative reliability of a path is the product of the reliabilities of the services it depends on.

But microservices allow you to clearly identify these paths and treat them independently, investing more engineering effort to maximize the resiliency and scalability of these paths than you invest in less crucial system areas.
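The reliability arithmetic is worth internalizing; this small illustrative calculation shows how quickly "three nines" per service erodes along a deep call path:

```python
from math import prod

def path_reliability(service_reliabilities):
    """Cumulative reliability of a call path: the product of the
    availabilities of every service the path depends on."""
    return prod(service_reliabilities)

# Each service is 99.9% available, yet the path as a whole is
# noticeably less reliable than any single service in it.
five_hop = path_reliability([0.999] * 5)    # roughly 0.995
ten_hop = path_reliability([0.999] * 10)    # roughly 0.990
```

A ten-service synchronous chain at 99.9% per hop loses close to 1% of requests to failure somewhere along the path, which is why critical paths deserve disproportionate resiliency investment.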

3.4 Communication

Communication is a fundamental element of a microservice application. Microservices communicate with each other to perform useful work. Your chosen methods for microservices to instruct and request action from other microservices determine the shape of the application you build.

c03_09.png

Figure 3.9 Chains of services serve capabilities. Many services will participate in multiple paths.

Communication isn’t an independent architectural layer, but we’ve pulled this out into a separate section because it blurs the boundary between the service and platform layers. Some elements — such as communication brokers — are part of the platform layer. But services themselves are responsible for constructing and sending messages. You want to build smart endpoints and dumb pipes.

In this section, we’ll discuss common patterns for microservice communication and how they impact the flexibility and evolution of a microservice application. Most mature microservice applications will mix both synchronous and asynchronous interaction styles.

3.4.1 When to use synchronous messages

Synchronous messages are often the first design approach that comes to mind. They’re well-suited to scenarios where an action’s results — or acknowledgement of success or failure — are required before proceeding with another action.

Figure 3.10 illustrates a request–response pattern for synchronous messages. The first service constructs an appropriate message to a collaborator, which the application sends using a transport mechanism, such as HTTP. The destination service receives this message and responds accordingly.

c03_10.png

Figure 3.10 A synchronous request–response lifecycle between two communicating services

Choosing a transport

The choice of transport — RESTful HTTP, an RPC library, or something else — will impact the design of your services. Each transport has different properties of latency, language support, and strictness. For example, gRPC provides generated client/server API contracts using protocol buffers, whereas HTTP is agnostic to the content of messages. Across your application, using a single method of synchronous transport has economies of scale; it’s easier to reason through, monitor, and support with tooling.

Separation of concerns within microservices is also important. You should separate your choice of transport mechanism from the business logic of your service, which shouldn’t need to know about HTTP status codes or gRPC response streams. Doing so makes it easier to swap out different mechanisms in the future if your application’s needs evolve.
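One way to achieve that separation is to have business logic depend on a small transport interface rather than on a concrete client. The sketch below is illustrative — the class names and the single-method interface are invented for this example, not a prescribed structure:

```python
from abc import ABC, abstractmethod

class OrdersTransport(ABC):
    """Transport abstraction: business logic depends on this interface,
    never on HTTP status codes or gRPC response streams directly."""
    @abstractmethod
    def get_order(self, order_id: str) -> dict: ...

class InMemoryTransport(OrdersTransport):
    # A stand-in transport for tests; a production implementation might
    # wrap an HTTP client or a generated gRPC stub behind the same interface.
    def __init__(self, orders):
        self._orders = orders
    def get_order(self, order_id):
        return self._orders[order_id]

class OrderStatusChecker:
    """Business logic: knows nothing about the wire format, so the
    transport can be swapped without touching this class."""
    def __init__(self, transport: OrdersTransport):
        self._transport = transport
    def is_settled(self, order_id: str) -> bool:
        return self._transport.get_order(order_id)["status"] == "settled"

checker = OrderStatusChecker(InMemoryTransport({"o-1": {"status": "settled"}}))
```

Swapping RESTful HTTP for gRPC then means writing a new `OrdersTransport` implementation, not rewriting the logic that consumes it.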

Drawbacks

Synchronous messages have limitations:

  • They create tighter coupling between services, as services must be aware of their collaborators.
  • They don’t naturally support broadcast or publish-subscribe interaction, limiting your ability to perform work in parallel.
  • They block code execution while waiting on responses. In a thread- or process-based server model, this can exhaust capacity and trigger cascading failures.
  • Overuse of synchronous messages can build deep dependency chains, which increases the overall fragility of a call path.

3.4.2 When to use asynchronous messages

An asynchronous style of messaging is more flexible. By announcing events, you make it easy to extend the system to handle new requirements, because services no longer need to have knowledge of their downstream consumers. New services can consume existing events without changing existing services.

This style enables more fluid evolution and creates looser coupling between services. This does come at a cost: asynchronous interactions are more difficult to reason through, because overall system behavior is no longer explicitly encoded into linear sequences. System behavior will become increasingly emergent — developing unpredictably from interactions between services — requiring investment in monitoring to adequately trace what’s happening.

Asynchronous messaging typically requires a communication broker, an independent system component that receives events and distributes them to event consumers. This is sometimes called an event backbone, which indicates how central to your application this component becomes (figure 3.11). Tools commonly used as brokers include Kafka, RabbitMQ, and Redis. The semantics of these tools differ: Kafka specializes in high-volume, replayable event storage, whereas RabbitMQ provides higher level messaging middleware, based on the AMQP protocol (https://www.amqp.org/).

3.4.3 Asynchronous communication patterns

Let’s look at the two most common event-based patterns: job queue and publish-subscribe. You’ll encounter these patterns a lot when architecting microservices — most higher level interaction patterns are built on one of these two primitives.

Job queue

In this pattern, workers take jobs from a queue and execute them (figure 3.12). A job should only be processed once, regardless of how many worker instances you operate. This pattern is also known as winner takes all.

c03_11.png

Figure 3.11 Event-driven asynchronous communication between services

c03_12.png

Figure 3.12 A job queue distributes work to 1 to n consumers

Your market gateway could operate in this fashion. Each order that the orders service creates triggers an OrderCreated event, which is queued for the market gateway service to place. This pattern is useful where

  • A 1:1 relationship exists between an event and work to be done in response to that event.
  • The work that needs to be done is complex or time-consuming, so it should be done out-of-band from the triggering event.

By default, this approach doesn’t require sophisticated event delivery. Many task queue libraries are available that use commodity data stores, such as Redis (Resque, Celery, Sidekiq) or SQL databases.
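The winner-takes-all behavior can be sketched with nothing more than the standard library: `queue.Queue` hands each job to exactly one of several competing workers. This is a minimal in-process illustration — a production setup would use a broker-backed queue so jobs survive process restarts:

```python
import queue
import threading

jobs = queue.Queue()
placed = []                       # records which orders were placed
lock = threading.Lock()

def worker():
    # Workers compete for jobs; the queue guarantees each job is
    # delivered to exactly one worker ("winner takes all").
    while True:
        event = jobs.get()
        if event is None:         # sentinel: shut this worker down
            break
        with lock:
            placed.append(event["order_id"])   # stand-in for placing the order
        jobs.task_done()

workers = [threading.Thread(target=worker) for _ in range(3)]
for w in workers:
    w.start()

# The orders service emits OrderCreated events onto the queue.
for i in range(5):
    jobs.put({"event": "OrderCreated", "order_id": f"o-{i}"})

jobs.join()                       # wait until every job is processed
for _ in workers:
    jobs.put(None)                # one sentinel per worker
for w in workers:
    w.join()
```

Despite three workers running concurrently, each OrderCreated event is processed exactly once.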

Publish-subscribe

In publish-subscribe, services trigger events for arbitrary listeners. All listeners that receive the event act on it appropriately. In some ways, this is the ideal microservice pattern: a service can send arbitrary events out into the world without caring who acts on them (figure 3.13).

c03_13.png

Figure 3.13 How publish-subscribe sends events out to subscribers

For example, imagine you need to trigger other downstream actions once an order has been placed. You might send a push notification to the customer or use it to feed your order statistics and recommendation feature. These features can all listen for the same event.
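The decoupling at the heart of publish-subscribe can be shown with a toy in-process event bus. This is purely illustrative — in a real system the broker (Kafka, RabbitMQ) is an independent component, and the event and handler names here are invented:

```python
from collections import defaultdict

class EventBus:
    """A minimal in-process publish-subscribe bus. The publisher never
    knows who (if anyone) is listening."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        # Every subscriber to this event type receives the payload.
        for handler in self._subscribers[event_type]:
            handler(payload)

bus = EventBus()
notifications, stats = [], []

# Two independent features listen for the same event.
bus.subscribe("OrderPlaced", lambda e: notifications.append(f"push to {e['customer']}"))
bus.subscribe("OrderPlaced", lambda e: stats.append(e["order_id"]))

bus.publish("OrderPlaced", {"order_id": "o-42", "customer": "alice"})
```

Adding the recommendation feature later means adding one more `subscribe` call; the publishing service doesn’t change at all.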

3.4.4 Locating other services

To wrap up this section, let’s take a moment to examine service discovery. For services to communicate, they need to be able to discover each other. The platform layer should offer this capability.

A rudimentary approach to service discovery is to use load balancers (figure 3.14). For example, an elastic load balancer (ELB) on AWS is assigned a DNS name and manages health checking of underlying nodes, based on their membership in a group of virtual machines (an auto-scaling group on AWS).

This works but doesn’t handle more complex scenarios. What if you want to route traffic to different versions of your code to enable canary deployments or dark launches, or if you want to route traffic across different data centers?

A more sophisticated approach is to use a registry, such as Consul (https://www.consul.io). Service instances announce themselves to a registry, which provides an API — either through DNS or a custom mechanism — for resolving requests for those services. Figure 3.15 illustrates this approach.
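As a rough sketch of the registry-lookup side, Consul exposes an HTTP API for querying healthy instances of a service. The snippet below assumes a Consul agent listening on the default local address and port, and the response parsing reflects the shape of Consul’s health endpoint at the time of writing — treat it as illustrative, not canonical:

```python
import json
from urllib.request import urlopen

def healthy_instances(payload):
    """Extract (address, port) pairs from a Consul health response
    (GET /v1/health/service/<name>?passing=true)."""
    return [
        (entry["Service"]["Address"], entry["Service"]["Port"])
        for entry in json.loads(payload)
    ]

def discover(service_name, consul="http://localhost:8500"):
    # Ask the local Consul agent for passing instances of a service.
    # Assumes a Consul agent is actually running at this address.
    url = f"{consul}/v1/health/service/{service_name}?passing=true"
    with urlopen(url) as resp:
        return healthy_instances(resp.read())
```

A caller would then pick one of the returned instances (round-robin, random, or via a client-side load balancer) before sending its request.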

Your service discovery needs will depend on the complexity of your deployed application’s topology. More complex deployments, such as geographical distribution, require more robust service discovery architecture.1 

c03_14.png

Figure 3.14 Service discovery using load balancers and known DNS names

c03_15.png

Figure 3.15 Service discovery using a service registry as a source of truth

3.5 The application boundary

A boundary layer provides a façade over the complex interactions of your internal services. Clients, such as mobile apps, web-based user interfaces, or IoT devices, may interact with a microservice application. (You might build these clients yourself, or third parties consuming a public API to your application may build them.) For example, SimpleBank has internal admin tools, an investment website, iOS and Android apps, and a public API, as depicted in figure 3.16.

First, the boundary layer provides an abstraction over internal complexity and change (figure 3.17). For example, you might provide a consistent interface for a client to list all historic orders, but, over time, you might completely refactor the internal implementation of that functionality. Without this layer, clients would require too much knowledge of individual services, becoming tightly coupled to your system implementation.

c03_16.png

Figure 3.16 Client applications at SimpleBank

c03_17.png

Figure 3.17 A boundary provides a façade over the service layer to hide internal complexity from a consumer.

Second, the boundary tier provides access to data and functionality using a transport and content type appropriate to the consumer. For example, whereas services might communicate between each other with gRPC, a façade can expose an HTTP API to external consumers, which is much more appropriate for external applications to consume.

Combining these roles allows your application to become a black box, performing whatever (unknown to the client) operations to deliver functionality. You also can make changes to the service layer with more confidence, because the client interfaces with it through a single point.

The boundary layer also may implement other client-facing capabilities:

  • Authentication and authorization — To verify the identity and claims of an API client
  • Rate limiting — To provide defense against client abuse
  • Caching — To reduce overall load on the backend
  • Log and metric collection — To allow analysis and monitoring of client requests

Placing these edge capabilities in the boundary layer provides clear separation of concerns — without a boundary, backend services would need to individually implement these concerns, increasing their complexity.
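Rate limiting is a good example of such an edge capability. A common implementation is a token bucket; this minimal sketch (with an injectable clock so it can be tested deterministically) shows the core idea, not a production-ready limiter:

```python
import time

class TokenBucket:
    """A simple token-bucket rate limiter, the kind of edge capability
    a boundary layer can apply before proxying requests to backends."""
    def __init__(self, rate, capacity, clock=time.monotonic):
        self.rate = rate            # tokens replenished per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.clock = clock
        self.last = clock()

    def allow(self):
        """Return True if a request may proceed, consuming one token."""
        now = self.clock()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

At the boundary, a rejected `allow()` would typically translate to an HTTP 429 response, keeping abusive traffic away from backend services entirely.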

You might also use boundaries within your service tier to separate domains. For example, an order placement process might consist of several services, but only one of those services should expose an entry point that other domains can access (figure 3.18).

That provides an overview of how you can use boundaries. Let’s get more specific and explore three different (albeit related) patterns for application boundaries: API gateways, backends for frontends, and consumer-driven gateways.

c03_18.png

Figure 3.18 Boundaries might be present between different contexts within a microservice application.

3.5.1 API gateways

We introduced the API gateway pattern in chapter 2. An API gateway provides a single client-entry point over a service-oriented backend. It proxies requests to underlying services and transforms their responses. An API gateway might handle other cross-cutting client concerns, such as authentication and request signing.

Figure 3.19 illustrates an API gateway. The gateway authenticates a request, and if that succeeds, it proxies the request to an appropriate backend service. It transforms the results it receives so that when it returns them, they’re palatable for your consuming clients.

c03_19.png

Figure 3.19 An API gateway serving a client request

A gateway also allows you to minimize the exposed surface area of your system from a security perspective by deploying internal services in a private network and restricting external ingress to the gateway alone.

3.5.2 Backends for frontends

The backends for frontends (BFF) pattern is a variation on the API gateway approach. Although the API gateway approach is elegant, it has a few downsides. If the API gateway acts as a composition point for multiple applications, it’ll begin to take on more responsibility.

For example, imagine you serve both desktop and mobile applications. Mobile devices have different needs, displaying less data with less available bandwidth, and different user features, such as location and context awareness. In practice, this means desktop and mobile API needs diverge, which increases the breadth of functionality you need to integrate into a gateway. Different needs, such as the amount of data (and therefore payload size) returned for a given resource, may also conflict. It can be hard to balance these competing forces while building a cohesive and optimized API.

In a BFF approach, you use an API gateway for each consuming client type. To take the earlier example from SimpleBank, each client application would have its own gateway (figure 3.20).

c03_20.png

Figure 3.20 The backends for frontends pattern for SimpleBank’s client applications

Doing so allows the gateway to be highly specific and responsive to the needs of its consumer without bloat or conflict. This results in smaller, simpler gateways and more focused development.

3.5.3 Consumer-driven gateways

In both previous patterns, the API gateway determines the structure of the data it returns to your consumer. To serve different clients, you might build unique backends. Let’s flip this around. What if you could build a gateway that allowed consumers to express exactly what data they needed from your service? Think of this like an evolution of the BFF approach: rather than building multiple APIs, you can build a single “super-set” API that allows consumers to define the shape of response they require.

You can achieve this using GraphQL. GraphQL is a query language for APIs that allows consumers to specify which data fields they want and to multiplex different resources into a single request. For example, you might expose the following schema for SimpleBank clients.

Listing 3.1 Basic GraphQL schema for SimpleBank

type Account { 
  id: ID!    ①  
  name: String!
  currentHoldings: [Holding]!    ②  
  orders: [Order]!
}

type Order {
  id: ID!
  status: String!
  asset: Asset!
  quantity: Float!
}

type Holding {
  asset: Asset!
  quantity: Float!
}

type Asset {
  id: ID!
  name: String!
  type: String!
  price: Float!
}

type Root {
  accounts: [Account]!    ③  
  account(id: ID): Account    ③  
}

schema {
  query: Root    ④  
}

This schema exposes a customer’s accounts, as well as orders and holdings against each of those accounts. Clients then execute queries against this schema. If a mobile app screen shows holdings and outstanding orders for an account, you could retrieve that data in a single request, as shown in the following listing.

Listing 3.2 Request body using GraphQL

{
  account(id: "101") {    ①  
    orders    ②  
    currentHoldings    ②  
  }
}

In the backend, your GraphQL server would act like an API gateway, proxying and composing that data from multiple backend services (in this case, orders and holdings). We won’t drill into GraphQL in further detail in this book, but if you’re interested, the official documentation (http://graphql.org/) is a great place to start. We’ve also had some success using Apollo (https://www.apollographql.com/) to provide a GraphQL API façade over RESTful backend services.

3.6 Clients

The client tier, like the presentation layer in the three-tier architecture, presents to your users an interface to your application. Separating this layer from those below it allows you to develop user interfaces in a granular fashion and to serve the needs of different types of clients. This also means you can develop the frontend independently from backend features. As mentioned in the previous section, your application may need to serve many different clients — mobile devices, websites, both internal and external — each with different technology choices and constraints.

It’s unusual for a single microservice to serve its own user interface. Typically, the functionality exposed to a given set of users is broader than the capabilities of a single service. For example, administrative staff at SimpleBank might deal with order management, account setup, reconciliation, tax, and so on. And this comes with cross-cutting concerns — authentication, audit logging, user management — that are clearly not the responsibility of an orders or account setup service.

3.6.1 Frontend monoliths

Your backend is relatively straightforward to split into independently deployable and maintainable services (you still have another 10 chapters to go, after all). But this can be challenging to achieve on the frontend. A typical frontend over a microservice application might still be a monolith that’s deployed and changed as a single unit (figure 3.21). Specialist frontends, particularly mobile applications, often demand dedicated teams, making end-to-end feature ownership difficult to achieve in practice.

c03_21.png

Figure 3.21 A typical frontend client in a microservice application can become monolithic.

3.6.2 Micro-frontends

As frontend applications grow larger, they begin to encounter the same coordination and friction issues that plague large-scale backend development. It’d be great if you could split frontend development in the same way you can split your backend services. An emerging trend in web applications is micro-frontends — serving fragments of a UI as independently packaged and deployable components that you can compose together. Figure 3.22 illustrates this approach.

This would allow each microservice team to deliver functionality end to end. For example, if you had an orders team, it could independently deliver both order management microservices and the web interface required to place and manage orders.

c03_22.png

Figure 3.22 A user interface composed from independent fragments

Although promising, this approach has many challenges:

  • Visual and interaction consistency across different components requires nontrivial effort to build and maintain common components and design principles.
  • Bundle size (and therefore load time) can be difficult to manage when loading JavaScript code from multiple sources.
  • Interface reloads and redraws can cause overall performance to suffer.

Micro-frontends aren’t yet commonplace, but people are using several different technical approaches in the wild, including

  • Serving UI fragments as web components with a clear, event-driven API
  • Integrating fragments using client-side includes
  • Using iframes to serve micro-apps into separate screen sections
  • Integrating components at the cache layer using edge side includes (ESI)

If you’re interested in learning more, Micro Frontends (https://micro-frontends.org/) and Zalando’s Project Mosaic (https://www.mosaic9.org/) are great starting points.

Summary

  • Individually, microservices are similar internally to monolithic applications.
  • A microservice application is like a neighborhood: its final shape isn’t prescribed but instead guided by principles and a high-level conceptual model.
  • The principles that guide microservice architecture reflect organizational goals and inform team practices.
  • Your architectural plan should encourage growth along good lines, rather than dictate approaches for your overall application.
  • A microservice application consists of four layers: platform, service, boundary, and client.
  • The platform layer provides tooling, plumbing, and infrastructure to support the development of product-oriented microservices.
  • Synchronous communication is often the first choice in a microservice application and is best suited to command-type interactions, but it has drawbacks and can increase coupling and fragility.
  • Asynchronous communication is more flexible and amenable to rapid system evolution, at the cost of added complexity.
  • Common asynchronous communication patterns include queues and publish-subscribe.
  • The boundary layer provides a façade over your microservice application that’s appropriate for external consumers.
  • Common types of boundaries include API gateways and consumer-driven gateways, such as GraphQL.
  • Client applications, such as websites and mobile applications, interact with your backend through the boundary layer.
  • Clients risk becoming monolithic, but techniques are beginning to emerge for applying microservice principles to frontend applications.