2
Microservices at SimpleBank

This chapter covers

  • Introducing SimpleBank, a company adopting microservices
  • Designing a new feature with microservices
  • How to expose microservice-based features to the world
  • Ensuring features are production ready
  • Challenges faced in scaling up microservice development

In chapter 1, you learned about the key principles of microservices and why they’re a compelling approach for sustainably delivering software value. We also introduced the design and development practices that underpin microservices development. In this chapter, we’ll explore how you can apply those principles and practices to developing new product features with microservices.

Over the course of this chapter, we’ll introduce the fictitious company of SimpleBank. They’re a company with big plans to change the world of investment, and you’re working for them as an engineer. The engineering team at SimpleBank wants to be able to deliver new features rapidly while ensuring scalability and stability — after all, they’re dealing with people’s money! Microservices might be exactly what they need.

Building and running an application made up of independently deployable and autonomous services is a vastly different challenge from building that application as a single monolithic unit. We’ll begin by considering why a microservice architecture might be a good fit for SimpleBank and then walk you through the design of a new feature using microservices. Finally, we’ll identify the steps needed to develop that proof of concept into a production-grade application. Let’s get started.

2.1 What does SimpleBank do?

The team at SimpleBank wants to make smart financial investment available to everyone, no matter how much money they have. They believe that buying shares, selling funds, or trading currency should be as simple as opening a savings account.

That’s a compelling mission, but not an easy one. Financial products have multiple dimensions of complexity: SimpleBank will need to make sense of market rules and intricate regulations, as well as integrate with existing industry systems, all while meeting stringent accuracy requirements.

In the previous chapter, we identified some of the functionality that SimpleBank could offer its customers: opening accounts, managing payments, placing orders, and modeling risk. Let’s expand on those possibilities and look at how they might fit within the wider domain of an investment tool. Figure 2.1 illustrates the different elements of this domain.

As the figure shows, an investment tool will need to do more than offer customer-facing features, like the ability to open accounts and manage a financial portfolio. It also will need to manage custody, which is how the bank holds assets on behalf of customers and moves them in or out of their possession, and manufacture, which is the creation of financial products appropriate to customer needs.


Figure 2.1 A high-level (and by no means exhaustive) model of functionality that SimpleBank might build

As you can see, it’s not so simple! You can begin to see some of the business capabilities that SimpleBank might implement: portfolio management, market data integrations, order management, fund manufacture, and portfolio analysis. Each of the business areas identified might consist of any number of services that collaborate with each other or services in other areas.

This type of high-level domain model is a useful first step when approaching any system, but it’s crucial when building microservices. Without understanding your domain, you might make incorrect decisions about the boundaries of your services. You don’t want to build services that are anemic — existing only to perform trivial create, read, update, delete (CRUD) operations. These often become a source of tight coupling within an application. At the same time, you want to avoid pushing too much responsibility into a single service. Less cohesive services make software changes slower and riskier — exactly what you’re trying to avoid.

Lastly, without this perspective, you might fall prey to overengineering — choosing microservices where they’re not justified by the real complexity of your product or domain.

2.2 Are microservices the right choice?

The engineers at SimpleBank believe that microservices are the best choice to tackle the complexity of their domain and be flexible in the face of complex and changing requirements. They anticipate that as their business grows, microservices will reduce the risk of individual software changes, leading to a better product and happier customers.

As an example, let’s say they need to process every buy or sell transaction to calculate tax implications. But tax rules work differently in every country — and those rules tend to change frequently. In a monolithic application, you’d need to make coordinated, time-sensitive releases to the entire platform, even if you only wanted to make changes for one country. In a microservice application, you could build autonomous tax-handling services (whether by country, type of tax, or type of account) and deploy changes to them independently.

Is SimpleBank making the right choice? Architecting software always involves tension between pragmatism and idealism — balancing product needs, the pressures of growth, and the capabilities of a team. Poor choices may not be immediately apparent, as the needs of a system vary over its lifetime. Table 2.1 expands on the factors to consider when choosing microservices.

Table 2.1 Factors to consider when choosing a microservice architecture
  • Domain complexity — It’s difficult to objectively evaluate the complexity of a domain, but microservices can address complexity in systems driven by competing pressures, such as regulatory requirements and market breadth.
  • Technical requirements — You can build different components of a system using different programming languages (and associated technical ecosystems). Microservices enable heterogeneous technical choices.
  • Organizational growth — Rapidly growing engineering organizations may benefit from microservices because lowering dependency on existing codebases enables rapid ramp-up and productivity for new engineers.
  • Team knowledge — Many engineers lack experience in microservices and distributed systems. If the team lacks confidence or knowledge, it may be appropriate to build a proof-of-concept microservice before fully committing to implementation.

Using these factors, you can evaluate whether microservices will help you deliver sustainable value in the face of increasing application complexity.

2.2.1 Risk and inertia in financial software

Let’s take a moment to look at how SimpleBank’s competitors build software. Most banks aren’t ahead of the curve in terms of technological innovation. There’s an element of inertia that’s typical of larger organizations, although that’s not unique to the finance industry. Two primary factors limit innovation and flexibility:

  • Aversion to risk — Financial companies are heavily regulated and tend to build top-down systems of change control to avoid risk by limiting the frequency and impact of software changes.
  • Reliance on complex legacy systems — Most core banking systems were built pre-1970. In addition, mergers, acquisitions, and outsourcing have led to software systems that are poorly integrated and contain substantial technical debt.

But limiting change and relying on existing systems hasn’t prevented software problems from leading to pain for customers or the finance companies themselves. The Royal Bank of Scotland was fined £56 million in 2014 when an outage caused payments to fail for 6.5 million customers. That’s on top of the £250 million it was already spending every year on its IT systems.1 

That approach also hasn’t led to better products. Financial technology startups, such as Monzo and TransferWise, are building features at a pace most banks can only dream of.

2.2.2 Reducing friction and delivering sustainable value

Can you do any better? By any measure, the banking industry is a complex and competitive domain. A bank needs to be both resilient and agile, even when the lifetime of a banking system is measured in decades. The increasing size of a monolithic application is antithetical to this goal. If a bank wants to launch a new product, it shouldn’t be bogged down by the legacy of previous builds2  or require outsize effort and investment to prevent regression in existing functionality.

A well-designed microservice architecture can solve these challenges. As we established earlier, this type of architecture avoids many of the characteristics that, in monolithic applications, slow velocity in development. Individual teams can move forward with increased confidence as

  • Change cycles are decoupled from other teams.
  • Interaction between collaborating components is disciplined.
  • Continuous delivery of small, isolated changes limits the risk of breaking functionality.

These factors reduce friction in the development of a complex system but maintain resiliency. As such, they reduce risk without stifling innovation through bureaucracy.

This isn’t only a short-term solution. Microservices aid engineering teams in delivering sustainable value throughout the lifecycle of an application by placing natural bounds on the conceptual and implementation complexity of individual components.

2.3 Building a new feature

Now that we’ve established that microservices are a good choice for SimpleBank, let’s look at how it might use them to build new features. Building a minimum viable product — an MVP — is a great first step to ensure that a team understands the constraints and requirements of the microservices style. We’ll start by exploring one of the features that SimpleBank needs to build and the design choices the team will make, working through the lifecycle we illustrated in chapter 1 (figure 2.2).

In chapter 1, we touched on how services might collaborate to place a sell order. An overview of this process is shown in figure 2.3.

Let’s look at how you’d approach building this feature. You need to answer several questions:

  • Which services do you need to build?
  • How do those services collaborate with each other?
  • How do you expose their functionality to the world?

Figure 2.2 The key iterative stages — design, deploy, and observe — in the microservice development lifecycle


Figure 2.3 The process of placing an order to sell a financial position from an account at SimpleBank

These may be similar to the questions you might ask yourself when designing a feature in a monolithic application, but they have different implications. For example, the effort required to deploy a new service is inherently higher than creating a new module. In scoping microservices, you need to ensure that the benefits of dividing up your system aren’t outweighed by added complexity.

As we discussed earlier, each service should be responsible for a single capability. Your first step will be to identify the distinct business capabilities you want to implement and the relationship between those capabilities.

2.3.1 Identifying microservices by modeling the domain

To identify the business capabilities you want, you need to develop your understanding of the domain where you’re building software. This is normally the hard work of product discovery or business analysis: research; prototyping; and talking to customers, colleagues, or other end users.

Let’s start by exploring the order placement example from figure 2.3. What value are you trying to deliver? At a high level, a customer wants to be able to place an order. So, an obvious business capability will be the ability to store and manage the state of those orders. This is your first microservice candidate.

Continuing our exploration of the example, you can identify other functionalities your application needs to offer. To sell something, you need to own it, so you need some way of representing a customer’s current holdings resulting from the transactions that have occurred against their account. Your system needs to send an order to a broker — the application needs to be able to interact with that third party. In fact, this one feature, placing a sell order, will require SimpleBank’s application to support all of the following functionality:

  • Record the status and history of sell orders
  • Charge fees to the customer for placing an order
  • Record transactions against the customer’s account
  • Place an order onto a market
  • Provide valuations of holdings and orders to the customer

It’s not a given that each function maps to a single microservice. You need to determine which functions are cohesive — they belong together. For example, transactions resulting from orders will be similar to transactions resulting from other events, such as dividends being paid on a share. Together, a group of functions forms a capability that one service may offer.

Let’s map these functions to business capabilities — what the business does. You can see this mapping in figure 2.4. Some functions cross multiple domains, such as fees.


Figure 2.4 The relationship between application functionality and capabilities within SimpleBank’s business

You can start by mapping these capabilities directly to microservices. Each service should reflect a capability that the business offers — this results in a good balance of size versus responsibility. You also should consider what would drive a microservice to change in the future — whether it truly has a single responsibility. For example, you could argue that market execution is a subset of order management and therefore shouldn’t be a separate service. But the drivers for change in that area are the behavior and scope of the markets you’re supporting, whereas order management relates more closely to the types of product and the account being used to trade. These two areas don’t change together. By separating them, you isolate areas of volatility and maximize cohesiveness (figure 2.5).

Some microservice practitioners would argue that microservices should more closely reflect single functions, rather than single capabilities. Some have even suggested that microservices are “append only” and that it’s always better to write new services than to add to existing ones.

We disagree. Decomposing too much can lead to services that lack cohesiveness and tight coupling between closely related collaborators. Likewise, deploying and monitoring many services might be beyond the abilities of the engineering team in the early days of a microservice implementation. A useful rule of thumb is to err on the side of larger services; it’s often easier to carve out functionality later if it becomes more specialized or more clearly belongs in an independent service.

Lastly, keep in mind that understanding your domain isn’t a one-off process! Over time, you’ll continue to iterate on your understanding of the domain; your users’ needs will change, and your product will continue to evolve. As this understanding changes, your system itself will change to meet those needs. Luckily, as we discussed in chapter 1, coping with changing needs and requirements is a strength of the microservices approach.


Figure 2.5 Services should isolate reasons to change to promote loose coupling and single responsibility.

2.3.2 Service collaboration

We’ve identified several microservice candidates. These services need to collaborate with each other to do something useful for SimpleBank’s customers.

As you may already know, service collaboration can be either point-to-point or event-driven. Point-to-point communication is typically synchronous, whereas event-driven communication is asynchronous. Many microservice applications begin by using synchronous communication. The motivations for doing so are twofold:

  • Synchronous calls are typically simpler and more explicit to reason through than asynchronous interaction. That said, don’t fall into the trap of thinking they share the same characteristics as local, in-process function calls — requests across a network are significantly slower and more unreliable.
  • Most, if not all, programming ecosystems already support a simple, language-agnostic transport mechanism with wide developer mindshare: HTTP, which is mainly used for synchronous calls but which you can also use asynchronously.

Consider SimpleBank’s order placement process. The orders service is responsible for recording and placing an order to market. To do this, it needs to interact with your market, fees, and account transaction services. This collaboration is illustrated in figure 2.6.


Figure 2.6 The orders service orchestrates the behavior of several other services to place an order to market.
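
To make the orchestration in figure 2.6 concrete, here’s a minimal sketch in Python of how the orders service might drive this process over synchronous HTTP calls, using the requests library. The hostnames, paths, and payload fields are illustrative assumptions, not SimpleBank’s real API:

import requests

def place_sell_order(account_id, symbol, quantity):
    # The orders service would first record the order in its own data store.
    order = {"account": account_id, "symbol": symbol, "quantity": quantity}

    # Charge the customer a fee for placing the order.
    requests.post("http://fees.internal/charges",
                  json={"account": account_id, "reason": "order placement"},
                  timeout=3)

    # Reserve the shares by recording a transaction against the account.
    requests.post("http://account-transactions.internal/transactions",
                  json={"account": account_id, "symbol": symbol,
                        "quantity": -quantity},
                  timeout=3)

    # Place the order onto the market and return the result synchronously.
    response = requests.post("http://market.internal/orders", json=order,
                             timeout=3)
    response.raise_for_status()
    return response.json()

Every step blocks on a network call, so the timeouts matter; if any collaborator is slow or unavailable, the whole order placement stalls. This is one of the reasons to balance orchestration with the choreography described in section 2.3.3.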

Earlier, we pointed out that microservices should be autonomous, and to achieve that, services should be loosely coupled. You achieve this partly through the design of your services, “[gathering] together the things that change for the same reasons” to minimize the chance that changes to one service require changes to its upstream or downstream collaborators. You also need to consider service contracts and service responsibility.

Service contracts

The messages that each service accepts, and the responses it returns, form a contract between that service and the services that rely on it, which you can call upstream collaborators. Contracts allow each service to be treated as a black box by its collaborators: you send a request and you get something back. If that happens without errors, the service is doing what it’s meant to do.

Although the implementation of a service may change over time, maintaining contract-level compatibility ensures two things:

  1. Those changes are less likely to break consumers.
  2. Dependencies between services are explicitly identifiable and manageable.

In our experience, contracts are often implicit in naïve or early microservice implementations; they’re suggested by documentation and practice, rather than explicitly codified. As the number of services grows, you can realize significant benefit from standardizing the interfaces between them in a machine-readable format. For example, REST APIs may use Swagger/OpenAPI. As well as aiding the conformance testing of individual services, publishing standardized contracts will help engineers within an organization understand how to use available services.
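
As a sketch of what a codified contract might look like: OpenAPI describes request and response bodies using JSON Schema, and the same schema can be checked in code. The following Python fragment uses the jsonschema package to validate a hypothetical “create order” payload; the field names are illustrative, not a real SimpleBank contract:

from jsonschema import validate

CREATE_ORDER_REQUEST = {
    "type": "object",
    "required": ["account", "symbol", "quantity", "side"],
    "properties": {
        "account": {"type": "string"},
        "symbol": {"type": "string"},
        "quantity": {"type": "integer", "minimum": 1},
        "side": {"enum": ["buy", "sell"]},
    },
}

def handle_create_order(payload):
    # Reject any request that doesn't conform to the published contract.
    validate(instance=payload, schema=CREATE_ORDER_REQUEST)
    return {"status": "created"}  # placeholder for the real order-creation logic

Because the schema is machine readable, upstream collaborators can validate the requests they send in their own test suites, making the dependency between the two services explicit.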

Service responsibility

You can see in figure 2.6 that the orders service has a lot of responsibility. It directly orchestrates the actions of every other service involved in the process of placing an order. This is conceptually simple, but it has downsides. At worst, your other services become anemic, with many dumb services controlled by a small number of smart services, and those smart services grow larger and larger.

This approach can lead to tighter coupling. If you want to introduce a new part of this process — let’s say you want to notify a customer’s account manager when a large order is placed — you’re forced to deploy new changes to the orders service. This increases the cost of change. In theory, if the orders service doesn’t need to synchronously confirm the result of an action — only that it’s received a request — then it shouldn’t need to have any knowledge of those downstream actions.

2.3.3 Service choreography

Within a microservice application, services will naturally have differing levels of responsibility. But you should balance orchestration with choreography. In a choreographed system, a service doesn’t need to directly command and trigger actions in other services. Instead, each service owns specific responsibilities, which it performs in reaction to other events.

Let’s revisit the earlier design and make a few tweaks:

  1. When someone creates an order, the market might not currently be open. Therefore, you need to record what status an order is in: created or placed. Placement of an order doesn’t need to be synchronous.
  2. You’ll only charge a fee once an order is placed, so charging fees doesn’t need to be synchronous. In fact, it should happen in reaction to the market service, rather than being orchestrated by the orders service.

Figure 2.7 illustrates the changed design. Adding events adds an architectural concern: you need some way of storing them and exposing them to other applications. We’d recommend using a message queue for that purpose, such as RabbitMQ or SQS.

In this design, we’ve removed the following responsibility from the orders service:

  • Charging fees — The orders service has no awareness that a fee is charged once an order is placed to market.
  • Placing orders — The orders service has no direct interaction with the market service. You could easily replace this with a different implementation, or even a service per market, without needing to change the orders service itself.

Figure 2.7 You choreograph the behavior of other services through events, reducing the coordinating role of the orders service. Note that some actions, such as the two actions numbered 3, happen concurrently.

The orders service itself also reacts to the behavior of other services by subscribing to the OrderPlaced event emitted by the market service. You can easily extend this to further requirements; for example, the orders service might subscribe to TradeExecuted events to record when the sale has been completed on the market or OrderExpired events if the sale can’t be made within a certain timeframe.
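
As a rough sketch of what this choreography might look like with RabbitMQ, the fees service below consumes OrderPlaced events published by the market service, using the pika client library. The exchange, queue, and event field names are assumptions for illustration:

import json
import pika

def charge_fee(account, order_id):
    # Placeholder for the fees service's own charging logic.
    print(f"charging fee to {account} for order {order_id}")

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# The market service publishes every order event to a fanout exchange...
channel.exchange_declare(exchange="order-events", exchange_type="fanout")

# ...and the fees service binds its own queue to that exchange.
channel.queue_declare(queue="fees.order-placed")
channel.queue_bind(queue="fees.order-placed", exchange="order-events")

def on_order_placed(ch, method, properties, body):
    event = json.loads(body)
    if event["type"] == "OrderPlaced":
        charge_fee(event["account"], event["order_id"])
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="fees.order-placed", on_message_callback=on_order_placed)
channel.start_consuming()

The orders service can subscribe to the same exchange to update an order’s status. Neither service needs any knowledge of the other’s existence; they share only the shape of the event.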

This setup is more complex than the original synchronous collaboration. But by favoring choreography where possible, you’ll build services that are highly decoupled and therefore independently deployable and amenable to change. These benefits do come at a cost: a message queue is another piece of infrastructure to manage and scale and itself can become a single point of failure.

The design we’ve come up with also has some benefit in terms of resiliency. For example, failure in the market service is isolated from failure in the orders service. If placing an order fails, you can replay that event3  later, once the service is available, or expire it if too much time passes. On the other hand, it’s now more difficult to trace the full activity of the system, which you’ll need to consider when you think about how to monitor these services in production.

2.4 Exposing services to the world

So far, we’ve explored how services collaborate to achieve some business goal. How do you expose this functionality to a real user application?

SimpleBank wants to build both web and mobile products. To do this, the engineering team have decided to build an API gateway as a façade over these services. This abstracts away backend concerns from the consuming application, ensuring it doesn’t need to have any awareness of underlying microservices, or how those services interact with each other to deliver functionality. An API gateway delegates requests to underlying services and transforms or combines their responses as appropriate to the needs of a public API.

Imagine the user interface of a place order screen. It has four key functions:

  • Displaying information about the current holdings within a customer’s account, including both quantity and value
  • Displaying market data showing prices and market movements for a holding
  • Inputting orders, including cost calculation
  • Requesting execution of those orders against the specified holdings

Figure 2.8 illustrates how an API gateway serves that functionality, and how that gateway collaborates with underlying services.

The API gateway pattern is elegant but has a few downsides. Because it acts as a single composition point for multiple services, it’ll become large and possibly unwieldy. It may be tempting to add business logic to the gateway, rather than treating it as a proxy alone. And it can suffer from trying to be all things to all applications: a mobile customer application may want a smaller, cut-down payload, whereas an internal administration web application might require significantly more data. It can be hard to balance these competing forces while building a cohesive API.


Figure 2.8 A user interface, such as a web page or mobile app, interacts with the REST API that an API gateway exposes. The gateway provides a façade over underlying microservices and proxies requests to appropriate backend services.
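
The following is a minimal sketch of one such gateway endpoint in Python, using Flask and the requests library. The backend hostnames, paths, and response fields are illustrative assumptions: the endpoint fetches a customer’s holdings, enriches them with current prices, and returns one combined payload to the UI.

import requests
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/accounts/<account_id>/holdings")
def holdings(account_id):
    # Fetch the customer's current positions from the account transactions service.
    positions = requests.get(
        f"http://account-transactions.internal/accounts/{account_id}/positions",
        timeout=3).json()

    # Fetch prices for those positions from the market data service.
    symbols = ",".join(position["symbol"] for position in positions)
    prices = requests.get("http://market-data.internal/prices",
                          params={"symbols": symbols}, timeout=3).json()

    # Combine both responses into the shape the user interface needs.
    for position in positions:
        position["price"] = prices[position["symbol"]]
        position["value"] = position["price"] * position["quantity"]
    return jsonify(positions)

The web and mobile applications only ever talk to this API; which services sit behind it, and how they collaborate, can change without breaking consumers.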

2.5 Taking your feature to production

You’ve designed a feature for SimpleBank that involves the interaction of multiple services, an event queue, and an API gateway. Let’s say you’ve taken the next step: you’ve built those services and now the CEO is pushing you to get them into production.

In public clouds like AWS, Azure, or GCE, the obvious solution is to deploy each service to a group of virtual machines. You could use load balancers to spread load evenly across instances of each web-facing service, or you could use a managed event queue, such as AWS’s Simple Queue Service, to distribute events between services.

Anyway — you compiled that code, FTP’d it onto those VMs, got the databases up and running, and tried some test requests. This took a few days. Figure 2.9 shows your production infrastructure.

For a few weeks, that didn’t work too badly. You made a few changes and pushed out the new code. But soon you started to run into trouble. It was hard to tell whether the services were working as expected. Worse, you were the only person at SimpleBank who knew how to release a new version. Even worse than that, the engineer who wrote the transaction service went on vacation for a few weeks, and no one else knew how that service was deployed. These services had a bus factor of 1 — they wouldn’t survive the disappearance of a single team member.

Something was definitely wrong. You remembered that in your last job at GiantBank, the infrastructure team managed releases. You’d log a ticket, argue back and forth, and after a few weeks, you’d have what you needed…or sometimes not, so you’d log another ticket. That doesn’t seem like the right approach either. In fact, you were glad that using microservices allowed you to manage deployment.


Figure 2.9 In a simple microservices deployment, requests to each service are load balanced across multiple instances, running across multiple virtual machines. Likewise, multiple instances of a service may subscribe to a queue.

It’s safe to say that your services weren’t ready for production. Running microservices requires a level of operational awareness and maturity from an engineering team beyond what’s typical in a monolithic application. You can only say a service is production ready if you can confidently trust it to serve production workloads.

How can you be confident a service is trustworthy? Let’s start with a list of questions you might need to consider to achieve production readiness:

  • Reliability — Is your service available and error free? Can you rely on your deployment process to push out new features without introducing instability or defects?
  • Scalability — Do you understand the resource and capacity needs of a service? How will you maintain responsiveness under load?
  • Transparency — Can you observe a service in operation through logs and metrics? If something goes wrong, is someone notified?
  • Fault tolerance — Have you mitigated single points of failure? How do you cope with the failure of other service dependencies?

At this early stage in the lifetime of a microservice application, you need to establish three fundamentals:

  • Quality-controlled and automated deployments
  • Resilience
  • Transparency

Let’s examine how these fundamentals will help you address the problems that SimpleBank has encountered.

2.5.1 Quality-controlled and automated deployment

You’ll lose the added development speed you gain from microservices if you can’t get them to production rapidly and reliably. The pain of unstable deployments — such as introducing a serious error — will eliminate those speed gains.

Traditional organizations often seek stability by introducing (often bureaucratic) change control and approval processes. They’re designed to manage and limit change. This isn’t an unreasonable impulse: if changes introduce most bugs4  — costing the company thousands (or millions) of dollars of engineering effort and lost revenue — then you should closely control those changes.

In a microservice architecture, this won’t work, because the system will be in a state of continuous evolution; it’s this freedom that gives rise to tangible innovation. But to ensure that freedom doesn’t lead to errors and outages, you need to be able to trust your development process and deployment. Equally, to enable such freedom in the first place, you also need to minimize the effort required to release a new service or change an existing one. You can achieve stability through standardization and automation:

  • You should standardize the development process. You should review code changes, write appropriate tests, and maintain version control of the source code. We hope this doesn’t surprise anyone!
  • You should standardize and automate the deployment process. You should thoroughly validate the delivery of a code change to production, and it should require minimal intervention from an engineer. This is a deployment pipeline.

2.5.2 Resilience

Ensuring a software system is resilient in the face of failure is a complicated task. The infrastructure underpinning your systems is inherently unreliable; even if your code is perfect, network calls will fail and servers will go down. As part of designing a service, you need to consider how it and its dependencies may fail and proactively work to avoid — or minimize the impact of — those failure scenarios.

Table 2.2 examines the potential areas of risk in the system that SimpleBank has deployed. You can see that even a relatively simple microservice application introduces several areas of potential risk and complexity.

Table 2.2 Areas of risk in SimpleBank’s microservice application
  • Hardware — Hosts, data center components, physical network
  • Communication between services — Network, firewall, DNS errors
  • Dependencies — Timeouts, external dependency outages, internal failures (for example, supporting databases)
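
You can’t design these failures away, but you can limit their impact. As a small sketch, assuming the illustrative account transactions endpoint used earlier in this chapter, the call below bounds how long the orders service will wait on a dependency and retries once before surfacing the failure:

import requests

def get_positions(account_id, retries=1):
    url = f"http://account-transactions.internal/accounts/{account_id}/positions"
    for attempt in range(retries + 1):
        try:
            # Never wait indefinitely on another service: always set a timeout.
            response = requests.get(url, timeout=1)
            response.raise_for_status()
            return response.json()
        except requests.RequestException:
            # On the final attempt, surface the error so the caller can degrade gracefully.
            if attempt == retries:
                raise

Timeouts and retries are only a starting point; the broader point is that every remote call should be written with the assumption that it can fail.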

2.5.3 Transparency

The behavior and state of a microservice should be observable: at any time, you should be able to determine whether the service is healthy and whether it’s processing its workload in the way you expect. If something affects a key metric — say, orders are taking too long to be placed to market — this should send an actionable alert to the engineering team.

We’ll illustrate this with an example. Last week, there was an outage at SimpleBank. A customer called and told you she was unable to submit orders. A quick investigation revealed that this was affecting every customer: requests made to the order creation service were timing out. Figure 2.10 illustrates the possible points of failure within that service.


Figure 2.10 A service timeout may be due to several underlying reasons: network issues, problems with service-internal dependencies — such as databases — or unhealthy behavior from other services.

It was clear that you had a major operational problem: you lacked logging to determine exactly what went wrong and where things were falling apart. Through manual testing, you managed to isolate the problem: the account transaction service was unresponsive. Meanwhile, your customers had been unable to place orders for several hours. They weren’t happy.

To avoid such problems in the future, you need to add thorough instrumentation to your microservices. Collecting data about application activity — at all layers — is vital to understanding the present and past operational behavior of a microservice application.

As a first step, SimpleBank set up infrastructure to aggregate the basic logs that your services produced, sending them to a service that allowed you to tag and search them.5  Figure 2.11 illustrates this approach. By doing this, the next time a service failed, the engineering team could use those logs to identify the point where the system began to fail and diagnose the issue precisely where it occurred.

But inadequate logging wasn’t the only problem. It was embarrassing that SimpleBank only identified an issue once a customer called. The company should have had alerting in place to ensure that each service was meeting its responsibilities and service goals.

In its simplest form, this means a recurring heartbeat check on each service that alerts the team if a service becomes completely unresponsive. Beyond that, a team should commit to operational guarantees for each service. For example, for a critical service, you might aim for 95% of requests to return in under 100ms with 99.99% uptime. Failing to meet these thresholds should result in alerts being sent to the service owners.
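
A heartbeat check can be as simple as an HTTP endpoint that a monitoring job polls on a schedule. The sketch below uses Flask; the dependency check is a placeholder, and how the alert is raised depends on whatever monitoring tooling you use.

from flask import Flask, jsonify

app = Flask(__name__)

def database_reachable():
    # Placeholder: a real check would ping the service's own database.
    return True

@app.route("/health")
def health():
    checks = {"database": database_reachable()}
    healthy = all(checks.values())
    # A monitoring job polls this endpoint; a non-200 response, or no response
    # at all, should trigger an alert to the team that owns the service.
    status_code = 200 if healthy else 503
    return jsonify(status="ok" if healthy else "degraded", checks=checks), status_code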


Figure 2.11 You install a logging collection agent on each instance. This ships application log data to a central repository where you can index, search, and analyze it further.

Building thorough monitoring for a microservice application is a complex task. The depth of monitoring you apply will evolve as your system increases in complexity and number of services. As well as the operational metrics and logging we’ve described, a mature microservice monitoring solution will address business metrics, interservice tracing, and infrastructure metrics. If you are to trust your services, you need to constantly work at making sense of that data.

2.6 Scaling up microservice development

The technical flexibility of microservices is a blessing for the speed of development and the effective scalability of a system. But that same flexibility also leads to organizational challenges that change the nature of how an engineering team works at scale. You’ll quickly encounter two challenges: technical divergence and isolation.

2.6.1 Technical divergence

Imagine SimpleBank has built a large microservice system of, say, 1,000 services. A small team of engineers owns each service, with each team using their preferred languages, their favorite tools, their own deployment scripts, their favored design principles, their preferred external libraries,6  and so on.

Take a moment to recoil in terror at the sheer weight of effort involved in maintaining and supporting so many different approaches. Although microservices make it possible to choose different languages and frameworks for different services, it’s easy to see that without choosing reasonable standards and limits, the system will become an unimaginable and fragile sprawl.

It’s easy to see this frustration emerge on a smaller scale. Consider two services — account transactions and orders — that two different teams own. The first service produces well-structured log output for every request, including helpful diagnostic information such as timings, a request ID, and the currently released revision ID:

service=api
git_commit=d670460b4b4aece5915caf5c68d12f560a9fe3e4
request_id=55f10e07-ec6c
request_ip=1.2.3.4
request_path=/users
response_status=500
error_id=a323-da321
parameters={ id: 1 }
user_id=123
timing_total_ms=223

The second service produces anemic messages in a difficult-to-parse format:

Processed /users in 223ms with response 500

You can see that even in this simple example of log message format, consistency and standardization would make it easier to adequately diagnose issues and trace requests across multiple services. It’s crucial to agree on reasonable standards at all layers of your microservice system to manage divergence and sprawl.
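
One lightweight way to converge on the first format is to make the structure part of a shared logging helper rather than something each team hand-rolls. Here’s a sketch in Python; the field names mirror the example above, and the helper itself is hypothetical:

import logging
import uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
logger = logging.getLogger("api")

def log_request(path, status, duration_ms, **extra):
    fields = {
        "service": "api",
        "request_id": str(uuid.uuid4()),
        "request_path": path,
        "response_status": status,
        "timing_total_ms": duration_ms,
    }
    fields.update(extra)
    # Emit space-separated key=value pairs so a log aggregator can index each field.
    logger.info(" ".join(f"{key}={value}" for key, value in fields.items()))

log_request("/users", 500, 223, user_id=123, error_id="a323-da321")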

2.6.2 Isolation

In chapter 1, we mentioned Conway’s Law. In an organization that works with microservices, the inverse of this law is likely to be true: the structure of the company is determined by the architecture of its product.

This suggests that development teams will increasingly reflect microservices: they’ll be highly specialized to do one thing well. Each team will own and be accountable for several closely related microservices. Taken collectively, the developers will know everything there is to know about a system, but individually they’ll have a narrow area of specialization. As SimpleBank’s customer base and product complexity grow, this specialization will deepen.

This configuration can be immensely challenging. Microservices have limited value by themselves and don’t function in isolation. Therefore, these independent teams must collaborate closely to build an application that runs seamlessly, even though their goals as a team likely relate to their own narrower area of ownership. Likewise, a narrow focus may tempt a team to optimize for their local problems and preferences, rather than the needs of the whole organization. At its worst, this could lead to conflict between teams, in turn leading to slower deployment and a less reliable product.

2.7 What’s next?

In this chapter, we established that microservices were a good fit for SimpleBank, designed a new feature, and considered how you might make that feature production ready. We hope this case study has shown that a microservice-driven approach to application development is both compelling and challenging!

Throughout the rest of this book, we’ll teach you the techniques and tools you need to know to run a great microservice application. Although microservices can lead to both flexible and highly productive development, running multiple distributed services is much more demanding than running a single application. To avoid instability, you need to be able to design and deploy services that are production ready: transparent, fault-tolerant, reliable, and scalable.

In part 2, we’ll focus on design. Effectively designing a system of distributed, interdependent services requires careful consideration of your system domain and how those services interact. Being able to identify the right boundaries between responsibilities — and therefore build highly cohesive and loosely coupled services — is one of the most valuable skills for any microservice practitioner.

Summary

  • Microservices are highly applicable in systems with multiple dimensions of complexity — for example, breadth of product offering, global deployment, and regulatory pressures.
  • It’s crucial to understand the product domain when designing microservices.
  • Service interactions may be orchestrated or choreographed. The latter adds complexity but can lead to a more loosely coupled system.
  • API gateways are a common pattern for abstracting away the complexity of a microservice architecture for front-end or external consumers.
  • You can say a service is production ready if you can trust it to serve production workloads.
  • You can be more confident in a service if you can reliably deploy and monitor it.
  • Service monitoring should include log aggregation and service-level health checks.
  • Microservices can fail because of problems with hardware, communication, and dependencies, not just defects in code.
  • Collecting business metrics, logs, and interservice traces is vital to understanding the present and past operational behavior of a microservice application.
  • Technical divergence and isolation will become increasingly challenging for an engineering organization as the number of microservices (and supporting teams) increases.
  • Avoiding divergence and isolation requires standards and best practices to be similar across multiple teams, regardless of technical underpinnings.