The microservices architecture expands the attack surface, with multiple microservices communicating with each other remotely. Instead of one or two entry points, we now have hundreds to worry about. It’s a common principle in security that a system is only as strong as its weakest link. The more entry points we have, the broader the attack surface and the higher the risk of attack. Unlike in a monolithic application, the depth and breadth we need to worry about in securing microservices is much greater. There are multiple perspectives on securing microservices: the secure development lifecycle and test automation, security in DevOps, and application-level security.
Note
In 2010, it was discovered that since 2006, a gang of robbers equipped with a powerful vacuum cleaner had stolen more than 600,000 euros from the Monoprix supermarket chain in France. The most interesting thing was how they did it. They found the weakest link in the system and attacked it. To transfer money directly into the store’s cash coffers, cashiers slid tubes filled with money through pneumatic suction pipes. The robbers realized that it was sufficient to drill a hole in the pipe near the trunk and connect a vacuum cleaner to capture the money. They never had to deal with the coffer shield.
The key driving force behind the microservices architecture is speed to production (or time to market). One should be able to introduce a change to a service, test it, and instantly deploy it into production. A proper secure development lifecycle and test automation strategy must be in place to make sure that we do not introduce security vulnerabilities at the code level. We need a proper plan for static code analysis and dynamic testing — and most importantly, those tests should be part of the continuous delivery (CD) process. Any vulnerability should be identified early in the development lifecycle, with short feedback cycles.
There are multiple microservices deployment patterns, but the most commonly used one is the service-per-host model. The host does not necessarily mean a physical machine — most probably it would be a container (Docker). DevOps security needs to worry about container-level security: how do we isolate a container from other containers, and what level of isolation exists between the container and the host operating system? Apart from containers, Kubernetes, as a container orchestration platform, introduces another level of isolation in the form of a pod. Now we need to worry about securing the communication between containers as well as between pods. In Chapter 8, “Deploying and Running Microservices,” we discussed containers and security in detail. Another important pattern with respect to microservices deployment is the Service Mesh, which we discussed in detail in Chapter 9, “Service Mesh”. In a typical containerized deployment in Kubernetes, the communication between pods happens through the Service Mesh — to be precise, through the Service Mesh proxy. The Service Mesh proxy is then responsible for applying and enforcing security between two microservices.
How do we authenticate users and control their access to microservices, and how do we secure the communication channels between microservices? All of this falls under application-level security. This chapter covers security fundamentals along with a set of patterns to address the challenges we face in securing microservices at the application level. What does it mean to secure a microservice? How does securing a microservice differ from securing any other service? What’s so special about microservices? All these questions will be addressed in this chapter. Security at development time and in the deployment process (with Docker and Kubernetes) is out of the scope of this book. We encourage readers who are keen to learn all aspects of microservices security to refer to a book specifically focused on that topic.
Monolith versus Microservices
In a Java EE environment, the interceptor can be a servlet filter. This servlet filter will intercept all the requests coming to its registered contexts and will enforce authentication. The service invoker should either carry valid credentials or a session token that can be mapped to a user. Once the servlet filter finds the user, it can create a login context and pass it to the downstream components. Each downstream component can identify the user from the login context to do any authorization.
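As a minimal sketch of this flow, we can strip away the servlet API entirely: the request becomes a plain map of headers, and the session store is a hypothetical in-memory map. Everything here (the token store, the header name, the response strings) is illustrative, not a real servlet filter implementation.

```java
import java.util.Map;
import java.util.Optional;

// A simplified stand-in for a servlet filter: it intercepts a request,
// maps a session token to a user, and passes a login context downstream.
public class AuthFilterSketch {

    // Hypothetical session store: token -> username.
    static final Map<String, String> SESSIONS = Map.of("token-123", "alice");

    // Returns the login context (here, just the username) if the request
    // carries a valid session token; empty otherwise.
    static Optional<String> authenticate(Map<String, String> headers) {
        String token = headers.get("Authorization");
        return Optional.ofNullable(token == null ? null : SESSIONS.get(token));
    }

    // A downstream component uses the login context for authorization.
    static String handle(Map<String, String> headers) {
        return authenticate(headers)
                .map(user -> "Hello, " + user)
                .orElse("401 Unauthorized");
    }

    public static void main(String[] args) {
        System.out.println(handle(Map.of("Authorization", "token-123")));  // Hello, alice
        System.out.println(handle(Map.of()));                              // 401 Unauthorized
    }
}
```

In a real Java EE deployment, the same logic would live in a `javax.servlet.Filter` registered against the protected contexts.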
The challenge here is how we authenticate the user and pass the login context between microservices in a symmetric manner — and how each microservice authenticates to the others and authorizes the user. The following section explains different techniques to secure service-to-service communication in the microservices architecture, covering both authentication and authorization, as well as propagating the user context across different microservices.
Securing Service-to-Service Communication
Service-to-service communication can happen synchronously via HTTP or asynchronously via event-driven messaging. In Chapter 3, “Inter-Service Communication,” we discussed both synchronous and asynchronous messaging between microservices. There are two common approaches to secure service-to-service communication. One is based on JSON Web Token (JWT) and the other is based on Transport Layer Security (TLS) mutual authentication. In the following section, we look at the role of JWT in securing service-to-service communication in the microservices architecture.
JSON Web Token (JWT)
Propagate one’s identity information between interested parties. For example, user attributes such as first name, last name, email address, phone number, etc.
Propagate one’s entitlements between interested parties. The entitlements define what the user is capable of doing at a target system.
Transfer data securely between interested parties over an unsecured channel. A JWT can be used to transfer signed and/or encrypted messages.
Assert one’s identity, given that the recipient of the JWT trusts the asserting party (the token issuer). For example, the issuer of a JWT can sign the payload with its private key, which protects the integrity of the message so that no one in the middle can alter it. The recipient can validate the signature of the JWT by verifying it with the corresponding public key of the issuer. If the recipient trusts the public key known to it, it also trusts the issuer of the JWT.
A JWT can be signed, encrypted, or both. A signed JWT is known as a JWS (JSON Web Signature) and an encrypted JWT is known as a JWE (JSON Web Encryption). In fact, a JWT does not exist by itself — it has to be either a JWS or a JWE. It’s like an abstract class — JWS and JWE are the concrete implementations. Let me be a little more precise here. JWS and JWE have a broader meaning beyond JWT. JWS defines how to serialize (or represent) any signed message payload; it can be JSON, XML, or any other format. In the same way, JWE defines how to serialize an encrypted payload. Both JWS and JWE support two types of serialization: compact serialization and JSON serialization. We call a JWS or a JWE a JWT only if it follows compact serialization. Any JWT must follow compact serialization. In other words, a JWS or JWE token that follows JSON serialization cannot be called a JWT.
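Compact serialization is easy to see in code: the token is three base64url-encoded parts joined by periods. The sketch below builds and takes apart a hand-made, unsigned token purely for illustration (the header and claim values are made up, and a real JWT would of course carry a signature in the third part).

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// A compact-serialized JWS is three base64url-encoded parts joined by
// periods: JOSE header, payload (the JWT claim set), and signature.
public class CompactSerialization {

    static String[] split(String jwt) {
        return jwt.split("\\.", -1);  // keep the empty signature part, if any
    }

    static String decodePart(String part) {
        return new String(Base64.getUrlDecoder().decode(part), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        String header = Base64.getUrlEncoder().withoutPadding()
                .encodeToString("{\"alg\":\"none\"}".getBytes(StandardCharsets.UTF_8));
        String payload = Base64.getUrlEncoder().withoutPadding()
                .encodeToString("{\"sub\":\"alice\"}".getBytes(StandardCharsets.UTF_8));
        String jwt = header + "." + payload + ".";  // unsigned, for illustration only

        String[] parts = split(jwt);
        System.out.println(parts.length);          // 3
        System.out.println(decodePart(parts[0]));  // {"alg":"none"}
        System.out.println(decodePart(parts[1]));  // {"sub":"alice"}
    }
}
```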
Note
Further details on JWS and JWE are out of the scope of this book. Any interested readers who want a detailed walk-through of JWS and JWE should refer to this blog: “JWT, JWS and JWE for Not So Dummies!” at https://medium.facilelogin.com/jwt-jws-and-jwe-for-not-so-dummies-b63310d201a3 .
The first part (once the token is split by the periods) of the JWT is known as the JOSE header. JOSE stands for JavaScript Object Signing and Encryption, and it’s the name of the IETF working group that works on standardizing the representation of integrity-protected data using JSON data structures.
The JOSE header indicates that it’s a signed message, with the signing algorithm given under the alg parameter. The token issuer asserts the identity of the end user by signing the JWT, which carries data related to the user’s identity. Neither the alg nor the kid element is defined in the JWT specification; both come from the JSON Web Signature (JWS) specification. The JWT specification only defines two elements (typ and cty) in the JOSE header, and both the JWS and JWE specifications extend it with more appropriate elements.
Note
Both JWS and JWE compact serialization use base64url encoding. It is a slight variation of the popular base64 encoding. The base64 encoding defines how to represent binary data in an ASCII string format. Its objective is to transmit binary data such as keys or digital certificates in a printable format. This type of encoding is needed if these objects are transported as part of an email body, a web page, an XML document, or a JSON document.
To do base64 encoding, first the binary data is grouped into 24-bit groups. Then each 24-bit group is divided into four 6-bit groups. A printable character represents each 6-bit group based on its bit value in decimal. For example, the decimal value of the 6-bit group 000111 is 7. As per Figure 11-5, the character H represents this 6-bit group. Apart from the characters shown in Figure 11-5, the character = is used for a special processing function: padding. If the length of the original binary data is not an exact multiple of 24, we need padding. Let’s say the length is 232, which is not a multiple of 24. We then need to pad this binary data to make its length equal to the next multiple of 24, which is 240. In other words, we need to pad this binary data by 8 bits to make its length 240. In this case, padding is done by adding eight 0s to the end of the binary data. Now, when we divide the 240 bits by 6 to build 6-bit groups, the last 6-bit group will be all zeros, and this complete group will be represented by the padding character =.
One issue with base64 encoding is that it does not work well within URLs. The + and / characters in base64 encoding (see Figure 11-5) have a special meaning when used within a URL. If we try to send a base64-encoded image as a URL query parameter, and the base64-encoded string carries either of these two characters, the browser will interpret the URL incorrectly. The base64url encoding was introduced to address this problem. It works exactly like base64 encoding, with two exceptions: the character - is used in base64url encoding instead of the character +, and the character _ is used instead of the character /.
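The difference is easy to demonstrate with the JDK’s built-in encoders. The input bytes below are chosen specifically so that the standard encoding produces the two problematic characters:

```java
import java.util.Base64;

// base64url replaces '+' with '-' and '/' with '_' so the encoded data is
// URL-safe. JWS/JWE compact serialization also drops the '=' padding.
public class Base64UrlDemo {

    public static void main(String[] args) {
        // These three bytes encode to the 6-bit groups 62, 63, 62, 63,
        // i.e. the characters '+' and '/' in standard base64.
        byte[] data = {(byte) 0xfb, (byte) 0xff, (byte) 0xbf};

        String standard = Base64.getEncoder().encodeToString(data);
        String urlSafe  = Base64.getUrlEncoder().withoutPadding().encodeToString(data);

        System.out.println(standard);  // +/+/
        System.out.println(urlSafe);   // -_-_
    }
}
```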
The JWT claim set is a JSON object whose members are the claims asserted by the JWT issuer. Each claim name within a JWT must be unique. If there are duplicate claim names, the JWT parser will either return a parsing error or return the claim set with only the last of the duplicate claims. The JWT specification does not explicitly define which claims are mandatory and which are optional; it’s up to each application of JWT to define them. For example, the OpenID Connect specification defines the mandatory and optional claims. According to the OpenID Connect core specification, iss, sub, aud, exp, and iat are treated as mandatory elements, while auth_time, nonce, acr, amr, and azp are optional elements. In addition to the mandatory and optional claims defined in the specification, the token issuer can include additional elements in the JWT claim set.
The third part of the JWT (shown in Figure 11-4) is the signature, which is also base64url encoded. The cryptographic elements related to the signature are defined in the JOSE header. In this particular example, the token issuer uses RSASSA-PKCS1-v1_5 with the SHA-256 hashing algorithm, which is expressed by the value of the alg element in the JOSE header: RS256. The signature is calculated over the first two parts of the JWS — the JOSE header and the JWT claim set.
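The signing input and the verification step can be sketched with the JDK’s own crypto API. This is a minimal illustration: the key pair is generated in-memory, and the header and claim set are simplified placeholders rather than a complete JWT.

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;
import java.util.Base64;

// RS256 (RSASSA-PKCS1-v1_5 with SHA-256) is computed over the signing input:
// base64url(JOSE header) + "." + base64url(claim set).
public class Rs256Sketch {

    static String b64url(String s) {
        return Base64.getUrlEncoder().withoutPadding()
                .encodeToString(s.getBytes(StandardCharsets.UTF_8));
    }

    static boolean signAndVerify() {
        try {
            KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
            gen.initialize(2048);
            KeyPair keys = gen.generateKeyPair();  // demo-only in-memory key pair

            String signingInput = b64url("{\"alg\":\"RS256\"}") + "." + b64url("{\"sub\":\"alice\"}");

            // The issuer signs the first two parts with its private key.
            Signature signer = Signature.getInstance("SHA256withRSA");
            signer.initSign(keys.getPrivate());
            signer.update(signingInput.getBytes(StandardCharsets.UTF_8));
            byte[] sig = signer.sign();

            // The recipient verifies with the issuer's public key.
            Signature verifier = Signature.getInstance("SHA256withRSA");
            verifier.initVerify(keys.getPublic());
            verifier.update(signingInput.getBytes(StandardCharsets.UTF_8));
            return verifier.verify(sig);
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(signAndVerify());  // true
    }
}
```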
Propagating Trust and User Identity
The user context can be passed from one microservice to another along with a JWS (see Figure 11-7). Since the JWS is signed by a key known to the calling microservice, it carries both the end user identity (as claimed in the JWT) and the identity of the calling microservice (via the signature). In other words, the calling microservice itself is the issuer of the JWS. To accept the JWS, the recipient microservice first needs to validate its signature against the public key embedded in the JWS itself or retrieved via some other mechanism. That alone is not enough — it then needs to check whether it can trust that key or not. Trust between microservices can be established in multiple ways. One way is to provision the trusted certificates, service by service, to each microservice. Clearly, this would not scale in a microservices deployment. The approach we would like to suggest is to build a private certificate authority (CA), with intermediate certificate authorities used by different microservices teams if the need arises. Then, rather than trusting every individual certificate, the recipient microservices only trust either the root certificate authority or an intermediary. That vastly reduces the overhead in certificate provisioning.
Note
Trust bootstrap is a harder problem to solve. The Secure Production Identity Framework For Everyone (SPIFFE) project builds an interesting solution around this, which can be used to bootstrap trust between different nodes in a microservices deployment. With SPIFFE, each node gets an identifier and a key pair, which can be used to authenticate to the other nodes it communicates with.
In the JWT, the public key corresponding to the key used to sign the token represents the caller (the calling microservice). How does the recipient microservice find the end user information? The JWT carries a parameter called sub in its claim set, which represents the subject, or the user who owns the JWT. If any microservice needs to identify the user during its operations, this is the attribute it should look at. The value of the sub attribute is unique only for a given issuer. If you have a microservice that accepts tokens from multiple issuers, the uniqueness of the user should be decided by the combination of the issuer and the sub attribute. In addition to the subject identifier, the JWT can also carry user attributes such as first_name, last_name, email, and so on.
Note
When we pass the user context between microservices via a JWT, each microservice has to bear the cost of JWT validation, which includes a cryptographic operation to validate the token signature. Caching the data extracted out of the JWT at the microservice level would reduce the impact of this repetitive token validation. The cache expiration time must match the JWT expiration time. Conversely, if the JWT expiration time is quite short, the benefit of caching will be quite low.
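Such a cache can be sketched in a few lines. Everything here is illustrative: the expiry is hard-coded to 60 seconds where a real implementation would read the token’s exp claim, and the validation itself is stubbed out.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Caches the claims extracted from a validated JWT, keyed by the raw token.
// The cache entry expires when the JWT itself expires, so the expensive
// signature check runs only once per token lifetime.
public class JwtCacheSketch {

    record Entry(String claims, long expiresAtMillis) {}

    static final Map<String, Entry> CACHE = new ConcurrentHashMap<>();
    static int validations = 0;

    static String getClaims(String token, long nowMillis) {
        Entry cached = CACHE.get(token);
        if (cached != null && cached.expiresAtMillis > nowMillis) {
            return cached.claims;  // cache hit: no cryptographic validation needed
        }
        // Cache miss: validate the signature and parse the claims (stubbed).
        String claims = validateAndParse(token);
        CACHE.put(token, new Entry(claims, nowMillis + 60_000));  // would match the JWT's exp
        return claims;
    }

    // Stub standing in for real signature validation and claim extraction.
    static String validateAndParse(String token) {
        validations++;
        return "{\"sub\":\"alice\"}";
    }

    public static void main(String[] args) {
        getClaims("token-abc", 0);
        getClaims("token-abc", 1_000);   // within expiry: served from the cache
        getClaims("token-abc", 61_000);  // expired: validated again
        System.out.println(validations); // 2
    }
}
```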
A JWT is issued by a token issuer to a given audience, which is the intended consumer of the token. For example, if the microservice foo wants to talk to the microservice bar, then the token is issued by foo (or a third-party issuer), and the audience of the token is bar. The aud parameter in the JWT claim set specifies the intended audience of the token; it can be a single recipient or a set of recipients. Prior to any other validation check, the token recipient must first check whether the particular JWT was issued for its use; if not, it should reject the token immediately. The token issuer should know, prior to issuing the token, who the intended recipient (or recipients) of the token is, and the value of the aud parameter must be a value pre-agreed between the token issuer and the recipients. In a microservices environment, we can use a wildcard to validate the audience of the token. For example, the value of aud in the token can be *.facilelogin.com, while each recipient under the facilelogin.com domain has its own aud value: foo.facilelogin.com, bar.facilelogin.com, and so on.
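A minimal wildcard audience check might look like the following sketch. The matching rule (a leading `*.` matches any host in the domain) and the example values are illustrative assumptions, not a standardized algorithm.

```java
// Audience check sketch: the token's aud value may be a wildcard such as
// "*.facilelogin.com"; each recipient matches it against its own identifier.
public class AudienceCheck {

    static boolean audienceMatches(String tokenAud, String myAud) {
        if (tokenAud.startsWith("*.")) {
            // "*.facilelogin.com" -> match anything ending in ".facilelogin.com"
            return myAud.endsWith(tokenAud.substring(1));
        }
        return tokenAud.equals(myAud);
    }

    public static void main(String[] args) {
        System.out.println(audienceMatches("*.facilelogin.com", "foo.facilelogin.com"));   // true
        System.out.println(audienceMatches("*.facilelogin.com", "bar.evil.com"));          // false
        System.out.println(audienceMatches("bar.facilelogin.com", "bar.facilelogin.com")); // true
    }
}
```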
Transport Layer Security (TLS) Mutual Authentication
Transport Layer Security (TLS) mutual authentication, also known as client authentication or two-way Secure Socket Layer (SSL), is part of the TLS handshake process. In one-way TLS, only the server proves its identity to the client; this is mostly used in e-commerce to win consumer confidence by guaranteeing the legitimacy of the e-commerce vendor. In contrast, mutual authentication authenticates both parties—the client and the server. In a microservices environment, TLS mutual authentication can be used between microservices to authenticate each other.
Both in TLS mutual authentication and with the JWT-based approach, each microservice needs to have its own certificates. The difference between the two approaches is that, in JWT-based authentication, the JWS can carry the end user identity as well as the upstream service identity. With TLS mutual authentication, the end user identity has to be passed at the application level—probably as an HTTP header.
Certificate Revocation
Both with TLS mutual authentication and with the JWT-based approach, certificate revocation is a bit tricky. It is a hard problem to solve, though there are multiple options available: CRL (Certificate Revocation List / RFC 2459), OCSP (Online Certificate Status Protocol / RFC 2560), OCSP stapling (RFC 6066), and OCSP must stapling.
With CRLs, the certificate authority (CA) has to maintain a list of revoked certificates. The client who initiates the TLS handshake has to get this potentially long list of revoked certificates from the corresponding certificate authority and then check whether the server certificate is on the list. Instead of doing that for each request, the client can cache the CRL locally. Then you run into the problem of making security decisions based on stale data. When TLS mutual authentication is used, the server also has to do the same certificate verification for the client. CRLs are not an often-used technique anymore; eventually people recognized that CRLs were not going to work and started building something new, which is OCSP.
In the OCSP world, things are a little better than with CRLs. The TLS client can check the status of a specific certificate without downloading the whole list of revoked certificates from the certificate authority. In other words, each time the client talks to a new downstream microservice, it has to talk to the corresponding OCSP responder to validate the status of the server (or service) certificate, and the server has to do the same for the client certificate. That creates considerable traffic at the OCSP responder. Once again, clients can cache the OCSP decision, but that leads to the same old problem of making decisions based on stale data.
With OCSP stapling, the client does not need to go to the OCSP responder each time it talks to a downstream microservice. The downstream microservice gets the OCSP response from the corresponding OCSP responder and staples, or attaches, the response to its certificate. Since the corresponding certificate authority signs the OCSP response, the client can accept it by validating the signature. This makes things a little better: instead of the client, the service talks to the OCSP responder. But in a mutual TLS authentication model, this won’t bring any additional benefit compared to plain OCSP, since the server still has to check the revocation status of the client certificate.
With OCSP must stapling, the service (the downstream microservice) gives a guarantee to the client (the upstream microservice) that the OCSP response is attached to the service certificate the client receives during the TLS handshake. If the OCSP response is not attached to the certificate, rather than failing soft, the client must immediately reject the connection.
Short-Lived Certificates
From the end user’s perspective, short-lived certificates behave exactly like the normal certificates in use today; they just have a very short expiration. The TLS client need not worry about doing CRL or OCSP validation against short-lived certificates and can instead rely on the expiration time stamped on the certificate itself.
Edge Security
In Chapter 7, “Integrating Microservices,” and Chapter 10, “APIs, Events, and Streams,” we discussed different techniques for exposing microservices to the rest of the world. One common approach discussed there was the API gateway pattern. With the API gateway pattern (see Figure 11-9), the microservices that need to be exposed to the outside world have a corresponding API in the API gateway. Not all microservices need to be exposed through the API gateway.
OAuth 2.0
OAuth 2.0 is a framework for access delegation. It lets someone do something on behalf of someone else. There are four main characters in an OAuth 2.0 flow: the client, the authorization server, the resource server, and the resource owner. Let’s say you build a web application that lets users export their Flickr photos into it. In that case, your web application has to access the Flickr API to export photos on behalf of the users who actually own the photos. Here, the web application is the OAuth 2.0 client, Flickr is the resource server (which holds its users’ photos), and the Flickr user who wants to export photos to the web application is the resource owner. For your application to access the Flickr API on behalf of the Flickr user, it needs some sort of authorization grant. The authorization server issues the authorization grant; in this case, it is Flickr itself. In practice, however, there are many cases where the authorization server and the resource server are two different entities. OAuth 2.0 does not couple the two together.
Note
An OAuth 2.0 client can be a web application, a native mobile application, a single-page application, or even a desktop application. Whatever the client is, it should be known to the authorization server. Each OAuth client has an identifier, known as the client ID, given to it by the authorization server. Whenever a client communicates with the authorization server, it has to pass its client ID. In some cases, the client has to use some kind of credentials to prove who it is. The most popular form of credentials is the client secret, which is like a password. However, it is recommended that OAuth clients use stronger credentials, like certificates or JWTs, where possible.
OAuth 2.0 introduces multiple grant types. A grant type in OAuth 2.0 defines the protocol by which the client gets the resource owner’s consent to access a resource on the resource owner’s behalf. There are also grant types that define a protocol for getting a token on the client’s own behalf (client_credentials) — in other words, where the client is also the resource owner. Figure 11-10 illustrates the OAuth 2.0 protocol at a very high level. It describes the interactions between the OAuth client, the resource owner, the authorization server, and the resource server.
Note
The OAuth 2.0 core specification (RFC 6749) defines five grant types: authorization code, implicit, password, client credentials, and refresh. The authorization code grant type is the most popular, used by more than 70% of web applications. In fact, it is the recommended grant type for most use cases, whether you have a web application, a native mobile application, or even a single-page application (SPA).
Whoever wants to access a microservice via the API gateway must first get a valid OAuth token (see Figure 11-11). A system can access a microservice just by being itself, or on behalf of another user. An example of the latter is when a user logs in to a web application and the web application accesses a microservice on behalf of the logged-in user. When a system wants to access an API on behalf of another user, authorization code is the recommended grant type. When a system accesses an API on its own behalf, we can use the client credentials grant type.
Note
There are two types of OAuth 2.0 access tokens: reference tokens and self-contained tokens. A reference token is an arbitrary string issued by the authorization server to the client application to be used against a resource server. It must have a proper length and should be unpredictable. Whenever a resource server sees a reference access token, it has to talk to the corresponding authorization server to validate it. A self-contained access token is a signed JWT (or a JWS). To validate a self-contained access token, the resource server does not need to talk to the authorization server. It can validate the token by validating its signature.
- 1.
The user logs into the web app/mobile app via the identity provider, which the web app/mobile app trusts via OpenID Connect (this can be SAML 2.0 too). OpenID Connect is an identity federation protocol built on top of OAuth 2.0. SAML 2.0 is another similar identity federation protocol.
- 2.
The web app gets an OAuth 2.0 access_token, a refresh_token, and an id_token. The id_token identifies the end user to the web app; OpenID Connect introduces the id_token to the OAuth flow. If SAML 2.0 is used, the web app needs to talk to the token endpoint of the OAuth authorization server it trusts and exchange the SAML token for an OAuth access_token, following the SAML 2.0 grant type for OAuth 2.0. Each access_token has an expiration; when the access_token is expired or close to expiration, the OAuth client can use the refresh_token to talk to the authorization server (with no need to involve the end user) and get a new access_token.
- 3.
The web app invokes an API on behalf of the end user — passing the access_token along with the API request.
- 4.
The API gateway intercepts the request from the web app, extracts the access_token, and talks to the Token Exchange endpoint (or the STS), which validates the access_token and then issues a JWT (signed by the STS) to the API gateway. This JWT also carries the user context. While validating the access_token, the STS talks to the corresponding OAuth authorization server via an API (the Introspection API, as defined by RFC 7662).
- 5.
The API gateway passes the JWT through, along with the request, to the downstream microservices.
- 6.
Each microservice validates the JWT it receives and then, for downstream service calls, it can create a new JWT signed by itself and send it along with the request. Another approach is to use a nested JWT, so that the new JWT also carries the previous JWT. There is also a third approach, where each microservice talks to the security token service and exchanges the token it received for a new token that it uses to talk to the other downstream microservices.
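The per-hop token issuing described in step 6 can be sketched as follows. This is a minimal illustration with in-memory keys; real services would use provisioned key pairs, and the headers and claim sets here are simplified placeholders.

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.Signature;
import java.util.Base64;

// Sketch of per-hop token issuing: a microservice validates the JWT it got
// from the gateway, then signs a new JWT with its own key before calling
// the next downstream service.
public class TokenPerHopSketch {

    static String b64(String s) {
        return Base64.getUrlEncoder().withoutPadding()
                .encodeToString(s.getBytes(StandardCharsets.UTF_8));
    }

    static String sign(String claims, PrivateKey key) throws Exception {
        String input = b64("{\"alg\":\"RS256\"}") + "." + b64(claims);
        Signature s = Signature.getInstance("SHA256withRSA");
        s.initSign(key);
        s.update(input.getBytes(StandardCharsets.UTF_8));
        return input + "." + Base64.getUrlEncoder().withoutPadding().encodeToString(s.sign());
    }

    static boolean verify(String jws, PublicKey key) throws Exception {
        int dot = jws.lastIndexOf('.');
        Signature s = Signature.getInstance("SHA256withRSA");
        s.initVerify(key);
        s.update(jws.substring(0, dot).getBytes(StandardCharsets.UTF_8));
        return s.verify(Base64.getUrlDecoder().decode(jws.substring(dot + 1)));
    }

    static boolean demo() {
        try {
            KeyPairGenerator g = KeyPairGenerator.getInstance("RSA");
            g.initialize(2048);
            KeyPair gateway = g.generateKeyPair();   // API gateway / STS key pair
            KeyPair serviceA = g.generateKeyPair();  // microservice A's own key pair

            // The gateway issues a JWT carrying the user context.
            String incoming = sign("{\"sub\":\"alice\"}", gateway.getPrivate());

            // Microservice A validates it, then re-issues the claims under its own key.
            if (!verify(incoming, gateway.getPublic())) {
                return false;
            }
            String outgoing = sign("{\"sub\":\"alice\",\"iss\":\"service-a\"}", serviceA.getPrivate());
            return verify(outgoing, serviceA.getPublic());
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(demo());  // true
    }
}
```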
Note
A detailed explanation of OAuth 2.0 and OpenID Connect is out of the scope of this book. We encourage interested readers to go through the book, Advanced API Security, written by one of the authors of this book and published by Apress.
With this approach, only the API calls coming from external clients go through the API gateway. When one microservice talks to another, the call need not go through the gateway. Also, from a given microservice’s perspective, whether a request comes from an external client or another microservice, what it gets is a JWT — so this is a symmetric security model.
Access Control
XACML (eXtensible Access Control Markup Language) provides another way of defining access control policies. It is, so far, the only standard (by OASIS) for an access control policy language. The following section delves deeply into XACML.
XACML (eXtensible Access Control Markup Language)
XACML is the de facto standard for fine-grained access control. It introduces a way to represent the required set of permissions to access a resource, in a very fine-grained manner, in an XML-based domain-specific language (DSL).
XACML provides a reference architecture, a request/response protocol, and a policy language. The reference architecture defines a Policy Administration Point (PAP), a Policy Decision Point (PDP), a Policy Enforcement Point (PEP), and a Policy Information Point (PIP). This is a highly distributed architecture in which none of the components is tightly coupled. The PAP is where you author policies. The PDP is where policies are evaluated and decisions are made. While evaluating policies, if there is any missing information that can’t be derived from the XACML request, the PDP calls the PIP. The role of the PIP is to feed the PDP any missing information, which can be user attributes or any other required details. The policy is enforced through a PEP, which sits between the client and the service and intercepts all requests. From the client request, it extracts certain attributes such as the subject, the resource, and the action; it then builds a standard XACML request and calls the PDP, getting back an XACML response. That exchange is defined under the XACML request/response model. The XACML policy language defines a schema to create XACML policies for access control.
For example, the XACML request can include the subject identifier, the resource identifier, and the action the given subject is going to perform on the resource. The microservice that needs to authorize the user builds an XACML request by extracting the relevant attributes from the JWT and talks to the PDP. The PIP (Policy Information Point) comes into the picture when the PDP finds that certain attributes required for policy evaluation are missing from the XACML request. The PDP then talks to the PIP to find the missing attributes. The PIP can connect to the relevant datastores, find the attributes, and feed them to the PDP.
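Such a request can be sketched using the JSON representation of XACML (the JSON profile is discussed in the note below). The attribute values here are illustrative; the attribute identifiers are the standard XACML ones for subject, resource, and action.

```java
// Builds a minimal XACML request in the JSON representation defined by the
// JSON profile of XACML 3.0. The subject, resource, and action values would
// normally be extracted from the incoming JWT.
public class XacmlRequestBuilder {

    static String buildRequest(String subject, String resource, String action) {
        return """
            {"Request": {
              "AccessSubject": {"Attribute": [
                {"AttributeId": "urn:oasis:names:tc:xacml:1.0:subject:subject-id",
                 "Value": "%s"}]},
              "Resource": {"Attribute": [
                {"AttributeId": "urn:oasis:names:tc:xacml:1.0:resource:resource-id",
                 "Value": "%s"}]},
              "Action": {"Attribute": [
                {"AttributeId": "urn:oasis:names:tc:xacml:1.0:action:action-id",
                 "Value": "%s"}]}
            }}""".formatted(subject, resource, action);
    }

    public static void main(String[] args) {
        // The microservice would POST this to the PDP and act on the decision.
        System.out.println(buildRequest("alice", "orders", "read"));
    }
}
```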
Note
XACML is an XML-based open standard for policy-based access control developed under the OASIS XACML technical committee. The latest XACML 3.0 specification was standardized in January 2013. See www.oasis-open.org/committees/tc_home.php?wg_abbrev=xacml .
Note
With the increasing popularity and adoption of APIs, it becomes crucial for XACML to be easily understood, to increase the likelihood that it will be adopted. XML is often considered too verbose; developers increasingly prefer a lighter representation using JSON, the JavaScript Object Notation. The profile “Request / Response Interface based on JSON and HTTP for XACML 3.0” defines a JSON format for the XACML request and response.
See https://www.oasis-open.org/committees/document.php?document_id=47775 .
Embedded PDP
Performance cost: Each time we need to do an access control check, the corresponding microservice has to talk to the PDP over the wire. With decision caching at the client side, the transport cost and the cost of policy evaluation can be cut down, but with caching we make security decisions based on stale data.
The ownership of policy information points (PIPs): Each microservice should own its PIPs, which know where to bring in the data required for access control. With this approach, we are building a centralized PDP that has to host the PIPs corresponding to all the microservices.
Monolithic PDP: The centralized PDP becomes another monolithic application. All the policies associated with all the microservices are stored centrally in the monolithic PDP. Introducing changes is hard, as a change to one policy may have an impact on the others, since all the policies are evaluated under the same policy engine.
This approach does not violate the immutable server concept in microservices. An immutable server means that you build servers or containers directly from configuration loaded from a repository at the end of the continuous delivery process, and you should be able to build the same container again and again with the same configuration. So, we would not expect anyone to log in to a server and make configuration changes there. With the embedded PDP model — even though the server loads the corresponding policies while it’s running — if we spin up a new container, it too gets the same set of policies from the corresponding PAP.
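An embedded PDP can be sketched as follows. The policy store here is a hard-coded map standing in for policies pulled from the PAP at startup, and the resource/role names are purely illustrative.

```java
import java.util.List;
import java.util.Map;

// Embedded PDP sketch: each microservice loads its own policies at startup
// (from the PAP, stubbed here as a static map) and evaluates access
// decisions in-process, avoiding a network call per authorization check.
public class EmbeddedPdpSketch {

    // Policies pulled from the PAP when the container spins up (stubbed).
    static final Map<String, List<String>> POLICIES = Map.of(
            "orders:read",  List.of("customer", "admin"),
            "orders:write", List.of("admin"));

    static boolean permit(String resource, String action, String role) {
        return POLICIES.getOrDefault(resource + ":" + action, List.of()).contains(role);
    }

    public static void main(String[] args) {
        System.out.println(permit("orders", "read", "customer"));  // true
        System.out.println(permit("orders", "write", "customer")); // false
    }
}
```

Because the policy data is baked in at startup, spinning up a new container from the same configuration yields the same decisions, which is what keeps the approach compatible with immutable servers.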
There is an important question that’s still unanswered: what is the role of the API gateway in the context of authorization? We can have two levels of policy enforcement. One is global, for all the requests going through the API gateway (which acts as the policy enforcement point, or PEP); the other is at the service level. The service-level policies must be enforced via some kind of interceptor at the container or service level.
Security Sidecar
Note
As discussed in Chapter 9, Istio introduces a sidecar proxy supporting strong identity, powerful policy, transparent TLS encryption, and authentication, authorization, and audit (AAA) tools to protect your services and data. Istio Citadel provides strong service-to-service and end-user authentication with built-in identity and credential management, based on the sidecar and control plane architecture.
The microservice implementation does not need to worry about the internals of the security implementation, and the two need not be written in the same programming language.
When implemented as a sidecar, the security functionality can be reused by other microservices, without worrying about individual implementation details.
The ownership of the security sidecar can be taken on by a separate team with expertise in the security domain, leaving the microservice developers to focus on domain-specific business functionality.
Summary
In this chapter, we discussed the common patterns and fundamentals related to securing microservices. Securing service-to-service communication is the most critical part of securing microservices, where we have two options: JWT and TLS mutual authentication with certificates. Edge security is mostly handled by the API gateway with OAuth 2.0. There are two models for access control in microservices: the centralized PDP and the embedded PDP. Toward the end of the chapter, we also discussed the value of a security sidecar in the microservices architecture. Understanding the fundamentals of securing microservices is key to building a production-ready microservices deployment. In the next chapter, we discuss how to implement security for microservices using Spring Boot.