Server-side service discovery

As already described, services are deployed in clusters, and clients generally do not (and should not) care about which specific instance of the service is honoring the request. In a server-side service discovery architecture, such a cluster is fronted by a Load Balancer (LB), which takes a request and routes it to an appropriate service instance, as shown in the following diagram:

Generally, the LB exposes a set of Virtual IP Addresses (VIPs), one for each service. Each VIP acts as a collective network (IP-layer) endpoint for its service. Against this VIP there is a static list of backend instances, and the LB multiplexes client requests onto this set of backend instances. Even though the list is static, there are various mechanisms to enable automatic reconfiguration, for example, when instances come up or go down.
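
To make the multiplexing concrete, the following is a minimal sketch (not production code) of a load balancer in Go that round-robins requests across a static list of backend instances; the backend addresses and listening port are placeholders.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync/atomic"
)

func main() {
	// Static list of backend instances behind the service's VIP; the
	// addresses here are placeholders.
	backends := []*url.URL{
		{Scheme: "http", Host: "10.0.0.11:8080"},
		{Scheme: "http", Host: "10.0.0.12:8080"},
		{Scheme: "http", Host: "10.0.0.13:8080"},
	}

	var next uint64
	proxy := &httputil.ReverseProxy{
		// Director rewrites each incoming request to target the next
		// backend in round-robin order.
		Director: func(req *http.Request) {
			target := backends[atomic.AddUint64(&next, 1)%uint64(len(backends))]
			req.URL.Scheme = target.Scheme
			req.URL.Host = target.Host
		},
	}

	// Clients connect to this single endpoint (playing the role of the VIP);
	// the LB fans their requests out onto the backend set.
	log.Fatal(http.ListenAndServe(":8000", proxy))
}
```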

A popular open source LB is NGINX (https://www.nginx.com/). It is designed for high performance and extensibility. It consists of a small set of worker processes (usually one per CPU core) that route requests using non-blocking, event-driven I/O, built on the kernel's non-blocking facilities such as epoll and kqueue. The following diagram depicts the architecture of a worker process; more information can be found at https://www.nginx.com/blog/inside-nginx-how-we-designed-for-performance-scale/.

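As an illustration of this event-driven model (not NGINX's actual code), the following Linux-only Go sketch multiplexes many connections on a single thread of execution using epoll, echoing back whatever it reads. It assumes the golang.org/x/sys/unix package, and the port number is a placeholder.

```go
package main

import (
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	// Listening socket bound to a placeholder port.
	lfd, err := unix.Socket(unix.AF_INET, unix.SOCK_STREAM, 0)
	if err != nil {
		log.Fatal(err)
	}
	if err := unix.Bind(lfd, &unix.SockaddrInet4{Port: 9000}); err != nil {
		log.Fatal(err)
	}
	if err := unix.Listen(lfd, 128); err != nil {
		log.Fatal(err)
	}

	// One epoll instance drives all connections, much as an NGINX worker does.
	epfd, err := unix.EpollCreate1(0)
	if err != nil {
		log.Fatal(err)
	}
	unix.EpollCtl(epfd, unix.EPOLL_CTL_ADD, lfd,
		&unix.EpollEvent{Events: unix.EPOLLIN, Fd: int32(lfd)})

	events := make([]unix.EpollEvent, 64)
	buf := make([]byte, 4096)
	for {
		n, err := unix.EpollWait(epfd, events, -1)
		if err != nil {
			continue // interrupted system call; retry
		}
		for i := 0; i < n; i++ {
			fd := int(events[i].Fd)
			if fd == lfd {
				// New connection: register it with epoll.
				cfd, _, err := unix.Accept(lfd)
				if err == nil {
					unix.EpollCtl(epfd, unix.EPOLL_CTL_ADD, cfd,
						&unix.EpollEvent{Events: unix.EPOLLIN, Fd: int32(cfd)})
				}
			} else {
				// Readable connection: echo the data back, or close on error/EOF.
				nr, err := unix.Read(fd, buf)
				if err != nil || nr == 0 {
					unix.Close(fd)
					continue
				}
				unix.Write(fd, buf[:nr])
			}
		}
	}
}
```
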
The NGINX configuration of backend instances is static, but dynamic configuration can be layered on top using ancillary components, such as Consul Template. Essentially, these solutions watch for events about new or dead instances, rewrite the NGINX configuration file, and trigger a graceful reload of the NGINX processes.
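
The following is a rough sketch of what such tooling automates, written against the github.com/hashicorp/consul/api Go client: it blocks until the set of healthy instances changes, rewrites an upstream configuration file, and asks NGINX to reload. The service name ("catalog"), the file path, and the reload command are illustrative assumptions, not a prescribed setup.

```go
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
	"time"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	var lastIndex uint64
	for {
		// Blocking query: returns when the set of healthy instances of the
		// (hypothetical) "catalog" service changes.
		entries, meta, err := client.Health().Service("catalog", "", true,
			&api.QueryOptions{WaitIndex: lastIndex})
		if err != nil {
			log.Println(err)
			time.Sleep(time.Second)
			continue
		}
		lastIndex = meta.LastIndex

		// Rewrite the upstream block of the NGINX configuration.
		var b strings.Builder
		b.WriteString("upstream catalog {\n")
		for _, e := range entries {
			fmt.Fprintf(&b, "    server %s:%d;\n", e.Service.Address, e.Service.Port)
		}
		b.WriteString("}\n")
		if err := os.WriteFile("/etc/nginx/conf.d/catalog.conf",
			[]byte(b.String()), 0644); err != nil {
			log.Println(err)
			continue
		}

		// Ask NGINX for a graceful configuration reload.
		exec.Command("nginx", "-s", "reload").Run()
	}
}
```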

To connect to a service, clients usually start with an advertised URL, whose hostname is resolved by the Domain Name System (DNS) to the service's VIP. Clients then use this VIP and the advertised service port to initiate the connection.
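
A small Go sketch of these client-side steps follows: resolve the advertised name to the VIP, then dial VIP:port. The hostname and port are hypothetical.

```go
package main

import (
	"fmt"
	"log"
	"net"
)

func main() {
	// Resolve the advertised service name to the VIP exposed by the LB.
	// "catalog.example.com" and port 8080 are placeholders.
	addrs, err := net.LookupHost("catalog.example.com")
	if err != nil || len(addrs) == 0 {
		log.Fatal("cannot resolve service name")
	}
	vip := addrs[0]

	// Connect to the VIP and the advertised service port; the LB routes the
	// connection to one of the backend instances.
	conn, err := net.Dial("tcp", net.JoinHostPort(vip, "8080"))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	fmt.Println("connected to", conn.RemoteAddr())
}
```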

The LB also frequently hosts health-check functionality to maintain the right set of backend instances. Instances that don't periodically check in (or that fail the LB's probes) are declared unhealthy and removed from the VIP's backend instance set.
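
As a minimal sketch of an active health-checking loop an LB might run, the Go snippet below probes a hypothetical /health endpoint on each instance and marks failing instances unhealthy; the backend addresses, endpoint path, and intervals are all assumptions for illustration.

```go
package main

import (
	"fmt"
	"net/http"
	"sync"
	"time"
)

// Pool tracks which backend instances are currently healthy.
type Pool struct {
	mu      sync.Mutex
	healthy map[string]bool
}

func (p *Pool) set(addr string, ok bool) {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.healthy[addr] = ok
}

func main() {
	// Placeholder backend instances behind a single VIP.
	backends := []string{"10.0.0.11:8080", "10.0.0.12:8080"}
	pool := &Pool{healthy: map[string]bool{}}
	client := &http.Client{Timeout: 2 * time.Second}

	for {
		for _, addr := range backends {
			// Probe a hypothetical /health endpoint on each instance.
			resp, err := client.Get("http://" + addr + "/health")
			ok := err == nil && resp.StatusCode == http.StatusOK
			if resp != nil {
				resp.Body.Close()
			}
			// Unhealthy instances drop out of the VIP's backend set.
			pool.set(addr, ok)
			fmt.Println(addr, "healthy:", ok)
		}
		time.Sleep(5 * time.Second)
	}
}
```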

In some cases, it might be necessary for a client to keep interacting with the same service instance for the duration of a user session. One reason for doing this might be performance (the state needed to respond to requests might be cached at that instance). Most LBs support this feature through sticky sessions: the client passes an identifier (frequently a cookie), which the LB uses for routing instead of its default routing algorithm (such as random or round-robin selection).
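
One way an LB can implement this, sketched below in Go, is to hash the session cookie to pick a fixed backend, falling back to random selection when no cookie is present. The cookie name ("SESSIONID") and backend addresses are hypothetical.

```go
package main

import (
	"hash/fnv"
	"log"
	"math/rand"
	"net/http"
	"net/http/httputil"
)

func main() {
	// Placeholder backend instances.
	backends := []string{"10.0.0.11:8080", "10.0.0.12:8080", "10.0.0.13:8080"}

	proxy := &httputil.ReverseProxy{
		Director: func(req *http.Request) {
			// Default: random routing when the client has no session yet.
			target := backends[rand.Intn(len(backends))]
			// If the client presents a session cookie (hypothetically named
			// "SESSIONID"), hash its value so the same session always lands
			// on the same backend instance.
			if c, err := req.Cookie("SESSIONID"); err == nil {
				h := fnv.New32a()
				h.Write([]byte(c.Value))
				target = backends[int(h.Sum32())%len(backends)]
			}
			req.URL.Scheme = "http"
			req.URL.Host = target
		},
	}
	log.Fatal(http.ListenAndServe(":8000", proxy))
}
```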

There are a few key advantages of server-side service discovery, including the following:

  • Clients don't need to know about the service instances
  • High availability and fault tolerance are easily enabled

There are a few disadvantages:

  • The LB can be a Single Point of Failure (SPOF) and needs engineering for resiliency
  • Clients cannot choose a specific service instance, even if they have reason to believe a particular instance would serve them better
