Redundant core services

Making the key services and applications consumed by the rest of the IT stack redundant is another important design facet of cloud-native architectures. Enterprises run many legacy services that are only just beginning to adapt to the cloud-native model. When migrating to the cloud, it is critical to re-architect or refactor these applications so that they run dependably in the new environment.

An example of this is Active Directory (AD), a critical component for enabling productivity in almost every enterprise. AD maintains identities for people, machines, and services, and grants access by authenticating and authorizing their actions.

There are varying levels of cloud-native maturity for standing up AD services. At one end of the spectrum, companies simply extend their network to the cloud and use the same AD infrastructure that exists on-premise. This is the least performant pattern and captures few of the benefits the cloud has to offer. In a more advanced pattern, architects extend the forest to the cloud by deploying domain controllers (DCs) or read-only domain controllers (RODCs) in the cloud network environment, which provides higher levels of redundancy and performance. For a truly cloud-native AD deployment, the leading cloud platforms now provide native services that stand up fully managed AD deployments, running at scale for a fraction of the cost of maintaining your own physical or virtual infrastructure. AWS Directory Service offers several variants (AWS Managed Microsoft AD, AD Connector, and Simple AD), Microsoft provides Azure AD, and Google Cloud offers Managed Service for Microsoft Active Directory.

These three AD deployment options, ranging from the least (Option #1) to the most performant (Option #3) based on where the domain controllers are hosted, are shown in the following diagram:

Figure 5.7

The first option is to rely on connectivity to the on-premise AD server for authentication and authorization. The second option is to deploy a DC or RODC in the cloud environment. The most cloud-native method is to offload this undifferentiated heavy lifting to a native cloud service. The cloud service deploys and manages the AD infrastructure, relieving the cloud consumer of these operational tasks. This option results in substantially better performance and availability: the directory is deployed across multiple AZs automatically, can scale out with a few clicks to increase the number of domain controllers, and supports the advanced features expected of an on-premise AD deployment (such as single sign-on, group-based policies, and backups).
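As an illustration of the third option, the following minimal sketch uses the AWS SDK for Python (boto3) to request a managed Microsoft AD deployment. The domain name, password, VPC, and subnet identifiers are placeholder assumptions; the two subnets must sit in different AZs so the service can place a domain controller in each.

```python
import boto3

# Directory Service client (credentials and region come from the environment).
ds = boto3.client("ds", region_name="us-east-1")

# Request a managed Microsoft AD; the service provisions and operates the
# domain controllers across the two subnets (each in a different AZ).
# All names and IDs below are illustrative placeholders.
response = ds.create_microsoft_ad(
    Name="corp.example.com",            # fully qualified domain name
    ShortName="CORP",                   # NetBIOS name
    Password="<directory-admin-password>",
    Description="Managed AD for cloud workloads",
    VpcSettings={
        "VpcId": "vpc-0123456789abcdef0",
        "SubnetIds": ["subnet-0aaa1111bbb22222c", "subnet-0ddd3333eee44444f"],
    },
    Edition="Standard",
)

print("Directory ID:", response["DirectoryId"])
```

From this point on, the cloud provider patches, monitors, and replaces the underlying domain controllers; the consumer only manages users, groups, and policies.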

In addition, the cloud AD services natively integrate with other cloud services, easing the burden of the VM deployment process. VMs can easily be joined to domains, and access to the cloud platform itself can be federated based on the identities and users in the AD. It should also be mentioned that these cloud services generally (if not fully) support the Lightweight Directory Access Protocol (LDAP), meaning alternatives to Microsoft AD, such as OpenLDAP, Samba, and Apache Directory, can be used.
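As a brief illustration of that LDAP compatibility, the sketch below uses the open source ldap3 Python library to bind to a directory and look up a user. The endpoint, base DN, and credentials are placeholder assumptions; the same client code applies whether the directory is a managed cloud AD or an OpenLDAP, Samba, or Apache Directory deployment exposing LDAP.

```python
from ldap3 import Server, Connection, ALL, SUBTREE

# Placeholder endpoint and credentials; a managed cloud AD exposes a
# standard LDAP(S) endpoint, so this also works against OpenLDAP or Samba.
server = Server("ldaps://ldap.corp.example.com", get_info=ALL)
conn = Connection(
    server,
    user="CORP\\svc-reader",            # read-only service account (assumption)
    password="<service-account-password>",
    auto_bind=True,                      # bind immediately on creation
)

# Look up a user entry by sAMAccountName (an AD attribute).
conn.search(
    search_base="DC=corp,DC=example,DC=com",
    search_filter="(sAMAccountName=jdoe)",
    search_scope=SUBTREE,
    attributes=["cn", "mail", "memberOf"],
)

for entry in conn.entries:
    print(entry.entry_dn, entry.mail)
```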

The Domain Name System (DNS) follows a similar pattern to AD, with varying levels of cloud nativity based on the deployment model. For the highest level of availability and scalability, use the leading cloud services such as Amazon Route 53, Azure DNS, and Google Cloud DNS (which are discussed later in this book). Hosted zones (public and private) backed by these managed services give the best performance per unit cost. Aside from AD and DNS, other centralized services include antivirus (AV), intrusion prevention systems (IPS), intrusion detection systems (IDS), and logging. These topics will be covered in more depth in Chapter 6, Security and Reliability.
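As a hedged example of the managed DNS pattern, the sketch below uses boto3 to create a private hosted zone in Amazon Route 53 and add a record for an internal service. The zone name, VPC ID, and record values are placeholder assumptions.

```python
import time
import boto3

route53 = boto3.client("route53")

# Create a private hosted zone attached to a VPC (IDs are placeholders).
zone = route53.create_hosted_zone(
    Name="internal.example.com",
    CallerReference=str(time.time()),        # must be unique per request
    HostedZoneConfig={"Comment": "Internal zone", "PrivateZone": True},
    VPC={"VPCRegion": "us-east-1", "VPCId": "vpc-0123456789abcdef0"},
)
zone_id = zone["HostedZone"]["Id"]

# Add an A record for an internal service endpoint.
route53.change_resource_record_sets(
    HostedZoneId=zone_id,
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "api.internal.example.com",
                "Type": "A",
                "TTL": 300,
                "ResourceRecords": [{"Value": "10.0.12.34"}],
            },
        }]
    },
)
```

The records are then served from the provider's redundant, globally distributed DNS infrastructure rather than from name servers the team has to operate itself.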

Apart from the core services an enterprise may consume, most large-scale application stacks require middleware or queues to help manage the large and complex traffic flows through the stack. As you might suspect, there are two major patterns an architect can use for such a deployment. The first is to deploy and manage their own queueing system on the cloud's virtual machines (for example, products from Dell Boomi or MuleSoft). The cloud user then has to manage the operational aspects of the queue deployment, including configuring multi-AZ topologies to ensure high availability.

The second option is to use a queue or message bus service offered by the cloud platforms. The undifferentiated heavy lifting is managed by the CSP; all that remains for the architect or user is to configure and consume the service. The leading services to consider are Amazon Simple Queue Service (SQS), Amazon MQ, Amazon Simple Notification Service (SNS), Azure Storage Queues and Service Bus Queues, and Google Cloud Tasks.

Cloud-based queue services run on redundant infrastructure and guarantee at-least-once delivery, while some also support First In First Out (FIFO) ordering. The ability of the service to scale allows the message queue to provide highly concurrent access to many producers and consumers, making it an ideal candidate to sit at the center of a decoupled Service-Oriented Architecture (SOA).
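To make the producer/consumer pattern concrete, here is a minimal sketch using Amazon SQS via boto3. The queue name and message contents are placeholder assumptions, and the explicit delete step reflects the at-least-once delivery semantics described above.

```python
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")

# Create (or look up) a queue; the name is a placeholder.
queue_url = sqs.create_queue(QueueName="orders-queue")["QueueUrl"]

# Producer: enqueue a message describing work to be done.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody='{"order_id": 42, "action": "ship"}',
)

# Consumer: poll for messages, process them, then delete them.
# Deletion is explicit because delivery is at-least-once; a message that
# is not deleted becomes visible again and may be redelivered.
messages = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=10,
    WaitTimeSeconds=10,        # long polling reduces empty responses
).get("Messages", [])

for msg in messages:
    print("processing:", msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```

Where strict ordering is required, a FIFO queue (a queue name ending in .fifo, with a message group ID on each send) can be used instead of the standard queue shown here.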
