Multi-region applications

Most large websites today are deployed in multiple physical locations, for a couple of important reasons. The first, as we saw when we discussed CDNs, is physical locality: if your packets have shorter distances to travel, the application performs better thanks to the lower latency. The second big win is availability. We hosted our application in the us-east-1 region, and while it runs across multiple AZs, an issue could still affect the entire region. Internet backbone problems, fiber cuts, or even natural disasters are things to expect over the years. As we saw in the first chapter of this book, AWS is available in multiple physical locations called regions, so when the time is right, you can change your architecture to make your application available in multiple regions. To achieve this, you will need to make a number of changes to the different services your application relies on.

The first set of changes is about getting your application deployed in multiple regions. For that, you can edit your CloudFormation/troposphere scripts. AMIs are unique to a region, so you will need to include a mapping of AMI IDs per region, as follows:

def AddAMI(template):
    # Map each supported region to the ID of our AMI in that region
    template.add_mapping("RegionMap", {
        "us-east-1": {"AMI": "ami-a4c7edb2"},
        "us-west-2": {"AMI": "ami-6df1e514"},
        "eu-west-1": {"AMI": "ami-327f5352"},
    })

Then you will have to change the section of the code that hardcodes the AMI ID to be something like the following:

ImageId=FindInMap("RegionMap", Ref("AWS::Region"), "AMI"), 
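
To see how the two pieces fit together, here is a minimal, self-contained troposphere sketch reusing the AddAMI function defined above; the resource name, instance type, and output handling are illustrative, and a real template would also include the security groups, key pair, and other properties we configured earlier:

from troposphere import FindInMap, Ref, Template, ec2

t = Template()
AddAMI(t)
# The AMI ID is now resolved at stack creation time, based on the
# region the stack is launched in.
t.add_resource(ec2.Instance(
    "WebServer",
    InstanceType="t2.micro",
    ImageId=FindInMap("RegionMap", Ref("AWS::Region"), "AMI"),
))
print(t.to_json())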

The next big issue you will need to tackle is replicating your data across regions. Data replication could be the subject of a book of its own, given how complex the topic is, but at a high level, here is what you need to know.

There are two ways to architect your different regions. The first and easiest approach is to create what is called an active-passive replication. You have multiple regions and clients can read from any of them, but write operations only happen in the active region. Certain services, including Aurora, support this natively (http://amzn.to/2i3gRiE). Other services, such as DynamoDB, have open source tools (http://bit.ly/2i3j911) that will help you implement the replication. Some services are natively global, such as S3 (in the standard region), Route 53, or CloudFormation, but for many other services, you will need to handle the replication on your own.
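
As an illustration of the active-passive pattern, the following boto3 sketch creates a cross-region Aurora read replica; the cluster identifiers, account ID, and instance class are placeholders, and the IAM permissions, subnet groups, and security groups a real setup needs are omitted:

import boto3

# Passive (read-only) copy of the Aurora cluster in a second region.
rds = boto3.client("rds", region_name="us-west-2")

# Point the new cluster at the source cluster's ARN in us-east-1;
# Aurora then takes care of the asynchronous replication.
rds.create_db_cluster(
    DBClusterIdentifier="helloworld-replica",
    Engine="aurora",
    ReplicationSourceIdentifier=
        "arn:aws:rds:us-east-1:123456789012:cluster:helloworld",
)

# A cluster has no compute on its own; add at least one instance
# to serve the read traffic in us-west-2.
rds.create_db_instance(
    DBInstanceIdentifier="helloworld-replica-1",
    DBInstanceClass="db.r4.large",
    Engine="aurora",
    DBClusterIdentifier="helloworld-replica",
)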

The second approach is called active-active and, as you might guess, in that configuration, each region accepts both read and write operations. This is the most complex scenario because you now need to handle eventual consistency: similar write operations can happen at the same time in different regions and conflict with one another. For example, what happens when two users sign up for your service at the same time in different AWS regions, but pick the same username?
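
There is no single right answer. One common, if lossy, strategy is last-writer-wins, sketched below in plain Python; the record format and timestamp field are made up for the example, and in practice you would more likely rely on the conflict-resolution mechanism of your replication tool, or design the data model so that conflicts cannot happen in the first place:

def resolve_conflict(record_a, record_b):
    """Last-writer-wins: keep whichever copy of the record was
    written most recently, according to an application-supplied
    "updated_at" epoch timestamp on each record."""
    if record_a["updated_at"] >= record_b["updated_at"]:
        return record_a
    return record_b

# Two regions accepted a signup for the same username at almost
# the same time; only one of them can win.
us_east = {"username": "alice", "email": "a@example.com", "updated_at": 1483228800.0}
eu_west = {"username": "alice", "email": "x@example.com", "updated_at": 1483228800.5}
print(resolve_conflict(us_east, eu_west))  # keeps the eu-west-1 write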

The last aspect to consider when implementing a multi-region architecture is deciding how traffic is routed to one region or another. The easiest way to handle that is to implement the logic in Route 53, the DNS service from AWS. Route 53 provides a number of routing types, including latency-based and geolocation-based routing. In addition, those rules can be combined with health checks, allowing you to automatically route traffic away from unhealthy regions.
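
As a rough sketch of what latency-based routing looks like through the API, the following boto3 call creates one of the latency records; the hosted zone ID, domain, ELB endpoint, and health check ID are placeholders, and you would repeat the call with a different SetIdentifier, Region, and endpoint for each additional region:

import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z1234567890ABC",          # placeholder hosted zone
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com.",
                "Type": "CNAME",
                "TTL": 60,
                # Records sharing the same name and type are grouped by
                # SetIdentifier; Route 53 answers with the lowest-latency
                # record whose health check is passing.
                "SetIdentifier": "us-east-1",
                "Region": "us-east-1",
                "HealthCheckId": "11111111-2222-3333-4444-555555555555",
                "ResourceRecords": [
                    {"Value": "helloworld-123456789.us-east-1.elb.amazonaws.com"}
                ],
            },
        }]
    },
)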

All those changes can be implemented using the command-line interface and API as you would expect, but AWS also provides a nice GUI tool called Traffic Flow that lets you configure the routing rules you need visually.
