Summary

Throughout this chapter, we have explored a number of concepts and tools for scaling a typical web application. We started off by learning how to take advantage of the ELB and Auto Scaling group services to build a solid foundation that can handle almost any amount of traffic by automatically scaling the number of instances our application uses up and down. This solution also handles failures by replacing bad instances, and it works well with the deployment pipeline we created in Chapter 4, Adding Continuous Integration and Continuous Deployment. While that approach will almost always work, it can become very costly if we don't optimize for cost and performance, so we then looked at using ElastiCache and CloudFront to take some load off our application and database.
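As a quick refresher, the following is a minimal boto3 sketch of the Auto Scaling side of that setup. The group name, launch configuration name, subnets, and target group ARN are placeholders rather than values from the chapter:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Create an Auto Scaling group registered with a load balancer target group.
# All names, subnets, and ARNs below are placeholders for illustration only.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="helloworld-asg",
    LaunchConfigurationName="helloworld-launch-config",
    MinSize=2,
    MaxSize=10,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",
    TargetGroupARNs=["arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/helloworld/abc123"],
)

# A target tracking policy keeps average CPU near 50%, scaling the group
# out and in automatically as traffic changes.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="helloworld-asg",
    PolicyName="keep-cpu-at-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```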

To get to the next stage of scaling an application, we looked at breaking our monolith up into a service-oriented architecture and explored other AWS managed services, such as the ALB, SQS, and Kinesis, for better load balancing and service-to-service communication.
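Decoupling two services with SQS, for instance, comes down to one side sending messages and the other polling for them. This is a minimal boto3 sketch using a hypothetical queue name, not code from the chapter:

```python
import boto3

sqs = boto3.client("sqs")
# Hypothetical queue name used purely for illustration.
queue_url = sqs.get_queue_url(QueueName="helloworld-tasks")["QueueUrl"]

# Producer side: publish a unit of work for another service to pick up.
sqs.send_message(QueueUrl=queue_url, MessageBody='{"task": "resize-image", "id": 42}')

# Consumer side: long-poll for messages, process them, then delete them
# so they are not redelivered once the visibility timeout expires.
response = sqs.receive_message(
    QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20
)
for message in response.get("Messages", []):
    print("processing", message["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
```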

Finally, we explored Lambda and API Gateway as an alternative to EC2 for executing code on our behalf. Under the hood, Lambda functions rely on a container system to execute our code, and while the Lambda API doesn't expose anything to manage those containers, AWS has a dedicated service, Elastic Container Service (ECS), for running applications in Docker containers. This will be the subject of Chapter 6, Running Containers in AWS.
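To recap what that Lambda-based approach looks like in practice, here is a minimal sketch of a handler sitting behind an API Gateway proxy integration; the function body and field names are illustrative only:

```python
import json

def lambda_handler(event, context):
    # With the proxy integration, API Gateway passes the whole HTTP request
    # in `event` and expects a dict with statusCode and body in return.
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```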
