Chapter 13
Serverless Applications

THE AWS CERTIFIED DEVELOPER – ASSOCIATE EXAM TOPICS COVERED IN THIS CHAPTER MAY INCLUDE, BUT ARE NOT LIMITED TO, THE FOLLOWING:

  • Domain 1: Deployment
  • 1.4 Deploy serverless applications.
  • Domain 2: Security
  • 2.1 Make authenticated calls to AWS Services.
  • 2.3 Implement application authentication and authorization.
  • Domain 3: Development with AWS Services
  • 3.1 Write code for serverless applications.
  • 3.2 Translate functional requirements into application design.
  • 3.3 Implement application design into application code.
  • 3.4 Write code that interacts with AWS Services by using APIs, SDKs, and AWS CLI.
  • Domain 5: Monitoring and Troubleshooting
  • 5.1 Write code that you can monitor.

Introduction to Serverless Applications

In the previous chapter, you learned about AWS Lambda and how you can write functions that run in a serverless manner. A serverless application is typically a combination of AWS Lambda and other Amazon services. You build serverless applications so that developers can focus on their core product instead of managing and operating servers or runtimes, whether in the cloud or on-premises. This reduces overhead and lets developers reclaim time and energy that can be better spent developing reliable, scalable products and new features for applications.

Serverless applications have the following three main benefits:

  • No server management
  • Flexible scaling
  • Automated high availability

Without server management, you no longer have to provision or maintain servers. With AWS Lambda, you upload your code, run it, and focus on your application updates.

With flexible scaling, you no longer have to disable Amazon Elastic Compute Cloud (Amazon EC2) instances to scale them vertically, configure Auto Scaling groups, or create Amazon CloudWatch alarms to add instances to load balancers. With AWS Lambda, you adjust the units of consumption (memory and execution time), and AWS allocates the underlying capacity appropriately.

Finally, serverless applications have built-in availability and fault tolerance. You do not need to architect for these capabilities, as the services that run the application provide them by default. Additionally, when periods of low traffic occur in the web application, you do not spend money on Amazon EC2 instances that do not run at their full capacity.

Web Server with Amazon Simple Storage Service (Presentation Tier)

Amazon Simple Storage Service (Amazon S3) can store HTML, CSS, images, and JavaScript files within an Amazon S3 bucket, and can host the website like a traditional web server. Though Amazon S3 hosts static websites, today many websites are dynamic applications, where you can use JavaScript to create HTTP requests. These HTTP requests are sent to a Representational State Transfer (REST) endpoint service called Amazon API Gateway, which allows the application to save and retrieve data dynamically.

Amazon API Gateway opens up a variety of application tier possibilities. An internet-accessible HTTPS API can be consumed by any client capable of HTTPS communication. Some common presentation tier examples that you could use for your application include the following:

Mobile app Not only can you integrate with custom business logic via Amazon API Gateway and AWS Lambda, you can use Amazon Cognito to create and manage user identities.

Static website content hosted in Amazon S3 You can enable your Amazon API Gateway APIs to be cross-origin resource sharing–compliant. This allows web browsers to invoke your APIs directly from within the static web pages.

Any other HTTPS-enabled client device Many devices can connect and communicate via HTTPS. There is nothing unique or proprietary about how clients communicate with the APIs that you create with the Amazon API Gateway service; it is pure HTTPS. No specific client software or licenses are required.

Additionally, there are several JavaScript frameworks that are widely available today, such as Angular and React, which allow you to benefit from a Model-View-Controller (MVC) architecture.

Amazon S3 Static Website

Note: For the remainder of this chapter, the example bucket's name is examplebucket. This is for illustration purposes only, as Amazon S3 bucket names must be globally unique.

To create an Amazon S3 static website, you first need to create a bucket. Name the bucket something meaningful, such as examplebucket. AWS recommends that you do not use periods (.) in bucket names when you use virtual hosted-style buckets with Secure Sockets Layer (SSL): the SSL wildcard certificate only matches buckets that do not contain periods, so to work around a bucket name with periods you would have to use HTTP or write your own certificate verification logic.

After you create your Amazon S3 bucket, enable static website hosting on it and configure the index document, error document, and (optionally) redirection rules in the AWS Management Console ➢ Amazon S3 service. You use this examplebucket bucket to host a website.

Choose the bucket's AWS Region based on latency, cost, and regulatory requirements. Each object in the bucket has a unique key, and you grant permissions at the object or bucket level.

For the index document, enter the name of your home page’s HTML file (typically index.html). Additionally, you may load a custom error page such as error.html. As with many of the Amazon services, you can make these changes with the AWS Command Line Interface (AWS CLI) or in an AWS software development kit (AWS SDK). To enable this option with the AWS CLI, run the following command:

aws s3 website s3://examplebucket/ --index-document index.html --error-document error.html

After you enable the Amazon S3 static website hosting feature, the site is served from an endpoint that reflects your AWS Region, such as examplebucket.s3-website.region.amazonaws.com (older Regions use the examplebucket.s3-website-region.amazonaws.com form).
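To make the site reachable, upload your content and allow public reads of the objects. The following is a minimal sketch that assumes the chapter's examplebucket placeholder, local files named index.html and error.html, and a policy file name of your choosing:

aws s3 cp index.html s3://examplebucket/index.html
aws s3 cp error.html s3://examplebucket/error.html

# public-read-policy.json allows anonymous GetObject on the website objects
aws s3api put-bucket-policy --bucket examplebucket --policy file://public-read-policy.json

Here is the assumed public-read-policy.json file:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::examplebucket/*"
    }
  ]
}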

Configuring Web Traffic Logs

Amazon S3 allows you to log and capture information such as the number of visitors who access your website. To enable logs, create a new Amazon S3 bucket to store your logs; this keeps the log files separate from the website-hosting bucket. For example, you can create a logs-examplebucket-com bucket, and inside that bucket you can create a folder called logs/ (or any name you choose). Use this folder to store all of your logs.

Note: The term folder is used to describe logs; however, the Amazon S3 data model is a flat structure that allows you to create a bucket, and the bucket stores objects. There is no hierarchy of sub-buckets or subfolders; nevertheless, you can infer a logical hierarchy using key name prefixes and delimiters as the Amazon S3 console does. In other words, you should know that, as a developer, there is technically no such thing as an Amazon S3 folder—it is simply a key.

Now enable logging on the static website bucket examplebucket. You configure logs-examplebucket-com as the target bucket for the log files, and you can set a target prefix so that log files are delivered under that key prefix only.

To enable this feature with the AWS CLI, grant the Amazon S3 Log Delivery group permission to write to the target bucket, and then apply the logging configuration to the website bucket. Here's an example:

aws s3api put-bucket-acl --bucket logs-examplebucket-com --grant-write 'URI="http://acs.amazonaws.com/groups/s3/LogDelivery"' --grant-read-acp 'URI="http://acs.amazonaws.com/groups/s3/LogDelivery"'

 
aws s3api put-bucket-logging --bucket examplebucket --bucket-logging-status file://logging.json

Here’s the file logging.json:

{
  "LoggingEnabled": {
    "TargetBucket": "logs-examplebucket-com",
    "TargetPrefix": "logs/",
    "TargetGrants": [
      {
        "Grantee": {
          "Type": "AmazonCustomerByEmail",
          "EmailAddress": "[email protected]"
        },
        "Permission": "FULL_CONTROL"
      },
      {
        "Grantee": {
          "Type": "Group",
          "URI": "http://acs.amazonaws.com/groups/global/AllUsers"
        },
        "Permission": "READ"
      }
    ]
  }
}
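To confirm that the configuration took effect, you can read it back with a quick check, using the same placeholder bucket name:

aws s3api get-bucket-logging --bucket examplebucket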

Creating Custom Domain Name with Amazon Route 53

Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service. It is designed to give developers and businesses an extremely reliable and cost-effective way to route end users to internet applications by translating names like www.example.com into the numeric IP addresses like 192.0.2.1 that computers use to connect to each other. Amazon Route 53 is fully compliant with IPv6 as well.

You may not want to use an Amazon S3 endpoint such as bucket-name.s3-website-region.amazonaws.com. Instead, you may want a more user-friendly URL such as myexamplewebsite.com. To accomplish this, purchase a domain name with Amazon Route 53.

Note: You can purchase your domain from another provider and then update the name servers to use Amazon Route 53.

Amazon Route 53 effectively connects user requests to infrastructure running in AWS—such as Amazon EC2 instances, Elastic Load Balancing (ELB) load balancers, or Amazon S3 buckets—and can route users to infrastructure outside of AWS. You can use Amazon Route 53 to configure DNS health checks to route traffic to healthy endpoints or to monitor independently the health of your application and its endpoints. Amazon Route 53 Traffic Flow makes it easy for you to manage traffic globally through a variety of routing types, including latency-based routing, geolocation, geoproximity, and weighted round-robin, all of which can be combined with DNS failover to enable a variety of low-latency, fault-tolerant architectures.

Using Amazon Route 53 Traffic Flow’s simple visual editor, you can easily manage how your end users are routed to your application’s endpoints—whether in a single AWS Region or distributed around the globe. Amazon Route 53 also offers domain name registration. You can purchase and manage domain names such as example.com, and Amazon Route 53 will automatically configure DNS settings for your domains.
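As a rough sketch of how this fits together from the CLI, the following commands create a hosted zone and point the apex record at the Amazon S3 website endpoint as an alias. The hosted zone ID, the alias target hosted zone ID (a per-Region constant published in the Amazon S3 endpoints table), and the change batch file name are placeholders, and the bucket serving the site must be named after the domain (myexamplewebsite.com):

aws route53 create-hosted-zone --name myexamplewebsite.com --caller-reference example-2019-05-01

aws route53 change-resource-record-sets --hosted-zone-id Z1EXAMPLEZONEID --change-batch file://alias-record.json

Here is a sample alias-record.json; the DNSName shown is the us-east-1 website endpoint, so substitute the values for your Region:

{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "myexamplewebsite.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z2EXAMPLES3ZONE",
          "DNSName": "s3-website-us-east-1.amazonaws.com",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}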

Speeding Up Content Delivery with Amazon CloudFront

Latency is an increasingly important aspect when you deliver web applications to the end user, as you always want your end user to have an efficient, low-latency experience on your website. Increased latency can result in both decreased customer satisfaction and decreased sales. One way to decrease latency is to use Amazon CloudFront to move your content closer to your end users. Amazon CloudFront has two delivery methods. The first is a web distribution, which delivers web content such as .html, .css, and graphic files. Amazon CloudFront also provides the ability to have an RTMP distribution, which speeds up distribution of your streaming media files using Adobe Flash Media Server's RTMP protocol. An RTMP distribution allows an end user to begin playing a media file before the file has finished downloading from a CloudFront edge location.

To use Amazon CloudFront with your Amazon S3 static website, perform these tasks:

  1. Choose a delivery method.

    In the example, Amazon S3 is used to store a static web page; thus, you will be using the Web delivery method. However, as mentioned previously, you could also use RTMP for streaming media files.

  2. Specify the cache behavior. A cache behavior lets you configure a variety of CloudFront functionality for a given URL path pattern for files on your website.
  3. Choose the distribution settings and network that you want to use. For example, you can use all edge locations or only U.S., Canada, and Europe locations.

Amazon CloudFront enables you to cache your data to minimize redundant data-retrieval operations. Amazon CloudFront reduces the number of requests to which your origin server must respond directly. This reduces the load on your origin server and reduces latency because more objects are served from Amazon CloudFront edge locations, which are closer to your users.

The first request is a cache miss: Amazon CloudFront fetches the object from the Amazon S3 bucket and stores it in its cache. The second, third, and nth requests are then served from the Amazon CloudFront cache at a lower latency and cost, as shown in Figure 13.1.


Figure 13.1 Amazon CloudFront cache

The more requests that Amazon CloudFront is able to serve from edge caches as a proportion of all requests (that is, the greater the cache hit ratio), the fewer viewer requests that Amazon CloudFront needs to forward to your origin to get the latest version or a unique version of an object. You can view the percentage of viewer requests that are hits, misses, and errors in the Amazon CloudFront console.

A number of factors affect the cache hit ratio. You can adjust your Amazon CloudFront distribution configuration to improve the cache hit ratio.

Use Amazon CloudFront with Amazon S3 to improve your performance, decrease your application’s latency and costs, and provide a better user experience. Amazon CloudFront is also a serverless service, and it fits well with serverless stack services, especially when you use it in conjunction with Amazon S3.
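As a minimal sketch of putting Amazon CloudFront in front of the bucket, you could use the shorthand form of the create-distribution command; the bucket name is the chapter's placeholder:

aws cloudfront create-distribution --origin-domain-name examplebucket.s3.amazonaws.com --default-root-object index.html

The command returns the distribution's domain name (in the form dxxxxxxxxxxxx.cloudfront.net), which you can test directly or point your Amazon Route 53 alias record at instead of the Amazon S3 website endpoint.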

Dynamic Data with Amazon API Gateway (Logic or App Tier)

This section details how to use dynamic data with the Amazon API Gateway service in a logic tier or app tier.

Amazon API Gateway is a fully managed, serverless AWS service that you use to define, deploy, monitor, maintain, and secure APIs at any scale, with no servers running inside your environment. Clients integrate with the APIs using standard HTTPS requests. Amazon API Gateway can integrate with a service-oriented multitier architecture and with Amazon services such as AWS Lambda and Amazon EC2. It also has specific features and qualities that make it a powerful edge for your logic tier. You can use these features and qualities to enhance and build your dynamic web application.

Amazon API Gateway provides two services for working with your APIs and code:

Control service Uses REST to provide access to Amazon services, such as AWS Lambda, Amazon Kinesis, Amazon S3, and Amazon DynamoDB. The access methods include the following:

  • Consoles
  • CLI
  • SDKs
  • REST API requests and responses

Execution service Uses standard HTTP protocols or language-specific SDKs to deploy API access to backend functionality.

Warning: Do not directly expose resources or the API—always use AWS edge services and the Amazon API Gateway service to safeguard your resources and APIs.

Endpoints

There are three types of endpoints for Amazon API Gateway.

Regional endpoints Live inside the AWS Region, such as us-west-2.

Edge-optimized endpoints Use Amazon CloudFront, a content delivery web service with the AWS global network of edge locations as connection points for clients, and integrate with your API.

Private endpoints Can live only inside of a virtual private cloud (VPC).

You use Amazon API Gateway to help drive down the total response time latency of your API. You can improve the performance of specific API requests by having Amazon API Gateway store responses in an optional in-memory cache. This not only provides performance benefits for repeated API requests, but it also reduces backend executions, which helps to reduce overall costs.

The API endpoint can be a default host name or a custom domain name. The default host name is as follows:

{api-id}.execute-api.{region}.amazonaws.com

Resources

Amazon API Gateway consists of resources and methods. A resource is an object that represents part of your API's path structure and that you operate on with HTTP verbs such as GET, POST, or DELETE. When you combine a resource path with a specific HTTP verb, you create a method. Users call API methods to obtain controlled access to resources and to receive a response. You define mappings between the method and the backend to maintain control. If the frontend payload does not match the corresponding backend payload, you can create mapping templates that translate between them and return a response.

Before you interact with a resource, you can use a model to describe the data format of the request or response. Amazon API Gateway uses models to validate data, to generate SDKs for your API, and to generate mapping templates. Models save time and money, as they reduce the likelihood that your API will experience security and reliability issues.

In the Amazon API Gateway service, you expose addressable resources as a tree of API Resources entities, with the root resource (/) at the top of the hierarchy. The root resource is relative to the API’s base URL, which consists of the API endpoint and a stage name. In the Amazon API Gateway console, this base URL is referred to as the Invoke URL, and it displays in the API’s stage editor after the API deploys.

If you own a pizza restaurant and run a website to display your menu options, you can create a resource called menu under the root resource, which results in the path /menu. You add a GET method that returns the JSON values for your entire menu, so when individuals visit your website and navigate to the menu, you can return all of this data.

For example, the following:

{api-id}.execute-api.region.amazonaws.com/menu

will return the dataset through the Amazon API Gateway service:

[
 {
     "id": 1,
     "menu-item": "cheese pizza",
     "price": "14.99"
 },
 {
     "id": 2,
     "menu-item": "pepperoni pizza",
     "price": "17.99"
 }
]

With resources, you create paths such as /menu, /specials, and /orders to pull different datasets with HTTP methods.
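As a sketch of how these pieces are created outside the console, the following commands build the hypothetical menu API; the API ID abc123, the parent resource ID xyz987, and the resource ID def456 stand in for values returned by the earlier calls:

# create the API, then look up the ID of the root resource (/)
aws apigateway create-rest-api --name PizzaMenuAPI
aws apigateway get-resources --rest-api-id abc123

# add a /menu resource and an open GET method on it
aws apigateway create-resource --rest-api-id abc123 --parent-id xyz987 --path-part menu
aws apigateway put-method --rest-api-id abc123 --resource-id def456 --http-method GET --authorization-type NONE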

HTTP Methods

The Internet Engineering Task Force (IETF) is responsible for developing and documenting the HTTP protocol and how it operates. Amazon API Gateway uses the HTTP protocol to process these HTTP methods. Amazon API Gateway supports the following methods:

  • GET
  • HEAD
  • POST
  • PUT
  • PATCH
  • OPTIONS
  • DELETE

These methods send data to and receive data from the backend. In a serverless application, the request data can be sent to AWS Lambda for processing.

Stages

A stage is a named reference to a deployment, which is a snapshot of the API. Use a stage to manage and optimize a particular deployment. For example, stage settings enable caching, customize request throttling, configure logging, define stage variables, or attach a canary release to test. A canary release is a software deployment strategy in which a new version of an API is deployed at the same time that the original base version remains deployed as a production release. This means that in a canary deployment, you will have the majority of your traffic route to the current production environment and will have a small portion of your traffic route to the canary environment for testing purposes.

When you create a stage, your API is considered deployed and accessible to whomever you grant access. An advisable API strategy is to create stages for each of your environments such as DEV, TEST, and PROD, so that you can continue to develop and update your API and applications without affecting production.
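The following sketch deploys the API to two stages and attaches a canary to the production stage; the API ID and the 10 percent traffic split are assumptions for illustration:

# deploy the current API definition to a DEV stage
aws apigateway create-deployment --rest-api-id abc123 --stage-name DEV

# deploy to PROD, routing 10 percent of traffic to the canary for testing
aws apigateway create-deployment --rest-api-id abc123 --stage-name PROD --canary-settings percentTraffic=10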

Authorizers

Use Amazon API Gateway to set up authorizers with Amazon Cognito user pools or an AWS Lambda function. This enables you to secure your APIs so that only users to whom you have granted specific access can call them.

Tip: Suppose that you have a customer relationship management application and you want only certain users to be able to modify customer data. You can create an API and restrict who can call it by using an authorizer in conjunction with AWS Lambda or Amazon Cognito.

API Keys

With the Amazon API Gateway service, you can generate API keys to provide access to your API for external users and to sell access to your customer base. You create an API key with the CreateApiKey API action (aws apigateway create-api-key in the AWS CLI).
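A key only grants access after it is associated with a usage plan that covers a deployed stage. A minimal sketch, with hypothetical IDs and limits:

aws apigateway create-api-key --name PartnerKey --enabled
aws apigateway create-usage-plan --name BasicPlan --api-stages apiId=abc123,stage=PROD --throttle rateLimit=10,burstLimit=20
aws apigateway create-usage-plan-key --usage-plan-id u7plan --key-id k3yid --key-type API_KEY

The client then passes the key value in the x-api-key header when calling methods that require an API key.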

Cross-Origin Resource Sharing

The same-origin policy prevents a client-side web application that is loaded from one origin (server) from interacting with resources from a different origin; it primarily exists to stop malicious actors from calling your APIs on behalf of users from other sites. However, legitimate applications often need to exchange data across origins, for example to deliver APIs to different users, clients, or customers. Cross-origin resource sharing (CORS) is the mechanism that relaxes the same-origin policy in a controlled way so that these valid cross-origin requests succeed. You can read the specification at https://www.w3.org/TR/cors/.

CORS allows you to set certain HTTP headers to enable cross-origin access to call APIs or services to which you need access. The HTTP headers include the following:

  • Access-Control-Allow-Origin
  • Access-Control-Allow-Credentials
  • Access-Control-Allow-Headers
  • Access-Control-Allow-Methods
  • Access-Control-Expose-Headers
  • Access-Control-Max-Age
  • Access-Control-Request-Headers
  • Access-Control-Request-Method
  • Origin

To allow your web application to call your APIs from a different origin, you must enable CORS on the relevant resources inside the Amazon API Gateway console (or through the API or CLI). Without CORS enabled, cross-origin calls from the browser to the Amazon API Gateway service will fail.
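Under the hood, enabling CORS on a resource amounts to answering the browser's preflight OPTIONS request with the appropriate Access-Control-* headers. One common pattern is a mock integration, sketched below with the same hypothetical API and resource IDs; the allowed origins, methods, and headers shown are assumptions you would tighten for a real application:

aws apigateway put-method --rest-api-id abc123 --resource-id def456 --http-method OPTIONS --authorization-type NONE
aws apigateway put-integration --rest-api-id abc123 --resource-id def456 --http-method OPTIONS --type MOCK --request-templates '{"application/json": "{\"statusCode\": 200}"}'
aws apigateway put-method-response --rest-api-id abc123 --resource-id def456 --http-method OPTIONS --status-code 200 --response-parameters method.response.header.Access-Control-Allow-Origin=true,method.response.header.Access-Control-Allow-Methods=true,method.response.header.Access-Control-Allow-Headers=true
aws apigateway put-integration-response --rest-api-id abc123 --resource-id def456 --http-method OPTIONS --status-code 200 --response-parameters method.response.header.Access-Control-Allow-Origin="'*'",method.response.header.Access-Control-Allow-Methods="'GET,OPTIONS'",method.response.header.Access-Control-Allow-Headers="'Content-Type'"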

Integrating with AWS Lambda

With Amazon API Gateway, you can build RESTful APIs without the need to manage a server. Amazon API Gateway gives your application a simple way (HTTPS requests) to leverage the innovation of AWS Lambda directly. Amazon API Gateway forms the bridge that connects your presentation tier and the functions you write in AWS Lambda. After defining the client/server relationship with your API, the contents of the client’s HTTPS request can be passed to AWS Lambda for execution, where you can write a function to talk to your database tier. For example, once someone accesses the API endpoint, contents of the request—which includes the request metadata, request headers, and the request body—can be passed to AWS Lambda. This then allows AWS Lambda to request dynamic data from your database tier—for example, Amazon DynamoDB.
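Continuing the hypothetical pizza-menu example, the following sketch wires the GET /menu method to a Lambda function with a proxy integration and grants Amazon API Gateway permission to invoke it; the Region, account ID, function name, and API ID are all placeholders:

aws apigateway put-integration --rest-api-id abc123 --resource-id def456 --http-method GET --type AWS_PROXY --integration-http-method POST --uri arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/arn:aws:lambda:us-east-1:123456789012:function:GetMenu/invocations

aws lambda add-permission --function-name GetMenu --statement-id apigateway-get-menu --action lambda:InvokeFunction --principal apigateway.amazonaws.com --source-arn "arn:aws:execute-api:us-east-1:123456789012:abc123/*/GET/menu"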

Monitoring Amazon API Gateway with Amazon CloudWatch

Amazon API Gateway also integrates with Amazon CloudWatch. Amazon CloudWatch provides preconfigured metrics to help you monitor your APIs and build both dashboards and alarms. At the time of this writing, there are seven metrics available by default with Amazon CloudWatch, as shown in Table 13.1.

Table 13.1 Amazon CloudWatch Metrics

Metric Description
4XXError The number of client-side errors captured in a specified period. The Sum statistic represents this metric, namely, the total count of the 4XXError errors in the given period. The Average statistic represents the 4XXError error rate, namely, the total count of the 4XXError errors divided by the total number of requests during the period. The denominator corresponds to the Count metric.

Unit: Count

5XXError The number of server-side errors captured in a given period. The Sum statistic represents this metric, namely, the total count of the 5XXError errors in the given period. The Average statistic represents the 5XXError error rate, namely, the total count of the 5XXError errors divided by the total number of requests during the period. The denominator corresponds to the Count metric.

Unit: Count

CacheHitCount The number of requests served from the API cache in a given period. The Sum statistic represents this metric, namely, the total count of the cache hits in the specified period. The Average statistic represents the cache hit rate, namely, the total count of the cache hits divided by the total number of requests during the period. The denominator corresponds to the Count metric.

Unit: Count

CacheMissCount The number of requests served from the backend in a given period, when API caching is enabled. The Sum statistic represents this metric, namely, the total count of the cache misses in the specified period. The Average statistic represents the cache miss rate, namely, the total count of the cache misses divided by the total number of requests during the period. The denominator corresponds to the Count metric.

Unit: Count

Count The total number of API requests in a given period. The SampleCount statistic represents this metric.

Unit: Count

IntegrationLatency The time between when Amazon API Gateway relays a request to the backend and when it receives a response from the backend.

Unit: Millisecond

Latency The time between when Amazon API Gateway receives a request from a client and when it returns a response to the client. The latency includes the integration latency and other Amazon API Gateway overhead.

Unit: Millisecond

See Figure 13.2 for a sample dashboard.


Figure 13.2 Sample dashboard for Amazon API Gateway using Amazon CloudWatch

If you use Amazon CloudWatch with Amazon API Gateway, you can monitor your application from an API standpoint to see whether any issues occur as the application is being used. Particularly, you can view metrics such as CacheMissCount and Latency.

Other Notable Features

Amazon API Gateway has several notable features.

Security Amazon API Gateway exposes HTTPS endpoints only. AWS recommends that you use IAM roles and policies to secure access to the backend, but you can use Lambda authorizers too. The AWS managed policy for administering Amazon API Gateway is AmazonAPIGatewayAdministrator.

Definition support The OpenAPI Specification, formerly known as the Swagger Specification, is used to define a RESTful interface. If you create a document that conforms to the OpenAPI Specification, you can upload it to Amazon API Gateway to have it create your desired API endpoint. For more information on the OpenAPI Specification, visit https://swagger.io/specification.

Free tier Amazon API Gateway has a free tier that allows one million API calls received per month, at no charge, for the first 12 months.

User Authentication with Amazon Cognito

A crucial aspect of building web applications is user authentication. Nearly every web application today has a user authentication system. From banking websites to social media websites, user authentication is a critical component to secure your web and mobile applications. Amazon Cognito allows for simple and secure user sign-up, sign-in, and access control mechanisms designed to handle web application authentication.

Amazon Cognito includes the following features:

  • Amazon Cognito user pools, which are secure and scalable user directories
  • Amazon Cognito identity pools (federated identities), which offer social and enterprise identity federation
  • Standards-based Web Identity Federation Authentication through Open Authorization (OAuth) 2.0, Security Assertion Markup Language (SAML) 2.0, and OpenID Connect (OIDC) support
  • Multi-factor authentication
  • Encryption for data at rest and data in transit
  • Access control with AWS Identity and Access Management (IAM) integration
  • Easy application integration (prebuilt user interface)
  • SDKs for iOS (Objective-C and Swift), Android, and JavaScript
  • Adherence to compliance requirements such as Payment Card Industry Data Security Standard (PCI DSS)

Amazon Cognito User Pools

A user pool is a user directory in Amazon Cognito. With a user pool, your users can sign in to your web or mobile app through Amazon Cognito. Users can also sign in through social identity providers, such as Facebook or Amazon, and through Security Assertion Markup Language (SAML) identity providers. Whether your users sign in directly or through a third party, all members of the user pool have a directory profile that you can access through an SDK.

User pools provide the following:

  • Sign-up and sign-in services
  • A built-in, customizable web user interface (UI) to sign in users
  • Social sign-in with Facebook, Google, and Amazon, and sign-in with Security Assertion Markup Language (SAML) identity providers from your user pool
  • User directory management and user profiles
  • Security features, such as multi-factor authentication (MFA), check for compromised credentials, account takeover protection, and phone and email verification
  • Customized workflows and user migration through AWS Lambda triggers
  • After successfully authenticating a user, Amazon Cognito issues JSON Web Tokens (JWT) that you can use to secure and authorize access to your own APIs or exchange them for AWS credentials.

With Amazon Cognito, you can choose how you want your users to sign in: with a username, an email address, and/or a phone number. Additionally, user pools allow you to select attributes. Attributes are properties that you want to store about your end users, with standard attributes that are created for you, if you enable the option. You can also develop custom attributes.

The standard attributes are as follows:

  • address
  • birthdate
  • email
  • family_name
  • gender
  • given_name
  • locale
  • middle_name
  • name
  • nickname
  • phone_number
  • picture
  • preferred_username
  • profile
  • zoneinfo
  • updated_at
  • website

Password Policies

In addition to attributes, you can configure password policies. You can set the minimum password length and require specific character types, including uppercase letters, lowercase letters, numbers, and special characters. Furthermore, you can either allow users to sign up and enroll themselves or allow only administrators to create users. If administrators create the account, you can also set the account to expire if it remains unused for a specified period of time.
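For example, a user pool with a custom password policy and self-service sign-up might be created from the CLI as follows; the pool name and policy values are assumptions, and the nested shorthand syntax is quoted so the shell does not expand the braces:

aws cognito-idp create-user-pool --pool-name examplepool --policies 'PasswordPolicy={MinimumLength=12,RequireUppercase=true,RequireLowercase=true,RequireNumbers=true,RequireSymbols=false}' --admin-create-user-config AllowAdminCreateUserOnly=false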

Multi-factor Authentication

Multi-factor authentication (MFA) prevents anyone from signing in to a system without authenticating through two different sources, such as a password and a mobile-device generated token. With Amazon Cognito, you can enable multi-factor authentication to secure your application further. To enable this option with Amazon Cognito, create a role that enables Amazon Cognito to send Short Message Service (SMS) messages to users.

Besides MFA, you can customize your SMS verification messages, email verification messages, and user invitation messages. For example, you could send your end users a welcome message when they verify their account.

Device Tracking and Remembering

Enabling multi-factor authentication increases the security of an application by requiring a second authentication challenge from the user. However, it also requires a new two-factor sign-in after a prolonged absence of activity, even when the user's device has not been signed out or shut off.

With device tracking and remembering, you can save that user’s device and remember it so that they do not have to provide a token again, as the application has already seen this specific device. Figure 13.3 shows how to enable this feature.


Figure 13.3 Device tracking

The specifics of the configuration terminology include the following:

Tracked A tracked device is assigned a set of device credentials consisting of a key and secret key pair. You can view all tracked devices for a specific user on the Users screen of the Amazon Cognito console. In addition, you can view the device's metadata (whether it is remembered, the time tracking began, the last authenticated time, and so on) and the device's usage.

Remembered A remembered device is also tracked. During user authentication, the key and secret pair assigned to a remembered device authenticates the device to verify that it is the same device that the user previously used to sign in to the application. You can view remembered devices in the Amazon Cognito console.

Not remembered A not-remembered device, while still tracked, is treated as if it was never used during the user authentication flow. The device credentials are not used to authenticate the device. The new APIs in the AWS Mobile SDK do not expose these devices, but you can see them in the Amazon Cognito console.

The first configuration setting reads “Do you want to remember devices?” and has the following options:

No (the default) Devices are neither remembered nor tracked.

Always Every device used with your application is remembered.

User opt-in The user’s device is remembered only if that user opts to remember the device. This enables your users to decide whether your application should remember the devices they use to sign in, though all devices are tracked regardless of which setting they choose. This is a useful option when you require a higher security level, but the user may sign in from a shared device. For example, if a user signs in to a banking application from a public computer at a library, the user requires the option to decide whether their device is to be remembered.

The second configuration option is “Do you want to use a remembered device to suppress the second factor during multi-factor authentication (MFA)?” It appears when you select either Always or User Opt-In for the first configuration option. The second factor suppression option enables your application to use a remembered device as a second factor of authentication, and it suppresses the SMS-based challenge in the MFA flow. This feature works together with MFA, and it requires MFA to be enabled for the user pool. The device must first be remembered before it can be used to suppress the SMS-based challenge. Upon the initial sign-in with a new device, the user must complete the SMS challenge. Afterward, the user no longer has to complete the SMS challenge.
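Device tracking and remembering can also be configured outside the console. A minimal sketch, assuming an existing user pool ID, sets the user opt-in behavior and requires the MFA challenge on new devices:

aws cognito-idp update-user-pool --user-pool-id us-east-1_EXAMPLE --device-configuration ChallengeRequiredOnNewDevice=true,DeviceOnlyRememberedOnUserPrompt=true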

User Interface Customization

An Amazon Cognito user pool includes a prebuilt user interface (UI) that you can use inside of your application to build a user authentication flow quickly, as shown in Figure 13.4.


Figure 13.4 Amazon Cognito prebuilt UI

You can modify the UI with the AWS Management Console, the AWS CLI, or the API. You can also upload your own custom logo with a maximum file size of 100 KB. The CSS classes you can customize in the prebuilt UI are as follows:

  • background-customizable
  • banner-customizable
  • errorMessage-customizable
  • idpButton-customizable
  • idpButton-customizable:hover
  • inputField-customizable
  • inputField-customizable:focus
  • label-customizable
  • legalText-customizable
  • logo-customizable
  • submitButton-customizable
  • submitButton-customizable:hover
  • textDescription-customizable

You can customize the UI from the CLI with two commands: get-ui-customization to retrieve the current customization settings and set-ui-customization to set the UI customization, as shown in the following example code:

aws cognito-idp get-ui-customization --user-pool-id <your-user-pool-id>
aws cognito-idp set-ui-customization --user-pool-id <your-user-pool-id> --client-id <your-app-client-id> --image-file <path-to-logo-image-file> --css ".label-customizable{ color: <color>;}"

Amazon Cognito Identity Pools

Amazon Cognito identity pools allow you to create unique identities and assign permissions for your users. Inside the identity pool, you can include the following:

  • Users in an Amazon Cognito user pool
  • Users who authenticate with external identity providers such as Facebook, Google, or a SAML-based identity provider
  • Users authenticated via your own existing authentication process

An identity pool allows you to obtain temporary AWS credentials with permissions that you define either to access other Amazon services directly or to access resources through Amazon API Gateway. Amazon Cognito identity pools help you integrate several authentication providers, such as the following:

  • Amazon Cognito user pools
  • Amazon.com users
  • Facebook
  • Google
  • Twitter
  • OpenID
  • SAML
  • Custom—supports your own identities such as (login).(mycompany).(myapp)

Once you enable the third-party resources that you want to allow to sign in to your apps, you can assign permissions to these users. With the combination of user pools and identity pools, you can create a serverless user authentication system.

Use this command to create an Amazon Cognito user pool with the CLI:

aws cognito-idp create-user-pool --pool-name <value>
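You can then create a matching identity pool and trust the user pool as an identity provider. The pool name, Region, user pool ID, and app client ID below are placeholders:

aws cognito-identity create-identity-pool --identity-pool-name example_identity_pool --no-allow-unauthenticated-identities --cognito-identity-providers ProviderName=cognito-idp.us-east-1.amazonaws.com/us-east-1_EXAMPLE,ClientId=exampleclientid123,ServerSideTokenCheck=false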

Amazon Cognito SDK

You can start developing for Amazon Cognito using the AWS Mobile SDK. Amazon Cognito currently supports the following SDKs through the AWS Mobile SDK:

  • JavaScript SDK
  • iOS SDK
  • Android SDK

In addition to using the higher-level mobile and JavaScript SDKs, you can also use the lower-level APIs available via the following AWS SDKs to integrate all Amazon Cognito functionality in your applications:

  • Java SDK
  • .NET SDK
  • Node.js SDK
  • Python SDK
  • PHP SDK
  • Ruby SDK

Standard Three-Tier vs. the Serverless Stack

This chapter has introduced serverless services and their benefits. Now that you know about some of the serverless services that are available in AWS, let’s compare a traditional three-tier application against a serverless application architecture. Figure 13.5 shows a typical three-tier web application.

The figure shows typical three-tier web infrastructure architecture.

Figure 13.5 Standard three-tier web infrastructure architecture

Source: https://media.amazonwebservices.com/architecturecenter/AWS_ac_ra_web_01.pdf

This architecture uses the following components and services:

  • Routing: Amazon Route 53
  • Content distribution network (CDN): Amazon CloudFront
  • Static data: Amazon S3
  • High availability/decoupling: Application load balancers
  • Web servers: Amazon EC2 with Auto Scaling
  • App servers: Amazon EC2 with Auto Scaling
  • Database: Amazon RDS in a multi-AZ configuration

Amazon Route 53 provides a DNS service that allows you to take domain names such as examplecompany.com and translate them to an IP address that points to running servers.

The CDN shown in Figure 13.5 is the Amazon CloudFront service, which improves your site performance with the use of its global content delivery network.

The figure shows an example of serverless web application architecture.

Figure 13.6 Serverless web application architecture

Source: https://aws.amazon.com/getting-started/projects/build-serverless-web-app-lambda-apigateway-s3-dynamodb-cognito/

Amazon S3 stores your static files such as photos or movie files.

Application load balancers are responsible for distributing load across Availability Zones to your Amazon EC2 servers, which run your web application with a service such as Apache or NGINX.

Application servers are responsible for performing business logic prior to storing the data in your database servers that are run by Amazon RDS.

Amazon RDS is the managed database service, and it can run an Amazon Aurora, Microsoft SQL Server, Oracle, MySQL, PostgreSQL, or MariaDB database server.

While this architecture is robust and highly available, there are several downsides, including the fact that you have to manage servers. You are responsible for patching those servers, preventing downtime associated with those patches, and scaling the servers properly.

In a typical serverless web application architecture, you also run a web application, but you have zero servers that run inside your AWS account, as shown in Figure 13.6.

Serverless web application architecture services include the following:

  • Routing: Amazon Route 53
  • Web servers/static data: Amazon S3
  • User authentication: Amazon Cognito user pools
  • App servers: Amazon API Gateway and AWS Lambda
  • Database: Amazon DynamoDB

Amazon Route 53 is your DNS, and you can use Amazon CloudFront for your CDN.

You can also use Amazon S3 for your web servers. In this architecture, you use Amazon S3 to host your entire static website. You use JavaScript to make API calls to the Amazon API Gateway service.

For your business or application servers, you use Amazon API Gateway in conjunction with AWS Lambda. This allows you to retrieve and save data dynamically.

You use Amazon DynamoDB as a serverless database service, and you do not provision any Amazon EC2 instances inside your account. Amazon DynamoDB is also a great database service for storing session state for stateful applications. You can use Amazon RDS instead if you need a relational database; however, the stack would then not be fully serverless. A newer service, Amazon Aurora Serverless, is an RDS MySQL 5.6–compatible database that is completely serverless. This allows you to run a traditional SQL database that still has the benefit of being serverless. Amazon Aurora Serverless is discussed in the next section.

You use Amazon Cognito user pools for user authentication, which provides a secure user directory that can scale to hundreds of millions of users. Amazon Cognito user pools is a fully managed service with no servers for you to manage. User authentication was not shown in the three-tier architecture in Figure 13.5; there, you would use your web server tier to talk to a user directory, such as Lightweight Directory Access Protocol (LDAP), for user authentication.

As you can see, while some of the components are the same, you may use them in slightly different ways. By taking advantage of the AWS global network, you can develop fully scalable, highly available web applications—all without having to worry about maintaining or patching servers.

Amazon Aurora Serverless

Amazon Aurora Serverless is an on-demand, auto-scaling configuration for the Aurora MySQL-compatible edition, where the database automatically starts, shuts down, and scales up or down as needed by your application. This allows you to run a traditional SQL database in the cloud without needing to manage any infrastructure or instances.

With Amazon Aurora Serverless, you also get the same high availability as traditional Amazon Aurora, which means you get six-way replication across three Availability Zones inside a Region to protect against data loss.

Amazon Aurora Serverless is great for infrequently used applications, new applications, variable workloads, unpredictable workloads, development and test databases, and multitenant applications. This is because you can scale automatically when you need to and scale down when application demand is not high. This can help cut costs and save you the heartache of managing your own database infrastructure.

Amazon Aurora Serverless is easy to set up, either through the console or directly with the CLI. To create an Amazon Aurora Serverless cluster with the CLI, you can run the following command:

aws rds create-db-cluster --db-cluster-identifier sample-cluster --engine aurora --engine-version 5.6.10a \
--engine-mode serverless --scaling-configuration MinCapacity=4,MaxCapacity=32,SecondsUntilAutoPause=1000,AutoPause=true \
--master-username user-name --master-user-password password \
--db-subnet-group-name mysubnetgroup --vpc-security-group-ids sg-c7e5b0d2 --region us-east-1
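Although the cluster scales on its own between the MinCapacity and MaxCapacity values, you can also set the capacity explicitly and check it, as in this sketch using the same sample-cluster identifier:

aws rds modify-current-db-cluster-capacity --db-cluster-identifier sample-cluster --capacity 8
aws rds describe-db-clusters --db-cluster-identifier sample-cluster --query "DBClusters[0].Capacity"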

Amazon Aurora Serverless gives you many of the similar benefits as other serverless technologies, such as AWS Lambda, but from a database perspective. Managing databases is hard work, and with Amazon Aurora Serverless, you can utilize a database that automatically scales and you don’t have to manage any of the underlying infrastructure.

AWS Serverless Application Model

The AWS Serverless Application Model (AWS SAM) allows you to create and manage resources in your serverless application with AWS CloudFormation to define your serverless application infrastructure as a SAM template. A SAM template is a JSON or YAML configuration file that describes the AWS Lambda functions, API endpoints, tables, and other resources in your application. With simple commands, you upload this template to AWS CloudFormation, which creates the individual resources and groups them into an AWS CloudFormation stack for ease of management. When you update your AWS SAM template, you re-deploy the changes to this stack. AWS CloudFormation updates the individual resources for you.

AWS SAM is an extension of AWS CloudFormation: you can define any AWS CloudFormation resource in your AWS SAM template. This is a powerful feature, as you can use AWS SAM to create a template of your serverless infrastructure, which you can then build into a DevOps pipeline. For example, examine the following:

AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Description: 'Example of Multiple-Origin CORS using API Gateway and Lambda'
Resources:
  ExampleRoot:
    Type: 'AWS::Serverless::Function'
    Properties:
      CodeUri: '.'
      Handler: 'routes/root.handler'
      Runtime: 'nodejs8.10'
      Events:
        Get:
          Type: 'Api'
          Properties:
            Path: '/'
            Method: 'get'
  ExampleTest:
    Type: 'AWS::Serverless::Function'
    Properties:
      CodeUri: '.'
      Handler: 'routes/test.handler'
      Runtime: 'nodejs8.10'
      Events:
        Delete:
          Type: 'Api'
          Properties:
            Path: '/test'
            Method: 'delete'
        Options:
          Type: 'Api'
          Properties:
            Path: '/test'
            Method: 'options'
 
Outputs:
    ExampleApi:
      Description: "API Gateway endpoint URL for Prod stage for API Gateway Multi-Origin CORS Function"
      Value: !Sub "https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/"
    ExampleRoot:
      Description: "API Gateway Multi-Origin CORS Lambda Function (Root) ARN"
      Value: !GetAtt ExampleRoot.Arn
    ExampleRootIamRole:
      Description: "Implicit IAM Role created for API Gateway Multi-Origin CORS Function (Root)"
      Value: !GetAtt ExampleRootRole.Arn
    ExampleTest:
      Description: "API Gateway Multi-Origin CORS Lambda Function (Test) ARN"
      Value: !GetAtt ExampleTest.Arn
    ExampleTestIamRole:
      Description: "Implicit IAM Role created for API Gateway Multi-Origin CORS Function (Test)"
      Value: !GetAtt ExampleTestRole.Arn

In the previous code example, you create two AWS Lambda functions and then associate three different Amazon API Gateway endpoints to trigger those functions. To deploy this AWS SAM template, download the template and all of the necessary dependencies from here:

https://github.com/awslabs/serverless-application-model/tree/develop/examples/apps/api-gateway-multiple-origin-cors

AWS SAM is similar to AWS CloudFormation, with a few key differences, as shown in the second line:

Transform: 'AWS::Serverless-2016-10-31'

This important line of code transforms the AWS SAM template into an AWS CloudFormation template. Without it, the AWS SAM template will not work.

Similar to AWS CloudFormation, you have a Resources property where you define the infrastructure to provision. The difference is that you provision serverless services with a new Type called AWS::Serverless::Function. This type provisions an AWS Lambda function, and you define everything from an AWS Lambda point of view, including Properties such as MemorySize, Timeout, Role, Runtime, Handler, and others.

While you can create an AWS Lambda function with AWS CloudFormation using AWS::Lambda::Function, the benefit of AWS SAM lies in a property called Events, where you can tie a trigger to an AWS Lambda function, all from within the AWS::Serverless::Function resource. This Events property makes it simple to provision an AWS Lambda function and configure it with an Amazon API Gateway trigger. With plain AWS CloudFormation, you would have to declare the Amazon API Gateway pieces separately with resources such as AWS::ApiGateway::RestApi, AWS::ApiGateway::Resource, and AWS::ApiGateway::Method.

To summarize, AWS SAM allows you to provision serverless resources more rapidly with less code by extending AWS CloudFormation.
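To get a template like the previous one into AWS CloudFormation, the usual flow is to package the local code and then deploy the packaged template; the bucket and stack names below are placeholders:

# upload local code to Amazon S3 and rewrite CodeUri references in the template
sam package --template-file template.yaml --s3-bucket my-bucket-name --output-template-file packaged.yaml

# create or update the AWS CloudFormation stack from the packaged template
sam deploy --template-file packaged.yaml --stack-name example-serverless-app --capabilities CAPABILITY_IAM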

AWS SAM CLI

Now that we’ve addressed AWS SAM, let’s take a closer look at the AWS SAM CLI. With AWS SAM, you can define templates, in JSON or YAML, which are designed for provisioning serverless applications through AWS CloudFormation.

AWS SAM CLI is a command line interface tool that creates an environment in which you can develop, test, and analyze your serverless application, all locally. This allows you to test your AWS Lambda functions before uploading them to the AWS service. Previously, you would have had to upload your code each time you wanted to test an AWS Lambda function; with the AWS SAM CLI, you can iterate locally, develop faster, and get your application out the door more quickly.

To use AWS SAM CLI, you must meet a few prerequisites. You must install Docker, have Python 2.7 or 3.6 installed, have pip installed, install the AWS CLI, and finally install the AWS SAM CLI. You can read more about how to install AWS SAM CLI at https://github.com/awslabs/aws-sam-cli.

With AWS SAM CLI, you must define three key things.

  • You must have a valid AWS SAM template, which defines a serverless application.
  • You must have the AWS Lambda function defined. This can be in any valid language that Lambda currently supports, such as Node.js, Java 8, Python, and so on.
  • You must have an event source. An event source is simply an event.json file that contains all the data that the Lambda function expects to receive. Valid event sources are as follows:
    • Amazon Alexa
    • Amazon API Gateway
    • AWS Batch
    • AWS CloudFormation
    • Amazon CloudFront
    • AWS CodeCommit
    • AWS CodePipeline
    • Amazon Cognito
    • AWS Config
    • Amazon DynamoDB
    • Amazon Kinesis
    • Amazon Lex
    • Amazon Rekognition
    • Amazon Simple Storage Service (Amazon S3)
    • Amazon Simple Email Service (Amazon SES)
    • Amazon Simple Notification Service (Amazon SNS)
    • Amazon Simple Queue Service (Amazon SQS)
    • AWS Step Functions

To generate this JSON event source, you can simply run this command in the AWS SAM CLI:

sam local generate-event <service> <event>

AWS SAM CLI is a great tool that allows developers to iterate quickly on their serverless applications. You will learn how to create and test an AWS Lambda function locally in the “Exercises” section of this chapter.
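As a quick preview of the workflow covered in the exercises, a minimal local test loop might look like the following; the function logical ID ExampleRoot comes from the earlier AWS SAM template, and the generated event type is an assumption:

# generate a sample Amazon API Gateway proxy event and invoke the function locally in a Docker container
sam local generate-event apigateway aws-proxy > event.json
sam local invoke ExampleRoot --event event.json

# or run a local HTTP endpoint that emulates Amazon API Gateway for the whole template
sam local start-api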

AWS Serverless Application Repository

The AWS Serverless Application Repository enables you to deploy code samples, components, and complete applications quickly for common use cases, such as web and mobile backends, event and data processing, logging, monitoring, Internet of Things (IoT), and more. Each application is packaged with an AWS SAM template that defines the AWS resources. Publicly shared applications also include a link to the application’s source code. There is no additional charge to use the serverless application repository. You pay only for the AWS resources you use in the applications you deploy.

You can also use the serverless application repository to publish your own applications and share them within your team, across your organization, or with the community at large. This allows you to see what other people and organizations are developing.

Serverless Application Use Cases

Case studies on running serverless applications are located at the following URLs:

Summary

This chapter covered the AWS serverless core services, how to store your static files inside of Amazon S3, how to use Amazon CloudFront in conjunction with Amazon S3, how to integrate your application with user authentication flows using Amazon Cognito, and how to deploy and scale your API quickly and automatically with Amazon API Gateway.

Serverless applications have three main benefits: no server management, flexible scaling, and automated high availability. Without server management, you no longer have to provision or maintain servers. With AWS Lambda, you upload your code, run it, and focus on your application updates. With flexible scaling, you no longer have to disable Amazon EC2 instances to scale them vertically, configure Auto Scaling groups, or create Amazon CloudWatch alarms to add instances to load balancers. With AWS Lambda, you adjust the units of consumption (memory and execution time), and AWS allocates the underlying capacity appropriately. Finally, serverless applications have built-in availability and fault tolerance. When periods of low traffic occur, you do not spend money on Amazon EC2 instances that do not run at their full capacity.

You can use an Amazon S3 web server to create your presentation tier. Within an Amazon S3 bucket, you can store HTML, CSS, and JavaScript files. JavaScript can create HTTP requests. These HTTP requests are sent to a REST endpoint service called Amazon API Gateway, which allows the application to save and retrieve data dynamically by triggering a Lambda function.

After you create your Amazon S3 bucket, you configure it to use static website hosting in the AWS Management Console and enter an endpoint that reflects your AWS Region.

Amazon S3 allows you to configure web traffic logs to capture information, such as the number of visitors who access your website in the Amazon S3 bucket.

One way to decrease latency and improve your performance is to use Amazon CloudFront with Amazon S3 to move your content closer to your end users. Amazon CloudFront is a serverless service.

The Amazon API Gateway is a fully managed service designed to define, deploy, and maintain APIs. Clients integrate with the APIs using standard HTTPS requests. Amazon API Gateway can integrate with a service-oriented multitier architecture. The Amazon API Gateway provides dynamic data in the logic or app tier.

There are three types of endpoints for Amazon API Gateway: regional endpoints, edge-optimized endpoints, and private endpoints.

In the Amazon API Gateway service, you expose addressable resources as a tree of API Resources entities, with the root resource (/) at the top of the hierarchy. The root resource is relative to the API’s base URL, which consists of the API endpoint and a stage name.

You use Amazon API Gateway to help drive down the total response-time latency of your API. Amazon API Gateway uses the HTTP protocol to process these HTTP methods and to send data to and receive data from the backend. In a serverless application, the data is sent to AWS Lambda for processing.

You can use Amazon Route 53 to create a more user-friendly domain name instead of using the default host name (Amazon S3 endpoint). To support two subdomains, you create two Amazon S3 buckets that match your domain name and subdomain.

A stage is a named reference to a deployment, which is a snapshot of the API. Use a stage to manage and optimize a particular deployment. You create stages for each of your environments such as DEV, TEST, and PROD, so you can develop and update your API and applications without affecting production. Use Amazon API Gateway to set up authorizers with Amazon Cognito user pools on an AWS Lambda function. This enables you to secure your APIs.

An Amazon Cognito user pool includes a prebuilt user interface (UI) that you can use inside your application to build a user authentication flow quickly. A user pool is a user directory in Amazon Cognito. With a user pool, your users can sign in to your web or mobile app through Amazon Cognito. Users can also sign in through social identity providers such as Facebook or Amazon and through Security Assertion Markup Language (SAML) identity providers.

Amazon Cognito identity pools allow you to create unique identities and assign permissions for your users to help you integrate with authentication providers. With the combination of user pools and identity pools, you can create a serverless user authentication system.

You can choose how users sign in with a username, an email address, and/or a phone number and to select attributes. Attributes are properties that you want to store about your end users. You can also configure password policies. Multi-factor authentication (MFA) prevents anyone from signing in to a system without authenticating through two different sources, such as a password and a mobile device–generated token. You create an Amazon Cognito role to send Short Message Service (SMS) messages to users.

The AWS Serverless Application Model (AWS SAM) allows you to create and manage resources in your serverless application with AWS CloudFormation as a SAM template. A SAM template is a JSON or YAML file that describes the AWS Lambda function, API endpoints, and other resources. You upload the template to AWS CloudFormation to create a stack. When you update your AWS SAM template, you redeploy the changes to this stack, and AWS CloudFormation updates the resources. You can use AWS SAM to create a template of your serverless infrastructure, which you can then build into a DevOps pipeline.

The Transform: 'AWS::Serverless-2016-10-31' code converts the AWS SAM template into an AWS CloudFormation template.

The AWS Serverless Application Repository enables you to deploy code samples, components, and complete applications for common use cases. Each application is packaged with an AWS SAM template that defines the AWS resources.

Additionally, you learned the differences between the standard three-tier web applications and the AWS serverless stack. You learned how to build your infrastructure quickly with AWS SAM and AWS SAM CLI for testing and development purposes.

Exam Essentials

Know serverless applications’ three main benefits. The benefits are as follows:

  • No server management
  • Flexible scaling
  • Automated high availability

Know what no server management means. Without server management, you no longer have to provision or maintain servers. With AWS Lambda, you upload your code, run it, and focus on your application updates.

Know what flexible scaling means. With flexible scaling, you no longer have to disable Amazon Elastic Compute Cloud (Amazon EC2) instances to scale them vertically, groups do not need to be auto-scaled, and you do not need to create Amazon CloudWatch alarms to add them to load balancers. With AWS Lambda, you adjust the units of consumption (memory and execution time), and AWS adjusts the rest of the instances appropriately.

Know what automated high availability means. Serverless applications have built-in availability and fault tolerance. You do not need to architect for these capabilities, as the services that run the application provide them by default. Additionally, when periods of low traffic occur on the web application, you do not spend money on Amazon EC2 instances that do not run at their full capacity.

Know what services are serverless. On the exam, it is important to understand which Amazon services are serverless and which ones are not. The following services are serverless:

  • Amazon API Gateway
  • AWS Lambda
  • Amazon SQS
  • Amazon SNS
  • Amazon Kinesis
  • Amazon Cognito
  • Amazon Aurora Serverless
  • Amazon S3

Know how to host a serverless web application. Hosting a serverless application means that you need Amazon S3 to host your static website, which comprises your HTML, JavaScript, and CSS files. For your database infrastructure, you can use Amazon DynamoDB or Amazon Aurora Serverless. For your business logic tier, you can use AWS Lambda. For DNS services, you can utilize Amazon Route 53. If you need the ability to host an API, you can use Amazon API Gateway. Finally, if you need to decrease latency to portions of your application, you can utilize services like Amazon CloudFront, which allows you to host your content at the edge.

Resources to Review

Exercises

For this “Exercises” section, expand the OpenPets API Template that comes with Amazon API Gateway and build a frontend with HTML and JavaScript. You use AWS Lambda for some compute processing to save data to an Amazon DynamoDB database.

Exercise 13.1

Create an Amazon S3 Bucket for the Swagger Template

In this exercise, you use an AWS SAM template and a Swagger template to deploy your infrastructure. You will need to create an Amazon S3 bucket for the Swagger file.

  1. Create an Amazon S3 bucket.
    aws s3 mb s3://my-bucket-name --region us-east-1
     

    If the command was successful, you should see output similar to the following, which means the bucket has been created:

    make_bucket: my-bucket-name
  2. Upload the Swagger template.
    aws s3 cp petstore-api-swagger.yaml s3://my-bucket-name/petstore-api-swagger.yaml

    If the file was successfully uploaded, you should be able to navigate to the Amazon S3 bucket and see it. This file is for the Swagger template, and it is used to create the REST API inside the Amazon API Gateway. You have not yet deployed the API.

  3. Use AWS SAM to deploy your serverless infrastructure. To package your SAM template, run the following command:
    aws cloudformation package \
        --template-file ./petStoreSAM.yaml \
        --s3-bucket my-bucket-name \
        --output-template-file petStoreSAM-output.yaml \
        --region us-east-1

    If the command was successful, you should see that the file has been uploaded, and a new file called petStoreSAM-output.yaml has been created locally. You have packaged the AWS SAM template and converted it to a full AWS CloudFormation template. You will use this template in the next step to deploy the package to the Amazon API Gateway.

  4. Deploy the package.
    aws cloudformation deploy \
        --template-file ./petStoreSAM-output.yaml \
        --stack-name petStoreStack \
        --capabilities CAPABILITY_IAM \
        --parameter-overrides S3BucketName=s3://my-bucket-name/petstore-api-swagger.yaml \
        --region us-east-1

    If the command was successful, you should see that the AWS CloudFormation stack is being deployed. While it is in the process of deploying the resources, you will see something similar to the following:

    Waiting for stack create/update to complete
    

    This may take a few minutes. When it is finished deploying, the console displays the following message:

    Successfully created/updated stack - petStoreStack

    You have now successfully deployed the AWS CloudFormation stack, and you can view the resources it created inside the AWS Management Console under the AWS CloudFormation service.

  5. After the stack is created, run the following command and write the results down for subsequent steps:
    aws cloudformation describe-stacks --stack-name petStoreStack --region us-east-1 --query 'Stacks[0].Outputs[0].{PetStoreAPI:OutputValue}'

    After running this command, the URL for the API is returned. Navigate to this URL to view the default page returned by the PetStore API. You will be changing this in the next exercise.

You have successfully completed the first exercise, created your AWS SAM template, and deployed it using AWS CloudFormation. Now your Amazon API Gateway is active, and you have the URL for accessing it.

Exercise 13.2

Edit the HTML Files

In steps 1 through 5, you are going to update the URL inside your .html files to point to the Amazon API Gateway stage that you have created. You do this so that your web application (.html files) knows the endpoint to which to send your pet data.

  1. Open index.html in the project folder and locate line 68 to find the variable named api_gw_endpoint. Input the value you retrieved from the previous command in Exercise 13.1.
    var api_gw_endpoint = "https://cdvhqasdfnk444fe.execute-api.us-east-1.amazonaws.com/PetStoreProd/"
  2. Open pets.html.
  3. Input the value you received from the last command on line 96, and add /pets to the end of the string:
    var api_gw_endpoint = "https://cdvhqasdfnk444fe.execute-api.us-east-1.amazonaws.com/PetStoreProd/pets"
  4. Open add-pet.html.
  5. Input the value you received from the last command on line 87, and add /pets to the end.
    var api_gw_endpoint = "https://cdvhqasdfnk444fe.execute-api.us-east-1.amazonaws.com/PetStoreProd/pets"
  6. Create a new Amazon S3 bucket for your website.
    aws s3 mb s3://my-bucket-name --region us-east-1
  7. Copy the project files to the website.
    aws s3 cp . s3://my-bucket-name --recursive
    aws s3 rm s3://my-bucket-name/sam --recursive

    Here you are uploading all the files from your project folder to Amazon S3 and then removing the SAM template from the bucket. You do not want others to have access to your template files and AWS Lambda functions. You want others to have access only to the end application.

  8. Change the Amazon S3 bucket name inside the policy.json file to your bucket name. This will be on line 12.
  9. Enable public read access for the bucket:
    aws s3api put-bucket-policy --bucket my-bucket-name --policy file://policy.json
    

    If successful, this command will not return any information. You are enabling the Amazon S3 bucket to be publicly accessible, meaning that everyone can access your website.

  10. Enable the static website.
    aws s3 website s3://my-bucket-name/ --index-document index.html --error-document index.html

    The Amazon S3 bucket now acts as a web server and is running your pet store application.

  11. Navigate to the website.
    http://my-bucket-name.s3-website-us-east-1.amazonaws.com/index.html
  12. Navigate to Amazon API Gateway, AWS Lambda, Amazon DynamoDB, and the AWS SAM template to view the configuration.

    Now that the application has been deployed, you can view all the individual components inside the AWS Management Console.

    Inside Amazon API Gateway, you should see the PetStoreAPIGW. If you review the resources, you will see the various HTTP methods that you are allowing for your API.

    In AWS Lambda, two functions were created: savePet for saving pets to Amazon DynamoDB and getPets for retrieving pets stored in Amazon DynamoDB.

    In Amazon DynamoDB, you should have a table called PetStore. You can view the items in this table, though by default there should be none. After you create your first pet, however, you will be able to see some items in the table.

    You can view the AWS SAM template and the AWS CloudFormation stack to see exactly how each of these resources was created.

Warning: With YAML, indentation is extremely important, and tab characters are not allowed for indentation. Make sure that you have a valid YAML template. There are a variety of tools that you can use to validate YAML syntax. You can use the following websites to validate the YAML:

https://codebeautify.org/yaml-validator

http://www.yamllint.com/

If you want to perform client-side validation and not use a website, a number of IDEs support YAML validation. Refer to your IDE documentation to check for YAML support.
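
If you prefer a command-line check, the AWS SAM CLI (installed in the next exercise) can validate a SAM template locally; this is a sketch that assumes the sam executable is on your PATH and uses the template file name from Exercise 13.1:

    # Validate the SAM template before packaging and deploying it
    sam validate --template petStoreSAM.yaml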

Exercise 13.3

Define an AWS SAM Template

In this exercise, you will develop an AWS Lambda function locally and then test that Lambda function using the AWS SAM CLI. To perform this exercise successfully, you must have AWS SAM CLI installed. For information on how to install the AWS SAM CLI, review the following documentation: https://github.com/awslabs/aws-sam-cli. The following steps assume that you have a working AWS SAM CLI installation.
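
As a quick sanity check (a minimal sketch, assuming the sam executable is already on your PATH), you can confirm the installation before you begin:

    # Confirm that the AWS SAM CLI is installed and print its version
    sam --version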

  1. Once you have installed AWS SAM CLI, open your favorite integrated development environment (IDE) and define an AWS SAM template.
  2. Enter the following in your template file:
    AWSTemplateFormatVersion: '2010-09-09'
    Transform: AWS::Serverless-2016-10-31
     
    Description: Welcome to the Pet Store Demo
     
    Resources:
      PetStore:
        Type: AWS::Serverless::Function
        Properties:
          Runtime: nodejs8.10
          Handler: index.handler
  3. Save the file as template.yaml.

You have created the SAM template and saved the file locally. In subsequent exercises, you will use this information to execute an AWS Lambda function.

Exercise 13.4

Define an AWS Lambda Function Locally

Now that you have a valid SAM template, you can define your AWS Lambda function locally. In this example, you use Node.js 8.10, but you can use any AWS Lambda–supported language.

  1. Open your favorite IDE, and type the following Node.js code:
    'use strict';
     
    //A simple Lambda function
    exports.handler = (event, context, callback) => {
     
        console.log('This is our local lambda function');
        console.log('Creating a PetStore service');
        callback(null, "Hello " + event.Records[0].dynamodb.NewImage.Message.S + "! What kind of pet are you interested in?");
    }
  2. Save the file as index.js.

You have two files: an index.js and the SAM template. In the next exercise, you will generate an event source that will be used as the trigger for the AWS Lambda function.

Exercise 13.5

Generate an Event Source

Now that you have a valid SAM template and a valid AWS Lambda Node.js 8.10 function, you can generate an event source.

  1. Inside your terminal, type the following to generate an event source:
    sam local generate-event dynamodb update > event.json

    This will generate an Amazon DynamoDB update event. For a list of all of the event sources, type the following:

    sam local generate-event --help
  2. Modify the event source JSON file (event.json). On line 17, change New Item! to your first and last names.
    "S": "John Smith"

You have now configured the three pieces that you need: the AWS SAM template, the AWS Lambda function, and the event source. In the next exercise, you will be able to run the AWS Lambda function locally.

Exercise 13.6

Run the AWS Lambda Function

Trigger and execute the AWS Lambda function.

  1. In your terminal, type the following to execute the AWS Lambda function:
    sam local invoke "PetStore" -e event.json

    You will see the following message:

    Hello Casey Gerena! What kind of pet are you interested in?

    The AWS Lambda Docker image is downloaded to your local environment, and event.json serves as the event source data that is passed to the AWS Lambda function. Inside the AWS SAM template, you gave this function the name PetStore; however, you can define as many functions as you need to build your application.

Exercise 13.7

Modify the AWS SAM Template to Include an API Locally

To make your pet store into an API, modify the template.yaml.

  1. Open the template.yaml file, and modify it to look like the following:
    AWSTemplateFormatVersion: '2010-09-09'
    Transform: AWS::Serverless-2016-10-31
     
    Description: Welcome to the Pet Store Demo
     
    Resources:
      PetStore:
        Type: AWS::Serverless::Function
        Properties:
          Runtime: nodejs8.10
          Handler: index.handler
          Events:
            PetStore:
              Type: Api
              Properties:
                Path: /
                Method: any
  2. Save the template.yaml file.

You have modified the AWS SAM template to connect an Amazon API Gateway event for any method (GET, POST, and so on) to the AWS Lambda function. In the next exercise, you will modify the AWS Lambda function to work with the API.

Exercise 13.8

Modify Your AWS Lambda Function for the API

After you have defined an API, modify your AWS Lambda function.

  1. Open the index.js file, and make the following changes:
    'use strict';
    
    //A simple Lambda function
    exports.handler = (event, context, callback) => {
     
        console.log('DEBUG: This is our local lambda function');
        console.log('DEBUG: Creating a PetStore service');
     
        callback(null, {
            statusCode: 200,
            headers: { "x-petstore-custom-header": "custom header from petstore service" },
            body: '{"message": "Hello! Welcome to the PetStore. What kind of Pet are you interested in?"}'
        })
     
    }
  2. Save the index.js file.

You have modified the AWS Lambda function to respond to an API REST request. However, you have not actually executed anything—you will do that in the next exercise.

Exercise 13.9

Run Amazon API Gateway Locally

Now that you have everything defined, run Amazon API Gateway locally.

  1. Open a terminal and type the following:
    sam local start-api

    You will see output that looks like the following. Take note of the URL.

    2018-10-11 23:05:25 Mounting PetStore at http://127.0.0.1:3000/hello [GET]
    2018-10-11 23:05:25 You can now browse to the above endpoints to invoke your functions. You do not need to restart/reload SAM CLI while working on your functions, changes will be reflected instantly/automatically. You only need to restart SAM CLI if you update your AWS SAM template
    2018-10-11 23:05:25  * Running on http://127.0.0.1:3000/ (Press CTRL+C to quit)
  2. Open a web browser, and navigate to the URL from the previous output. You will see the following message:
    Message: "Hello! Welcome to the PetStore. What kind of Pet are you interested in?"

    When you navigate to the URL, the local API Gateway forwards the request to AWS Lambda, which is also running locally and is provided by index.js. You can also test the endpoint from the command line, as shown in the sketch below. You can now build serverless applications locally. When you are ready to deploy to a development or production environment, deploy the serverless application to the AWS Cloud with AWS SAM. This allows developers to iterate through their code quickly and make improvements locally.
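
    For example, assuming the curl command is available on your machine, you can send a request to the local endpoint and inspect the response headers, including the custom x-petstore-custom-header set by the function:

    # Call the locally hosted API and show the response headers and body
    curl -i http://127.0.0.1:3000/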

Review Questions

  1. Which templating engine can you use to deploy infrastructure inside of AWS that is built for serverless technologies?

    1. AWS CloudFormation
    2. Ansible
    3. AWS OpsWorks for Chef Automate
    4. AWS Serverless Application Model (AWS SAM)
  2. What option do you need to enable to call Amazon API Gateway from another server or service?

    1. You do not need to enable any options. Amazon API Gateway is ready to use as soon as it’s deployed.
    2. Enable cross-origin resource sharing (CORS).
    3. Deploy a stage.
    4. Deploy a resource.
  3. A company is considering moving to the AWS serverless stack. What are two benefits of serverless stacks? (Select TWO.)

    1. No server management
    2. It costs less than Amazon Elastic Compute Cloud (Amazon EC2).
    3. Flexible scaling
    4. There are no benefits to serverless stacks.
  4. Can you create HTTP endpoints with Amazon API Gateway?

    1. Yes. You can create HTTP endpoints with Amazon API Gateway.
    2. No. API Gateway creates FTP endpoints.
    3. No. API Gateway only supports SSH endpoints.
    4. No. API Gateway is a secure service that only supports HTTPS.
  5. A company is moving to a serverless application, using Amazon Simple Storage Service (Amazon S3), AWS Lambda, and Amazon DynamoDB. They are currently using Amazon CloudFront as their content delivery network (CDN). They are concerned that they can no longer use Amazon CloudFront because they will have no Amazon Elastic Compute Cloud (Amazon EC2) instances running. Is their concern valid?

    1. Their concerns are valid: Amazon CloudFront only supports Amazon EC2.
    2. Their concerns are valid because all serverless applications are fully dynamic and contain no static information; thus, Amazon CloudFront does not support serverless applications.
    3. Their concerns are not valid. Amazon CloudFront supports serverless applications.
    4. Their concerns are valid. Amazon CloudFront does support serverless applications; however, it does not support Amazon S3.
  6. Amazon Cognito Mobile SDK does not support which language/platform?

    1. iOS
    2. Android
    3. JavaScript
    4. All of these languages/platforms are supported.
  7. Does Amazon Cognito support Short Message Service (SMS)–based multi-factor authentication (MFA)?

    1. No. Amazon Cognito does not support SMS-based MFA.
    2. No. Amazon Cognito does not support SMS-based MFA; however, it does support MFA.
    3. Yes. Amazon Cognito does support SMS-based MFA.
    4. None of the above.
  8. Does Amazon Cognito support device tracking and remembering?

    1. Amazon Cognito does not support device tracking and remembering.
    2. Amazon Cognito supports device tracking but not remembering.
    3. Amazon Cognito supports device remembering but not tracking.
    4. Amazon Cognito supports device remembering and tracking.
  9. What is the property name that you use to connect an AWS Lambda function to the Amazon API Gateway inside of an AWS Serverless Application Model (AWS SAM) template?

    1. events
    2. handler
    3. context
    4. runtime
  10. A company wants to use a serverless application to run its dynamic website that is currently running on Amazon Elastic Compute Cloud (Amazon EC2) and Elastic Load Balancing (ELB). Currently, the application uses HTML, CSS, and React, and the database is a NoSQL flavor. You are the advisor—is this possible?

    1. No. This is not possible, because there is no way to run React in AWS. React is a Facebook technology.
    2. No. This is not possible, because you need an Amazon EC2 to run the web server.
    3. No. This is not possible, because there is no way to load balance a serverless application.
    4. Yes. This is possible; however, some refactoring will be required.