Chapter 12
Serverless Compute

THE AWS CERTIFIED DEVELOPER – ASSOCIATE EXAM TOPICS COVERED IN THIS CHAPTER MAY INCLUDE, BUT ARE NOT LIMITED TO, THE FOLLOWING:

  • Domain 1: Deployment
  • ✓ 1.3 Prepare the application deployment package to be deployed to AWS.
  • ✓ 1.4 Deploy serverless applications.
  • Domain 2: Security
  • ✓ 2.1 Make authenticated calls to AWS Services.
  • Domain 3: Development with AWS Services
  • ✓ 3.1 Write code for serverless applications.
  • ✓ 3.4 Write code that interacts with AWS services by using APIs, SDKs, and AWS CLI.
  • Domain 5: Monitoring and Troubleshooting
  • ✓ 5.1 Write code that can be monitored.
  • ✓ 5.2 Perform root cause analysis on faults found in testing or production.

Introduction to Serverless Compute

Serverless compute is a cloud computing execution model in which the AWS Cloud acts as the server and dynamically manages the allocation of machine resources. AWS bases the price on the amount of resources the application consumes rather than on prepurchased units of capacity.

For decades, people used local computers and on-premises servers to interpret, process, and execute code, and they encountered relatively few serious issues running powerful web and data processing applications on those servers. However, this model has its problems.

The first issue with running servers is that you have to purchase them, which is a costly endeavor depending on the number of servers that your project requires. Servers also depreciate and become obsolete, which necessitates replacing them.

Second, you must patch servers on a frequent and consistent basis to prevent security exploits. Servers require time-consuming maintenance to extend their longevity, and they may experience hardware failures that you must diagnose and repair. All of this consumes both time and money that could be spent on other efforts, such as improving applications.

Third, the needs of the users change over time. When an application first releases, it is not frequently accessed, and infrastructure needs are minimal. Over time, the application grows, and the infrastructure must also grow to accommodate it. This requires more servers, more maintenance, and more hardware costs. It also requires more time, as adding new servers to your data center can take several weeks or months.

AWS Lambda

AWS Lambda is the AWS serverless compute platform that enables you to run code without provisioning or managing servers. With AWS Lambda, you can run code for nearly any type of application or backend service—with zero administration. Simply upload your code, and AWS Lambda performs all of the tasks required to run and scale your code with high availability. You can configure code to trigger automatically from other AWS services, or call it directly from any web or mobile app. AWS Lambda is sometimes referred to as a function-as-a-service (FaaS). AWS Lambda executes code whenever the function is triggered, and no Amazon Elastic Compute Cloud (Amazon EC2) instances need to be spun up in your infrastructure.

AWS Lambda offers several key benefits over Amazon EC2. First, there are no servers to manage. You are no longer responsible for provisioning or managing servers, patching servers, or worrying about high availability.

Second, you do not have to concern yourself with scaling. AWS Lambda automatically scales your application by running code in response to each trigger. Your code runs in parallel and processes each trigger individually, scaling precisely with the size of the workload.

Third, when you run Amazon EC2 instances, you are responsible for costs associated with the instance runtime. It does not matter whether your site receives little to no traffic—if the server is running, there are costs. With AWS Lambda, if no one executes the function or if the function is not triggered, no charges are incurred.

With the use of AWS Lambda and other AWS services, you can begin to decouple the application, which allows you to improve your ability to both scale horizontally and create asynchronous systems.

Where Did the Servers Go?

Serverless computing still requires servers, but the server management and capacity planning decisions are hidden from the developer or operator. You can use serverless code with code you deploy in traditional styles, such as microservices. Alternatively, you can write applications to be purely serverless with no provisioned servers.

AWS Lambda uses containerization to run your code. When your function is triggered, AWS Lambda creates a container. Then your code executes and returns the result to your application or service. If a container must be created for the first invocation, AWS refers to this as a cold start. After the container starts to run, it remains active for several minutes before it terminates. If an invocation runs on a container that is already available, that invocation runs on a warm container.

By default, AWS Lambda runs containers inside the AWS environment, and not within your personal AWS account. However, you can also run AWS Lambda inside your Amazon Virtual Private Cloud (Amazon VPC). Figure 12.1 shows the execution flow process.


Figure 12.1 AWS Lambda execution flow

Monolithic vs. Microservices Architecture

Microservices are an architectural and organizational approach to software development whereby software is composed of small independent services that communicate over well-defined application programming interfaces (APIs). Small, self-contained teams own these services.

Historically, applications have been developed as monolithic architectures. With monolithic architectures, all processes are tightly coupled and run as a single service. If one process of the application experiences a spike in demand, you have to scale the entire architecture. Adding or improving a monolithic application's features becomes more complicated as the code base grows. This complexity limits experimentation and makes it difficult to implement new ideas. Monolithic architectures also increase the risk to application availability, as many dependent and tightly coupled processes increase the impact of a single process failure.

Microservices are more agile, and scaling is more flexible than with monolithic applications. You can deploy new portions of your code faster and more easily. With AWS Lambda and other services, you can begin to create microservices for your application.

AWS Lambda Functions

This section discusses how to use AWS Lambda to execute functions, such as how to create, secure, trigger, debug, monitor, improve, and test AWS Lambda functions.

Languages AWS Lambda Supports

AWS Lambda functions currently support the following languages:

  • C# (.NET Core 1.0)
  • C# (.NET Core 2.0)
  • Go 1.x
  • Java 8
  • Node.js 4.3
  • Node.js 6.10
  • Node.js 8.10
  • Python 2.7
  • Python 3.6

Creating an AWS Lambda Function

You can use any of the following methods to access AWS services and create an AWS Lambda function that will call an AWS service:

  • AWS Management Console—graphical user interface (GUI)
  • AWS command line interface (AWS CLI)—Linux Shell and Windows PowerShell
  • AWS Software Development Kit (AWS SDK)—Java, .NET, Node.js, PHP, Python, Ruby, Go, Browser, and C++
  • AWS application programming interface (API)—send HTTP/HTTPS requests manually using API endpoints

Note: In this chapter, you will create an AWS Lambda function and its properties with the AWS Management Console. In the “Exercises” section, you will use the AWS CLI and the Python SDK for the AWS Lambda function.

Launch the AWS Management Console, and select the AWS Lambda service under the Compute section (see Figure 12.2).


Figure 12.2 AWS Management Console

When you create an AWS Lambda function, there are three options:

Author from scratch Manually create all settings and options.

Blueprints Select a preconfigured template that you can modify.

Serverless application repository Deploy a publicly shared application with the AWS Serverless Application Model (AWS SAM).

Tip: There is no charge for this repository; it is where you deploy a prebuilt application and then modify it.

When authoring from scratch, you must provide three details to create an AWS Lambda function:

  • Name—name of the AWS Lambda function
  • Runtime—language in which the AWS Lambda function is written
  • Role—permissions of your functions

After you name the function and select a runtime language, you define an AWS Identity and Access Management (IAM) role.

Execution Methods/Invocation Models

There are two invocation models for AWS Lambda.

  • Nonstreaming Event Source (Push Model)—Amazon Echo, Amazon Simple Storage Service (Amazon S3), Amazon Simple Notification Service (Amazon SNS), and Amazon Cognito
  • Streaming Event Source (Pull Model)—Amazon Kinesis or Amazon DynamoDB stream

Additionally, you can execute an AWS Lambda function synchronously or asynchronously. The InvocationType parameter determines when to invoke an AWS Lambda function. This parameter has three possible values:

  • RequestResponse—Execute synchronously.
  • Event—Execute asynchronously.
  • DryRun—Verify that the caller is permitted to invoke the function, but do not execute it.
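The three InvocationType values can be exercised through the SDK. The following is a minimal boto3 sketch, not a definitive implementation; the function name and payload are hypothetical:

```python
import json

def build_invoke_params(function_name, payload, invocation_type="RequestResponse"):
    """Build the parameters for a Lambda Invoke API call.

    invocation_type is one of "RequestResponse" (synchronous),
    "Event" (asynchronous), or "DryRun" (verify permissions only).
    """
    assert invocation_type in ("RequestResponse", "Event", "DryRun")
    return {
        "FunctionName": function_name,
        "InvocationType": invocation_type,
        "Payload": json.dumps(payload),
    }

def invoke(function_name, payload, invocation_type="RequestResponse"):
    # boto3 is imported lazily so the parameter builder above works offline.
    import boto3
    client = boto3.client("lambda")
    return client.invoke(**build_invoke_params(function_name, payload, invocation_type))
```

For an asynchronous call, pass invocation_type="Event"; the response then carries only a status code rather than the function's return value.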

With an event source (push model), a service such as Amazon S3 invokes the AWS Lambda function each time an event occurs with the bucket you specify.

Figure 12.3 illustrates the push model flow.

  1. You create an object in a bucket.
  2. Amazon S3 detects the object-created event.
  3. Amazon S3 invokes your AWS Lambda function according to the event source mapping in the bucket notification configuration.
  4. AWS Lambda verifies the permissions policy attached to the AWS Lambda function to ensure that Amazon S3 has the necessary permissions.
  5. AWS Lambda executes the AWS Lambda function, and the AWS Lambda function receives the event as a parameter.

Figure 12.3 Amazon S3 push model
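In step 5 above, the function receives the event as a parameter. A minimal Python handler that extracts the bucket and object key from a standard S3 object-created event might look like this (a sketch; the bucket and key names in the test are hypothetical):

```python
def lambda_handler(event, context):
    """Extract (bucket, key) pairs from an Amazon S3 event notification."""
    results = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        results.append((bucket, key))
    return results
```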

With a pull model invocation, AWS Lambda polls a stream and invokes the function upon detection of a new record on the stream. Amazon Kinesis uses the pull model.

Figure 12.4 illustrates the sequence for a pull model.

  1. A custom application writes records to an Amazon Kinesis stream.
  2. AWS Lambda continuously polls the stream and invokes the AWS Lambda function when the service detects new records on the stream. AWS Lambda knows which stream to poll and which AWS Lambda function to invoke based on the event source mapping you create in AWS Lambda.
  3. Assuming that the attached permissions policy, which allows AWS Lambda to poll the stream, is verified, AWS Lambda then executes the function.

Figure 12.4 Amazon Kinesis pull model

Figure 12.4 uses an Amazon Kinesis stream, but the same principle applies when you work with an Amazon DynamoDB stream.

The final way to invoke an AWS Lambda function applies to custom applications that use the RequestResponse invocation type. With this invocation method, AWS Lambda executes the function synchronously, returns the response immediately to the calling application, and tells you whether the invocation succeeded.

Your application creates an HTTP POST request to pass the necessary parameters and invoke the function. To use this invocation model, set RequestResponse in the X-Amz-Invocation-Type HTTP header.

Securing AWS Lambda Functions

AWS Lambda functions include two types of permissions.

Execution permissions enable the AWS Lambda function to access other AWS resources in your account. For example, if the AWS Lambda function needs access to Amazon S3 objects, you grant permissions through an AWS IAM role that AWS Lambda refers to as an execution role.

Invocation permissions are the permissions that an event source needs to communicate with your AWS Lambda function. Depending on the invocation model (push or pull), you can either update the access policy you associate with your AWS Lambda function (push) or update the execution role (pull).

AWS Lambda provides the following AWS permissions policies:

LambdaBasicExecutionRole Grants permissions only for the Amazon CloudWatch Logs actions to write logs. Use this policy if your AWS Lambda function does not access any other AWS resources except to write logs.

LambdaKinesisExecutionRole Grants permissions for Amazon Kinesis data stream and Amazon CloudWatch log actions. If you are writing an AWS Lambda function to process Amazon Kinesis stream events, attach this permissions policy.

LambdaDynamoDBExecutionRole Grants permissions for Amazon DynamoDB stream and Amazon CloudWatch log actions. If you are writing an AWS Lambda function to process Amazon DynamoDB stream events, attach this permissions policy.

LambdaVPCAccessExecutionRole Grants permissions for Amazon EC2 actions to manage elastic network interfaces. If you are writing an AWS Lambda function to access resources inside the Amazon VPC service, attach this permissions policy. The policy also grants permissions for Amazon CloudWatch log actions to write logs.
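If you create an execution role programmatically rather than in the console, the role needs a trust policy that allows the Lambda service to assume it, plus one of the managed policies above. The following boto3 sketch attaches the managed AWSLambdaBasicExecutionRole policy; the role name is hypothetical:

```python
import json

# Trust policy that lets the AWS Lambda service assume the role.
LAMBDA_TRUST_POLICY = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

def create_execution_role(role_name):
    # boto3 is imported lazily so the policy document stays testable offline.
    import boto3
    iam = boto3.client("iam")
    role = iam.create_role(
        RoleName=role_name,
        AssumeRolePolicyDocument=json.dumps(LAMBDA_TRUST_POLICY),
    )
    iam.attach_role_policy(
        RoleName=role_name,
        PolicyArn="arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole",
    )
    return role["Role"]["Arn"]
```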

Inside the AWS Lambda Function

The primary purpose of AWS Lambda is to execute your code. You can use any libraries, artifacts, or compiled native binaries that execute on top of the runtime environment as part of your function code package. Because the runtime environment is a Linux-based Amazon Machine Image (AMI), always compile and test your components within the matching environment. To accomplish this, use the AWS Serverless Application Model (AWS SAM) CLI to test AWS Lambda functions locally (https://github.com/awslabs/aws-sam-cli).

Function Package

Two parts of the AWS Lambda function are considered critical: the function package and the function handler. The function code package contains everything you need to be available locally when your function is executed. At minimum, it contains your code for the function itself, but it may also contain other assets or files that your code references upon execution. This includes binaries, imports, or configuration files that your code/function needs. The maximum size of a function code package is 50 MB compressed and 250 MB extracted/decompressed.

You can create the AWS Lambda function with the AWS Management Console, the AWS CLI, or an AWS SDK, all of which ultimately call the CreateFunction API.

Use the AWS CLI to create a function with the commands, as shown here:

aws lambda create-function \
--region us-east-2 \
--function-name MyCLITestFunction \
--role arn:aws:iam::account-id:role/role_name \
--runtime python3.6 \
--handler MyCLITestFunction.my_handler \
--zip-file fileb://path/to/function/file.zip

Function Handler

When the AWS Lambda function is invoked, the code execution begins at the handler. The handler is a method inside the AWS Lambda function that you create and include in your package. The handler syntax depends on the language you use for the AWS Lambda function.

For Python, the handler is written as follows:

def lambda_handler(event, context):
        return "My First AWS Lambda Function"

For Java, it is written as follows:

public String handlerName(MyEvent event, Context context) {
        return "My First AWS Lambda Function";
    }

For Node.js, it is written as follows:

exports.handlerName = function(event, context, callback) {
        callback(null, "My First AWS Lambda Function");
    }

And for C#, it is written as follows:

public string HandlerName(MyEvent input, ILambdaContext context) {
        return "My First AWS Lambda Function";
    }

When the handler is specified and invoked, the code inside the handler executes. Your code can call other methods and functions within other files and classes that you store in the ZIP archive. The handler function can interact with other AWS services and make third-party API requests to web services that it might need to interact with.

Event Object

You pass event objects into the handler function. For example, the Python handler is written as follows:

def lambda_handler(event, context):
        return "My First AWS Lambda Function"

The first object you pass is the event object. The event object includes all the data and metadata that your AWS Lambda function needs to implement its logic.

Tip: If you use the Amazon API Gateway service with an AWS Lambda function, the event object contains details of the HTTPS request made by the API client. Values such as the path, query string, and request body are inside the event object. The event object contains different data depending on the source that creates it; for example, Amazon S3 places different values inside the event object than Amazon API Gateway does.
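To illustrate, a handler might read the path and query string from an Amazon API Gateway proxy event. This is a sketch; the path and parameter names are hypothetical:

```python
def lambda_handler(event, context):
    """Read request details from an Amazon API Gateway proxy event."""
    path = event.get("path", "/")
    query = event.get("queryStringParameters") or {}
    name = query.get("name", "world")
    # A proxy integration expects a statusCode and body in the response.
    return {"statusCode": 200, "body": "Hello, {} from {}".format(name, path)}
```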

Context Object

The second object that you pass to the handler is the context object. The context object contains data about the AWS Lambda function invocation itself. The contents and structure of the object vary based on the AWS Lambda function's language. There are three primary data points that the context object contains.

AWS request ID Tracks a specific invocation of an AWS Lambda function and is important for error reports or when you need to contact AWS Support.

Remaining time Amount of time in milliseconds that remains before your function timeout occurs. AWS Lambda functions can run a maximum of 300 seconds (5 minutes) as of this writing, but you can configure a shorter timeout.

Logging Each language runtime provides the ability to stream log statements to Amazon CloudWatch Logs. The context object contains information about which Amazon CloudWatch Log stream your log statements are sent to.
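In the Python runtime, these three data points are exposed as context.aws_request_id, context.get_remaining_time_in_millis(), and the log stream attributes. A sketch of a handler that uses them (the one-second threshold is an arbitrary example):

```python
import logging

logger = logging.getLogger()

def lambda_handler(event, context):
    """Use the request ID, remaining time, and log stream from the context."""
    logger.info("Request ID: %s", context.aws_request_id)
    logger.info("Log stream: %s", context.log_stream_name)
    # Bail out early if less than one second remains before the timeout.
    if context.get_remaining_time_in_millis() < 1000:
        return {"status": "skipped", "reason": "not enough time left"}
    return {"status": "ok", "request_id": context.aws_request_id}
```

In a unit test you can pass a stub object with the same attributes in place of the real context.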

Configuring the AWS Lambda Function

This section details how to configure the AWS Lambda functions.

Descriptions and Tags

An AWS best practice is to tag and give descriptions of your resources. As you start to scale services and create more resources on the AWS Cloud, identifying resources becomes a challenge if you do not implement a tagging strategy.

Memory

After you write your AWS Lambda function code, configure the function options. The first parameter is function memory. For each AWS Lambda function, you can increase or decrease the function's resources (the amount of random access memory). You can allocate from 128 MB up to 3008 MB of RAM in 64-MB increments. This setting dictates the amount of memory available to your function when it executes and influences the central processing unit (CPU) and network resources available to your function.

Timeout

When you write your code, you must also configure how long your function may execute before a timeout is returned. The default timeout value is 3 seconds, and the maximum is 300 seconds (5 minutes). Do not automatically set the maximum value for every AWS Lambda function: AWS charges based on execution time in 100-ms increments, so a function that fails quickly costs less than one that waits a full 5 minutes to fail. If your function waits on an external dependency that fails, or contains a programming error, a short timeout lets it fail within a fraction of that time, saving time, cost, and resources.

After the execution of an AWS Lambda function completes or a timeout occurs, the response returns and all execution ceases. This includes any processes, subprocesses, or asynchronous process that your AWS Lambda function may have spawned during its execution.
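Both the memory and timeout settings can also be changed after deployment through the UpdateFunctionConfiguration API. A hedged boto3 sketch, with the limits above encoded as sanity checks (the function name is hypothetical):

```python
def build_config_update(function_name, memory_mb=128, timeout_seconds=3):
    """Build an UpdateFunctionConfiguration request within the stated limits."""
    # 128 MB to 3008 MB in 64-MB increments; timeout capped at 300 seconds.
    assert 128 <= memory_mb <= 3008 and memory_mb % 64 == 0
    assert 1 <= timeout_seconds <= 300
    return {
        "FunctionName": function_name,
        "MemorySize": memory_mb,
        "Timeout": timeout_seconds,
    }

def apply_config(function_name, memory_mb=128, timeout_seconds=3):
    import boto3  # imported lazily so the builder above works offline
    client = boto3.client("lambda")
    return client.update_function_configuration(
        **build_config_update(function_name, memory_mb, timeout_seconds))
```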

Network Configuration

There are two ways to give your AWS Lambda functions an outbound network connection to external dependencies (other AWS services, publicly hosted web services, and so on): the default network configuration and an Amazon VPC configuration.

With the default network configuration, your AWS Lambda function communicates from inside an Amazon VPC that AWS Lambda manages. The AWS Lambda function can connect to the internet, but not to any privately deployed resources that run within your own VPCs, such as Amazon EC2 servers.

Your AWS Lambda function uses an Amazon VPC network configuration to communicate through an elastic network interface (ENI). This interface is provisioned within the Amazon VPC and subnets that you choose within your own account. You can assign ENIs to security groups, and traffic routes based on the route tables of the subnets in which you place the ENIs, as with the Amazon EC2 service.

If your AWS Lambda function does not need to connect to any privately deployed resources, such as an Amazon EC2 instance, select the default networking option, as the Amazon VPC option requires you to manage more details when you implement an AWS Lambda function. These details include the following:

  • Select an appropriate number of subnets, keeping in mind the principles of high availability and Availability Zones.
  • Allocate enough IP addresses for each subnet.
  • Implement an Amazon VPC network design that permits your AWS Lambda function to have the correct connectivity and security to meet your requirements.
  • Account for increased AWS Lambda cold start times if your invocation pattern requires a new ENI to be created just in time.
  • Configure a network address translation (NAT) instance or gateway to enable outbound internet access.

If you deploy an AWS Lambda function with access to your Amazon VPC, use the following formula to estimate the ENI capacity:

Projected peak concurrent executions * (Memory in GB / 3GB)

Tip: Suppose that you have a peak of 400 concurrent executions with 512 MB of memory. The formula results in about 68 network interfaces, so you need a subnet with at least 68 IP addresses available. A /25 network provides 128 IP addresses; subtracting the five addresses that AWS reserves leaves 123 usable IP addresses.
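The formula can be sketched in a few lines of Python; rounding the result up to whole interfaces gives 67 for the Tip's scenario, in the same ballpark as the chapter's estimate of about 68:

```python
import math

def estimated_enis(peak_concurrent_executions, memory_mb):
    """Estimate ENI capacity: peak concurrency * (memory in GB / 3 GB)."""
    memory_gb = memory_mb / 1024
    return math.ceil(peak_concurrent_executions * memory_gb / 3)

print(estimated_enis(400, 512))  # 67
```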

AWS Lambda easily integrates with AWS CloudTrail, which records and delivers log files to your Amazon S3 bucket to monitor API usage inside your account.

Concurrency

Although AWS Lambda scales automatically, AWS recommends that you fine-tune your concurrency options. By default, the account-level concurrency limit within a given region is 1,000 concurrent executions. You can request a limit increase for concurrent executions from the AWS Support Center.

To view the account-level setting, use the GetAccountSettings API and view the AccountLimit object and the ConcurrentExecutions element.

For example, run this command in the AWS CLI:

aws lambda get-account-settings

This returns the following:

{
   "AccountLimit": {
      "CodeSizeUnzipped": number,
      "CodeSizeZipped": number,
      "ConcurrentExecutions": number,
      "TotalCodeSize": number,
      "UnreservedConcurrentExecutions": number
   },
   "AccountUsage": {
      "FunctionCount": number,
      "TotalCodeSize": number
   }
}

Concurrency Limits

You can set a function-level concurrent execution limit. By default, the concurrent execution limit is enforced against the sum of the concurrent executions of all functions. This shared pool is referred to as the unreserved concurrency allocation. If you have not set any function-level concurrency limits, the unreserved concurrency limit equals the account-level concurrency limit, and any increase to the account-level limit produces a corresponding increase in the unreserved concurrency limit.

You can optionally set the concurrent execution limit for a function. Here are some examples:

  • By default, a surge of concurrent executions in one function can consume the shared pool and cause other functions to be throttled. By setting a concurrent execution limit on a function, you reserve the specified concurrent execution value for that function and isolate it from such surges.
  • Functions scale automatically based on the incoming request rate, but not all resources in your architecture may be able to do so. For example, relational databases have limits on how many concurrent connections they can handle. You can set the concurrent execution limit for a function to align with what its downstream resources can support.
  • If your function connects to an Amazon VPC based resource, each concurrent execution consumes one IP within the assigned subnet. You can set the concurrent execution limit for a function to match the subnet size limits.
  • If you need a function to stop processing any invocations, set the concurrency to 0, which throttles all incoming executions.

Tip: By setting a concurrency limit on a function, you ensure that the allocation applies individually to that function, regardless of the traffic on your remaining functions. If the limit is exceeded, the function is throttled. How a throttled function behaves depends on the event source.
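A function-level limit is set with the PutFunctionConcurrency API. A minimal boto3 sketch (the function name is hypothetical; a reserved value of 0 throttles all invocations, as described above):

```python
def build_concurrency_request(function_name, reserved):
    """Reserve a concurrent execution value for a single function."""
    assert reserved >= 0
    return {
        "FunctionName": function_name,
        "ReservedConcurrentExecutions": reserved,
    }

def set_concurrency(function_name, reserved):
    import boto3  # lazy import keeps the request builder testable offline
    client = boto3.client("lambda")
    return client.put_function_concurrency(
        **build_concurrency_request(function_name, reserved))
```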

Dead Letter Queues

All applications and services experience failure. Reasons that an AWS Lambda function can fail include (but are not limited to) the following:

  • Function times out while trying to reach an endpoint
  • Function fails to parse input data successfully
  • Function experiences resource constraints, such as out-of-memory errors or other timeouts

If any of these failures occur, your function generates an exception, which you handle with a dead letter queue (DLQ). A DLQ is either an Amazon Simple Notification Service (Amazon SNS) topic or an Amazon Simple Queue Service (Amazon SQS) queue that you configure as the destination for all failed invocation events. If a failure event occurs, the DLQ retains the failed message so that you can analyze it further and reprocess it if necessary.

For asynchronous event sources (InvocationType of Event), after two retries with automatic back-off between the retries, the event enters the DLQ, which you configure as either an Amazon SNS topic or an Amazon SQS queue.

After you enable a DLQ on an AWS Lambda function, an Amazon CloudWatch metric (DeadLetterErrors) is available. The metric increments whenever the dead letter message payload cannot be sent to the DLQ.
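A DLQ is attached through the DeadLetterConfig property of UpdateFunctionConfiguration, pointing at the ARN of an Amazon SNS topic or Amazon SQS queue. A sketch (the ARN and function name are hypothetical):

```python
def build_dlq_update(function_name, target_arn):
    """Point a function's dead letter queue at an SNS topic or SQS queue."""
    assert ":sns:" in target_arn or ":sqs:" in target_arn
    return {
        "FunctionName": function_name,
        "DeadLetterConfig": {"TargetArn": target_arn},
    }

def attach_dlq(function_name, target_arn):
    import boto3  # lazy import keeps the request builder testable offline
    client = boto3.client("lambda")
    return client.update_function_configuration(
        **build_dlq_update(function_name, target_arn))
```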

Environment Variables

AWS recommends that you separate code and configuration settings. Use environment variables for configuration settings. Environment variables are key-value pairs that you create and modify as part of your function configuration. These key-value pairs pass variables to your AWS Lambda function at execution time.

By default, environment variables are encrypted at rest, using a default KMS key of aws/lambda. Examples of environment variables that you can store include database (DB) connection strings and the type of environment (PROD, DEV, TEST, and such).
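Inside the function, the Python runtime exposes these key-value pairs through os.environ. A short sketch, with hypothetical variable names DB_HOST and STAGE:

```python
import os

def lambda_handler(event, context):
    """Read configuration from environment variables instead of hard-coding it."""
    db_host = os.environ.get("DB_HOST", "localhost")
    stage = os.environ.get("STAGE", "DEV")
    return {"db_host": db_host, "stage": stage}
```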

Versioning

You can publish one or more versions and aliases for your AWS Lambda functions. Versioning is an important feature to develop serverless compute architectures, as it allows you to create multiple versions without affecting what is currently deployed in the production environment. Each AWS Lambda function version has a unique Amazon Resource Name (ARN). After you publish a version, it is immutable, and you cannot change it.

After you create an AWS Lambda function, you can publish a version of that function. Here’s an example with the AWS CLI:

aws lambda publish-version \
--region region \
--function-name myCoolFunction \
--profile devuser

This returns the version number along with other details after the command executes.

{
    "TracingConfig": {
        "Mode": "PassThrough"
    },
    "CodeSha256": "Sha265Hash",
    "FunctionName": "myCoolFunction",
    "CodeSize": 218,
    "MemorySize": 128,
    "FunctionArn": "arn:aws:lambda:region:account:function:myCoolFunction:1",
    "Version": "1",
    "Role": "arn:aws:iam::account:role/service-role/lambda-basic",
    "Timeout": 3,
    "LastModified": "2018-05-11T14:59:47.753+0000",
    "Handler":lambda_function.lambda_handler",
    "Runtime": "python3.6",
    "Description": ""
}

Creating an Alias

After you create a version of an AWS Lambda function, you could use that version number in the ARN to reference that exact version of the function. However, if you release an update to the AWS Lambda function, you must then locate all the places where you call that ARN inside the application and change the ARN to the new version number. Instead, assign an alias to a particular version and use that alias in the application.

Assign an alias of PROD to the newly created version 1, and use the alias version of the ARN in the application. This way, you change the AWS Lambda function without affecting the production environment. When you are ready to move the function to production for the next version, reassign the alias to a different version, as you can reassign the alias while the version numbers remain static. To create an alias with the AWS CLI, use this:

aws lambda create-alias \
--region region \
--function-name myCoolFunction \
--description "My Alias for Production" \
--function-version "1" \
--name PROD \
--profile devuser

Now point applications to the PROD alias for the AWS Lambda function. This allows you to modify the function and improve code without affecting production. To migrate the PROD alias to the next version, run the following command, where 4 is the example version:

aws lambda update-alias \
--region region \
--function-name myCoolFunction \
--function-version 4 \
--name PROD \
--profile devuser

The production system now points to version 4 of the AWS Lambda function. As you can see, with versioning and aliases you can continue to improve your service without affecting the current production systems.

Invoking AWS Lambda Functions

There are many ways to invoke an AWS Lambda function. You can use the push or pull method, use a custom application, or use a schedule and event to run an AWS Lambda trigger. AWS Lambda supports the following AWS services as event sources:

  • Amazon S3
  • Amazon DynamoDB
  • Amazon Kinesis Data Streams
  • Amazon SNS
  • Amazon Simple Email Service
  • Amazon Cognito
  • AWS CloudFormation
  • Amazon CloudWatch Logs
  • Amazon CloudWatch Events
  • AWS CodeCommit
  • Scheduled events (powered by Amazon CloudWatch Events)
  • AWS Config
  • Amazon Alexa
  • Amazon Lex
  • Amazon API Gateway
  • AWS IoT Button
  • Amazon CloudFront
  • Amazon Kinesis Data Firehose
  • Manually invoking a Lambda function on demand

Monitoring AWS Lambda Functions

As with all AWS services and all applications, it is critical to monitor your environment and application. With AWS Lambda, there are two primary tools to monitor functions to ensure that they are running correctly and efficiently: Amazon CloudWatch and AWS X-Ray.

Using Amazon CloudWatch

Amazon CloudWatch monitors AWS Lambda functions. By default, AWS Lambda enables these metrics: invocation count, invocation duration, invocation errors, throttled invocations, iterator age, and DLQ errors.

You can leverage the reported metrics to set CloudWatch custom alarms. You can create a CloudWatch alarm that watches a single CloudWatch metric. The alarm performs one or more actions based on the value of the metric. The action can be an Amazon EC2 action, an Amazon EC2 Auto Scaling action, or a notification sent to an Amazon SNS topic.
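For example, an alarm on the Errors metric in the AWS/Lambda namespace can notify an Amazon SNS topic. A hedged boto3 sketch of the PutMetricAlarm request (the function name, topic ARN, and threshold are hypothetical):

```python
def build_error_alarm(function_name, topic_arn, threshold=1):
    """Build a PutMetricAlarm request for a function's Errors metric."""
    return {
        "AlarmName": function_name + "-errors",
        "Namespace": "AWS/Lambda",
        "MetricName": "Errors",
        "Dimensions": [{"Name": "FunctionName", "Value": function_name}],
        "Statistic": "Sum",
        "Period": 300,          # evaluate over 5-minute windows
        "EvaluationPeriods": 1,
        "Threshold": threshold,
        "ComparisonOperator": "GreaterThanOrEqualToThreshold",
        "AlarmActions": [topic_arn],
    }

def create_error_alarm(function_name, topic_arn, threshold=1):
    import boto3  # lazy import keeps the request builder testable offline
    cloudwatch = boto3.client("cloudwatch")
    return cloudwatch.put_metric_alarm(
        **build_error_alarm(function_name, topic_arn, threshold))
```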

The AWS Lambda namespace includes the metrics shown in Table 12.1.

Table 12.1 AWS Lambda Amazon CloudWatch Metrics

Invocations Measures the number of times a function is invoked in response to an event or invocation API call. Replaces the deprecated RequestCount metric. Includes successful and failed invocations but does not include throttled attempts; this equals the billed requests for the function. AWS Lambda sends these metrics to CloudWatch only if they have a nonzero value. Units: Count

Errors Measures the number of invocations that failed as the result of errors in the function (response code 4XX). Replaces the deprecated ErrorCount metric. Failed invocations may trigger a retry attempt that succeeds. This includes the following:
  • Handled exceptions (for example, context.fail(error))
  • Unhandled exceptions causing the code to exit
  • Out-of-memory exceptions
  • Timeouts
  • Permissions errors
This does not include invocations that fail because invocation rates exceeded default concurrent limits (error code 429) or failures resulting from internal service errors (error code 500). Units: Count
DeadLetterErrors Incremented when AWS Lambda is unable to write the failed event payload to DLQs that you configure. This could be because of the following:
  • Permissions errors
  • Throttles from downstream services
  • Misconfigured resources
  • Timeouts
Units: Count

Using AWS X-Ray

AWS X-Ray is a service that collects data about requests that your application serves, and it provides tools to view, filter, and gain insights into that data to identify issues and opportunities for optimization. For any traced request to the application, information displays about the request and response, but also about calls that the application makes to downstream AWS resources, microservices, databases, and HTTP web APIs.

There are three main parts to the X-Ray service:

  • Application code runs and uses the AWS X-Ray SDK (available for Node.js, Java, .NET, Ruby, Python, and Go).
  • The AWS X-Ray daemon is an application that listens for traffic on User Datagram Protocol (UDP) port 2000, gathers raw segment data, and relays it to the AWS X-Ray API.
  • The AWS X-Ray console displays the collected data in the AWS Management Console.

With the AWS X-Ray SDK, you integrate X-Ray into the application code. The SDK records data about incoming and outgoing requests and sends it to the X-Ray daemon, which relays the data in batches to X-Ray. For example, when your application calls Amazon DynamoDB to retrieve user information from an Amazon DynamoDB table, the X-Ray SDK records data from both the client request and the downstream call to Amazon DynamoDB.

When the SDK sends data to the X-Ray daemon, the SDK sends JSON segment documents to a daemon process listening for UDP traffic. The X-Ray daemon buffers segments in a queue and uploads them to X-Ray in batches. The X-Ray daemon is available for Linux, Windows, and macOS, and it is included on both AWS Elastic Beanstalk and AWS Lambda platforms.
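As an illustration of that UDP protocol, the following sketch hand-builds a minimal completed segment document and sends it to a daemon assumed to be listening on 127.0.0.1:2000. Normally the X-Ray SDK does all of this for you; the fields shown are the minimum for a completed segment, and the segment name is a placeholder:

```python
import json
import os
import socket
import time

def make_trace_id() -> str:
    # X-Ray trace IDs: version "1", epoch seconds in hex, 96 random bits.
    return "1-{:08x}-{}".format(int(time.time()), os.urandom(12).hex())

# A minimal, already-completed segment document.
segment = {
    "name": "payroll-demo",            # placeholder service name
    "id": os.urandom(8).hex(),         # 16 hexadecimal digits
    "trace_id": make_trace_id(),
    "start_time": time.time() - 0.05,
    "end_time": time.time(),
}

# Each datagram is a short JSON header, a newline, then the segment.
datagram = (
    json.dumps({"format": "json", "version": 1}).encode()
    + b"\n"
    + json.dumps(segment).encode()
)

# UDP is fire-and-forget; if no daemon is listening, nothing breaks.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
try:
    sock.sendto(datagram, ("127.0.0.1", 2000))
except OSError:
    pass
finally:
    sock.close()
```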

When the daemon sends the data to X-Ray, X-Ray uses trace data from the AWS resources that power the cloud applications to generate a detailed service graph. The service graph shows the client, your frontend service, and backend services that your frontend service calls to process requests and persist data. Use the service graph to identify bottlenecks, latency spikes, and other issues to improve the performance of your applications.

With X-Ray and the service map, as shown in Figure 12.5, you can visualize how your application is running and troubleshoot any errors.

Figure 12.5 AWS X-Ray service map

Summary

In this chapter, you learned about serverless compute, explored what it means to use a serverless service, and took an in-depth look at AWS Lambda. With AWS Lambda, you learned how to create a Lambda function with the AWS Management Console and AWS CLI and how to scale Lambda functions by specifying appropriate memory allocation settings and properly defining timeout values. Additionally, you took a closer look at the Lambda function handler, the event object, and the context object to use data from an event source with AWS Lambda. Finally, you looked at how to invoke Lambda functions by using both the push and pull models and monitor functions. We wrapped up the chapter with a brief look at Amazon CloudWatch and AWS X-Ray.

Exam Essentials

Know how to use execution context for reuse. Take advantage of execution context reuse to improve the performance of your AWS Lambda function. Verify that any externalized configuration or dependencies that your code retrieves are stored and referenced locally after initial execution. Limit the re-initialization of variables or objects on every invocation. Instead, use static initialization/constructor, global/static variables, and singletons. Keep connections (HTTP or database) active, and reuse any that were established during a previous invocation.
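The pattern can be simulated locally. In this sketch the expensive_init stand-in is hypothetical (in a real function it might be something like boto3.client('s3')); it runs once at module load, and repeated handler calls in the same warm container reuse the result:

```python
INIT_COUNT = 0

def expensive_init():
    # Stand-in for creating an SDK client or opening a database
    # connection; counts how many times initialization actually runs.
    global INIT_COUNT
    INIT_COUNT += 1
    return {"client": "ready"}

# Module scope runs once per container, not once per invocation.
CLIENT = expensive_init()

def handler(event, context):
    # Warm invocations reuse CLIENT from the execution context.
    return {"client": CLIENT["client"], "init_count": INIT_COUNT}

# Simulate three warm invocations in the same container.
results = [handler({}, None) for _ in range(3)]
print(results)
```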

Know how to use environmental variables. Use environment variables to pass operational parameters to your AWS Lambda function. For example, if you are writing to an Amazon S3 bucket, instead of hardcoding the bucket name to which you are writing, configure the bucket name as an environment variable.
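A minimal sketch of the idea (the OUTPUT_BUCKET variable name is an example; on Lambda you would set it under the function's environment variables rather than in code, which the setdefault line below stands in for during local demonstration):

```python
import os

# Local stand-in for configuration that Lambda would inject for you.
os.environ.setdefault("OUTPUT_BUCKET", "shoe-company-2018-final-json-demo")

def handler(event, context):
    # Read the bucket name at runtime instead of hardcoding it, so the
    # same code can run in dev and prod with different configuration.
    bucket = os.environ["OUTPUT_BUCKET"]
    return {"bucket": bucket}

print(handler({}, None))
```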

Know how to control the dependencies in your function’s deployment package. The AWS Lambda execution environment contains libraries, such as the AWS SDK, for the Node.js and Python runtimes. To enable the latest set of features and security updates, AWS Lambda periodically updates these libraries. These updates may introduce subtle changes to the behavior of your AWS Lambda function. Package all your dependencies with your deployment package to have full control of the dependencies that your function uses.

Know how to minimize your deployment package size to its runtime necessities. Minimizing your deployment package size reduces the amount of time that it takes for your deployment package to download and unpack ahead of invocation. For functions authored in Java or .NET Core, it is best to not upload the entire AWS SDK library as part of your deployment package. Instead, select only the modules that include components of the SDK you need, such as Amazon DynamoDB, Amazon S3 SDK modules, and AWS Lambda core libraries.

Know how memory works. Performing AWS Lambda function tests is a crucial step to ensure that you choose the optimum memory size configuration. Any increase in memory size triggers an equivalent increase in CPU that is available to your function. The memory usage for your function is determined per invocation, and it displays in the Amazon CloudWatch Logs.

Know how to load test your AWS Lambda function to determine an optimum timeout value. It is essential to analyze how long your function runs to determine any problems with a dependency service. Dependency services may increase the concurrency of the function beyond what you expect. This is especially important when your AWS Lambda function makes network calls to resources that may not handle AWS Lambda’s scaling.

Know how permissions for IAM policies work. Use the most-restrictive permissions when you set AWS IAM policies. Understand the resources and operations that your AWS Lambda function needs, and limit the execution role to these permissions.
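As an illustration only, a least-privilege execution policy for a function that reads CSVs from one bucket and writes JSON to another might look like the following (the bucket names are examples, and CloudWatch Logs permissions are included so the function can log):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::shoe-company-2018-ingestion-csv-demo/*"
        },
        {
            "Effect": "Allow",
            "Action": ["s3:PutObject"],
            "Resource": "arn:aws:s3:::shoe-company-2018-final-json-demo/*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "arn:aws:logs:*:*:*"
        }
    ]
}
```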

Know how to use AWS Lambda metrics and Amazon CloudWatch alarms. Use AWS Lambda metrics and Amazon CloudWatch alarms (instead of creating or updating a metric from within your AWS Lambda function code). This is a much more efficient way to track the health of your AWS Lambda functions, and it allows you to catch issues early in the development process. For instance, you can configure an alarm based on the expected duration of your AWS Lambda function execution time to address any bottlenecks or latencies attributable to your function code.

Know how to capture application errors. Leverage your log library and AWS Lambda metrics and dimensions to catch application errors, such as ERR, ERROR, and WARNING.

Know how to create and use dead letter queues (DLQs). Create and use DLQs to address and replay asynchronous function errors.

Resources to Review

Exercises

Note: To complete these exercises, download the AWS Certified Developer – Associate Exam code examples for Chapter 12, Chapter12_Code.zip, from Resources at http://www.wiley.com/go/sybextestprep.

For these exercises, you are a developer for a shoe company. The shoe company has a third-party check processor who sends checks, pay stubs, and direct deposits to the shoe company’s employees. The third-party service requires a JSON document with the employee’s name, the number of hours they worked for the current week, and the employee’s hourly rate. Unfortunately, the shoe company’s payroll system exports this data only in CSV format. Devise a serverless method to convert the exported CSV file to JSON.

Exercise 12.1

Create an Amazon S3 Bucket for CSV Ingestion

To solve this, create two Amazon S3 buckets (CSV ingestion and JSON output) and an AWS Lambda function to process the file.

After you export the CSV file, upload the file to Amazon S3. First, create an Amazon S3 bucket with the following Python code:

import boto3

# Variables for the bucket name and the region we will be using.
# Important: S3 bucket names are globally unique, so change the bucket
# name to something else.
# Important: If you would like to use us-east-1 as the region, do not
# specify a region (omit CreateBucketConfiguration) when making the
# s3.create_bucket call.
bucketName = "shoe-company-2018-ingestion-csv-demo"
bucketRegion = "us-west-1"

# Creates an S3 resource; this is a higher-level API for S3.
s3 = boto3.resource('s3')

# Creates a bucket
bucket = s3.create_bucket(
    ACL='private',
    Bucket=bucketName,
    CreateBucketConfiguration={'LocationConstraint': bucketRegion}
)

This Python code creates a resource for interacting with the Amazon S3 service. After the resource is created, you can call its create_bucket method to create a bucket.

After executing this Python code, verify that the bucket has been successfully created inside the Amazon S3 console. If it is not successfully created, the most likely cause is that the bucket name is not unique; therefore, renaming the bucket should solve the issue.

Exercise 12.2

Create an Amazon S3 Bucket for Final Output JSON

To create the second bucket for final output, run the following:

import boto3

# Variables for the bucket name and the region we will be using.
# Important: S3 bucket names are globally unique, so change the bucket
# name to something else.
# Important: If you would like to use us-east-1 as the region, do not
# specify a region (omit CreateBucketConfiguration) when making the
# s3.create_bucket call.
bucketName = "shoe-company-2018-final-json-demo"
bucketRegion = "us-west-1"

# Creates an S3 resource; this is a higher-level API for S3.
s3 = boto3.resource('s3')

# Creates a bucket
bucket = s3.create_bucket(
    ACL='private',
    Bucket=bucketName,
    CreateBucketConfiguration={'LocationConstraint': bucketRegion}
)

In the first exercise, you created the initial bucket for ingestion of the .csv file. This bucket will be used for the final JSON output. Again, if you see any errors here, look at the error logs. Verify that the bucket exists inside the Amazon S3 console. You will verify the buckets programmatically in the next exercise.

Exercise 12.3

Verify List Buckets

To verify the two buckets, use the Python 3 SDK and run the following:

import boto3

# Variables for the bucket names and the region we will be using.
# Important: Be sure to use the same bucket names you used in the
# previous two exercises.
bucketInputName = "shoe-company-2018-ingestion-csv-demo"
bucketOutputName = "shoe-company-2018-final-json-demo"
bucketRegion = "us-west-1"

# Creates an S3 resource; this is a higher-level API for S3.
s3 = boto3.resource('s3')
 
# Get all of the buckets
bucket_iterator = s3.buckets.all()
 
# Loop through the buckets
 
for bucket in bucket_iterator:
    if bucket.name == bucketInputName:
        print("Found the input bucket:", bucket.name)
    if bucket.name == bucketOutputName:
        print("Found the output bucket:", bucket.name)

Here, you are looping through the buckets, and if the two that you created are found, they are displayed. If everything is successful, then you should see output similar to the following:

Found the output bucket: shoe-company-2018-final-json-demo
Found the input bucket: shoe-company-2018-ingestion-csv-demo

Exercise 12.4

Prepare the AWS Lambda Function

To perform the conversion, you use both the AWS CLI and the Python SDK: the AWS CLI creates the AWS Lambda function, and the Python SDK processes the files inside the AWS Lambda service. In this exercise, prepare the function code.

In the following code, change the bucket names to the bucket names you defined. The lambda_handler function receives the event parameter, which allows you to acquire the Amazon S3 bucket name and object key.
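As a local illustration, here is an abbreviated sample of the event that an Amazon S3 ObjectCreated trigger delivers (the real notification contains more fields) together with the same two lookups the handler performs:

```python
# Abbreviated sample of an S3 ObjectCreated event; real events carry
# more fields, such as eventName and eventTime.
sample_event = {
    "Records": [
        {
            "s3": {
                "bucket": {"name": "shoe-company-2018-ingestion-csv-demo"},
                "object": {"key": "input-payroll-data.csv"},
            }
        }
    ]
}

bucket_name = sample_event["Records"][0]["s3"]["bucket"]["name"]
key = sample_event["Records"][0]["s3"]["object"]["key"]
print(bucket_name, key)
```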

Save this code to a file called lambda_function.py, and then compress the file.

Note: You can use a descriptive file name; however, remember to update the handler code in Exercise 12.6.

import boto3
import csv
import json
import time
# The csv and json modules provide functionality for parsing
# and writing csv/json files. We can use these modules to
# quickly perform a data transformation
# You can read about the csv module here:
# https://docs.python.org/2/library/csv.html
# and JSON here:
# https://docs.python.org/2/library/json.html
 
# Create an s3 Resource: https://boto3.readthedocs.io/en/latest/guide/resources.html
s3 = boto3.resource('s3')
csv_local_file = '/tmp/input-payroll-data.csv'
json_local_file = '/tmp/output-payroll-data.json'
 
# Change this value to whatever you named the output s3 bucket in the previous exercise
output_s3_bucket = 'shoe-company-2018-final-json-demo'
 
def lambda_handler(event, context):
 
    # Need to get the bucket name
    bucket_name = event['Records'][0]['s3']['bucket']['name']
    key = event['Records'][0]['s3']['object']['key']
 
    # Download the file to our AWS Lambda container environment
    try:
        s3.Bucket(bucket_name).download_file(key, csv_local_file)
    except Exception as e:
        print(e)
        print('Error getting object {} from bucket {}. Make sure they exist and your bucket is in the same region as this function.'.format(key, bucket_name))
        raise e
 
    # Open the csv and json files
    csv_file = open(csv_local_file, 'r')
    json_file = open(json_local_file, 'w')
 
    # Get a csv DictReader object to convert file to json
    dict_reader = csv.DictReader( csv_file )
 
    # Create an Employees array for JSON, use json.dumps to pass in the string
    json_conversion = json.dumps({'Employees': [row for row in dict_reader]})
 
    # Write to our json file
    json_file.write(json_conversion)
 
    # Close out the files
    csv_file.close()
    json_file.close()
 
    # Upload finished file to s3 bucket
    try:
        s3.Bucket(output_s3_bucket).upload_file(json_local_file, 'final-output-payroll.json')
    except Exception as e:
        print(e)
        print('Error uploading object {} to bucket {}. Make sure the file paths are correct.'.format(key, bucket_name))
        raise e
 
    print('Payroll processing completed at: ', time.asctime( time.localtime(time.time()) ) )
    return 'Payroll conversion from CSV to JSON complete.'
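Before compressing and uploading the file, you can sanity-check the conversion logic locally. This sketch runs the same csv/json calls as the handler against an in-memory sample file (the column names are illustrative) and skips the Amazon S3 download and upload:

```python
import csv
import io
import json

# Sample rows in the CSV shape the payroll system might export.
sample_csv = (
    "Name,Hours,Rate\n"
    "Jane Doe,40,25.00\n"
    "John Roe,35,22.50\n"
)

# Same transformation the handler performs, on an in-memory file.
dict_reader = csv.DictReader(io.StringIO(sample_csv))
json_conversion = json.dumps({"Employees": [row for row in dict_reader]})
print(json_conversion)
```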

After you create the code for the function, save it locally, compress it to lambda_function.zip, and upload the file to Amazon S3 by running the following AWS CLI command:

aws s3 cp lambda_function.zip s3://shoe-company-2018-ingestion-csv-demo

If the command executed successfully, you should see something similar to the following printed to the console:

upload: ./lambda_function.zip to s3://shoe-company-2018-ingestion-csv-demo/lambda_function.zip

You may also verify that the file has been uploaded by using the AWS Management Console inside the Amazon S3 service.

Exercise 12.5

Create AWS IAM Roles

In this exercise, use the AWS CLI to create an AWS IAM role so that the AWS Lambda function has the correct permissions to execute. Create a JSON file containing the trust relationship, which allows the AWS Lambda service to assume this particular IAM role through the Security Token Service.

Also create a policy document. A predefined policy document was distributed in the code example that you downloaded in Exercise 12.1. However, if you prefer to create the file manually, you can do so. The following is required for the exercise to work correctly:

lambda-trust-policy.json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "lambda.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}

After the lambda-trust-policy.json document has been created, run the following command to create the IAM role:

aws iam create-role --role-name PayrollProcessingLambdaRole --description "Provides AWS Lambda with access to s3 and cloudwatch to execute the PayrollProcessing function" --assume-role-policy-document file://lambda-trust-policy.json

A JSON object is returned. Copy the RoleName and Arn values for the next steps.

{
    "Role": {
        "AssumeRolePolicyDocument": {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Action": "sts:AssumeRole",
                    "Effect": "Allow",
                    "Principal": {
                        "Service": "lambda.amazonaws.com"
                    }
                }
            ]
        },
        "RoleId": "roleidnumber",
        "CreateDate": "2018-05-19T17:30:05.020Z",
        "RoleName": "PayrollProcessingLambdaRole",
        "Path": "/",
        "Arn": "arn:aws:iam::accountnumber:role/PayrollProcessingLambdaRole"
    }
}
After you create an AWS role, attach a policy to the role. There are two types of AWS policies: AWS managed and customer managed. AWS creates predefined policies that you can use, called AWS managed policies. You may also create customer managed policies specific to your requirements.

For this example, you will use an AWS managed policy built for AWS Lambda called AWSLambdaExecute. This policy provides AWS Lambda with access to Amazon CloudWatch Logs and to the Amazon S3 GetObject and PutObject API calls.
aws iam attach-role-policy --role-name PayrollProcessingLambdaRole --policy-arn arn:aws:iam::aws:policy/AWSLambdaExecute

If this command executes successfully, it does not return results. To verify that the IAM role has been properly configured, from the AWS Management Console, go to the IAM service. Click Roles, and search for PayrollProcessingLambdaRole. On the Permissions tab, verify that the AWSLambdaExecute policy has been attached. On the Trust relationships tab, verify that the trusted entities section states the following: “The identity provider(s) lambda.amazonaws.com.”

You have successfully uploaded the Python code that has been compressed to Amazon S3, and you created an IAM role. In the next exercise, you will create the AWS Lambda function.

Exercise 12.6

Create the AWS Lambda Function

In this exercise, create the AWS Lambda function. You can view the AWS Lambda API reference here:

https://docs.aws.amazon.com/cli/latest/reference/lambda/index.html

Note: For the --handler parameter, make sure that you specify the name of the .py file you created in Exercise 12.4. Also, make sure that for the S3Key parameter, you specify the name of the compressed file inside the Amazon S3 bucket.

Run this AWS CLI command:

aws lambda create-function --function-name PayrollProcessing --runtime python3.7 --role arn:aws:iam::accountnumber:role/PayrollProcessingLambdaRole --handler lambda_function.lambda_handler --description "Converts Payroll CSVs to JSON and puts the results in an s3 bucket." --timeout 3 --memory-size 128 --code S3Bucket=shoe-company-2018-ingestion-csv-demo,S3Key=lambda_function.zip --tags  Environment="Production",Application="Payroll" --region us-west-1

If the command was successful, you receive a JSON response similar to the following:

{
    "FunctionName": "PayrollProcessing",
    "FunctionArn": "arn:aws:lambda:us-west-1:accountnumber:function:PayrollProcessing",
    "Runtime": "python3.7",
    "Role": "arn:aws:iam::accountnumber:role/PayrollProcessingLambdaRole",
    "Handler": "lambda_function.lambda_handler",
    "CodeSize": 1123,
    "Description": "Converts Payroll CSVs to JSON and puts the results in an s3 bucket.",
    "Timeout": 3,
    "MemorySize": 128,
    "LastModified": "2018-12-10T06:36:27.990+0000",
    "CodeSha256": "NUKm2kp/fLzVr58t8XCTw6YGBmxR2E1Q9MHuW11QXfw=",
    "Version": "$LATEST",
    "TracingConfig": {
        "Mode": "PassThrough"
    },
    "RevisionId": "ae30524f-26a9-426a-b43a-efa522cb1545"
}

You have successfully created the AWS Lambda function. You can verify this from the AWS Management Console by opening the AWS Lambda console.

Exercise 12.7

Give Amazon S3 Permission to Invoke an AWS Lambda Function

In this exercise, use the AWS Lambda CLI add-permission command to allow Amazon S3 to invoke the AWS Lambda function.

aws lambda add-permission --function-name PayrollProcessing --statement-id lambdas3permission --action lambda:InvokeFunction --principal s3.amazonaws.com --source-arn arn:aws:s3:::shoe-company-2018-ingestion-csv-demo --source-account yourawsaccountnumber --region us-west-1

After you run this command and it is successful, you should receive a JSON response that looks similar to the following:

{
    "Statement": {
        "Sid": "lambdas3permission",
        "Effect": "Allow",
        "Principal": {
            "Service": "s3.amazonaws.com"
        },
        "Action": "lambda:InvokeFunction",
        "Resource": "arn:aws:lambda:us-west-1:accountnumber:function:PayrollProcessing",
        "Condition": {
            "StringEquals": {
                "AWS:SourceAccount": "accountnumber"
            },
            "ArnLike": {
                "AWS:SourceArn": "arn:aws:s3:::shoe-company-2018-ingestion-csv-demo"
            }
        }
    }
}

This provides a function policy to the AWS Lambda function that allows the S3 bucket that you created to call the action lambda:InvokeFunction. You can verify this by navigating to the AWS Lambda service inside the AWS Management Console. In the Designer section, click the key icon to view permissions, and under Function policy, you will see the policy you just created.

Exercise 12.8

Add the Amazon S3 Event Trigger

In this exercise, add the Amazon S3 trigger by using the AWS CLI s3api commands. The notification-config.json file was provided in the exercise files. Its contents are as follows:

{
    "LambdaFunctionConfigurations": [
        {
            "Id": "s3PayrollFunctionObjectCreation",
            "LambdaFunctionArn": "arn:aws:lambda:us-west-1:accountnumber:function:PayrollProcessing",
            "Events": [
                "s3:ObjectCreated:*"
            ],
            "Filter": {
                "Key": {
                    "FilterRules": [
                        {
                            "Name": "suffix",
                            "Value": ".csv"
                        }
                    ]
                }
            }
        }
    ]
}
aws s3api put-bucket-notification-configuration --bucket shoe-company-2018-ingestion-csv-demo --notification-configuration file://notification-config.json

If the execution is successful, no response is sent. To verify that the trigger has been added to the AWS Lambda function, navigate to the AWS Lambda console inside the AWS Management Console, and verify that there is now an Amazon S3 trigger.

Exercise 12.9

Test the AWS Lambda Function

To test the AWS Lambda function, use the AWS CLI to upload the CSV file to the Amazon S3 bucket; then check whether the function transforms the data and puts the result file in the output bucket.

aws s3 cp input-payroll-data.csv s3://shoe-company-2018-ingestion-csv-demo

If everything executes successfully, you should see the transformed JSON file in the output bucket that you created. You accepted input into one Amazon S3 bucket as a .csv file, transformed it to JSON serverlessly by using AWS Lambda, and then stored the resulting .json file in a separate Amazon S3 bucket. If you do not see the file, retrace your steps through the exercises. It is a good idea to view the Amazon CloudWatch Logs, which can be found on the Monitoring tab in the AWS Lambda console. This way, you can determine whether there are any errors.

Review Questions

  1. A company currently uses a serverless web application stack, which consists of Amazon API Gateway, Amazon Simple Storage Service (Amazon S3), Amazon DynamoDB, and AWS Lambda. They would like to make improvements to their AWS Lambda functions but do not want to impact their production functions.

    How can they accomplish this?

    1. Create new AWS Lambda functions with a different name, and update resources to point to the new functions when they are ready to test.
    2. Copy their AWS Lambda function to a new region where they can update their resources to the new region when ready.
    3. Create a new AWS account, and re-create all their serverless infrastructure for their application testing.
    4. Publish the current version of their AWS Lambda function, and create an alias as PROD. Then, assign PROD to the current version number, update resources with the PROD alias ARN, and create a new version of the updated AWS Lambda function and assign an alias of $DEV.
  2. What is the maximum amount of memory that you can assign an AWS Lambda function?

    1. AWS runs the AWS Lambda function; it is a managed service, so you do not need to configure memory settings.
    2. 3008 MB
    3. 1000 MB
    4. 9008 MB
  3. What is the default timeout value for an AWS Lambda function?

    1. 3 seconds
    2. 10 seconds
    3. 15 seconds
    4. 25 seconds
  4. A company uses a third-party service to send checks to its employees for payroll. The company is required to send the third-party service a JSON file with the person’s name and the check amount. The company’s internal payroll application supports exporting only to CSVs, and it currently has cron jobs set up on their internal network to process these files. The server that is processing the data is aging, and the company is concerned that it might fail in the future. It is also looking to have the AWS services perform the payroll function.

    What would be the best serverless option to accomplish this goal?

    1. Create an Amazon Elastic Compute Cloud (Amazon EC2) and the necessary cron job to process the file from CSV to JSON.
    2. Use AWS Import/Export to create a virtual machine (VM) image of the on-premises server and upload the Amazon Machine Images (AMI) to AWS.
    3. Use AWS Lambda to process the file with Amazon Simple Storage Service (Amazon S3).
    4. There is no way to process this file with AWS.
  5. What is the maximum execution time allowed for an AWS Lambda function?

    1. 60 seconds
    2. 120 seconds
    3. 230 seconds
    4. 300 seconds
  6. Which language is not supported for AWS Lambda functions?

    1. Ruby
    2. Python 3.6
    3. Node.js
    4. C# (.NET Core)
  7. How can you increase the limit of AWS Lambda concurrent executions?

    1. Use the Support Center page in the AWS Management Console to open a case and send a Server Limit Increase request.
    2. AWS Lambda does not have any limits for concurrent executions.
    3. Send an email to with the subject “AWS Lambda Increase.”
    4. You cannot increase concurrent executions for AWS Lambda.
  8. A company is receiving permission denied after its AWS Lambda function is invoked and executes and has a valid trust policy. After investigating, the company realizes that its AWS Lambda function does not have access to download objects from Amazon Simple Storage Service (Amazon S3).

    Which type of policy do you need to correct to give access to the AWS Lambda function?

    1. Function policy
    2. Trust policy
    3. Execution policy
    4. None of the above
  9. A company wants to be able to send event payloads to an Amazon Simple Queue Service (Amazon SQS) queue if the AWS Lambda function fails.

    Which of the following configuration options does the company need to be able to do this in AWS Lambda?

    1. Enable a dead-letter queue.
    2. Define an Amazon Virtual Private Cloud (Amazon VPC) network.
    3. Enable concurrency.
    4. AWS Lambda does not support such a feature.
  10. A company wants to be able to pass configuration settings as variables to their AWS Lambda function at execution time.

    Which feature should the company use?

    1. Dead-letter queues
    2. AWS Lambda does not support such a feature.
    3. Environment variables
    4. None of the above