Load testing your serverless microservice

First, you need to have the serverless microservice stack running (deployed with ./build-package-deploy-lambda-dynamo-data-api.sh) and to have loaded data into the DynamoDB table using the python3 dynamo_insert_items_from_file.py Python script.

Then install Locust, if it hasn't already been installed with the other packages in requirements.txt:

$ sudo pip3 install locustio 

Locust (https://docs.locust.io) is an easy-to-use load-testing tool with a web metrics and monitoring interface. It allows you to define user behavior using Python code and can be used to simulate millions of users over multiple machines.

To use Locust, you first need to create a Locust Python file where you define the Locust tasks. The HttpLocust class adds a client attribute that is used to make the HTTP request. A TaskSet class defines a set of tasks that a Locust user will execute. The @task decorator declares the tasks for TaskSet:

import random
from locust import HttpLocust, TaskSet, task

paths = ["/Prod/visits/324?startDate=20171014",
         "/Prod/visits/324",
         "/Prod/visits/320"]


class SimpleLocustTest(TaskSet):

    @task
    def get_something(self):
        index = random.randint(0, len(paths) - 1)
        self.client.get(paths[index])


class LocustTests(HttpLocust):
    task_set = SimpleLocustTest

To test the GET method with different resources and parameters, we are selecting three different paths randomly from a paths list, where one of the IDs does not exist in DynamoDB. The main idea is that we could easily scale this out to simulate millions of different queries if we had loaded their corresponding rows from a file into DynamoDB. Locust supports much more complex behaviors, including processing responses, simulating user logins, sequencing, and event hooks, but this script is a good start.
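Scaling this out from a file of IDs, as described above, can be sketched with a small helper. This is illustrative only: the helper name and the idea of an IDs file with one numeric ID per line (matching the rows loaded into DynamoDB) are assumptions, not part of the book's repository:

```python
# Sketch: build the Locust paths list from lines of visit IDs rather
# than hard-coding three paths. Assumes one ID per line.
def build_paths(id_lines, stage="Prod"):
    """Turn an iterable of ID strings into API Gateway resource paths."""
    return ["/%s/visits/%s" % (stage, line.strip())
            for line in id_lines if line.strip()]

# Example with an in-memory list standing in for the file contents:
sample_ids = ["324", "320", "999"]
print(build_paths(sample_ids))
# ['/Prod/visits/324', '/Prod/visits/320', '/Prod/visits/999']

# Reading from a real file would look like:
# with open("ids.txt") as f:
#     paths = build_paths(f)
```

The resulting list can replace the hard-coded paths at the top of the Locust file, so the simulated queries grow with the data set.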

To run Locust, we need to get the API Gateway ID, which looks something like abcdefgh12, to create the full hostname used for load testing. Here, I wrote a Python script called serverless-microservice-data-api/bash/apigateway-lambda-dynamodb/get_apigateway_id.py that looks it up based on the API name:

import argparse
import logging

import boto3

logging.getLogger('botocore').setLevel(logging.CRITICAL)

logger = logging.getLogger(__name__)
logging.basicConfig(format='%(asctime)s %(levelname)s %(name)-15s: %(lineno)d %(message)s',
                    level=logging.INFO)
logger.setLevel(logging.INFO)


def get_apigateway_id(endpoint_name):
    client = boto3.client(service_name='apigateway',
                          region_name='eu-west-1')
    apis = client.get_rest_apis()
    for api in apis['items']:
        if api['name'] == endpoint_name:
            return api['id']
    return None


def main():
    endpoint_name = "lambda-dynamo-xray"

    parser = argparse.ArgumentParser()
    parser.add_argument("-e", "--endpointname", type=str,
                        required=False, help="Name of the API Gateway endpoint")
    args = parser.parse_args()

    if args.endpointname is not None:
        endpoint_name = args.endpointname

    apigateway_id = get_apigateway_id(endpoint_name)
    if apigateway_id is not None:
        print(apigateway_id)
        return 0
    else:
        return 1


if __name__ == '__main__':
    main()
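The name-matching logic in get_apigateway_id() can be exercised locally by factoring it out and feeding it a stubbed get_rest_apis() response, so no AWS credentials are needed. Note that get_rest_apis() returns a paginated response (25 items per page by default), so an account with many REST APIs would need to page through the results. This is a sketch; find_api_id and the stub values are my own names, not part of the script above:

```python
# Sketch: test the lookup logic without calling AWS.
def find_api_id(apis, endpoint_name):
    """Return the id of the first REST API whose name matches, else None."""
    for api in apis.get('items', []):
        if api['name'] == endpoint_name:
            return api['id']
    return None

# A stub shaped like boto3's get_rest_apis() response:
stub_response = {'items': [{'name': 'other-api', 'id': 'zzzz999999'},
                           {'name': 'lambda-dynamo-xray', 'id': 'abcdefgh12'}]}

print(find_api_id(stub_response, 'lambda-dynamo-xray'))  # abcdefgh12
print(find_api_id(stub_response, 'missing'))             # None
```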

Run the following commands to launch Locust:

$ . ./common-variables.sh
$ apiid="$(python3 get_apigateway_id.py -e ${template})"
$ locust -f ../../test/locust_test_api.py --host=https://${apiid}.execute-api.${region}.amazonaws.com

Alternatively, I have also provided these Locust run commands as a shell script that you can run from the test folder, serverless-microservice-data-api/bash/apigateway-lambda-dynamodb/run_locus.sh:

#!/bin/sh
. ./common-variables.sh
apiid="$(python3 get_apigateway_id.py -e ${template})"
locust -f ../../test/locust_test_api.py --host=https://${apiid}.execute-api.${region}.amazonaws.com
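If you want to sanity-check the hostname the script constructs, the same string can be assembled in Python. The API ID and region below are placeholders, not values from a real deployment:

```python
# Sketch: the --host value passed to Locust, built from the API
# Gateway ID and AWS region.
def execute_api_host(apiid, region):
    return "https://%s.execute-api.%s.amazonaws.com" % (apiid, region)

print(execute_api_host("abcdefgh12", "eu-west-1"))
# https://abcdefgh12.execute-api.eu-west-1.amazonaws.com
```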

You should now see Locust start in the Terminal. Perform the following steps:

  1. Navigate to http://localhost:8089/ in your web browser to access the Locust web-monitoring and -testing interface.
  2. In the Start New Locust swarm dialog, enter the following:
    • 10 for Number of users to simulate
    • 5 for Hatch rate (users spawned/second)
  3. Leave the tool running on the Statistics tab for a few minutes.

You will get something like this in the Statistics tab:

And on the Charts tab, you should get something similar to the following:

In the Response Times (ms) chart, the orange line represents the 95th percentile, and green is for the median response times.

Here are some observations about the preceding charts:

  • The maximum request time is 2,172 milliseconds, or about 2.2 seconds, which is very slow. This is linked to what is known as a cold start, the extra latency incurred when a Lambda function is launched for the first time.
  • The number of failures also goes up after about a minute. This is because DynamoDB permits some burst reads before it starts to throttle the read requests. If you log on to the AWS Management Console and look at the DynamoDB table metrics, you will see that this is happening:

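A rough model explains why throttling kicks in after about a minute: DynamoDB retains up to 300 seconds of unused provisioned capacity as burst credit, and once a sustained overload drains that bucket, reads are throttled. The numbers in this sketch are illustrative assumptions, not measurements from the test above:

```python
# Back-of-the-envelope estimate of how long a steady read overload can
# run on DynamoDB burst capacity before throttling starts. Assumes a
# burst bucket of 300 seconds of unused provisioned capacity; actual
# burst behavior is best-effort and more nuanced.
def seconds_until_throttle(provisioned_rcu, consumed_rcu_per_s,
                           burst_window_s=300):
    """Estimate seconds until a steady overload exhausts burst capacity."""
    excess = consumed_rcu_per_s - provisioned_rcu
    if excess <= 0:
        return float('inf')  # demand within provisioned capacity
    burst_bucket = provisioned_rcu * burst_window_s
    return burst_bucket / excess

# e.g. 1 provisioned RCU with ~6 RCU/s of load from the Locust swarm:
print(seconds_until_throttle(1, 6))  # 60.0
```

With these illustrative numbers, the burst bucket empties after about a minute, which matches the point at which the failure count starts to climb in the Locust charts.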