Monitoring ALB and ELB instances

ALB and ELB both provide a fair number of top-level metrics, giving you a sense of how your services are behaving, but sometimes metrics aren't enough. You may want to produce an access log and track the details of each request hitting your services. Both ELB and ALB can generate an access log and store it in S3. We will illustrate how to turn on this feature by making changes to our CloudFormation templates, taking our ALB template as an example.

With your editor, open the file helloworld-ecs-alb-cf-template.py, located in the EffectiveDevOpsTemplates repository.

To create the access log, we will need to create an S3 bucket and provide it a special policy so that AWS can access our bucket. This will require including a few extra classes. In the import section, add the following:

from awacs.aws import Allow, Policy, Principal, Statement
from awacs.s3 import PutObject, ARN
from troposphere.s3 import (
    Bucket,
    BucketPolicy,
)

Next, we will create our S3 bucket. After the creation of the template variable and the addition of its description, add the following resource:

t.add_resource(Bucket(
    "S3Bucket",
    DeletionPolicy="Retain",
))

We are setting a deletion policy so that if we delete the CloudFormation stack, the S3 bucket will remain and the logs will still be available. The next resource we are going to create is the special policy for that bucket. The policy starts by referencing the bucket we just created:

t.add_resource(BucketPolicy(
    'BucketPolicy',
    Bucket=Ref("S3Bucket"),

The next part is the creation of the policy itself. The policy contains a statement that allows the AWS account 127311923021 to perform PutObject operations on the /AWSLogs/511912822958/ prefix of the bucket. The account 127311923021 is a special account that AWS operates to deliver load balancer access logs in us-east-1; refer to http://amzn.to/2r8AqPI for the list of account IDs if your bucket isn't in us-east-1. In addition, 511912822958 needs to be replaced with your own AWS account ID:

    PolicyDocument=Policy(
        Version='2012-10-17',
        Statement=[
            Statement(
                Action=[PutObject],
                Effect=Allow,
                Principal=Principal("AWS", ["127311923021"]),
                Resource=[Join('', [
                    ARN(''),
                    Ref("S3Bucket"),
                    "/AWSLogs/511912822958/*",
                ])],
            ),
        ],
    ),
))
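As a side note, rather than hardcoding your own account ID in the resource path, you can reference the AWS::AccountId pseudo parameter, which CloudFormation resolves when the stack is created. A sketch of the same Resource expression written that way:

Resource=[Join('', [
    ARN(''),
    Ref("S3Bucket"),
    "/AWSLogs/",
    Ref("AWS::AccountId"),  # resolves to the account owning the stack
    "/*",
])],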

Now that the bucket is created and contains the specific policy, we can turn on the access log in our ALB resource as follows:

t.add_resource(elb.LoadBalancer(
    "LoadBalancer",
    Scheme="internet-facing",
    Subnets=Split(
        ',',
        ImportValue(
            Join("-",
                 [Select(0, Split("-", Ref("AWS::StackName"))),
                  "cluster-public-subnets"]
                 )
        )
    ),
    SecurityGroups=[Ref("LoadBalancerSecurityGroup")],
    LoadBalancerAttributes=[
        elb.LoadBalancerAttributes(
            Key="access_logs.s3.enabled",
            Value="true",
        ),
        elb.LoadBalancerAttributes(
            Key="access_logs.s3.bucket",
            Value=Ref("S3Bucket"),
        ),
    ],
))
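If several load balancers share the same bucket, you can also set the optional access_logs.s3.prefix attribute so that each ALB writes its logs under its own key prefix. A minimal sketch of the extra attribute to append to the LoadBalancerAttributes list, assuming a hypothetical staging-alb prefix:

elb.LoadBalancerAttributes(
    Key="access_logs.s3.prefix",
    Value="staging-alb",  # hypothetical prefix; pick one per load balancer
),

Note that if you set a prefix, the Resource path in the bucket policy needs to include it as well, since the logs will then be delivered under staging-alb/AWSLogs/ instead of AWSLogs/.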

Once those changes are in place, you can save and commit your changes, generate the new template, and update your stacks. The code should be similar to http://bit.ly/2v3bUAA:

$ git add helloworld-ecs-alb-cf-template.py
$ git commit -m "Sending ALB logs to S3"
$ git push
$ python helloworld-ecs-alb-cf-template.py > helloworld-ecs-alb-cf.template
$ aws cloudformation update-stack \
    --stack-name staging-alb \
    --template-body file://helloworld-ecs-alb-cf.template
$ aws cloudformation update-stack \
    --stack-name production-alb \
    --template-body file://helloworld-ecs-alb-cf.template

Logs will now automatically be uploaded to the S3 bucket every five minutes.
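To confirm that logs are being delivered, you can list the bucket's contents. The following is a minimal sketch using boto3; the bucket name is hypothetical, since CloudFormation generates it, and 511912822958 stands in for your own account ID. ALB delivers its log files under the AWSLogs/<account-id>/elasticloadbalancing/ prefix:

import boto3

s3 = boto3.client("s3")

# List the access log objects delivered so far
# (replace the bucket name and account ID with your own values)
response = s3.list_objects_v2(
    Bucket="helloworld-alb-logs",
    Prefix="AWSLogs/511912822958/elasticloadbalancing/",
)
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])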

Using AWS Athena to efficiently retrieve logs
Once your logs are in S3, you can either download and analyze them locally or, if you are looking for specific information, use AWS Athena (http://amzn.to/2rSsrn7) to run SQL queries directly against your logs. For example, assuming you have created an Athena table (here called logs.alb) over the access log files, you can get the list of the most active IP addresses with the following query:
SELECT client_ip, COUNT(*) as count
FROM logs.alb
GROUP BY client_ip
ORDER BY COUNT(*) DESC LIMIT 100;
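You can also submit such queries programmatically. The following is a minimal sketch using boto3's Athena client, assuming the logs.alb table exists and using a hypothetical s3://helloworld-athena-results/ bucket for the query output:

import boto3

athena = boto3.client("athena")

# Submit the query; Athena executes it asynchronously and writes
# the results to the output location (bucket name is hypothetical)
execution = athena.start_query_execution(
    QueryString=(
        "SELECT client_ip, COUNT(*) AS count "
        "FROM logs.alb "
        "GROUP BY client_ip "
        "ORDER BY COUNT(*) DESC LIMIT 100"
    ),
    QueryExecutionContext={"Database": "logs"},
    ResultConfiguration={"OutputLocation": "s3://helloworld-athena-results/"},
)
print(execution["QueryExecutionId"])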

As you would expect, each service AWS releases comes with documentation covering every aspect of its monitoring. You can refer to it to see what you need to expose, and then implement it with code using one of the strategies shown in this section.

The last part of adding a monitoring solution is to create alarms to automatically notify engineers when something abnormal is happening. We will use CloudWatch in conjunction with SNS to create those alarms.
