Enabling CloudTrail using CloudFormation

We will create a new script in our EffectiveDevOpsTemplates repository and call it cloudtrail-cf-template.py.

We will start our script with our usual boilerplate:

"""Generating CloudFormation template.""" 
 
from troposphere import Join 

from troposphere import ( Ref, Template )
from troposphere.s3 import Bucket
from troposphere.s3 import BucketPolicy
from troposphere.cloudtrail import Trail
t = Template() 
 
t.add_description("Effective DevOps in AWS: Turn on CloudTrail and log to S3") 

Out of the box, CloudTrail retains seven days' worth of activity. To keep the data for longer, we will configure the service to log all activity to S3. To do that, we first need to create an S3 bucket. We will use the DeletionPolicy attribute to preserve the logs if/when the stack is deleted:

t.add_resource(Bucket( 
    "S3Bucket", 
    DeletionPolicy="Retain" 
)) 

By default, S3 buckets are private: only resources from the AWS account that created the bucket can use it. CloudTrail, however, needs to be able to read from and write to that bucket. To allow principals other than the bucket owner's account to interact with S3, AWS offers the ability to create S3 bucket policy resources. We will create one that grants CloudTrail just enough permission to read and write to the target location. You can read more about S3 BucketPolicy at http://amzn.to/2l2gd7j.

The policy we are going to create is a Python translation of the default policy that gets created when you use the GUI to manage the service. You can refer to http://amzn.to/2l2mnUY for more details on that policy.

We will first create the resource and specify which bucket to apply the policy to:

t.add_resource(BucketPolicy( 
    "BucketPolicy", 
    Bucket=Ref("S3Bucket"), 

We will now create the policy document. It contains two statements: the first grants CloudTrail the ability to read the access control list of the bucket (http://amzn.to/2l2p3Sd); the second allows CloudTrail to write specifically to the AWSLogs/myAccountID location:

    PolicyDocument={ 
        "Statement": [{ 
            "Action": "s3:GetBucketAcl", 
            "Effect": "Allow", 
            "Principal": { 
                "Service": "cloudtrail.amazonaws.com" 
            }, 
            "Resource": Join("", [ 
                "arn:aws:s3:::", Ref("S3Bucket") 
            ]) 
        }, { 
            "Action": "s3:PutObject", 
            "Effect": "Allow", 
            "Principal": { 
                "Service": "cloudtrail.amazonaws.com" 
            }, 
            "Resource": Join("", [ 
                "arn:aws:s3:::", 
                Ref("S3Bucket"), 
                "/AWSLogs/", 
                Ref("AWS::AccountId"), 
                "/*" 
            ]), 
            "Condition": { 
                "StringEquals": { 
                    "s3:x-amz-acl": "bucket-owner-full-control" 
                } 
            } 
        }] 
    }, 
)) 
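To see what the Join and Ref calls above produce, here is a minimal sketch (plain Python, no troposphere required) of how the second statement's Resource renders into a CloudFormation Fn::Join intrinsic, and what it evaluates to at deploy time. The bucket name and account ID below are hypothetical placeholders:

```python
# How troposphere's Join("", [...]) renders in the generated template JSON:
rendered_resource = {
    "Fn::Join": ["", [
        "arn:aws:s3:::",
        {"Ref": "S3Bucket"},        # resolved to the bucket's name at deploy time
        "/AWSLogs/",
        {"Ref": "AWS::AccountId"},  # resolved to the stack's account ID
        "/*",
    ]]
}

# At deploy time, with a hypothetical bucket "cloudtrail-s3bucket-abc123"
# in a hypothetical account 123456789012, the join evaluates to:
parts = ["arn:aws:s3:::", "cloudtrail-s3bucket-abc123",
         "/AWSLogs/", "123456789012", "/*"]
print("".join(parts))
# arn:aws:s3:::cloudtrail-s3bucket-abc123/AWSLogs/123456789012/*
```

This is why the PutObject permission is scoped to the AWSLogs/ prefix rather than the whole bucket: CloudTrail only ever writes under that path.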

Next, we will create the Trail resource. We will call it Trail. The first thing we will do is reference the bucket we previously created:

t.add_resource(Trail( 
    "Trail", 
    S3BucketName=Ref("S3Bucket"), 

We will now enable the logging of AWS API calls by setting the IsLogging attribute like so:

    IsLogging=True, 

We will ask CloudTrail to validate the integrity of log files with the following option. With it enabled, CloudTrail produces digest files in addition to the trail logs, which allow us to verify that the log files haven't been tampered with. We will see how to validate those logs after creating our trail:

    EnableLogFileValidation=True, 

Depending on the level of information you would like to collect, you can turn on the IncludeGlobalServiceEvents flag to include the logging of AWS global services such as IAM:

    IncludeGlobalServiceEvents=True, 

We will want our trail to be multi-region, meaning that it will log API calls from every AWS region. Our infrastructure might be located in only one region, but it is important to collect data from every possible region: the more data we have, the better we can spot unusual activity, for example in regions your service doesn't use. To turn on the multi-region setting, you will need to use the following attribute:

    IsMultiRegionTrail=True, 

Lastly, we will want to include a DependsOn setting to make sure that the S3 policy is created by the time that CloudFormation tries to create the CloudTrail resource:

    DependsOn=["BucketPolicy"] 
)) 

Our Trail resource is now complete. As always, we will end our script with a call to print:

print(t.to_json()) 

We can now save, commit, and execute the script as follows:

$ git add cloudtrail-cf-template.py
$ git commit -m "Adding template to configure CloudTrail"
$ git push
$ python cloudtrail-cf-template.py > cloudtrail-cf.template
$ aws cloudformation create-stack \
    --stack-name cloudtrail \
    --template-body file://cloudtrail-cf.template
{
    "StackId": "arn:aws:cloudformation:us-east-1:511912822958:stack/cloudtrail/612c42a0-f80e-11e6-8d9b-50d5ca6326ba"
}

From that point on, every call made to any AWS service will be logged.
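Since we enabled log file validation, once CloudTrail has delivered its first digest files (this can take some time after the trail is created), you can verify the integrity of the logs with the AWS CLI. The following is a sketch; the trail name and start time are hypothetical placeholders to substitute with your own values:

```shell
# Validate the integrity of logs delivered since the given start time.
# The trail ARN and date below are placeholders; adjust them to your trail.
$ aws cloudtrail validate-logs \
    --trail-arn arn:aws:cloudtrail:us-east-1:511912822958:trail/Trail \
    --start-time 2017-02-01T00:00:00Z
```

If any log or digest file has been modified or deleted, the command reports it; otherwise, it confirms that the files are valid.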
