Creating and launching an Elasticsearch cluster

As mentioned, AWS has a managed service for Elasticsearch, called Amazon Elasticsearch Service. We will use it to create our cluster.

Create a new file and call it elasticsearch-cf-template.py. Our script will start much like the nodeserver-cf-template.py file, but with a different set of imports, including some for the Elasticsearch service (this assumes the troposphere and ipify packages are installed):

"""Generating CloudFormation template.""" 
 
from ipaddress import ip_network 
 
from ipify import get_ip 
 
from troposphere import ( 
    GetAtt, 
    Join, 
    Output, 
    Export, 
    Parameter, 
    Ref, 
    Template, 
) 
 
from troposphere.elasticsearch import ( 
    Domain, 
    EBSOptions, 
    ElasticsearchClusterConfig, 
) 

We will continue the script with the creation of the template and the extraction of our public IP address. In the context of Elasticsearch, limiting who can access your cluster is very important, as there is no other authentication mechanism in place:

t = Template() 
 
PublicCidrIp = str(ip_network(get_ip())) 
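
As a quick illustration, ip_network() wraps the bare address returned by ipify in a /32 CIDR, which is the form the access policy at the end of the template expects. The address below is a hypothetical example:

>>> str(ip_network(u'203.0.113.12'))
'203.0.113.12/32'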

We will now provide a brief description and collect the different parameters. The first parameter to select is the instance type. We will provide a few options here, but you can refer to http://amzn.to/2s32Vvb for the full list of available instance types:

t.add_description('Effective DevOps in AWS: Elasticsearch') 
 
t.add_parameter(Parameter( 
    "InstanceType", 
    Type="String", 
    Description="instance type", 
    Default="t2.small.elasticsearch", 
    AllowedValues=[ 
        "t2.small.elasticsearch", 
        "t2.medium.elasticsearch", 
        "m4.large.elasticsearch", 
    ], 
)) 

We will also provide the ability to set the number of instances present in our cluster. In the context of this book, we are assuming that the cluster will store just a few GB of logs. For bigger clusters, you may consider altering the template to also support dedicated master instances, as sketched after the following parameter:

t.add_parameter(Parameter( 
    "InstanceCount", 
    Default="2", 
    Type="String", 
    Description="Number instances in the cluster", 
)) 
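
If you do opt for dedicated master instances, the following sketch shows what this template could gain. The parameter name and instance type below are our own choices, not part of the book's template:

t.add_parameter(Parameter(
    "MasterInstanceCount",
    Default="3",
    Type="String",
    Description="Number of dedicated master instances (odd counts avoid split votes)",
))

# The cluster configuration created later in the template would then
# also set the following properties:
#     DedicatedMasterEnabled=True,
#     DedicatedMasterType="m4.large.elasticsearch",
#     DedicatedMasterCount=Ref("MasterInstanceCount"),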

The t2 and m4 instances don't come with any attached storage. We will use EBS volumes to store our logs. This next option will let us set the size of the EBS volumes:

t.add_parameter(Parameter( 
    "VolumeSize", 
    Default="10", 
    Type="String", 
    Description="Size in Gib of the EBS volumes", 
)) 

The different parameters we wish to configure are now all present. We can proceed with the creation of our Elasticsearch cluster. In Amazon Elasticsearch Service, clusters are called domains. We will create a Domain resource and give it a name as follows:

t.add_resource(Domain( 
    'ElasticsearchCluster', 
    DomainName="logs", 

We then configure which version of Elasticsearch to use. We will pick version 5.3, the most recent version available on the service when this book was published:

    ElasticsearchVersion="5.3", 

Next, we will configure our cluster. As mentioned earlier, we are assuming that the cluster will stay fairly small and, therefore, won't need dedicated master instances. For the same reason, we will also opt out of the zone awareness feature, which distributes nodes and replicas across the different Availability Zones of the region the cluster is created in (a zone-aware variant is sketched after the completed resource below). Finally, we will reference the desired instance count and instance type from the parameters of the template:

    ElasticsearchClusterConfig=ElasticsearchClusterConfig( 
        DedicatedMasterEnabled=False, 
        InstanceCount=Ref("InstanceCount"), 
        ZoneAwarenessEnabled=False, 
        InstanceType=Ref("InstanceType"), 
    ), 

We will also want to specify a few advanced options. Setting rest.action.multi.allow_explicit_index to true allows multi-document APIs, such as bulk indexing, to name indices explicitly in the request body, while leaving indices.fielddata.cache.size empty keeps Elasticsearch's default fielddata cache behavior:

    AdvancedOptions={ 
        "indices.fielddata.cache.size": "", 
        "rest.action.multi.allow_explicit_index": "true", 
    },

After configuring the cluster, we will configure the EBS volume for our instances. Here, too, we will reference our parameters to get the volume size of our volumes:

    EBSOptions=EBSOptions(
        EBSEnabled=True,
        Iops=0,
        VolumeSize=Ref("VolumeSize"),
        VolumeType="gp2",
    ),

We will conclude the creation of our domain with the configuration of the access policy. The policy contains two statements: the first grants full Elasticsearch access (es:*) to our own AWS account, while the second grants the same access to any principal, but only for requests coming from our public IP address:

    AccessPolicies={ 
        'Version': '2012-10-17', 
        'Statement': [ 
            { 
                'Effect': 'Allow', 
                'Principal': { 
                    'AWS': [Ref('AWS::AccountId')] 
                }, 
                'Action': 'es:*', 
                'Resource': '*', 
            }, 
            { 
                'Effect': 'Allow', 
                'Principal': { 
                    'AWS': "*" 
                }, 
                'Action': 'es:*', 
                'Resource': '*', 
                'Condition': { 
                    'IpAddress': { 
                        'aws:SourceIp': PublicCidrIp 
                    } 
                } 
 
            } 
        ] 
    }, 
)) 
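
For reference, here is what a zone-aware variant of the cluster configuration could look like. This is a sketch rather than part of the book's template; at the time of writing, zone awareness on Amazon Elasticsearch Service spreads the nodes across two Availability Zones and therefore requires an even instance count:

    ElasticsearchClusterConfig=ElasticsearchClusterConfig(
        DedicatedMasterEnabled=False,
        InstanceCount=Ref("InstanceCount"),  # must resolve to an even number
        ZoneAwarenessEnabled=True,
        InstanceType=Ref("InstanceType"),
    ),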

Finally, we will conclude the creation of our template with two outputs and our final print statement. The outputs are the Kibana URL and the DomainArn of our Elasticsearch domain, which we will use in the next section. To make the ARN usable from other stacks, we export it under the name LogsDomainArn:

t.add_output(Output( 
    "DomainArn", 
    Description="Domain Arn", 
    Value=GetAtt("ElasticsearchCluster", "DomainArn"), 
    Export=Export("LogsDomainArn"), 
)) 
 
t.add_output(Output( 
    "Kibana", 
    Description="Kibana url", 
    Value=Join("", [ 
        "https://", 
        GetAtt("ElasticsearchCluster", "DomainEndpoint"), 
        "/_plugin/kibana/" 
    ]) 
)) 
 
print(t.to_json())

Our template is now complete. Your script should be similar to http://bit.ly/2v3DHRG.
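
As a preview of the next section, another troposphere template can consume the exported value through ImportValue. A minimal sketch, assuming the consuming template lives in the same account and region:

from troposphere import ImportValue

# Resolves at deploy time to the ARN exported as LogsDomainArn, which can
# then be used, for example, as the Resource of an IAM policy statement.
domain_arn = ImportValue("LogsDomainArn")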

We can commit it and create our Elasticsearch domain:

$ python elasticsearch-cf-template.py > elasticsearch-cf.template
$ git add elasticsearch-cf-template.py
$ git commit -m "Adding ElasticSearch template"
$ git push
$ aws cloudformation create-stack \
    --stack-name elasticsearch \
    --template-body file://elasticsearch-cf.template \
    --parameters \
    ParameterKey=InstanceType,ParameterValue=t2.small.elasticsearch \
    ParameterKey=InstanceCount,ParameterValue=2 \
    ParameterKey=VolumeSize,ParameterValue=10
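
Creating an Elasticsearch domain can take a while, often ten minutes or more. Once the stack reaches the CREATE_COMPLETE state, you can read the Kibana URL and domain ARN back from the stack outputs. The following boto3 snippet is our own illustration, not part of the book's scripts:

import boto3

# Fetch the outputs of the newly created elasticsearch stack.
cf = boto3.client("cloudformation")
stack = cf.describe_stacks(StackName="elasticsearch")["Stacks"][0]
for output in stack.get("Outputs", []):
    print("{}: {}".format(output["OutputKey"], output["OutputValue"]))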