Hosting the visualization in Amazon S3

As an alternative to publishing our pages using GitHub Pages, we can serve static sites from Amazon S3. Amazon S3 is a data storage service provided by Amazon Web Services. It can be used to store files of any kind and, in particular, to store and serve static websites. Amazon S3 stores data with 99.99 percent availability and scales well in terms of storage capacity, number of requests, and number of users. It also provides fine-grained access control, allowing you to store sensitive data.

The pricing depends on how much data you store and how many access requests are made, but it starts at less than 0.1 USD per GB per month, and there is a free tier that lets you store up to 5 GB (and serve up to 20,000 requests per month) at no charge. In this section, we will assume that you have an Amazon Web Services account. If you don't, you can sign up at http://aws.amazon.com/ and create an account.

In Amazon S3, files are stored in buckets. A bucket is a container for objects. Buckets are stored in one of several regions; the region is usually chosen to minimize latency or costs. The name of a bucket must be globally unique across all Amazon S3 buckets.

To host our site, we will create a bucket. To do this, go to the Amazon S3 console at https://console.aws.amazon.com and select Create Bucket. Here, we need to name the bucket and assign it a region. We will name it hdi-explorer and select the default region. Once the bucket is created, select it and go to Properties. In the Static Website Hosting section, we can enable hosting and retrieve the endpoint URL of the bucket. We will use this URL as the base URL of the site.
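If you prefer the command line, the same setup can be sketched with the AWS CLI (an assumption on our part; the book uses the web console, and this requires the AWS CLI to be installed and configured with your credentials):

```shell
# Create the bucket in the default region (bucket names are globally
# unique, so this exact name may already be taken).
aws s3 mb s3://hdi-explorer --region us-east-1

# Enable static website hosting, serving index.html as the index document.
aws s3 website s3://hdi-explorer/ --index-document index.html

# The website endpoint then follows this pattern:
# http://hdi-explorer.s3-website-us-east-1.amazonaws.com
```

Either way, the endpoint URL is what we will use as the base URL of the site.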

Configuring Jekyll to deploy files to S3

To deploy static content to Amazon S3, we need to generate a version of the site with the baseurl variable set to the Amazon S3 endpoint. We will create an alternate Jekyll configuration file and use it to generate the S3 version of the site. In this case, we only need to update the base URL of the site, but this new file can have different configuration values. We will create the _s3.yml file with the following options:

# Jekyll Configuration
safe: true
markdown: rdiscount
permalink: pretty
destination: _s3
exclude:
    - bower_components
    - node_modules
    - Gruntfile.js
    - bower.json
    - package.json
    - README.md
# Site
name: Human Development Index Explorer
baseurl: http://hdi-explorer.s3-website-us-east-1.amazonaws.com

We set the destination folder to _s3. This will generate the files that we need to deploy to Amazon in the _s3 directory. We have also excluded the files that are not needed to serve the page. We can now use this configuration to build the S3 version of the site:

$ jekyll build --config _s3.yml

We can check that the links in the generated files point to the S3 endpoint.
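A quick way to do this check is to search the generated output for the endpoint (a sketch, assuming the site was built into the _s3 directory with the configuration above):

```shell
# List every generated file that references the S3 endpoint;
# index.html should be among them.
grep -rl "hdi-explorer.s3-website-us-east-1.amazonaws.com" _s3/
```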

Uploading the site to the S3 bucket

We can upload the files using the web interface in the Amazon AWS console; however, the interface doesn't allow you to upload complete directories. Instead, we will use s3cmd, a command-line tool that helps to upload, download, and sync directories with S3 Buckets. To download and install s3cmd, follow the instructions available on the project website (http://s3tools.org/s3cmd).

Before uploading the files, we need to configure s3cmd with our Amazon security credentials. To generate new access keys, go to your AWS account page, select Security Credentials, and then select Access Keys. Here, you can generate a new access key ID and secret access key pair. These strings allow applications to authenticate and access your S3 buckets.

With the access key ID and secret, we can configure s3cmd to use them to upload our files:

$ s3cmd --configure

This command will prompt us for our access key ID and secret access key. We can now upload the files with the following command:

$ s3cmd sync _s3/ s3://bucket-name

The first time, this will upload all the files to your bucket. Once the content is in S3, subsequent runs will keep the bucket synchronized with the _s3 directory, uploading only the files that have changed.

Finally, we need to make our files public so that everyone with the URL can access the application. To do this, go to the bucket page in the browser, select all the files, and select Make Public in the Actions menu. The site will now be available at the bucket endpoint URL.
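Alternatively, s3cmd can set the public-read ACL at upload time, which avoids the manual step in the console (a sketch; bucket-name stands for your bucket, and --acl-public and --delete-removed are standard s3cmd flags):

```shell
# Sync and mark every uploaded object as publicly readable in one pass.
# --delete-removed also deletes objects that no longer exist locally.
s3cmd sync --acl-public --delete-removed _s3/ s3://bucket-name
```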
