Logging

When it comes to logging, a popular solution is to set up an Elasticsearch-based stack. The natural combination is Elasticsearch-Logstash-Kibana (ELK).

We use an ELK stack from https://github.com/deviantony/docker-elk with modifications: we add Docker Swarm configs and deploy each component independently. The original Docker Compose file, docker-compose.yml, is split into three YML files, one each for Elasticsearch, Kibana, and Logstash. The services must be deployed this way because we do not want to bring the whole logging system down whenever we change a single service's configs. The fork used in this chapter is available at https://github.com/chanwit/docker-elk.

The following figure shows what the stack will look like. All ELK components will live in elk_net. The Logstash instance will be exposed on port 5000. On each Docker host, a local Logspout agent will forward log messages from the Docker host to the Logstash instance. Logstash will then transform each message and store it in Elasticsearch. Finally, a user can access Kibana on port 5601 to visualize all the logs:

Figure 7.14: An ELK stack block diagram for cluster-wide logging

We start with the preparation of a dedicated network for our ELK stack. We name this network elk_net and use it for all ELK components:

docker network create \
  --driver weaveworks/net-plugin:2.1.3 \
  --subnet 10.32.200.0/24 \
  --attachable \
  elk_net
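To confirm that the network was created with the expected driver, a quick check can be done with docker network inspect (a sketch; the exact inspect output may vary slightly across Docker versions):

$ docker network inspect elk_net -f '{{.Driver}}'
weaveworks/net-plugin:2.1.3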

The following is the source of elasticsearch.yml. We use the Docker Compose YAML specification, version 3.3, throughout the chapter. This is the minimum version required, as we use Docker Swarm configs to manage all configuration files for us:

version: '3.3'

configs:
  elasticsearch_config:
    file: ./elasticsearch/config/elasticsearch.yml

services:
  elasticsearch:
    build:
      context: elasticsearch/
    image: chanwit/elasticsearch:6.1
    configs:
      - source: elasticsearch_config
        target: /usr/share/elasticsearch/config/elasticsearch.yml
    environment:
      ES_JAVA_OPTS: "-Xmx512m -Xms512m"

networks:
  default:
    external:
      name: elk_net
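The stack file references elasticsearch/config/elasticsearch.yml, which is not listed in this chapter. As a rough sketch, the upstream docker-elk project keeps this file minimal; it might look along these lines (values are illustrative, check the fork for the authoritative version):

---
## Elasticsearch configuration (illustrative sketch)
cluster.name: "docker-cluster"

# Listen on all interfaces inside the container
network.host: 0.0.0.0

# Single-node friendly default; tune for a real Elasticsearch cluster
discovery.zen.minimum_master_nodes: 1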

The docker stack command requires an image name to be specified for each service before the stack can be deployed. So, we need to build the container images using docker-compose first.

We use docker-compose only for building images.

Let's do it! We use docker-compose build to prepare the images defined in the YML file. The docker-compose command tags the images for us, too. As we have a separate YML file for each service, we use -f to point docker-compose at the correct file:

$ docker-compose -f elasticsearch.yml build

When the image is ready, we can simply deploy the stack, es, using the following command:

$ docker stack deploy -c elasticsearch.yml es
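To verify that the Elasticsearch task came up cleanly, we can list the stack's services and tail the service logs. Swarm names services using the <stack>_<service> convention, so the service here is es_elasticsearch:

$ docker stack services es
$ docker service logs es_elasticsearch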

Next, we move to the preparation and deployment of Kibana.

Here's the stack YML file for Kibana. We have kibana_config pointing to our Kibana configuration. The Kibana port 5601 is published using Swarm's host mode to bypass the ingress layer. Please remember that we do not really have the default ingress layer in our cluster. As previously mentioned, we use Træfik as our new ingress:

version: '3.3'

configs:
  kibana_config:
    file: ./kibana/config/kibana.yml

services:
  kibana:
    build:
      context: kibana/
    image: chanwit/kibana:6.1
    configs:
      - source: kibana_config
        target: /usr/share/kibana/config/kibana.yml
    ports:
      - published: 5601
        target: 5601
        mode: host

networks:
  default:
    external:
      name: elk_net
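Again, the referenced kibana/config/kibana.yml is not shown in the chapter. A minimal sketch for Kibana 6.x might look like the following; note that the elasticsearch hostname assumes the Elasticsearch service is resolvable under that name on elk_net (with separate stacks, you may instead need the full service name, es_elasticsearch, or an explicit network alias):

---
## Kibana configuration (illustrative sketch)
server.name: kibana

# Listen on all interfaces inside the container
server.host: "0"

# Hostname assumes a matching DNS name or alias on elk_net
elasticsearch.url: http://elasticsearch:9200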

Similar to Elasticsearch, the Kibana image can now be built using the docker-compose build command:

$ docker-compose -f kibana.yml build

After that, we deploy Kibana with the stack name kb:

$ docker stack deploy -c kibana.yml kb
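Because the port is published in host mode, Kibana is reachable on port 5601 of the node that runs the task. A quick reachability check from that node might look like this (illustrative):

$ curl -I http://127.0.0.1:5601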

With Logstash, there are two configuration files to consider. The more important one is the pipeline config, logstash_pipeline_config: this is where we add custom rules for log message transformation, and unlike the configurations of the other two ELK components, it changes frequently. Logstash listens on port 5000, both TCP and UDP, inside elk_net; we will later plug Logspout into this network to convey log messages from the Docker daemons to this Logstash service. A sketch of the pipeline file follows the stack file below:

version: '3.3'

configs:
  logstash_config:
    file: ./logstash/config/logstash.yml
  logstash_pipeline_config:
    file: ./logstash/pipeline/logstash.conf

services:
  logstash:
    build:
      context: logstash/
    image: chanwit/logstash:6.1
    configs:
      - source: logstash_config
        target: /usr/share/logstash/config/logstash.yml
      - source: logstash_pipeline_config
        target: /usr/share/logstash/pipeline/logstash.conf
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"

networks:
  default:
    external:
      name: elk_net
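For reference, here is a minimal sketch of the logstash/pipeline/logstash.conf pipeline mentioned earlier, assuming syslog-formatted input from Logspout on port 5000 and an Elasticsearch endpoint resolvable as elasticsearch on elk_net; the grok rule is just an illustrative placeholder for your own transformation rules:

input {
  # Logspout ships logs in syslog format over both TCP and UDP
  tcp {
    port => 5000
    type => syslog
  }
  udp {
    port => 5000
    type => syslog
  }
}

filter {
  # Custom transformation rules go here; parsing the syslog
  # envelope is a typical first step (illustrative)
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGLINE}" }
    }
  }
}

output {
  elasticsearch {
    hosts => "elasticsearch:9200"
  }
}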

The next steps are to build and deploy, similar to the first two components:

$ docker-compose -f logstash.yml build
$ docker stack deploy -c logstash.yml log

We started these three components as separate stacks linked together via elk_net. To verify that all components are running, use docker stack ls:

$ docker stack ls
NAME                SERVICES
es                  1
kb                  1
log                 1
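To drill down into an individual stack and confirm its replica counts, docker stack services can be run per stack:

$ docker stack services es
$ docker stack services kb
$ docker stack services log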

Finally, we can redirect all logs from each Docker daemon to the ELK stack, our central logging service, using Logspout. This is done by attaching each local Logspout container to elk_net so that all of them can connect to the Logstash instance inside the network. We start a Logspout agent on each host using the following command:

$ docker run -d \
  --name=logspout \
  --network=elk_net \
  --volume=/var/run/docker.sock:/var/run/docker.sock \
  gliderlabs/logspout \
  syslog+tcp+udp://logstash:5000
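Repeating this docker run command on every host quickly becomes tedious. As an alternative sketch, not covered in the original setup, the same agent could be scheduled on every node at once as a global Swarm service:

$ docker service create \
  --name logspout \
  --mode global \
  --network elk_net \
  --mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock \
  gliderlabs/logspout \
  syslog+tcp+udp://logstash:5000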

We are now able to ship all log messages via Logspout to Logstash, store them in Elasticsearch, and visualize them with Kibana.
