Deploying to swarm

We have two clean nodes in our swarm. Before deploying, we need to go back a little and prepare our hosts for our service. More specifically, we need to build our image again.

Because we have our Dockerfile, this is an easy task. Just run:

docker build -t imagini:0.0.5 .

Do this on both hosts.

Finally, we'll have our image available on both nodes. Our database container also needs an image, but since that's an official, published image, we don't need to build it, as Docker will download it when it needs it.
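If you prefer not to repeat the build by hand, you can script it from your local shell; a minimal sketch, assuming docker-machine hosts named manager and replica (as created earlier) and that the project folder has already been copied to ~/imagini on each machine:

```shell
# Run the image build on every node from the local shell.
# "manager", "replica", and the ~/imagini path are assumptions --
# adjust them to match your machine names and project location.
for node in manager replica; do
  docker-machine ssh "$node" "cd ~/imagini && docker build -t imagini:0.0.5 ."
done
```

This keeps both nodes on the same image tag, which matters later when the swarm schedules tasks on either node.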

We can now use the docker stack command to deploy our instances. It uses our previous Docker Compose configuration to know how to deploy our service. But before we do this, we need to make a couple of adjustments to that configuration.

For now, we'll restrict our database to a single replica, as we're not yet prepared to distribute our database server. We can do that by adding this section to the configuration:

deploy:
  replicas: 1
  placement:
    constraints: [node.role == manager]

It indicates that we only want one replica and that the container should be placed (that is, run) on the manager node.

Another change we need to make is to the database volume, because our containers now run on virtual machines rather than on the local host. Let's change the volumes section to this:

volumes:
  - /var/lib/mysql:/var/lib/mysql

We need to create the folder on the manager machine. To do that, run:

docker-machine ssh manager 'mkdir /var/lib/mysql'

To sum up, you should have a configuration as follows:

version: "3"
networks:
  imagini:
services:
  database:
    image: mysql:5.7
    networks:
      - imagini
    volumes:
      - /var/lib/mysql:/var/lib/mysql
    ports:
      - "3306:3306"
    environment:
      MYSQL_DATABASE: imagini
      MYSQL_ROOT_PASSWORD: secret
    deploy:
      replicas: 1
      placement:
        constraints: [node.role == manager]
  service:
    image: imagini:0.0.5
    networks:
      - imagini
    volumes:
      - ${PWD}/settings.json:/opt/app/settings.json
    ports:
      - "80:3000"

Head to the first tab, the one that controls the manager node, and run the following:

docker stack deploy --compose-file docker-compose.yml imagini

This should start the deployment. It will create the network on the swarm and then deploy the two services.

Wait a little bit and then run the following command:

docker stack ps imagini

This does the same as docker ps, but just for our stack.

The only difference you may notice is that the names of the containers have a .1 at the end and that a new NODE column indicates where each one is running inside our swarm.

Because our containers are running in a swarm across two virtual machines, we need to use the IP address of the node where the service is actually running. In this case, that's the replica machine.

Its address is 192.168.99.101. Head to the browser and check that our service is available.

Great! It's working. But if you think about it, if this is a swarm, shouldn't the service be available from any node in the network? You're right, it should. Let's check the other node's address.
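You can run the same check from the command line; a quick sketch, assuming the manager VM received the usual first docker-machine address, 192.168.99.100 (this is an assumption; verify yours with docker-machine ip manager):

```shell
# Ask each node directly; the swarm routing mesh should answer on both,
# even though only one node runs the imagini container right now.
curl http://$(docker-machine ip manager)/
curl http://$(docker-machine ip replica)/
```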

Now, this is something awesome. Notice that we only had one instance of each of our two services, and our imagini service was on the replica node. Although it's available from any of the swarm node addresses, if it fails for some reason, you won't be able to reach it.

We can change this by scaling the number of instances, or replicas, in our swarm. To do that, just issue the following command to change the scale to two instances:

docker service scale imagini_service=2

Docker will handle the deployment and check that everything goes as planned.

You can check the status of the services at any time using the following command:

docker service ls

Notice how our service reports 2/2 replicas running.

If you head to the browser, both addresses still work as expected, but in the background, we have two services running. You can notice this by looking at the uptime property.

If you run a little test, such as refreshing both addresses repeatedly, you'll notice that the uptime jumps up and down. This is because we deployed the two instances at different times, and although the statistics come from the database server, which is the same for both, the uptime is the process uptime.

Notice how the swarm does not always give you the same instance for the same address; it keeps rotating them.
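You can watch this rotation from the terminal as well; a sketch, assuming the service exposes an endpoint that reports the process uptime (the /stats path here is an assumption, not necessarily your service's route):

```shell
# Hit the same address a few times in a row; the responses should
# alternate between the two replicas, each reporting its own uptime.
# Replace /stats with whichever endpoint of your service reports uptime.
for i in 1 2 3 4; do
  curl -s http://192.168.99.101/stats
  echo
done
```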

The swarm also monitors our container instances. For example, let's imagine you're working on a host and, by accident, stop a container.
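For instance, on the replica node you could stop one of the imagini containers by hand; a sketch (swarm task containers carry a generated suffix in their names, so list them first):

```shell
# List the running containers on this node to find the imagini task.
docker ps
# Stop it by ID; use the CONTAINER ID shown by docker ps above --
# the name will look like imagini_service.1.<generated-suffix>.
docker stop <container-id>
```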

It's not very critical, since we have two replicas running. But if you head to the browser and hit refresh, you may see something odd.

What happened to our uptime? It just went down to a few seconds. This is because the swarm noticed our container stopping and restarted it. If you look at the Docker containers again, it's still running. Well, actually, it was restarted.

If you want to scale to more instances, you don't need more swarm nodes. There's no limit on the number of instances of the same container running on a single node, and running more than one per node is actually a good practice.

It enables you to test whether your service is ready for a scaled environment, and it also lets you perform a phased upgrade: stop and upgrade the containers one by one while at least one other container is still serving your customers.

Let's just scale our service to five instances.
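The command follows the same pattern we used to scale to two replicas:

```shell
# Ask the swarm for five replicas of the imagini service.
docker service scale imagini_service=5
```

Once all tasks are up, docker service ls should report 5/5 replicas for imagini_service.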

Notice how we have two of the five service instances running on this node, along with the database instance. The other three are on the replica node. This is the swarm balancing the instances across the nodes.
