Dockerfile

Nothing special here: the file is pretty much a trimmed-down version of the one from our older Node.js app:

FROM node:8

# Make sure we are fully up to date
RUN apt-get update -q && \
    apt-get dist-upgrade -y && \
    apt-get clean && \
    apt-get autoclean

# Container port that should get exposed
EXPOSE 8000

ENV SRV_PATH /usr/local/share/queue_handler

# Make our directory
RUN mkdir -p $SRV_PATH && \
    chown node:node $SRV_PATH

WORKDIR $SRV_PATH

USER node

COPY . $SRV_PATH/

RUN npm install

CMD ["npm", "start"]

We will build the image now:

$ docker build -t queue-worker .
Sending build context to Docker daemon 7.168kB
<snip>
---> 08e33a32ba60
Removing intermediate container e17c836c5a33
Successfully built 08e33a32ba60
Successfully tagged queue-worker:latest

With the image build out of the way, we can now write our stack definition file, swarm_application.yml. Here we are essentially creating the queue server, the queue listener, and the queue sender on a single network and making sure that they can find each other:

version: "3"
services:
  queue-sender:
    image: queue-worker
    command: ["npm", "start", "sender"]
    networks:
      - queue_network
    deploy:
      replicas: 1
    depends_on:
      - redis-server
    environment:
      - QUEUE_HOST=redis-server

  queue-receiver:
    image: queue-worker
    command: ["npm", "start", "receiver"]
    networks:
      - queue_network
    deploy:
      replicas: 1
    depends_on:
      - redis-server
    environment:
      - QUEUE_HOST=redis-server

  redis-server:
    image: redis
    networks:
      - queue_network
    deploy:
      replicas: 1
    ports:
      - 6379:6379

networks:
  queue_network:
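Since a single queue-worker image backs both services, the app's entry point presumably dispatches on the extra argument that npm appends after `npm start`, and reads the queue host from the `QUEUE_HOST` environment variable set per service in the stack file. A minimal sketch of that dispatch, assuming this layout (the function names are illustrative, not from the actual app):

```javascript
// Illustrative sketch only -- the real app's entry point is not shown here.
// "npm start sender" appends "sender" to the start script's arguments,
// so the role shows up in process.argv.

// Pick the worker role from the first extra CLI argument.
function pickRole(argv) {
  return argv[2] === 'sender' ? 'sender' : 'receiver';
}

// Resolve the queue host from the environment (set per service in the
// stack file), falling back to localhost for local development.
function queueHost(env) {
  return env.QUEUE_HOST || 'localhost';
}

console.log(`Starting ${pickRole(process.argv)} against ${queueHost(process.env)}`);

module.exports = { pickRole, queueHost };
```

Because both services set `QUEUE_HOST=redis-server`, each container relies on Swarm's built-in DNS to resolve the `redis-server` service name on the shared network.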

Having both the image built and the stack definition written, we can launch our queue cluster to see whether it works:

$ # We need a Swarm first
$ docker swarm init
Swarm initialized: current node (c0tq34hm6u3ypam9cjr1vkefe) is now a manager.
<snip>

$ # Now we deploy our stack and name it "queue_stack"
$ docker stack deploy \
    -c swarm_application.yml \
    queue_stack

Creating service queue_stack_queue-sender
Creating service queue_stack_queue-receiver
Creating service queue_stack_redis-server

$ # At this point, we should be seeing some traffic...
$ docker service logs queue_stack_queue-receiver
<snip>
queue_stack_queue-receiver.1.ozk2uxqnbfqz@machine | Starting...
queue_stack_queue-receiver.1.ozk2uxqnbfqz@machine | Registering listener...
queue_stack_queue-receiver.1.ozk2uxqnbfqz@machine | Got a message from the queue with data: { key: '2017-10-02T08:24:21.391Z' }
queue_stack_queue-receiver.1.ozk2uxqnbfqz@machine | Got a message from the queue with data: { key: '2017-10-02T08:24:22.898Z' }
<snip>

$ # Yay! It's working!

$ # Let's clean things up to finish up
$ docker stack rm queue_stack
Removing service queue_stack_queue-receiver
Removing service queue_stack_queue-sender
Removing service queue_stack_redis-server
Removing network queue_stack_redis-server
Removing network queue_stack_queue_network
Removing network queue_stack_service_network

$ docker swarm leave --force
Node left the swarm.

At this point, we could add any number of senders and listeners (within reason), and our system would work just fine in a very asynchronous style, increasing throughput at both ends. As a reminder, though, if you decide to go this route, a more robust queue backend (Kafka, SQS, and so on) is highly advised, but the underlying principles are pretty much the same.
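The throughput gain from extra listeners comes from the competing-consumers pattern: each message is popped by exactly one receiver, so work spreads across however many replicas are running. A minimal in-memory sketch of the principle (the round-robin and the names here are illustrative; in the real stack, a Redis list pop such as BLPOP provides the one-pop-per-message guarantee across separate processes):

```javascript
// Competing consumers in miniature: each message is handed to exactly one
// receiver, which is why adding replicas raises throughput.
const queue = [];
for (let i = 0; i < 10; i++) {
  // Same payload shape as the receiver log output above.
  queue.push({ id: i, key: new Date().toISOString() });
}

const consumers = ['receiver-1', 'receiver-2'];
const deliveries = new Map(); // message id -> consumer that handled it

// Round-robin stands in for "whichever replica pops first"; shift()
// ensures a message can never reach two consumers.
let turn = 0;
while (queue.length > 0) {
  const msg = queue.shift();
  deliveries.set(msg.id, consumers[turn++ % consumers.length]);
}

console.log(`${deliveries.size} messages processed, ${queue.length} left in queue`);
```

Every message ends up with exactly one consumer, and the queue drains fully regardless of how many consumers are added.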
