Scaling

Now that we have made sure that our application keeps running and restarts after failures, it's time to look at ways to handle millions of users flocking to our chat room. The first step is to put a load-balancing proxy in front of our server. There are many options here: the Apache HTTP Server, Nginx, and so on. These servers balance traditional HTTP traffic very well, but at the time of writing they are still catching up on WebSocket support. So we will use a server that load-balances at the TCP/IP level itself: HAProxy (http://haproxy.1wt.eu/). This is how HAProxy is described on its official website:

HAProxy is a free, very fast and reliable solution offering high availability, load balancing, and proxying for TCP and HTTP-based applications. It is particularly suited for web sites crawling under very high loads while needing persistence or Layer7 processing. Supporting tens of thousands of connections is clearly realistic with today's hardware.

HAProxy works with frontends and backends. These are configured in the HAProxy configuration file at /etc/haproxy/haproxy.cfg. The following file creates a frontend listener on port 80 and forwards requests to a single server on port 3000:

global
  maxconn 4096

defaults
  mode http

frontend all 0.0.0.0:80
  default_backend www_Node.js

backend www_Node.js
  mode http
  option forwardfor
  server Node.js 127.0.0.1:3000 weight 1 maxconn 10000 check

In this file, we define a frontend listener at 0.0.0.0:80 with a default backend, www_Node.js, which forwards requests to a single server listening on port 3000 at 127.0.0.1.

But this configuration is not ready to handle WebSockets. To support and handle WebSockets, refer to the following code block:

global
  maxconn 4096

defaults
  mode http

frontend all 0.0.0.0:80
  timeout client 86400000
  default_backend www_Node.js
  acl is_websocket hdr(upgrade) -i websocket
  acl is_websocket hdr_beg(host) -i ws

  use_backend www_Node.js if is_websocket

backend www_Node.js
  mode http
  option forwardfor
  timeout server 86400000
  timeout connect 4000
  server Node.js 127.0.0.1:3000 weight 1 maxconn 10000 check

The first thing we did was increase the client timeout value, so that the client connection doesn't drop if the client is inactive for a long time. The acl lines instruct HAProxy to check whether an incoming request is a websocket request.

By using the use_backend instruction, we configure HAProxy to route websocket requests to the www_Node.js backend. This is useful when you want to serve your static pages from another server, such as the Apache HTTP Server, and use node exclusively to handle socket.io.
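To see what the two acl lines are actually matching on, the same checks can be sketched in plain JavaScript (this mimics the matching logic for illustration; it is not HAProxy's internal implementation):

```javascript
// Sketch of the two acl checks from the config above:
//   hdr(upgrade) -i websocket  -> Upgrade header equals "websocket" (case-insensitive)
//   hdr_beg(host) -i ws        -> Host header begins with "ws" (case-insensitive)
function isWebsocketRequest(headers) {
  const upgrade = (headers['upgrade'] || '').toLowerCase();
  const host = (headers['host'] || '').toLowerCase();
  return upgrade === 'websocket' || host.startsWith('ws');
}
```

A WebSocket handshake always carries the Upgrade: websocket header, so the first acl catches it regardless of the hostname; the second acl additionally routes any host beginning with "ws" (for example, ws.example.com) to the node backend.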

Now we come to the part where we want requests to be handled by more than one node server/process. To do this, we first tell the proxy to round-robin the requests by adding the following instruction to the backend:

  balance roundrobin

Then we will add more server entries to the backend:

  server Node.js2 127.0.0.1:4000 weight 1 maxconn 10000 check
  server Node.js3 192.168.1.101:3000 weight 1 maxconn 10000 check

Here we are adding two new node instances: one is a new process listening on port 4000 on the same server, while the other one is running on another server, which is accessible to the load-balancer at 192.168.1.101 on port 3000.

We are done configuring the servers and the incoming requests will now be routed between the three node instances that we have configured.
