Scaling Spark

Now we illustrate the most amazing feature of Swarm Mode: the scale command. We restore the configuration we had before trying Flocker, so we first destroy the spark-worker service:
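
aws-101$ docker service rm spark-worker

Then we re-create it with three replicas: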

aws-101$ docker service create \
--constraint 'node.labels.type != sparkmaster' \
--network spark \
--name spark-worker \
--replicas 3 \
--env SPARK_MASTER_IP=10.0.0.3 \
--env SPARK_WORKER_CORES=1 \
--env SPARK_WORKER_MEMORY=1g \
fsoppelsa/spark-worker
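
To verify that the three tasks were scheduled, docker service ps lists each task along with the node it landed on and its current state:

aws-101$ docker service ps spark-worker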

Now, we scale the service up to 30 Spark workers using the following command:

aws-101$ docker service scale spark-worker=30
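
The scale command is shorthand for updating the service's replica count; the same effect can be achieved with docker service update:

aws-101$ docker service update --replicas 30 spark-worker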

After a few minutes, the time needed for the nodes to pull the image, we check once again:

[Screenshot: the spark-worker service scaled to 30 replicas]
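
From the command line, the equivalent check is docker service ls, whose REPLICAS column reads 30/30 once all tasks are running:

aws-101$ docker service ls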

From the Spark web UI:

[Screenshot: the Spark web UI listing the new workers]
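
The Spark standalone master serves this web UI on port 8080 by default. Assuming that port is published outside the overlay network (a deployment detail not shown here) and that jq is installed, the worker count can also be read from the master's JSON status endpoint, where <manager-ip> is a placeholder for the manager's public address:

aws-101$ curl -s http://<manager-ip>:8080/json | jq '.workers | length'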

The scale command can be used to increase or decrease the number of replicas of a service. So far, Swarm has no built-in mechanism for auto-scaling or for redistributing load to newly added nodes, but such mechanisms can be implemented with custom utilities, and we may even expect them to be integrated into Swarm one day.
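
For instance, shrinking the pool back to three workers is symmetrical:

aws-101$ docker service scale spark-worker=3

As for custom utilities, the following is a minimal, hypothetical sketch of an auto-scaler: it polls the Spark master's JSON status endpoint and adds workers when most cores are busy. The 80% threshold, the step of five workers, and the master address are illustrative assumptions, not part of this setup:

#!/bin/sh
# naive-autoscaler.sh: hypothetical sketch, not production code.
# Reads cluster load from the Spark master's JSON status page
# (assumed reachable at 10.0.0.3:8080) and grows the spark-worker
# service when more than 80% of the cores are in use.
# Requires curl and jq on the machine running it.
STATUS=$(curl -s http://10.0.0.3:8080/json)
USED=$(echo "$STATUS" | jq '.coresused')
TOTAL=$(echo "$STATUS" | jq '.cores')
if [ "$TOTAL" -gt 0 ] && [ $((USED * 100)) -gt $((TOTAL * 80)) ]; then
    # read the current replica count from the service spec
    CURRENT=$(docker service inspect \
        -f '{{.Spec.Mode.Replicated.Replicas}}' spark-worker)
    docker service scale spark-worker=$((CURRENT + 5))
fi

Run periodically (for example, from cron), this would grow the cluster under load; a real utility would also scale down and cap the maximum replica count.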
