How to do it...

  1. Spark comes bundled with scripts to launch the Spark cluster on Amazon EC2. Let's launch the cluster using the following command:
        $ cd /home/hduser
        $ spark-ec2 -k <key-pair> -i <key-file> -s <num-slaves> launch <cluster-name>
Here:
<key-pair>: the name of the EC2 key pair created in AWS
<key-file>: the private key file you downloaded
<num-slaves>: the number of slave nodes to launch
<cluster-name>: the name of the cluster
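The spark-ec2 script picks up your AWS credentials from environment variables, so export them in the same shell before launching; a minimal sketch, with placeholder values that you need to replace with your own keys:
        $ export AWS_ACCESS_KEY_ID=<your-access-key-id>
        $ export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>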
  1. Launch the cluster with example values:
        $ spark-ec2 -k kp-spark -i /home/hduser/keypairs/kp-spark.pem --hadoop-major-version 2 -s 3 launch spark-cluster
  1. Sometimes, the default availability zones are not available; in that case, retry the request by specifying the specific availability zone you want:
        $ spark-ec2 -k kp-spark -i /home/hduser/keypairs/kp-spark.pem -z us-east-1b --hadoop-major-version 2 -s 3 launch spark-cluster
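If you are unsure which availability zones your region offers, and you happen to have the AWS command-line interface installed and configured (it is not required by spark-ec2, so treat this as an optional aside), you can list them first:
        $ aws ec2 describe-availability-zones --region us-east-1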
  1. If your application needs to retain data after the instance shuts down, attach an EBS volume to it (for example, with 10 GB of space):
        $ spark-ec2 -k kp-spark -i /home/hduser/keypairs/kp-spark.pem --hadoop-major-version 2 --ebs-vol-size 10 -s 3 launch spark-cluster
  1. If you use Amazon spot instances, this is the way to do it:
        $ spark-ec2 -k kp-spark -i /home/hduser/keypairs/kp-spark.pem --spot-price=0.15 --hadoop-major-version 2 -s 3 launch spark-cluster
Spot instances allow you to name your own price for Amazon EC2's computing capacity. You simply bid for spare Amazon EC2 instances and run them whenever your bid exceeds the current spot price, which varies in real time and is based on supply and demand (source: www.amazon.com).
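Because spot prices differ per instance type, it usually makes sense to pin the instance type alongside your bid. A sketch combining the two, where m1.large and the 0.15 bid are only illustrative values; --instance-type is a standard spark-ec2 option, but confirm it against spark-ec2 --help for your Spark version:
        $ spark-ec2 -k kp-spark -i /home/hduser/keypairs/kp-spark.pem --spot-price=0.15 --instance-type=m1.large --hadoop-major-version 2 -s 3 launch spark-cluster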
  1. After the launch process completes, check the status of the cluster by going to the web UI URL that is printed at the end:

  1. Check the status of the cluster:
  1. Now, to access the Spark cluster on EC2, connect to the master node using the secure shell protocol (SSH):
        $ spark-ec2 -k kp-spark -i /home/hduser/keypairs/kp-spark.pem login spark-cluster
  1. The following image illustrates the result you'll get:
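If you only need the master's hostname (for example, to open its web UI or to build a spark:// URL) rather than a shell on it, spark-ec2 also provides a get-master action; a minimal sketch, assuming your AWS credentials are still exported in the shell:
        $ spark-ec2 get-master spark-cluster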
  1. Check the directories in the master node and see what they do:
Directory        Description
ephemeral-hdfs   This is the Hadoop instance for which data is ephemeral and gets deleted when you stop or restart the machine.
persistent-hdfs  Each node has a very small amount of persistent storage (approximately 3 GB). If you use this instance, data will be retained in that space.
hadoop-native    This refers to the native libraries that support Hadoop, such as the snappy compression libraries.
scala            This refers to the Scala installation.
shark            This refers to the Shark installation (Shark is no longer supported and is replaced by Spark SQL).
spark            This refers to the Spark installation.
spark-ec2        This refers to the files that support this cluster deployment.
tachyon          This refers to the Tachyon installation.
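To confirm that the installation in the spark directory is wired to the cluster, you can start the Spark shell on the master; on spark-ec2 clusters it is normally preconfigured to use the cluster, but if yours is not, you can point it at the standalone master explicitly (7077 is the standalone master's default port, and the hostname placeholder is yours to fill in):
        $ spark/bin/spark-shell
        $ spark/bin/spark-shell --master spark://<master-hostname>:7077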
  1. Check the HDFS version in an ephemeral instance:
        $ ephemeral-hdfs/bin/hadoop version
Hadoop 2.0.0-cdh4.2.0
  1. Check the HDFS version in a persistent instance with the following command:
        $ persistent-hdfs/bin/hadoop version
Hadoop 2.0.0-cdh4.2.0
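Since the persistent HDFS keeps its data across restarts, it is the natural place for anything you want to retain; a small sketch using standard hadoop fs commands, where the /data directory and localfile.txt are just example names:
        $ persistent-hdfs/bin/hadoop fs -mkdir /data
        $ persistent-hdfs/bin/hadoop fs -put localfile.txt /data/
        $ persistent-hdfs/bin/hadoop fs -ls /data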
  1. Change the log level configuration. First, move to the Spark configuration directory:
        $ cd spark/conf
  1. The default log level, INFO, is too verbose, so let's change it to ERROR:
  • Create the log4j.properties file by renaming the template:
        $ mv log4j.properties.template log4j.properties
  • Open log4j.properties in vi or your favorite editor:
        $ vi log4j.properties
  • Change the second line from log4j.rootCategory=INFO, console to log4j.rootCategory=ERROR, console (or make the change non-interactively, as shown in the sketch after this list).
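If you prefer to script this edit rather than open an editor, a sed one-liner does the same thing; this assumes the stock template line shown above:
        $ sed -i 's/log4j.rootCategory=INFO, console/log4j.rootCategory=ERROR, console/' log4j.properties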
  1. Copy the configuration to all the slave nodes after the change:
        $ spark-ec2/copy-dir spark/conf
  1. You should get something like this:
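If you also want the already-running standalone daemons to pick up configuration changes, you can bounce them with the scripts that ship with Spark; in Spark 1.x these live under spark/sbin, so adjust the path if your layout differs:
        $ spark/sbin/stop-all.sh
        $ spark/sbin/start-all.sh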
  1. Destroy the Spark cluster:
        $ spark-ec2 destroy spark-cluster
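If you only want to pause the cluster rather than tear it down, spark-ec2 also supports stop and start actions; note that data on ephemeral storage is lost when the instances stop, while data on attached EBS volumes (such as persistent-hdfs) is kept. A sketch, reusing the same key pair:
        $ spark-ec2 stop spark-cluster
        $ spark-ec2 -k kp-spark -i /home/hduser/keypairs/kp-spark.pem start spark-cluster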