How to do it...

The following are the steps to add appropriate persistence levels:

  1. Start the Spark shell:
        $ spark-shell
  2. Import the StorageLevel object, which enumerates the persistence levels, along with its associated implicits:
        scala> import org.apache.spark.storage.StorageLevel._
  3. Create a dataset:
        scala> val words = spark.read.textFile("words")
  4. Persist the dataset:
        scala> words.persist(MEMORY_ONLY_SER)

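Note that persistence is lazy: step 4 only marks the dataset for caching, and nothing is stored until an action runs. A minimal sketch of the full sequence, assuming it is run inside spark-shell (where `spark` is predefined) with the recipe's example `words` input path:

```scala
import org.apache.spark.storage.StorageLevel._

// Mark the dataset for serialized in-memory caching (lazy; nothing cached yet)
val words = spark.read.textFile("words")
words.persist(MEMORY_ONLY_SER)

// The cache is materialized only when an action runs
words.count()

// Free the cached blocks when they are no longer needed
words.unpersist()
```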
Though serialization reduces the memory footprint substantially, it costs extra CPU cycles to deserialize the data every time it is accessed.

By default, Spark uses Java's serialization. Since Java serialization is slow, the better approach is to use the Kryo library. Kryo is much faster and sometimes even 10 times more compact than the default.