The following are the steps to add appropriate persistence levels:
- Start the Spark shell:
$ spark-shell
- Import the members of the StorageLevel object, which enumerate the available persistence levels:
scala> import org.apache.spark.storage.StorageLevel._
- Create a dataset:
scala> val words = spark.read.textFile("words")
- Persist the dataset:
scala> words.persist(MEMORY_ONLY_SER)
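To confirm the level actually took effect, the storageLevel member on Dataset (available since Spark 2.1) can be inspected, and unpersist frees the cached blocks when they are no longer needed. A minimal sketch, assuming the words dataset created in the steps above:

```scala
import org.apache.spark.storage.StorageLevel._

// Attach the serialized in-memory level, then inspect what is set.
words.persist(MEMORY_ONLY_SER)
println(words.storageLevel)

// Release the cached blocks once the dataset is no longer needed.
words.unpersist()
```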
Although serialization substantially reduces the memory footprint, it costs extra CPU cycles to serialize and deserialize the data.
By default, Spark uses Java serialization. Since Java serialization is slow, the better approach is to use the Kryo library. Kryo is significantly faster than the default, and its serialized output is sometimes as much as 10 times more compact.
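Kryo is enabled through the spark.serializer configuration property; registering application classes with Kryo is optional but avoids storing full class names in the serialized output. A minimal sketch (the appName and the registered class are illustrative):

```scala
import org.apache.spark.sql.SparkSession

// Build a session that serializes shuffled and cached data with Kryo
// instead of default Java serialization.
val spark = SparkSession.builder()
  .appName("kryo-example")
  .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  // Optional: register frequently serialized classes for compact output.
  .config("spark.kryo.classesToRegister", "com.example.MyRecord")
  .getOrCreate()
```

The same properties can instead be passed on the command line, for example spark-shell --conf spark.serializer=org.apache.spark.serializer.KryoSerializer.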