Spark Dataset API

After this discussion about Spark DataFrames, let's have a quick recap of the Spark Dataset API. Introduced in Apache Spark 1.6, the goal of Spark Datasets was to provide an API that allows users to easily express transformations on domain objects, while also providing the performance and benefits of the robust Spark SQL execution engine. As part of the Spark 2.0 release (and as noted in the diagram below), the DataFrame API was merged into the Dataset API, thus unifying data processing capabilities across all libraries. Because of this unification, developers now have fewer concepts to learn or remember, and work with a single high-level, type-safe API called Dataset:

[Figure: Spark Dataset API]

Conceptually, a Spark DataFrame is an alias for a collection of generic objects, Dataset[Row], where a Row is a generic untyped JVM object. A Dataset, by contrast, is a collection of strongly-typed JVM objects, dictated by a case class you define in Scala or a class you define in Java. This last point is particularly important, as it means the typed Dataset API is not supported in PySpark: Python is dynamically typed and would not benefit from the compile-time type enhancements. Note that the parts of the Dataset API that are not available in PySpark can be approximated by converting to an RDD or by using UDFs. For more information, please refer to the JIRA issue SPARK-13233: Python Dataset at http://bit.ly/2dbfoFT.
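To make the contrast between the untyped Dataset[Row] and a strongly-typed Dataset concrete, here is a minimal Scala sketch; the Person case class, its fields, and the people.json input path are assumptions chosen purely for illustration:

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical domain object; its fields are assumed for this example
case class Person(name: String, age: Long)

object DatasetExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("DatasetExample")
      .getOrCreate()

    // Brings in the encoders needed to create a Dataset[Person]
    import spark.implicits._

    // A DataFrame is a Dataset[Row]: each Row is a generic, untyped object
    val df = spark.read.json("people.json")

    // Converting to Dataset[Person] attaches the case class's types,
    // so field access and transformations are checked at compile time
    val people = df.as[Person]
    val adultNames = people.filter(p => p.age >= 18).map(p => p.name)

    adultNames.show()
    spark.stop()
  }
}
```

In the untyped DataFrame, a mistyped column name such as df("agee") only fails at runtime, whereas p.agee in the typed version above would not compile; this is the type safety the unified Dataset API adds for Scala and Java users.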
