Summary

SparkR overcomes R's single-threaded execution and memory limitations by delegating work to Spark's distributed in-memory processing engine. It provides distributed DataFrames built on Spark's DataFrame API and distributed machine learning through MLlib. Because SparkR runs on top of the Spark engine, it automatically inherits the Data Sources API and the engine's DataFrame optimizations, giving it far higher scalability than plain R. SparkR is also well suited to iterative algorithms, unlike R on Hadoop, which is MapReduce-based and therefore writes intermediate results to disk between passes.
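As a brief illustration (a minimal sketch assuming a Spark 2.x-style SparkR installation; the app name, data set, and model formula are only examples), the following shows a distributed DataFrame and an MLlib model driven entirely from R:

    library(SparkR)
    # Start a SparkR session against the local master; "yarn" or a
    # standalone master URL could be used here instead.
    sparkR.session(master = "local[*]", appName = "SparkRSummary")

    # Convert a local R data frame into a distributed SparkR DataFrame
    df <- as.DataFrame(faithful)
    head(filter(df, df$waiting > 70))

    # Fit a generalized linear model using MLlib from R
    model <- spark.glm(df, eruptions ~ waiting, family = "gaussian")
    summary(model)

    sparkR.session.stop()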

SparkR can be invoked from the SparkR shell, standalone scripts, RStudio, and Zeppelin notebooks, and it can run on the local, standalone, and YARN resource managers. Using the Data Sources API, external data in supported formats can be read into SparkR DataFrames without any additional coding.
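For example, a JSON file can be loaded through the Data Sources API with read.df (again a sketch assuming the Spark 2.x SparkR API; the file paths are placeholders):

    # Read a JSON file into a SparkR DataFrame via the Data Sources API
    people <- read.df("examples/src/main/resources/people.json", source = "json")
    printSchema(people)

    # Write the same DataFrame back out in another supported format
    write.df(people, path = "people_parquet", source = "parquet", mode = "overwrite")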

Big Data analytics with Spark and Hadoop is becoming extremely popular, and organizations are reaping the benefits of higher scalability and performance with ease of use. While there are plenty of tools available on both Spark and Hadoop, one has to pick the tool best suited to the use case to get optimum performance.
