Step 2 - Reading and parsing the dataset

Let's use Spark's textFile method to read a text file from your preferred storage, such as HDFS or the local filesystem. Spark treats each line as an opaque string, so it's up to us to specify how to split the fields. Here we apply a map transformation to each line: the movies file is split on the pipe character to get (movieID, title) pairs, and the ratings file is split on tabs to get the required (userID, movieID, rating) fields:

val TRAIN_FILENAME = "data/ua.base"
val TEST_FILENAME = "data/ua.test"
val MOVIES_FILENAME = "data/u.item"

// get movie names keyed on id
val movies = spark.sparkContext.textFile(MOVIES_FILENAME)
  .map(line => {
    // u.item is pipe-delimited; Scala's split takes a regex, so escape the pipe
    val fields = line.split("\\|")
    (fields(0).toInt, fields(1))
  })
val movieNames = movies.collectAsMap()

// extract (userid, movieid, rating) from the tab-delimited ratings data
val ratings = spark.sparkContext.textFile(TRAIN_FILENAME)
  .map(line => {
    val fields = line.split("\t")
    (fields(0).toInt, fields(1).toInt, fields(2).toInt)
  })