
spark-scala: Filter RDD if the record of the RDD doesn't exist in another RDD

I have an RDD whose records have the following structure:

 (user_id, item_id, rating)

Let's call this RDD train.

Then there is another RDD with the same structure:

 (user_id, item_id, rating)

Let's call this RDD test.

I want to make sure that no (user, item) pair in test also appears in train, on a per-user basis. So, say:

 train = {(u1, item2), (u1, item4), (u1, item3)}
 test  = {(u1, item2), (u1, item5)}

Then I want (u1, item2) removed from u1's training data.

So I started by grouping both RDDs by (user_id, item_id):

 val groupedTrainData = trainData.groupBy(x => (x._1, x._2))

But I feel like this is not the way to go.

add-semi-colons asked Oct 20 '22 05:10
1 Answer

You need PairRDDFunctions.subtractByKey:

 import org.apache.spark.rdd.RDD

 // user/item/rating stand for your concrete types, e.g.:
 // type user = Int; type item = Int; type rating = Double

 def cleanTrain(
     train: RDD[((user, item), rating)],
     test: RDD[((user, item), rating)]): RDD[((user, item), rating)] =
   train.subtractByKey(test)
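Note that this expects the RDDs to be keyed by (user, item), whereas the question's RDDs hold flat (user_id, item_id, rating) triples, so you have to re-key them first with a map. A minimal self-contained sketch under that assumption (the variable names and the local SparkContext setup are illustrative, not from the original post):

 import org.apache.spark.{SparkConf, SparkContext}

 val sc = new SparkContext(
   new SparkConf().setMaster("local[*]").setAppName("clean-train"))

 // Toy data matching the example in the question.
 val trainData = sc.parallelize(Seq((1, 2, 4.0), (1, 4, 3.0), (1, 3, 5.0)))
 val testData  = sc.parallelize(Seq((1, 2, 4.0), (1, 5, 2.0)))

 // Re-key the flat triples so that (user, item) becomes the key.
 val trainPairs = trainData.map { case (u, i, r) => ((u, i), r) }
 val testPairs  = testData.map  { case (u, i, r) => ((u, i), r) }

 // Drop training records whose (user, item) key also appears in test.
 val cleanedTrain = trainPairs.subtractByKey(testPairs)
 cleanedTrain.collect().foreach(println)

Here (1, 2) is removed, leaving only the ((1, 4), 3.0) and ((1, 3), 5.0) records. Unlike a groupBy followed by a join, subtractByKey does the anti-join in one pass without materializing the grouped values.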
Daniel Darabos answered Nov 15 '22 07:11