Apache Spark RDD filter into two RDDs

I need to split an RDD into two parts:

one part that satisfies a condition, and another part that does not. I could do filter twice on the original RDD, but that seems inefficient. Is there a way to do what I'm after? I can't find anything in the API or in the literature.

asked Apr 09 '15 by monster

People also ask

Can RDD be partitioned?

Apache Spark's Resilient Distributed Datasets (RDDs) are collections of data that can be too large to fit on a single node, so they are partitioned across multiple nodes. Spark automatically partitions RDDs and distributes the partitions across different nodes.
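
For instance, you can inspect the partitioning directly (a minimal sketch; the numbers are illustrative):

val rdd = sc.parallelize(1 to 1000, 4)   // request 4 partitions
rdd.partitions.length                    // Int = 4
rdd.glom().map(_.length).collect()       // element count per partition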

What is the difference between RDDs and paired RDDs?

Unpaired RDDs can consist of any type of object. Paired RDDs (key-value) additionally support a few special operations, such as distributed "shuffle" operations and grouping or aggregating elements by key.
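
For instance (a minimal sketch with made-up data):

val words = sc.parallelize(Seq("a", "b", "a", "c", "b", "a"))
val pairs = words.map(w => (w, 1))       // paired RDD: RDD[(String, Int)]
val counts = pairs.reduceByKey(_ + _)    // key-based aggregation
counts.collect() // e.g. Array((a,3), (b,2), (c,1)), order not guaranteed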


2 Answers

Spark doesn't support this by default. Filtering on the same data twice isn't that bad if you cache it beforehand, and the filtering itself is quick.

If it's really just two different types, you can use a helper method:

import org.apache.spark.rdd.RDD

// Extension method that returns the (passing, failing) halves as a pair.
// Note: the name partitionBy collides with PairRDDFunctions.partitionBy,
// so you may want a different name if you work with key-value RDDs.
implicit class RDDOps[T](rdd: RDD[T]) {
  def partitionBy(f: T => Boolean): (RDD[T], RDD[T]) = {
    val passes = rdd.filter(f)
    val fails = rdd.filter(e => !f(e)) // Spark doesn't have filterNot
    (passes, fails)
  }
}

val (matches, matchesNot) = sc.parallelize(1 to 100).cache().partitionBy(_ % 2 == 0)

But as soon as you have multiple types of data, just assign each filtered result to a new val.
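
For instance (a sketch with made-up categories):

val nums = sc.parallelize(1 to 100).cache() // cache once; each filter re-scans cheaply
val small  = nums.filter(_ < 10)
val medium = nums.filter(n => n >= 10 && n < 50)
val large  = nums.filter(_ >= 50)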

answered Sep 30 '22 by Marius Soutier


Spark's RDD API does not have such an operation.

Here is a version based on a pull request for rdd.span that should work:

import scala.reflect.ClassTag
import org.apache.spark.rdd._

def split[T: ClassTag](rdd: RDD[T], p: T => Boolean): (RDD[T], RDD[T]) = {

  // Within each partition, split the iterator into the elements that
  // match p and those that don't, and emit both halves as a
  // two-element sequence of iterators.
  val splits = rdd.mapPartitions { iter =>
    val (left, right) = iter.partition(p)
    Seq(left, right).iterator
  }

  // First element per partition: the matching half.
  val left = splits.mapPartitions { iter => iter.next() }

  // Skip the matching half, then take the non-matching half.
  val right = splits.mapPartitions { iter =>
    iter.next()
    iter.next()
  }

  // Note: splits cannot be cached (iterators are one-shot), so the
  // original rdd is recomputed once for left and once for right.
  (left, right)
}

val rdd = sc.parallelize(0 to 10, 2)

val (first, second) = split[Int](rdd, _ % 2 == 0)

first.collect
// Array[Int] = Array(0, 2, 4, 6, 8, 10)
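
And the complementary half:

second.collect
// Array[Int] = Array(1, 3, 5, 7, 9)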
answered Sep 30 '22 by Shyamendra Solanki