 

Apache Spark: multiple outputs in one map task

TL;DR: I have a large file that I iterate over three times to get three different sets of counts out. Is there a way to get three maps out in one pass over the data?

Some more detail:

I'm trying to compute PMI between words and features that are listed in a large file. My pipeline looks something like this:

val wordFeatureCounts = sc.textFile(inputFile).flatMap(line => {
  val word = getWordFromLine(line)
  val features = getFeaturesFromLine(line)
  for (feature <- features) yield ((word, feature), 1)
})

And then I repeat this to get word counts and feature counts separately:

val wordCounts = sc.textFile(inputFile).flatMap(line => {
  val word = getWordFromLine(line)
  val features = getFeaturesFromLine(line)
  for (feature <- features) yield (word, 1)
})

val featureCounts = sc.textFile(inputFile).flatMap(line => {
  val word = getWordFromLine(line)
  val features = getFeaturesFromLine(line)
  for (feature <- features) yield (feature, 1)
})

(I realize I could just iterate over wordFeatureCounts to get wordCounts and featureCounts, but that doesn't answer my question, and looking at running times in practice I'm not sure it would actually be faster; a sketch of that derivation is below. Also note that there are some reduceByKey operations and other steps I apply after these counts are computed; they aren't shown because they aren't relevant to the question.)
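For concreteness, that derivation might look roughly like this (just a sketch; pairCounts is an illustrative name, and the reduceByKey calls stand in for the reductions mentioned above):

// Sketch: reduce the (word, feature) pairs once, then project out the two marginal counts.
val pairCounts    = wordFeatureCounts.reduceByKey(_ + _)
val wordCounts    = pairCounts.map { case ((word, _), n) => (word, n) }.reduceByKey(_ + _)
val featureCounts = pairCounts.map { case ((_, feature), n) => (feature, n) }.reduceByKey(_ + _)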

What I would really like to do is something like this:

val (wordFeatureCounts, wordCounts, featureCounts) = sc.textFile(inputFile).flatMap(line => {
  val word = getWordFromLine(line)
  val features = getFeaturesFromLine(line)
  val wfCounts = for (feature <- features) yield ((word, feature), 1)
  val wCounts = for (feature <- features) yield (word, 1)
  val fCounts = for (feature <- features) yield (feature, 1)
  ??.setOutput1(wfCounts)
  ??.setOutput2(wCounts)
  ??.setOutput3(fCounts)
})

Is there any way to do this with Spark? In looking for how to do this, I've seen questions about multiple outputs when you're saving results to disk (not helpful), and I've seen a bit about accumulators (which don't look like what I need), but that's it.

Also note that I can't just yield all of these results in one big list, because I need three separate maps out. If there's an efficient way to split a combined RDD after the fact, that could work, but the only way I can think of to do this would end up iterating over the data four times, instead of the three I currently do (once to create the combined map, then three times to filter it into the maps I actually want).
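For concreteness, the only after-the-fact split I can think of looks roughly like this (a hypothetical sketch; the string tags are just illustrative):

// Hypothetical: tag every record, then filter the combined RDD three times afterwards.
val combined = sc.textFile(inputFile).flatMap(line => {
  val word = getWordFromLine(line)
  val features = getFeaturesFromLine(line)
  features.flatMap(feature => Seq(
    ("wf", (word, feature), 1),
    ("w", (word, ""), 1),
    ("f", ("", feature), 1)))
})
val wordFeatureCounts = combined.filter(_._1 == "wf").map { case (_, (w, f), n) => ((w, f), n) }
val wordCounts        = combined.filter(_._1 == "w").map { case (_, (w, _), n) => (w, n) }
val featureCounts     = combined.filter(_._1 == "f").map { case (_, (_, f), n) => (f, n) }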

asked by mattg
1 Answer

It is not possible to split an RDD into multiple RDDs. This is understandable if you think about how this would work under the hood. Say you split RDD x = sc.textFile("x") into a = x.filter(_.head == 'A') and b = x.filter(_.head == 'B'). Nothing happens so far, because RDDs are lazy. But now you print a.count. So Spark opens the file, and iterates through the lines. If the line starts with A it counts it. But what do we do with lines starting with B? Will there be a call to b.count in the future? Or maybe it will be b.saveAsTextFile("b") and we should be writing these lines out somewhere? We cannot know at this point. Splitting an RDD is just not possible with the Spark API.
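Restating that scenario as code, just to make the laziness point concrete:

val x = sc.textFile("x")
val a = x.filter(_.head == 'A')  // nothing runs yet: an RDD is only a description of a computation
val b = x.filter(_.head == 'B')
println(a.count)                 // only now is the file read; lines starting with B are simply
                                 // discarded, because Spark cannot know whether b will ever be used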

But nothing stops you from implementing something if you know what you want. If you want to get both a.count and b.count, you can map lines starting with A to (1, 0) and lines starting with B to (0, 1), and then sum the tuples elementwise in a reduce. If you want to save the lines starting with B to a file while counting the lines starting with A, you could use an accumulator in a map before filter(_.head == 'B').saveAsTextFile.
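For example, the elementwise sum might look like this (a minimal sketch, reusing x from the example above):

// Count lines starting with A and lines starting with B in a single pass:
// each line contributes a pair, and reduce sums the pairs elementwise.
val (aCount, bCount) = x
  .map(line => (if (line.head == 'A') 1L else 0L, if (line.head == 'B') 1L else 0L))
  .reduce((p, q) => (p._1 + q._1, p._2 + q._2))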

The only generic solution is to store the intermediate data somewhere. One option is to just cache the input (x.cache). Another is to write the contents into separate directories in a single pass, then read them back as separate RDDs. (See Write to multiple outputs by key Spark - one Spark job.) We do this in production and it works great.
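Applied to the question, the caching option might look roughly like this (a sketch; getWordFromLine and getFeaturesFromLine are the question's own helpers, and the reduceByKey steps are assumed):

// Parse each line once and cache the result; the input file is only read from disk once,
// and all three count RDDs are derived from the cached data.
val parsed = sc.textFile(inputFile).map(line =>
  (getWordFromLine(line), getFeaturesFromLine(line))).cache()

val wordFeatureCounts = parsed.flatMap { case (word, features) =>
  features.map(feature => ((word, feature), 1))
}.reduceByKey(_ + _)

val wordCounts = parsed.flatMap { case (word, features) =>
  features.map(_ => (word, 1))
}.reduceByKey(_ + _)

val featureCounts = parsed.flatMap { case (_, features) =>
  features.map(feature => (feature, 1))
}.reduceByKey(_ + _)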

answered by Daniel Darabos