I have an RDD with a huge list of numbers (the length of each line from a file), and I want to know how to get the min and max in a single pass over the data.
I know about the min and max functions, but using both would require two passes.
Try this:
>>> from pyspark.statcounter import StatCounter
>>>
>>> rdd = sc.parallelize([9, -1, 0, 99, 0, -10])
>>> stats = rdd.aggregate(StatCounter(), StatCounter.merge, StatCounter.mergeStats)
>>> stats.minValue, stats.maxValue
(-10.0, 99.0)
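If you'd rather not depend on StatCounter's internals, the same single pass can be written as a plain aggregate over a (min, max) pair. A minimal sketch, assuming the same rdd as above:

>>> # Seed with identities so the initial value can never win a comparison
>>> seq = lambda acc, x: (min(acc[0], x), max(acc[1], x))    # fold one element into the pair
>>> comb = lambda a, b: (min(a[0], b[0]), max(a[1], b[1]))   # merge per-partition pairs
>>> rdd.aggregate((float('inf'), float('-inf')), seq, comb)
(-10, 99)

Note also that rdd.stats() returns a StatCounter in one pass, which is handy if you want count, mean, and stdev along with the min and max.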
Here's a working, yet inelegant, solution using accumulators. The inelegance lies in having to define the zero/initial values beforehand so they do not interfere with the data:
from pyspark.accumulators import AccumulatorParam

class MinMaxAccumulatorParam(AccumulatorParam):
    def zero(self, value):
        return value

    def addInPlace(self, val1, val2):
        return (min(val1[0], val2[0]), max(val1[1], val2[1]))

# The initial values must lie outside the data's range, hence the inelegance
minmaxAccu = sc.accumulator([500, -500], MinMaxAccumulatorParam())

def g(x):
    global minmaxAccu
    minmaxAccu += (x, x)

rdd = sc.parallelize([1, 2, 3, 4, 5])
rdd.foreach(g)

minmaxAccu.value
# (1, 5)
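The magic initial values can be avoided by seeding with positive and negative infinity, which are the identities for min and max. A hedged variant reusing the MinMaxAccumulatorParam above, with only the initial value changed:

# float('inf') / float('-inf') can never beat a real data point,
# so this works for data of any range.
minmaxAccu = sc.accumulator((float('inf'), float('-inf')), MinMaxAccumulatorParam())

One caveat: Spark only guarantees exactly-once accumulator updates inside actions such as foreach; updates made inside transformations may be re-applied on task retries (harmless here, since min and max are idempotent, but worth knowing).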