Why does the rdd.sample() function on a Spark RDD return a different number of elements even though the fraction parameter is the same? For example, my code is as below:
val a = sc.parallelize(1 to 10000, 3)
a.sample(false, 0.1).count
Every time I run the second line it returns a different number, never exactly 1000. I expected to see 1000 every time, even though the 1000 elements themselves might differ between runs. Can anyone tell me how I can get a sample whose size is exactly 1000? Thank you very much.
If you want an exact sample size, try
a.takeSample(false, 1000)
but note that this returns an Array, not an RDD.
As for why a.sample(false, 0.1) doesn't return the same sample size every time: Spark internally uses something called Bernoulli sampling. The fraction argument doesn't represent a fraction of the RDD's actual size; it represents the probability of each individual element being selected for the sample. As Wikipedia puts it:
Because each element of the population is considered separately for the sample, the sample size is not fixed but rather follows a binomial distribution.
And that essentially means the sample size doesn't remain fixed.
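The behavior is easy to reproduce without Spark. The sketch below (a hypothetical helper, not Spark's actual implementation) flips an independent coin per element, so the resulting size follows Binomial(n, p) and hovers around fraction * n rather than hitting it exactly:

```scala
import scala.util.Random

// Bernoulli sampling: keep each element independently with probability p.
// The sample size therefore follows Binomial(n, p) instead of being fixed.
def bernoulliSample[T](data: Seq[T], p: Double, seed: Long): Seq[T] = {
  val rng = new Random(seed)
  data.filter(_ => rng.nextDouble() < p)
}

val population = 1 to 10000
// Five runs with different seeds: sizes cluster near 1000 but vary run to run.
val sizes = (1 to 5).map(seed => bernoulliSample(population, 0.1, seed.toLong).size)
println(sizes)
```

Each run lands near 1000 (the standard deviation of Binomial(10000, 0.1) is about 30), which matches what the question observes with sample(false, 0.1).count.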
If you set the first argument (withReplacement) to true, Spark instead uses something called Poisson sampling, which likewise yields a non-deterministic sample size.
Update
If you want to stick with the sample method, you can specify a larger probability for the fraction param and then call take, as in:
a.sample(false, 0.2).take(1000)
Most of the time, though not necessarily always, this results in a sample size of 1000. It works as long as the population is large enough.
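To see why the population size matters, the same coin-flip idea can be simulated outside Spark (a hypothetical helper mirroring sample's behavior, not Spark's code): oversampling at 0.2 from 10,000 elements keeps about 2,000, so taking 1,000 almost surely succeeds, while from 4,000 elements only about 800 survive and the take falls short.

```scala
import scala.util.Random

// Coin-flip sampler standing in for RDD.sample (a sketch, not Spark's code).
def keepWithProb(n: Int, p: Double, seed: Long): Seq[Int] = {
  val rng = new Random(seed)
  (1 to n).filter(_ => rng.nextDouble() < p)
}

// Large population: ~2000 elements survive, so take(1000) gets all 1000.
val large = keepWithProb(10000, 0.2, 42L).take(1000)
println(large.size)  // 1000 almost surely

// Small population: only ~800 survive, so take(1000) comes up short.
val small = keepWithProb(4000, 0.2, 42L).take(1000)
println(small.size)  // well under 1000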