I'm using Spark with Java, and I have an RDD of 5 million rows. Is there a solution that allows me to calculate the number of rows of my RDD? I've tried RDD.count(), but it takes a lot of time. I've seen that I can use the function fold, but I didn't find any Java documentation for it. Could you please show me how to use it, or show me another solution to get the number of rows of my RDD?
Here is my code:

    JavaPairRDD<String, String> lines = getAllCustomers(sc).cache();
    JavaPairRDD<String, String> CFIDNotNull = lines.filter(notNull()).cache();
    JavaPairRDD<String, Tuple2<String, String>> join = lines.join(CFIDNotNull).cache();

    // I want to get the counts of these three RDDs
    double count_ctid = (double) join.count();
    double all = (double) lines.count();
    double count_cfid = all - CFIDNotNull.count();

    System.out.println("********** :" + count_cfid * 100 / all + "% and now : " + count_ctid * 100 / all + "%");
Thank you.
You had the right idea: use rdd.count() to count the number of rows. There is no faster way.

I think the question you should have asked is: why is rdd.count() so slow?
The answer is that rdd.count() is an "action": an eager operation, because it has to return an actual number. The RDD operations you performed before count() were "transformations": they transformed one RDD into another lazily. In effect, the transformations were not actually performed, just queued up. When you call count(), you force all the previous lazy operations to be performed. The input files need to be loaded now, the map() and filter() calls executed, shuffles performed, and so on, until finally we have the data and can say how many rows it has.
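As a minimal sketch of this laziness (the file path, lambda, and variable names are illustrative, not from your question; sc is assumed to be your JavaSparkContext):

    // Transformations: nothing runs yet; Spark only records the lineage.
    JavaRDD<String> rows = sc.textFile("hdfs:///data/customers.txt");  // lazy
    JavaRDD<String> nonEmpty = rows.filter(line -> !line.isEmpty());   // lazy

    // Action: only now does Spark read the file, run the filter,
    // and return an actual number to the driver.
    long n = nonEmpty.count();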
Note that if you call count() twice, all of this will happen twice. After the count is returned, all the data is discarded! If you want to avoid this, call cache() on the RDD. Then the second call to count() will be fast, and derived RDDs will also be faster to calculate. However, in this case the RDD will have to be stored in memory (or on disk).
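A sketch of the caching pattern, continuing the hypothetical RDD above:

    nonEmpty.cache();               // mark the RDD to be kept after its first computation
    long first = nonEmpty.count();  // slow: the whole pipeline runs, partitions are cached
    long second = nonEmpty.count(); // fast: answered from the cached partitions
    nonEmpty.unpersist();           // release the storage when the RDD is no longer needed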