Does Spark support Partition Pruning with Parquet Files?

I am working with a large dataset that is partitioned by two columns, plant_name and tag_id. The second partition column, tag_id, has 200,000 unique values, and I mostly access the data by specific tag_id values. If I use the following Spark commands:

sqlContext.setConf("spark.sql.hive.metastorePartitionPruning", "true")
sqlContext.setConf("spark.sql.parquet.filterPushdown", "true")
val df = sqlContext.sql("select * from tag_data where plant_name='PLANT01' and tag_id='1000'")

I would expect a fast response, as this resolves to a single partition. In Hive and Presto this takes seconds; in Spark, however, it runs for hours.

The actual data is held in an S3 bucket, and when I submit the SQL query, Spark first fetches all the partitions from the Hive metastore (200,000 of them) and then calls refresh() to force a full status listing of all of these files in the S3 object store (actually calling listLeafFilesInParallel).

It is these two operations that are so expensive. Are there any settings that can get Spark to prune the partitions earlier, either during the call to the metastore or immediately afterwards?
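For context, a direct-path read like the sketch below would avoid the metastore listing altogether, but I would prefer a solution that keeps the metastore-backed table (bucket and prefix are placeholders, and the basePath option may depend on the Spark version):

// Sketch only: read just the matching partition directory from S3, with basePath
// so plant_name and tag_id are still recovered as columns from the path.
val direct = sqlContext.read
  .option("basePath", "s3://my-bucket/tag_data")
  .parquet("s3://my-bucket/tag_data/plant_name=PLANT01/tag_id=1000")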

asked May 12 '16 by Euan


2 Answers

Yes, Spark supports partition pruning.

Spark lists the partition directories (sequentially, or in parallel via listLeafFilesInParallel) to build a cache of all partitions the first time around. Queries in the same application that scan the data take advantage of this cache and use it to prune partitions. So the slowness you see is most likely this one-time cache building; subsequent queries that scan the data reuse the cache.
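A minimal sketch of that behavior, assuming the same SQLContext and the table from the question: the first filtered query pays the listing cost, while a later query on a different tag_id reuses the cached partition metadata.

// First query in the application: partition directories are listed and cached.
val first = sqlContext.sql(
  "select * from tag_data where plant_name='PLANT01' and tag_id='1000'")
first.count()

// A later query in the same application reuses the cached listing,
// so only pruning happens here, not a fresh S3 listing.
val second = sqlContext.sql(
  "select * from tag_data where plant_name='PLANT01' and tag_id='2000'")
second.count()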

These are the logs which show the partitions being listed to populate the cache:

App > 16/11/14 10:45:24 main INFO ParquetRelation: Listing s3://test-bucket/test_parquet_pruning/month=2015-01 on driver
App > 16/11/14 10:45:24 main INFO ParquetRelation: Listing s3://test-bucket/test_parquet_pruning/month=2015-02 on driver
App > 16/11/14 10:45:24 main INFO ParquetRelation: Listing s3://test-bucket/test_parquet_pruning/month=2015-03 on driver

These are the logs showing that pruning is happening:

App > 16/11/10 12:29:16 main INFO DataSourceStrategy: Selected 1 partitions out of 20, pruned 95.0% partitions.

Refer to convertToParquetRelation and getHiveQlPartitions in HiveMetastoreCatalog.scala.
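If you want to confirm pruning for a particular query yourself, printing the plan is one way to check (a sketch; the exact output differs between Spark versions):

// With pruning in effect, the scan in the physical plan should reference only
// the matching partition directory / report a single selected partition.
val df = sqlContext.sql(
  "select * from tag_data where plant_name='PLANT01' and tag_id='1000'")
df.explain(true)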

answered Oct 18 '22 by swatisinghi


Just a thought:

The Spark API documentation for HadoopFsRelation (https://spark.apache.org/docs/1.6.2/api/java/org/apache/spark/sql/sources/HadoopFsRelation.html) says:

"...when reading from Hive style partitioned tables stored in file systems, it's able to discover partitioning information from the paths of input directories, and perform partition pruning before start reading the data..."

So, I guess listLeafFilesInParallel should not be the problem.

A similar issue is already tracked in the Spark JIRA: https://issues.apache.org/jira/browse/SPARK-10673

Since setting "spark.sql.hive.verifyPartitionPath" to false had no effect on performance, I suspect that the issue might be caused by unregistered partitions. Please list the partitions of the table and verify that all of them are registered. Otherwise, recover your partitions as shown in this link:

Hive doesn't read partitioned parquet files generated by Spark
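A rough sketch of that check and recovery, using the table name from the question (depending on your Spark and Hive versions, MSCK REPAIR TABLE may need to be run from Hive itself):

// List the partitions the Hive metastore currently knows about.
sqlContext.sql("SHOW PARTITIONS tag_data").show(200, false)

// Register partition directories that exist on S3 but are missing from the metastore.
sqlContext.sql("MSCK REPAIR TABLE tag_data")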

Update:

  1. Check whether appropriate Parquet block size and page size were set while writing the data.

  2. Create a fresh Hive table with the partitions mentioned and Parquet as the file format, and load it from the non-partitioned table using the dynamic-partition approach (https://cwiki.apache.org/confluence/display/Hive/DynamicPartitions), roughly as sketched below. Then run a plain Hive query and compare it against a Spark program.
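A sketch of that experiment, assuming a HiveContext-backed sqlContext; the data columns (value, ts) and the staging table name (tag_data_staging) are hypothetical placeholders:

// Enable Hive dynamic partitioning for the insert.
sqlContext.sql("SET hive.exec.dynamic.partition=true")
sqlContext.sql("SET hive.exec.dynamic.partition.mode=nonstrict")

// Fresh partitioned table stored as Parquet.
sqlContext.sql("""
  CREATE TABLE tag_data_parquet (value DOUBLE, ts TIMESTAMP)
  PARTITIONED BY (plant_name STRING, tag_id STRING)
  STORED AS PARQUET""")

// Dynamic-partition load from the non-partitioned staging table;
// partition columns must come last in the SELECT.
sqlContext.sql("""
  INSERT OVERWRITE TABLE tag_data_parquet PARTITION (plant_name, tag_id)
  SELECT value, ts, plant_name, tag_id FROM tag_data_staging""")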

Disclaimer: I am not a Spark/Parquet expert. The problem sounded interesting, hence the response.

answered Oct 18 '22 by Marco99