 

Spark - Reading partitioned data from S3 - how does partitioning happen?

When I use Spark to read multiple files from S3 (e.g. a directory with many Parquet files) -
Does the logical partitioning happen at the beginning, then each executor downloads the data directly (on the worker node)?
Or does the driver download the data (partially or fully) and only then partition and send it to the executors?

Also, will the partitioning default to the same partitions that were used for write (i.e. each file = 1 partition)?

asked Nov 11 '18 by user976850




1 Answer

Data on S3 is, of course, external to HDFS.

You can read from S3 by providing a path (or paths), or via the Hive Metastore - provided you have registered the data there by creating DDL for an external S3 table and added the partitions with MSCK REPAIR TABLE, or with ALTER TABLE table_name RECOVER PARTITIONS for Hive on EMR.
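
As a rough sketch of the metastore route (the table name, columns, bucket and partition column below are made up for illustration):

spark.sql("""
  CREATE EXTERNAL TABLE IF NOT EXISTS events (id BIGINT, payload STRING)
  PARTITIONED BY (dt STRING)
  STORED AS PARQUET
  LOCATION 's3://my-bucket/events/'
""")
// Discover the partition directories under LOCATION and register them in the metastore.
spark.sql("MSCK REPAIR TABLE events")
// Hive on EMR equivalent: ALTER TABLE events RECOVER PARTITIONS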

If you use:

val df = spark.read.parquet("/path/to/parquet/file.../...")

then there is no guarantee about the partitioning; it depends on various settings - see Does Spark maintain parquet partitioning on read?, noting that the APIs evolve and improve over time.
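
As an illustration only (the path and values below are placeholders, not recommendations), you can check how many read partitions such a load actually produced, and tune two of the settings that influence how input files are split and coalesced:

// Settings that influence how input files are split into read partitions.
spark.conf.set("spark.sql.files.maxPartitionBytes", "134217728")  // ~128 MB per read partition
spark.conf.set("spark.sql.files.openCostInBytes", "4194304")      // estimated cost of opening each file

val df = spark.read.parquet("s3a://my-bucket/some/parquet/dir/")  // hypothetical path
println(df.rdd.getNumPartitions)  // how many partitions Spark actually created for the read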

But, this:

val df = spark.read.parquet("/path/to/parquet/file.../.../partitioncolumn=*")

will distribute partitions over the executors in a manner that reflects your saved partition structure, somewhat like Spark's bucketBy.
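
For illustration (the bucket, paths and filter value are assumptions), reading such a layout keeps the partition column available, so a filter on it prunes whole directories rather than scanning them:

// basePath keeps partitioncolumn as a column when reading via a wildcard.
val df = spark.read
  .option("basePath", "s3a://my-bucket/table/")
  .parquet("s3a://my-bucket/table/partitioncolumn=*")

// Partition pruning: only matching partitioncolumn=... directories are listed and read.
df.filter("partitioncolumn = '2018-11-11'").explain()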

When specifying S3 paths directly, the Driver only fetches the metadata.

In your terms:

  • "... each executor downloads the data directly (on the worker node)? " YES
  • Metadata is gotten in some way with Driver coordination and other system components for file / directory locations on S3, but not that the data is first downloaded to Driver - that would be a big folly in design. But it depends also on format of statement how the APIs respond.
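
A small sketch of that distinction (the path is hypothetical):

val df = spark.read.parquet("s3a://my-bucket/table/")  // hypothetical path

// Driver side: file listings and Parquet footer metadata only - no table data is pulled to the Driver.
df.inputFiles.take(5).foreach(println)

// Executor side: the Parquet data itself is read on the workers only when an action runs.
println(df.count())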
answered Oct 23 '22 by thebluephantom