 

Spark: Avro vs Parquet performance

Now that Spark 2.4 has built-in support for the Avro format, I'm considering changing the format of some of the data sets in my data lake - those that are usually queried/joined for entire rows rather than for specific column aggregations - from Parquet to Avro.

However, most of the work on top of the data is done via Spark, and to my understanding, Spark's in-memory caching and computations are done on columnar-formatted data. Does Parquet offer a performance boost in this regard, while Avro would incur some sort of data "transformation" penalty? What other considerations should I be aware of in this regard?
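For context, this is roughly how I read and write the two formats today (a minimal sketch; the paths are placeholders, and the spark-avro module is assumed to be on the classpath, e.g. via --packages org.apache.spark:spark-avro_2.11:2.4.0):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("avro-vs-parquet")
      .getOrCreate()

    // Avro: Spark 2.4 exposes the spark-avro module under the short name "avro".
    val avroDf = spark.read.format("avro").load("/data/lake/events_avro")
    avroDf.write.format("avro").save("/tmp/events_avro_copy")

    // Parquet: the reader/writer ships with Spark itself.
    val parquetDf = spark.read.parquet("/data/lake/events_parquet")
    parquetDf.write.parquet("/tmp/events_parquet_copy")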

Asked by user976850

1 Answer

Both formats shine under different constraints, but they have strong, schema-backed typing and a binary encoding in common. At its core, the difference boils down to this:

  • Avro is a row-wise format. It follows that you can append to an existing file row by row, and those appends become immediately visible to all readers working on the file. Avro is best when a process writes into your data lake in a streaming (non-batch) fashion.
  • Parquet is a columnar format and its files are not appendable. This means that for newly arriving records you must always create new files. In exchange, Parquet brings several benefits. Data is stored column by column, and type-aware, low-CPU yet highly effective compression and encoding are applied to each column, so Parquet files are usually much smaller than Avro files. Parquet also writes out basic per-column statistics, so that when you load data, parts of your selection can be pushed down to the I/O layer and only the necessary row groups are read from disk (see the sketch after this list). And because Parquet is already columnar and most in-memory structures are columnar as well, loading data from it is in general much faster.
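
As a minimal sketch of the Parquet point above (column names and paths are made up; `spark` is an active SparkSession as in the snippet in the question):

    import org.apache.spark.sql.functions.col

    // Filter plus projection on a Parquet table: Spark can push the filter down to
    // the Parquet scan and skip row groups via the stored statistics, and it only
    // reads the two selected columns from disk.
    val orders = spark.read.parquet("/data/lake/orders_parquet")

    val recentTotals = orders
      .filter(col("order_date") >= "2018-11-01")
      .select("customer_id", "amount")

    // explain() shows the PushedFilters and the pruned ReadSchema of the scan.
    recentTotals.explain()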

As you already have your data and the ingestion process tuned to write Parquet files, it's probably best for you to stay with Parquet as long as data ingestion (latency) does not become a problem for you.

A typical setup is actually a mix of Parquet and Avro. Recent, freshly arrived data is stored as Avro files, as this makes it immediately available to the data lake. More historic data is transformed, e.g. on a daily basis, into Parquet files, as they are smaller and more efficient to load but can only be written in batches. When working with this data, you would load both into Spark as a union of the two tables, combining the efficient reads of Parquet with the immediate availability of data in Avro (sketched below). This pattern is often hidden by table formats like Uber's Hudi or Apache Iceberg (incubating), which was started by Netflix.
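
A minimal sketch of that union, assuming both directories share the same schema (the paths are hypothetical):

    // Fresh data lands as Avro, history is compacted to Parquet; queries see one table.
    val fresh    = spark.read.format("avro").load("/data/lake/events/incoming_avro")
    val historic = spark.read.parquet("/data/lake/events/archive_parquet")

    // unionByName matches columns by name, so a differing column order between the
    // two writers does not silently mis-align data.
    val events = historic.unionByName(fresh)
    events.createOrReplaceTempView("events")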

Answered by Uwe L. Korn