Read local Parquet file without Hadoop Path API

I'm trying to read a local Parquet file, but the only APIs I can find are tightly coupled to Hadoop and require a Hadoop Path as input (even for pointing to a local file).

// note: 'file' here is an org.apache.hadoop.fs.Path, not a java.io.File
ParquetReader<GenericRecord> reader = AvroParquetReader.<GenericRecord>builder(file).build();
GenericRecord nextRecord = reader.read();

This is the most popular answer in "how to read a parquet file, in a standalone java code?", but it requires a Hadoop Path and has now been deprecated in favor of a mysterious InputFile instead. The only implementation of InputFile I can find is HadoopInputFile, so again no help.

In Avro, this is simply:

DatumReader<GenericRecord> datumReader = new GenericDatumReader<>();
this.dataFileReader = new DataFileReader<>(file, datumReader);

(where file is a java.io.File). What's the Parquet equivalent?

I am asking for answers with no Hadoop Path dependency, because Hadoop drags in bloat and jar hell, and it seems silly to require it just to read local files.

To further explain the backstory: I maintain a small IntelliJ plugin that allows users to drag and drop Avro files into a pane for viewing in a table. This plugin is currently 5 MB. If I include the Parquet and Hadoop dependencies, it bloats to over 50 MB, and doesn't even work.


POST-ANSWER ADDENDUM

Now that I have it working (thanks to the accepted answer), here is my working solution, which avoids all the annoying errors that depending heavily on the Hadoop Path API can drag in (a sketch of the LocalInputFile approach follows the list):

  • ParquetFileReader.java
  • LocalInputFile.java
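
For readers who don't want to click through, the core of the LocalInputFile idea looks roughly like this. This is a minimal sketch assuming only parquet-common's org.apache.parquet.io.InputFile and DelegatingSeekableInputStream; the class body is illustrative, not the exact code behind the links above:

import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.channels.Channels;
import java.nio.file.Files;
import java.nio.file.Path;
import org.apache.parquet.io.DelegatingSeekableInputStream;
import org.apache.parquet.io.InputFile;
import org.apache.parquet.io.SeekableInputStream;

// Sketch: an InputFile backed by java.nio, with no Hadoop classes involved.
public class LocalInputFile implements InputFile {

    private final Path path;

    public LocalInputFile(Path path) {
        this.path = path;
    }

    @Override
    public long getLength() throws IOException {
        return Files.size(path);
    }

    @Override
    public SeekableInputStream newStream() throws IOException {
        // RandomAccessFile provides the seek/position support Parquet needs;
        // DelegatingSeekableInputStream fills in the readFully/ByteBuffer methods.
        RandomAccessFile file = new RandomAccessFile(path.toFile(), "r");
        return new DelegatingSeekableInputStream(Channels.newInputStream(file.getChannel())) {
            @Override
            public long getPos() throws IOException {
                return file.getFilePointer();
            }

            @Override
            public void seek(long newPos) throws IOException {
                file.seek(newPos);
            }
        };
    }
}

With this in place, recent parquet-avro versions let you call AvroParquetReader.<GenericRecord>builder(new LocalInputFile(path)).build() without ever constructing a Hadoop Path.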
asked Jan 27 '20 by Ben Watson




1 Answer

Unfortunately the Java Parquet implementation is not independent of some Hadoop libraries. There is an existing issue in their bug tracker to make it easy to read and write Parquet files in Java without depending on Hadoop, but there does not seem to be much progress on it. The InputFile interface was added to introduce a bit of decoupling, but a lot of the classes that implement the metadata part of Parquet, and also all the compression codecs, live inside the Hadoop dependency.
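
For reference, the InputFile contract that was added for this decoupling is tiny; it amounts to the following (paraphrased, see org.apache.parquet.io.InputFile in parquet-common for the authoritative definition):

public interface InputFile {
    // total length of the file in bytes
    long getLength() throws IOException;

    // a new seekable stream positioned at the start of the file
    SeekableInputStream newStream() throws IOException;
}

Anything that can report its length and hand out a seekable stream can therefore stand in for a Hadoop file.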

I found another implementation of InputFile in the smile library. It might be more efficient than going through the Hadoop filesystem abstraction, but it does not solve the dependency problem.

As other answers already mention, you can create a Hadoop Path for a local file and use that without problems:

java.io.File file = ...;
org.apache.hadoop.fs.Path path = new org.apache.hadoop.fs.Path(file.toURI());
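
As a sketch of how that Path then feeds the non-deprecated InputFile-based builder, via HadoopInputFile (the file name "data.parquet" is a placeholder):

import org.apache.avro.generic.GenericRecord;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.avro.AvroParquetReader;
import org.apache.parquet.hadoop.ParquetReader;
import org.apache.parquet.hadoop.util.HadoopInputFile;

public class ReadLocalParquet {
    public static void main(String[] args) throws Exception {
        // Point a Hadoop Path at a local file via its URI.
        Path path = new Path(new java.io.File("data.parquet").toURI());

        // HadoopInputFile adapts the Path to the InputFile API the builder now expects.
        try (ParquetReader<GenericRecord> reader = AvroParquetReader
                .<GenericRecord>builder(HadoopInputFile.fromPath(path, new Configuration()))
                .build()) {
            GenericRecord record;
            while ((record = reader.read()) != null) {
                System.out.println(record);
            }
        }
    }
}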

The dependency tree that hadoop pulls in can be reduced a lot by defining some exclusions. I'm using the following to reduce the bloat (Gradle syntax):

compile("org.apache.hadoop:hadoop-common:3.1.0") {
    exclude(group: 'org.slf4j')
    exclude(group: 'org.mortbay.jetty')
    exclude(group: 'javax.servlet.jsp')
    exclude(group: 'com.sun.jersey')
    exclude(group: 'log4j')
    exclude(group: 'org.apache.curator')
    exclude(group: 'org.apache.zookeeper')
    exclude(group: 'org.apache.kerby')
    exclude(group: 'com.google.protobuf')
}
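
Note that on newer Gradle versions (7 and later) the compile configuration has been removed; the same exclusion block works unchanged with implementation instead.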
answered Sep 28 '22 by Jörn Horstmann