
Convert file of JSON objects to Parquet file

Motivation: I want to load the data into Apache Drill. I understand that Drill can handle JSON input, but I want to see how it performs on Parquet data.

Is there any way to do this without first loading the data into Hive, etc., and then using one of the Parquet connectors to generate an output file?

asked Feb 11 '14 by danieltahara


People also ask

Can you convert JSON to Parquet?

You can use Coiled, the cloud-based Dask platform, to easily convert large JSON data into a tabular DataFrame stored as Parquet in a cloud object store. Start by iterating with Dask locally to build and test your pipeline, then transfer the same workflow to Coiled with minimal code changes.
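
For reference, a minimal local Dask sketch of that idea; the file names are placeholders (not from the original answer), and it assumes newline-delimited JSON with one object per line:

import json
import dask.bag as db

# Read newline-delimited JSON and parse each line into a dict.
bag = db.read_text("records.json").map(json.loads)

# Flatten into a tabular DataFrame and write it out as Parquet.
df = bag.to_dataframe()
df.to_parquet("records.parquet")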

How can you transform JSON data into a Parquet file?

You can use Spark SQL to first read the JSON file into a DataFrame, then write the DataFrame out as a Parquet file.
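
A minimal PySpark sketch of that approach; the file names are placeholders, not from the original answer:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("json-to-parquet").getOrCreate()

# Spark infers the schema from the JSON records.
df = spark.read.json("sample-file.json")

# Write the same data back out in Parquet format.
df.write.parquet("sample-file.parquet")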

Does Parquet support JSON?

Yes. Nested types can be stored in Parquet, where you can have multiple complex columns that contain arrays and objects, and in hierarchical JSON files, where you can read a complex JSON document as a single column.
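
As an illustration, here is a minimal sketch using PyArrow (a library not mentioned in the thread; the file names are placeholders), assuming newline-delimited JSON:

import pyarrow.json as pj
import pyarrow.parquet as pq

# Nested JSON objects become struct columns in the Arrow table.
table = pj.read_json("sample-file.json")

# The struct columns are preserved as nested columns in Parquet.
pq.write_table(table, "sample-file.parquet")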


3 Answers

Kite has support for importing JSON to both Avro and Parquet formats via its command-line utility, kite-dataset.

First, you would infer the schema of your JSON:

kite-dataset json-schema sample-file.json -o schema.avsc

Then you can use that file to create a Parquet Hive table:

kite-dataset create mytable --schema schema.avsc --format parquet

And finally, you can load your JSON into the dataset:

kite-dataset json-import sample-file.json mytable

You can also import an entire directory stored in HDFS. In that case, Kite will use a MapReduce job to do the import.

answered Sep 28 '22 by blue


You can actually use Drill itself to create a Parquet file from the output of any query.

create table student_parquet as select * from `student.json`;

The above line should be good enough. Drill interprets the types based on the data in the fields. You can substitute your own query and create a Parquet file.

answered Sep 28 '22 by rahul


To complete the answer of @rahul, you can use Drill to do this, but I needed to add more to the query to get it working out of the box.

create table dfs.tmp.`filename.parquet` as select * from dfs.`/tmp/filename.json` t

I needed to give it the storage plugin (dfs). The "root" config can read from the whole disk but is not writable, while the tmp config (dfs.tmp) is writable and writes to /tmp, so I wrote there.

But the problem is that if the JSON is nested or perhaps contains unusual characters, I would get a cryptic

org.apache.drill.common.exceptions.UserRemoteException: SYSTEM ERROR: java.lang.IndexOutOfBoundsException:

If I have a structure that looks like members: {id: 123, name: "joe"}, I would have to change the select to

select members.id as members_id, members.name as members_name

or

select members.id as `members.id`, members.name as `members.name`

to get it to work.

I assume the reason is that Parquet is a columnar store, so you need columns. JSON isn't columnar by default, so you need to convert it.

The problem is that I have to know my JSON schema, and I have to build the select to include all the possibilities. I'd be happy if someone knows a better way to do this.
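
One possible workaround is sketched below: a hypothetical helper (not part of Drill, and not from the thread) that reads one sample JSON object and generates the flattened select list with the back-tick aliases shown above. Arrays and other corner cases are left out.

import json

def flatten_select(obj, prefix=""):
    """Yield one "path as `path`" clause per leaf field."""
    for key, value in obj.items():
        path = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            yield from flatten_select(value, path)
        else:
            yield f"{path} as `{path}`"

with open("sample.json") as f:
    sample = json.load(f)

print("select " + ", ".join(flatten_select(sample)))
# For {"members": {"id": 123, "name": "joe"}} this prints:
# select members.id as `members.id`, members.name as `members.name`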

answered Sep 28 '22 by Yehosef