
appending to ORC file

Tags:

hadoop

hive

orc

I'm new to big data and related technologies, so I'm unsure whether it is possible to append data to an existing ORC file. I'm writing the ORC file using the Java API, and once I close the Writer, I cannot reopen the file to append new data to it.

Is there a way to append data to an existing ORC file, either using the Java API, Hive, or any other means?

One more clarification: when saving a java.util.Date object into an ORC file, the ORC type is stored as:

struct<timestamp:struct<fasttime:bigint,cdate:struct<cachedyear:int,cachedfixeddatejan1:bigint,cachedfixeddatenextjan1:bigint>>,

and for a Java BigDecimal it's:

<margin:struct<intval:struct<signum:int,mag:struct<>,bitcount:int,bitlength:int,lowestsetbit:int,firstnonzerointnum:int>

Is this correct, and is there any documentation on it?

asked Oct 16 '25 by rpr

2 Answers

No, you cannot append directly to an ORC file. Nor to a Parquet file. Nor to any columnar format whose complex internal structure interleaves metadata with data.

Quoting the official "Apache Parquet" site...

Metadata is written after the data to allow for single pass writing.

Then quoting the official "Apache ORC" site...

Since HDFS does not support changing the data in a file after it is written, ORC stores the top level index at the end of the file (...) The file’s tail consists of 3 parts; the file metadata, file footer and postscript.
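The quotes above can be illustrated with a toy sketch. The class, file layout, and footer marker below are all hypothetical simplifications, not real Parquet or ORC internals: the point is only that when a format puts its metadata at the end of the file, naively appending bytes leaves that metadata stranded in the middle, and a reader that trusts the tail of the file sees garbage.

```java
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class FooterDemo {
    // A toy "columnar file": payload bytes followed by a fixed footer marker.
    static final String FOOTER = "FOOTER";

    static void writeFile(Path p, String payload) throws Exception {
        Files.write(p, (payload + FOOTER).getBytes(StandardCharsets.UTF_8));
    }

    // A reader of this format trusts that the footer is the last thing in the file.
    static boolean hasValidFooter(Path p) throws Exception {
        String content = new String(Files.readAllBytes(p), StandardCharsets.UTF_8);
        return content.endsWith(FOOTER);
    }

    public static void main(String[] args) throws Exception {
        Path p = Files.createTempFile("toy", ".dat");
        writeFile(p, "row1,row2");
        System.out.println(hasValidFooter(p));   // true

        // Naively appending more rows lands *after* the footer,
        // so the file no longer ends with valid metadata.
        Files.write(p, "row3".getBytes(StandardCharsets.UTF_8), StandardOpenOption.APPEND);
        System.out.println(hasValidFooter(p));   // false
        Files.delete(p);
    }
}
```

Real ORC and Parquet writers avoid this problem by only ever producing whole, immutable files in a single pass.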

Well, technically, nowadays you can append to an HDFS file; you can even truncate it. But these tricks are only useful for some edge cases (e.g. Flume feeding messages into an HDFS "log file", micro-batch-wise, with hflush from time to time).

Hive transaction support uses a different trick: a new ORC file is created for each transaction (i.e. micro-batch), with periodic compaction jobs running in the background, à la HBase.
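That delta-file-plus-compaction pattern can be sketched in miniature. The sketch below uses plain text files instead of real ORC, and the `delta_NNNNNNN` / `base_NNNNNNN` names only loosely echo Hive's actual directory layout; it is an illustration of the idea, not of Hive's implementation. Each micro-batch becomes its own immutable file, readers merge all files in order, and compaction folds the deltas into a single new base file.

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class DeltaDemo {
    // Each micro-batch is written as its own immutable delta file.
    static Path writeDelta(Path dir, int txnId, List<String> rows) throws IOException {
        Path delta = dir.resolve(String.format("delta_%07d", txnId));
        Files.write(delta, rows, StandardCharsets.UTF_8);
        return delta;
    }

    // Readers merge the base file (if any) with all deltas, in order.
    static List<String> readAll(Path dir) throws IOException {
        List<String> rows = new ArrayList<>();
        try (Stream<Path> files = Files.list(dir)) {
            for (Path f : files.sorted().collect(Collectors.toList())) {
                rows.addAll(Files.readAllLines(f, StandardCharsets.UTF_8));
            }
        }
        return rows;
    }

    // Background compaction: fold every delta into a single new base file.
    static void compact(Path dir) throws IOException {
        List<String> rows = readAll(dir);
        try (Stream<Path> files = Files.list(dir)) {
            for (Path f : files.collect(Collectors.toList())) {
                Files.delete(f);
            }
        }
        Files.write(dir.resolve("base_0000000"), rows, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("toytable");
        writeDelta(dir, 1, Arrays.asList("row1", "row2"));
        writeDelta(dir, 2, Arrays.asList("row3"));
        System.out.println(readAll(dir));   // [row1, row2, row3]
        compact(dir);
        System.out.println(readAll(dir));   // [row1, row2, row3]
    }
}
```

Note that no file is ever modified after it is written, which is exactly the constraint HDFS and the ORC format impose; "append" is achieved by adding files, not bytes.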

answered Oct 18 '25 by Samson Scharfrichter


Update 2017

Yes, now you can! Hive provides support for ACID transactions, and you can also append data to a table from Spark using append mode, mode("append").

Below is an example:

Seq((10, 20)).toDF("a", "b").write.mode("overwrite").saveAsTable("tab1")
Seq((20, 30)).toDF("a", "b").write.mode("append").saveAsTable("tab1")
sql("select * from tab1").show

Or see a more complete example with ORC here; below is an extract:

val command = spark.read.format("jdbc").option("url" .... ).load()
command.write.mode("append").format("orc").option("orc.compression","gzip").save("command.orc")
answered Oct 18 '25 by venergiac


