 

How to load specific Hive partition in DataFrame Spark 1.6?

From Spark 1.6 onwards, as per the official docs, we cannot load specific Hive partitions into a DataFrame by pointing directly at the partition directory.

Up to Spark 1.5 the following used to work, and the DataFrame would contain the entity column along with the data:

DataFrame df = hiveContext.read().format("orc").load("path/to/table/entity=xyz")

However, this would not work in Spark 1.6.

If I give the base path as follows, the resulting DataFrame does not contain the entity column, which I want:

DataFrame df = hiveContext.read().format("orc").load("path/to/table/") 

How do I load a specific Hive partition into a DataFrame? What was the driver behind removing this feature?

I believe it was efficient. Is there an alternative way to achieve this in Spark 1.6?

As per my understanding, Spark 1.6 loads all partitions, and if I then filter for specific partitions it is not efficient: it hits memory and throws GC (Garbage Collection) errors, because thousands of partitions get loaded into memory rather than just the specific partition.

Umesh K asked Jan 07 '16 15:01

People also ask

How do I choose a partition column in Hive?

The ideal choice is to have state as the partitioning column, since partitioning creates distinct folders based on distinct values. Hence the number of folders equals the number of states, so the metadata stored on the NameNode would be less.

Can we alter partition in hive?

The Hive ALTER TABLE command is used to update or drop a partition from the Hive Metastore and HDFS location (managed table). You can also manually update or drop a Hive partition directly on HDFS using Hadoop commands; if you do so, you need to run the MSCK command to sync up HDFS files with the Hive Metastore.

How do I choose a Spark partition?

The lower bound for Spark partitions is 2 x the number of cores in the cluster available to the application. For the upper bound, each task should take at least 100 ms to execute.
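As a quick arithmetic check of the rule of thumb above (a sketch only; the 2-x-cores figure is a tuning heuristic, not a hard Spark rule, and the core count here is a made-up example):

```java
public class PartitionBounds {
    // Heuristic lower bound on partition count: 2 x cores available to the app.
    static int lowerBoundPartitions(int coresAvailable) {
        return 2 * coresAvailable;
    }

    public static void main(String[] args) {
        // e.g. 5 executors x 4 cores each = 20 cores for the application
        int cores = 20;
        System.out.println(lowerBoundPartitions(cores)); // 40
    }
}
```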

How do I transfer data from one partition to another in hive?

Below is my explanation of how you can do it. From Hive/Beeline:

ALTER TABLE TableName PARTITION (PartitionCol=2018-12-31) RENAME TO PARTITION (PartitionCol=2017-12-31);

From Spark code, you basically have to initiate the hiveContext and run the same HQL from it.
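A minimal Java sketch of the rename described above. The table and column names are the hypothetical placeholders from the answer, and the partition values are quoted here (which string-typed partition columns require); the actual hiveContext.sql call is shown in a comment since it needs a live HiveContext and metastore:

```java
public class RenamePartition {
    // Build the ALTER TABLE ... RENAME TO PARTITION statement.
    static String renamePartitionHql(String table, String col,
                                     String oldVal, String newVal) {
        return "ALTER TABLE " + table
             + " PARTITION (" + col + "='" + oldVal + "')"
             + " RENAME TO PARTITION (" + col + "='" + newVal + "')";
    }

    public static void main(String[] args) {
        String hql = renamePartitionHql("TableName", "PartitionCol",
                                        "2018-12-31", "2017-12-31");
        System.out.println(hql);
        // From Spark code you would run the same HQL, e.g.:
        // hiveContext.sql(hql);   // requires a live HiveContext
    }
}
```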


1 Answer

To load a specific partition into a DataFrame using Spark 1.6, we have to do the following: first set the basePath option, and then give the path of the partition that needs to be loaded.

DataFrame df = hiveContext.read().format("orc")
               .option("basePath", "path/to/table/")
               .load("path/to/table/entity=xyz");

So the above code will load only the specific partition into the DataFrame, while keeping the entity column.
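A small sketch of how the partition path in that snippet is assembled from the base path. The helper below is hypothetical (not a Spark API); the actual Spark 1.6 read is shown in comments, since it needs a running HiveContext:

```java
public class PartitionLoad {
    // Build "basePath/col=value" -- the directory handed to load() when a
    // specific partition is requested together with the basePath option.
    static String partitionPath(String basePath, String col, String value) {
        String base = basePath.endsWith("/") ? basePath : basePath + "/";
        return base + col + "=" + value;
    }

    public static void main(String[] args) {
        String base = "path/to/table/";
        String path = partitionPath(base, "entity", "xyz");
        System.out.println(path); // path/to/table/entity=xyz

        // With a live HiveContext (Spark 1.6) the read would look like:
        // DataFrame df = hiveContext.read().format("orc")
        //         .option("basePath", base)
        //         .load(path);
        // basePath tells Spark where partition discovery starts, so the
        // "entity" column is retained in the resulting DataFrame.
    }
}
```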

Umesh K answered Oct 03 '22 09:10