
save Spark dataframe to Hive: table not readable because "parquet not a SequenceFile"

I'd like to save data from a Spark (v 1.3.0) DataFrame to a Hive table using PySpark.

The documentation states:

"spark.sql.hive.convertMetastoreParquet: When set to false, Spark SQL will use the Hive SerDe for parquet tables instead of the built in support."

Looking at the Spark tutorial, it seems that this property can be set:

from pyspark.sql import HiveContext

sqlContext = HiveContext(sc)
sqlContext.sql("SET spark.sql.hive.convertMetastoreParquet=false")

# code to create dataframe

my_dataframe.saveAsTable("my_dataframe")

However, when I try to query the saved table in Hive it returns:

hive> select * from my_dataframe;
OK
Failed with exception java.io.IOException:java.io.IOException: 
hdfs://hadoop01.woolford.io:8020/user/hive/warehouse/my_dataframe/part-r-00001.parquet
not a SequenceFile

How do I save the table so that it's immediately readable in Hive?

asked Jul 17 '15 by Alex Woolford



2 Answers

I've been there...
The API is kinda misleading on this one.
DataFrame.saveAsTable does not create a Hive table, but an internal Spark datasource table.
It also stores something in the Hive metastore, but not what you intend.
A remark to this effect was made on the spark-user mailing list regarding Spark 1.3.

If you wish to create a Hive table from Spark, you can use this approach:
1. Use CREATE TABLE ... via Spark SQL so the table is registered in the Hive metastore.
2. Use DataFrame.insertInto(tableName, overwrite) for the actual data (Spark 1.3); see the sketch below.
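A minimal PySpark sketch of that approach, reusing the question's setup (the column names here are placeholders, not from the original post):

from pyspark.sql import HiveContext

sqlContext = HiveContext(sc)

# Step 1: create the table through Hive DDL so the metastore entry is Hive-compatible
sqlContext.sql("""
CREATE TABLE IF NOT EXISTS my_dataframe (user_id STRING, email STRING, ts STRING)
STORED AS PARQUET
""")

# Step 2: insert the DataFrame's rows into the existing Hive table
my_dataframe.insertInto("my_dataframe", overwrite=True)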

answered Sep 28 '22 by Leet-Falcon


I hit this issue last week and was able to find a workaround.

Here's the story: I can see the table in Hive if I create the table without partitionBy:

spark-shell>someDF.write.mode(SaveMode.Overwrite)
                  .format("parquet")
                  .saveAsTable("TBL_HIVE_IS_HAPPY")

hive> desc TBL_HIVE_IS_HAPPY;
      OK
      user_id                   string                                      
      email                     string                                      
      ts                        string                                      

But Hive can't understand the table schema (the schema is empty...) if I create it with partitionBy:

spark-shell>someDF.write.mode(SaveMode.Overwrite)
                  .format("parquet")
                  .partitionBy("ts")
                  .saveAsTable("TBL_HIVE_IS_NOT_HAPPY")

hive> desc TBL_HIVE_IS_NOT_HAPPY;
      # col_name                data_type               from_deserializer  

[Solution]:

spark-shell>sqlContext.sql("SET spark.sql.hive.convertMetastoreParquet=false")
spark-shell>df.write
              .partitionBy("ts")
              .mode(SaveMode.Overwrite)
              .saveAsTable("Happy_HIVE")  // Suppose this table is saved at /apps/hive/warehouse/Happy_HIVE


hive> DROP TABLE IF EXISTS Happy_HIVE;
hive> CREATE EXTERNAL TABLE Happy_HIVE (user_id string, email string)
                                       PARTITIONED BY(ts STRING)
                                       STORED AS PARQUET
                                       LOCATION '/apps/hive/warehouse/Happy_HIVE';
hive> MSCK REPAIR TABLE Happy_HIVE;

The problem is that the datasource table created through the DataFrame API (partitionBy + saveAsTable) is not compatible with Hive (see this link). With spark.sql.hive.convertMetastoreParquet set to false, as suggested in the doc, Spark only writes the data to HDFS but doesn't create a Hive-readable table in the metastore. You can then go into the Hive shell and manually create an external table with the proper schema and partition definition pointing at the data location. I've tested this in Spark 1.6.1 and it worked for me. I hope this helps!
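For PySpark users (as in the original question), a rough equivalent of the same workaround; this assumes the same column layout and table name as the Scala example above:

from pyspark.sql import HiveContext

sqlContext = HiveContext(sc)
sqlContext.sql("SET spark.sql.hive.convertMetastoreParquet=false")

# Writes partitioned parquet files under the warehouse directory; the
# metastore entry it creates is a Spark datasource table that plain Hive can't read.
df.write.partitionBy("ts").mode("overwrite").saveAsTable("Happy_HIVE")

# Then run the CREATE EXTERNAL TABLE and MSCK REPAIR TABLE statements
# shown above in the Hive shell to make the data queryable from Hive.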

answered Sep 28 '22 by Yuan Zhao