
Spark DataFrame ORC Hive table reading issue

I am trying to read a Hive table in Spark. Below is the Hive table's storage format:

# Storage Information       
SerDe Library:  org.apache.hadoop.hive.ql.io.orc.OrcSerde   
InputFormat:    org.apache.hadoop.hive.ql.io.orc.OrcInputFormat 
OutputFormat:   org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat    
Compressed: No  
Num Buckets:    -1  
Bucket Columns: []  
Sort Columns:   []  
Storage Desc Params:        
    field.delim \u0001
    serialization.format    \u0001

When I try to read it using Spark SQL with the command below:

val c = hiveContext.sql("""select  
        a
    from c_db.c cs 
    where dt >=  '2016-05-12' """)
c. show

I am getting the warning below:

18/07/02 18:02:02 WARN ReaderImpl: Cannot find field for: a in _col0, _col1, _col2, _col3, _col4, _col5, _col6, _col7, _col8, _col9, _col10, _col11, _col12, _col13, _col14, _col15, _col16, _col17, _col18, _col19, _col20, _col21, _col22, _col23, _col24, _col25, _col26, _col27, _col28, _col29, _col30, _col31, _col32, _col33, _col34, _col35, _col36, _col37, _col38, _col39, _col40, _col41, _col42, _col43, _col44, _col45, _col46, _col47, _col48, _col49, _col50, _col51, _col52, _col53, _col54, _col55, _col56, _col57, _col58, _col59, _col60, _col61, _col62, _col63, _col64, _col65, _col66, _col67,

The read starts, but it is very slow and eventually fails with a network timeout.

When I try to read the Hive table directory directly, I get the error below.

val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
hiveContext.setConf("spark.sql.orc.filterPushdown", "true") 
val c = hiveContext.read.format("orc").load("/a/warehouse/c_db.db/c")
c.select("a").show()

org.apache.spark.sql.AnalysisException: cannot resolve 'a' given input columns: [_col18, _col3, _col8, _col66, _col45, _col42, _col31, _col17, _col52, _col58, _col50, _col26, _col63, _col12, _col27, _col23, _col6, _col28, _col54, _col48, _col33, _col56, _col22, _col35, _col44, _col67, _col15, _col32, _col9, _col11, _col41, _col20, _col2, _col25, _col24, _col64, _col40, _col34, _col61, _col49, _col14, _col13, _col19, _col43, _col65, _col29, _col10, _col7, _col21, _col39, _col46, _col4, _col5, _col62, _col0, _col30, _col47, trans_dt, _col57, _col16, _col36, _col38, _col59, _col1, _col37, _col55, _col51, _col60, _col53]; at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)

I can convert the Hive table to TextInputFormat, but that would be my last option, as I would like to keep the compression benefit of OrcInputFormat.

I would really appreciate your suggestions.



1 Answer

I found a workaround: read the table schema from the metastore and apply it explicitly when reading the files:

val schema = spark.table("db.name").schema

spark.read.schema(schema).orc("/path/to/table")
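Applied to the table from the question, it would look roughly like the sketch below. This assumes a Spark 2.x SparkSession with Hive support (with the 1.x HiveContext from the question, hiveContext.table("c_db.c").schema should give the same schema); the table name c_db.c and the path /a/warehouse/c_db.db/c are taken from the question.

// Pull the real column names and types from the Hive metastore
val schema = spark.table("c_db.c").schema

// Read the ORC files directly, but force the metastore schema onto them,
// so the positional _colN names stored in the files map to the real columns
val c = spark.read
  .schema(schema)
  .orc("/a/warehouse/c_db.db/c")

// Logical column names now resolve as expected
c.select("a").where("dt >= '2016-05-12'").show()

This works because the ORC files were written with generic internal column names (_col0, _col1, ...), so Spark needs the metastore schema to map them back to the logical column names.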


