
Spark import of Parquet files converts strings to bytearray

I have an uncompressed Parquet file which has "crawler log" sort of data.

I import it into Spark via PySpark as

from pyspark.sql import SQLContext

sq = SQLContext(sc)
p = sq.read.parquet('/path/to/stored_as_parquet/table/in/hive')
p.take(1)

This shows strings in the source data converted to

Row(host=bytearray(b'somehostname'), checksum=bytearray(b'stuff'), ...)

When I do p.dtypes I see

[('host', 'binary'), ('checksum', 'binary'), ...]

What can I do to avoid this conversion, or alternatively, how do I convert the columns back to strings?

i.e., when I do p.dtypes I want to see

[('host', 'string'), ('checksum', 'string'), ...]

Thanks.

Nitin asked Sep 02 '15


2 Answers

I ran into the same problem. Adding

sqlContext.setConf("spark.sql.parquet.binaryAsString", "true")

right after creating my SQLContext solved it for me.
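For context, a minimal end-to-end sketch of that fix (the path is the placeholder from the question, and sc is assumed to be an existing SparkContext, as in a PySpark shell):

from pyspark.sql import SQLContext

sq = SQLContext(sc)
# Parquet stores strings as BINARY; if the writer did not add the UTF8
# annotation to the file's metadata, Spark reads such columns as binary.
# This flag tells Spark to interpret all BINARY columns as strings.
sq.setConf("spark.sql.parquet.binaryAsString", "true")

p = sq.read.parquet('/path/to/stored_as_parquet/table/in/hive')
p.dtypes  # columns that previously showed as 'binary' now show as 'string'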

uuazed answered Sep 27 '22


For people using SparkSession, it is:

from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .config('spark.sql.parquet.binaryAsString', 'true') \
    .getOrCreate() \
    .newSession()
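If re-reading the file with that config is not convenient, another option is to cast the binary columns back to strings after loading, since Spark supports casting BinaryType to StringType (interpreting the bytes as UTF-8). A sketch, using the column names from the question:

from pyspark.sql import functions as F

# Cast each binary column back to a UTF-8 string.
p2 = p.withColumn('host', F.col('host').cast('string')) \
      .withColumn('checksum', F.col('checksum').cast('string'))
p2.dtypes  # [('host', 'string'), ('checksum', 'string'), ...]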
mattigrthr answered Sep 26 '22