I have an uncompressed Parquet file which has "crawler log" sort of data.
I import it into Spark via PySpark as
sq = SQLContext(sc)
p = sq.read.parquet('/path/to/stored_as_parquet/table/in/hive')
p.take(1)
This shows strings in the source data converted to bytes:
[Row(host=bytearray(b'somehostname'), checksum=bytearray(b'stuff'), ...)]
When I do p.dtypes I see
[('host', 'binary'), ('checksum', 'binary'), ...]
What can I do to avoid this conversion, or alternatively, how do I convert back to what I need?
I.e. when I do p.dtypes I want to see
[('host', 'string'), ('checksum', 'string'), ...]
Thanks.
I ran into the same problem. Adding
sqlContext.setConf("spark.sql.parquet.binaryAsString", "true")
right after creating my SQLContext solved it for me.
For people using SparkSession it is:
spark = SparkSession.builder.config('spark.sql.parquet.binaryAsString', 'true').getOrCreate().newSession()