How do I read a parquet in PySpark written from Spark?

I am using two Jupyter notebooks to do different things in an analysis. In my Scala notebook, I write some of my cleaned data to parquet:

partitionedDF.select("noStopWords","lowerText","prediction").write.save("swift2d://xxxx.keystone/commentClusters.parquet")

I then go to my Python notebook to read in the data:

df = spark.read.load("swift2d://xxxx.keystone/commentClusters.parquet")

and I get the following error:

AnalysisException: u'Unable to infer schema for ParquetFormat at swift2d://RedditTextAnalysis.keystone/commentClusters.parquet. It must be specified manually;'

I have looked at the Spark documentation and I don't think I should be required to specify a schema. Has anyone run into something like this? Should I be doing something else when I save/load? The data is landing in Object Storage.
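If inference keeps failing, one workaround is to name the format and a schema explicitly on the read. This is only a minimal sketch; the column types below are guesses (the question does not state them), so adjust the schema to match your actual data:

from pyspark.sql.types import StructType, StructField, ArrayType, StringType, DoubleType

# Hypothetical schema -- the real column types depend on your pipeline.
schema = StructType([
    StructField("noStopWords", ArrayType(StringType()), True),  # assumed: tokenized text
    StructField("lowerText", StringType(), True),               # assumed: lowercased text
    StructField("prediction", DoubleType(), True),              # assumed: cluster id from MLlib
])

df = spark.read.schema(schema).format("parquet") \
    .load("swift2d://xxxx.keystone/commentClusters.parquet")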

edit: I'm using Spark 2.0 for both the read and the write.

edit2: This was done in a project in Data Science Experience.

asked Mar 24 '17 by Ross Lewis

2 Answers

I use the following two ways to read a Parquet file:

Initialize Spark Session:

from pyspark.sql import SparkSession
spark = SparkSession.builder \
    .master('local') \
    .appName('myAppName') \
    .config('spark.executor.memory', '5gb') \
    .config("spark.cores.max", "6") \
    .getOrCreate()
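The master, executor memory, and max-cores settings above are just for a local run; tune them for your own cluster.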

Method 1:

df = spark.read.parquet('path-to-file/commentClusters.parquet')
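To confirm the read worked, you can inspect the schema and a few rows:

df.printSchema()            # should list noStopWords, lowerText, prediction
df.show(5, truncate=False)  # preview the first five rows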

Method 2:

sc = spark.sparkContext

# using SQLContext to read parquet file
from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)

# read parquet file
df = sqlContext.read.parquet('path-to-file/commentClusters.parquet')
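Note that in Spark 2.x, SQLContext is kept mainly for backward compatibility; SparkSession (Method 1) is the preferred entry point, and sqlContext.read here delegates to the same DataFrameReader.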
answered Oct 04 '22 by Jeril


You can use the parquet method of the SparkSession reader to read Parquet files, like this:

df = spark.read.parquet("swift2d://xxxx.keystone/commentClusters.parquet")

That said, there should be no practical difference between the parquet and load functions here, since load falls back to the default data source (Parquet, unless spark.sql.sources.default has been changed). It might be the case that load is not able to infer the schema of the data in the file (e.g., some data type which is not identifiable by load or is specific to Parquet).
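For reference, because Parquet is the default data source, these calls all read the same data unless spark.sql.sources.default has been overridden:

# All three read the same Parquet data; load() falls back to the
# default data source ("parquet" unless spark.sql.sources.default is changed).
path = "swift2d://xxxx.keystone/commentClusters.parquet"
df1 = spark.read.parquet(path)
df2 = spark.read.format("parquet").load(path)
df3 = spark.read.load(path)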

answered Oct 04 '22 by himanshuIIITian