
How to get the column names and their datatypes of a Parquet file using PySpark?

I have a Parquet file on my Hadoop cluster. I want to capture the column names and their datatypes and write them to a text file. How do I get the column names and their datatypes of a Parquet file using PySpark?

Shubham Mishra asked Jan 09 '16 15:01

2 Answers

You can simply read the file and use its schema to access the individual fields:

sqlContext.read.parquet(path_to_parquet_file).schema.fields
zero323 answered Sep 25 '22 04:09

Use dataframe.printSchema(), which prints out the schema in tree format:

df.printSchema()
root
 |-- age: integer (nullable = true)
 |-- name: string (nullable = true)

You can redirect the output of your program and capture that in a text file.
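Besides redirecting at the shell (e.g. `spark-submit your_script.py > schema.txt`), you can capture the printed tree in-process with Python's standard stdout redirection. A sketch, where `capture_schema` is a helper name invented here and the output path is hypothetical:

```python
import io
from contextlib import redirect_stdout

def capture_schema(df) -> str:
    # printSchema() writes to stdout; buffer that output and return it.
    buf = io.StringIO()
    with redirect_stdout(buf):
        df.printSchema()
    return buf.getvalue()

# Usage sketch (assumes an existing DataFrame `df`):
# with open("schema.txt", "w") as out:
#     out.write(capture_schema(df))
```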

tranquilram answered Sep 24 '22 04:09