 

How to create a schema from CSV file and persist/save that schema to a file?

I have a CSV file with 10 columns: half are Strings and half are Integers.

What is the Scala code to:

  • Create (infer) the schema
  • Save that schema to a file

I have this so far:

import org.apache.spark.sql.SQLContext

val sqlContext = new SQLContext(sc)
val df = sqlContext.read
    .format("com.databricks.spark.csv")
    .option("header", "true") // Use first line of all files as header
    .option("inferSchema", "true") // Automatically infer data types
    .load("cars.csv")

And what is the best file format for saving that schema? Is it JSON?

The goal is to create the schema only once and load it from a file the next time, instead of re-inferring it on the fly.

Thanks.

asked Feb 06 '23 by Joe

1 Answer

The DataType API provides all the required utilities, so JSON is a natural choice:

import org.apache.spark.sql.types._
import scala.util.Try
import sqlContext.implicits._  // Required for toDF on a local Seq

val df = Seq((1L, "foo", 3.0)).toDF("id", "x1", "x2")

// StructType.json serializes the schema to a JSON string
val serializedSchema: String = df.schema.json

// DataType.fromJson is the inverse. Wrap it in Try since the input may
// not be valid JSON, and check that the result is actually a StructType.
def loadSchema(s: String): Option[StructType] =
  Try(DataType.fromJson(s)).toOption.flatMap {
    case s: StructType => Some(s)
    case _ => None
  }

loadSchema(serializedSchema)
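
To actually use the restored schema, pass it to the CSV reader explicitly. A minimal sketch tied back to the question's cars.csv (assuming the spark-csv package from the question is on the classpath); supplying the schema up front means inferSchema, and its extra pass over the data, is no longer needed:

val restoredSchema: StructType = loadSchema(serializedSchema)
  .getOrElse(sys.error("Could not deserialize schema"))

val cars = sqlContext.read
  .format("com.databricks.spark.csv")
  .option("header", "true")
  .schema(restoredSchema)  // Explicit schema, no inference pass
  .load("cars.csv")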

Depending on your requirements, you can use standard Scala methods to write this to a file (see the sketch after the snippet below), or hack it with a Spark RDD:

val schemaPath: String = ???  // Destination path, left unspecified here

// Write the JSON string as a single-partition text file
sc.parallelize(Seq(serializedSchema), 1).saveAsTextFile(schemaPath)

// Read it back and deserialize
val loadedSchema: Option[StructType] = sc.textFile(schemaPath)
  .map(loadSchema)
  .collect.headOption.flatten  // Don't fail if the file is empty
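
If the schema lives on the local filesystem rather than HDFS, the "standard Scala methods" route avoids the RDD round trip entirely. A minimal sketch using plain java.nio I/O, with /tmp/schema.json as a placeholder path:

import java.nio.file.{Files, Paths}
import java.nio.charset.StandardCharsets

// Write the JSON string to a local file
Files.write(
  Paths.get("/tmp/schema.json"),
  serializedSchema.getBytes(StandardCharsets.UTF_8))

// Read it back and deserialize with loadSchema from above
val fromDisk: Option[StructType] = loadSchema(
  new String(
    Files.readAllBytes(Paths.get("/tmp/schema.json")),
    StandardCharsets.UTF_8))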

For a Python equivalent, see Config file to define JSON Schema Structure in PySpark.

answered Feb 08 '23 by zero323