Write a Spark Dataset to JSON with all keys in the schema, including null columns

I am writing a Dataset to JSON using:

ds.coalesce(1).write.format("json").option("nullValue",null).save("project/src/test/resources")

For records that have columns with null values, the JSON writer omits those keys entirely; for example, a row where b is null is written as {"a": 1} rather than {"a": 1, "b": null}.

Is there a way to force null-valued keys to appear in the JSON output?

This is needed because I use this JSON to read into another Dataset (in a test case), and I cannot enforce a schema if some documents do not have all the keys of the case class. (I am reading the JSON file from the resources folder and converting it to a Dataset via an RDD[String], as explained here: https://databaseline.bitbucket.io/a-quickie-on-reading-json-resource-files-in-apache-spark/ ; a sketch of that pattern follows.)
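A minimal sketch of that read-back pattern, assuming an existing SparkSession named spark and a hypothetical case class Record matching the JSON schema (the names are illustrative, not taken from the linked post):

import org.apache.spark.sql.{Dataset, SparkSession}
import scala.io.Source

case class Record(id: Long, name: String)

def readJsonResource(spark: SparkSession, resource: String): Dataset[Record] = {
  import spark.implicits._
  // read the JSON lines from the classpath resource
  val lines = Source.fromInputStream(getClass.getResourceAsStream(resource)).getLines().toSeq
  // parse via RDD[String]; the schema is inferred from the documents,
  // so a key missing from every document disappears from the schema entirely
  val rdd = spark.sparkContext.parallelize(lines)
  spark.read.json(rdd).as[Record]
}

The .as[Record] conversion is exactly what fails when a key is absent from all documents, since the inferred schema then lacks that column.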

asked Jul 21 '17 by Nir Ben Yaacov

1 Answer

I agree with @philantrovert.

ds.na.fill("")          // replace nulls with empty strings (affects string columns only)
  .coalesce(1)          // write a single output file
  .write
  .format("json")
  .save("project/src/test/resources")

Since Datasets are immutable, you are not altering the data in ds, and you can keep processing it (complete with its null values) in any subsequent code; you are simply replacing the null values with an empty string in the saved file.
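If the nulls themselves need to survive the round trip, a different approach (a sketch under assumptions, not part of the answer above) is to leave the written JSON as-is and instead supply the expected schema when reading it back, so that keys missing from individual documents come back as null. This assumes the same hypothetical case class Record and SparkSession spark as in the sketch above:

import org.apache.spark.sql.Encoders
import spark.implicits._

val schema = Encoders.product[Record].schema   // derive the schema from the case class

val ds2 = spark.read
  .schema(schema)                              // keys absent from a document become null
  .json("project/src/test/resources")
  .as[Record]

Note that any field that may come back null should be declared as String or Option[...] in the case class, since Spark cannot decode a null into a primitive type such as Long. Also, in Spark 3.0 and later the JSON writer supports .option("ignoreNullFields", "false"), which writes null-valued keys explicitly; that option did not exist when this question was asked.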

answered Sep 20 '22 by Glennie Helles Sindholt