I am creating a column in a DataFrame from several other columns that I want to store as a JSON-serialized string. When the struct is serialized to JSON, keys with null values are dropped. Is there a way to keep the keys even when the value is null?
Sample program illustrating the issue:
from pyspark.sql import functions as F
df = sc.parallelize([
    (1, 10),
    (2, 20),
    (3, None),
    (4, 40),
]).toDF(['id', 'data'])
df.collect()
#[Row(id=1, data=10),
# Row(id=2, data=20),
# Row(id=3, data=None),
# Row(id=4, data=40)]
df_s = df.select(F.struct('data').alias('struct'))
df_s.collect()
#[Row(struct=Row(data=10)),
# Row(struct=Row(data=20)),
# Row(struct=Row(data=None)),
# Row(struct=Row(data=40))]
df_j = df.select(F.to_json(F.struct('data')).alias('json'))
df_j.collect()
#[Row(json=u'{"data":10}'),
# Row(json=u'{"data":20}'),
# Row(json=u'{}'), <= would like this to be u'{"data":null}'
# Row(json=u'{"data":40}')]
Running Spark 2.1.0
Could not find a Spark-specific solution, so I just wrote a UDF that uses the Python json package:
import json
from pyspark.sql import functions as F
from pyspark.sql import types as T

def to_json(data):
    # json.dumps keeps keys whose values are None and writes them as null
    return json.dumps({'data': data})

to_json_udf = F.udf(to_json, T.StringType())
df.select(to_json_udf('data').alias('json')).collect()
# [Row(json=u'{"data": 10}'),
# Row(json=u'{"data": 20}'),
# Row(json=u'{"data": null}'),
# Row(json=u'{"data": 40}')]