I have a Dataset<Row> with a single column of JSON strings:
+--------------------+
| value|
+--------------------+
|{"Context":"00AA0...|
+--------------------+
JSON sample:
{"Context":"00AA00AA","MessageType":"1010","Module":"1200"}
How can I most efficiently get a Dataset<Row> that looks like this:
+--------+-----------+------+
| Context|MessageType|Module|
+--------+-----------+------+
|00AA00AA| 1010| 1200|
+--------+-----------+------+
I'm processing this data as a stream. I know that Spark can do this by itself when reading from a file:
spark
.readStream()
.schema(MyPojo.getSchema())
.json("src/myinput")
but now I'm reading the data from Kafka, which gives it to me in a different form. I know I could use a parser like Gson, but I would like to let Spark do it for me.
Try this sample:
import java.util.Arrays;
import java.util.List;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Encoders;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class SparkJSONValueDataset {
    public static void main(String[] args) {
        SparkSession spark = SparkSession
                .builder()
                .appName("SparkJSONValueDataset")
                .config("spark.sql.warehouse.dir", "file:///C:/temp")
                .master("local")
                .getOrCreate();

        // Prepare a Dataset<Row> with a single "value" column of JSON strings
        List<String> data = Arrays.asList("{\"Context\":\"00AA00AA\",\"MessageType\":\"1010\",\"Module\":\"1200\"}");
        Dataset<Row> df = spark.createDataset(data, Encoders.STRING()).toDF("value");
        df.show();

        // Convert to Dataset<String> and let spark.read().json() infer the schema
        Dataset<String> df1 = df.as(Encoders.STRING());
        Dataset<Row> df2 = spark.read().json(df1.javaRDD());
        df2.show();

        spark.stop();
    }
}
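Note that `spark.read().json(...)` triggers a batch read, so it cannot be applied to a streaming Dataset. If you are on Spark 2.1 or later, another option is the built-in `from_json` function, which works on both batch and streaming Datasets when you supply the schema up front. A minimal sketch, assuming the same three string fields as the sample JSON (the class and method names here are illustrative):

```java
import java.util.Arrays;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Encoders;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructType;

import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.from_json;

public class SparkFromJsonDataset {

    // Parse the "value" column of JSON strings into typed columns.
    public static Dataset<Row> parse(Dataset<Row> df) {
        // Declaring the schema explicitly avoids the inference pass
        // that spark.read().json() needs, and is required for streaming.
        StructType schema = new StructType()
                .add("Context", DataTypes.StringType)
                .add("MessageType", DataTypes.StringType)
                .add("Module", DataTypes.StringType);

        return df
                .select(from_json(col("value"), schema).as("json"))
                .select("json.*"); // flatten the struct into top-level columns
    }

    public static void main(String[] args) {
        SparkSession spark = SparkSession
                .builder()
                .appName("SparkFromJsonDataset")
                .master("local")
                .getOrCreate();

        Dataset<Row> df = spark
                .createDataset(
                        Arrays.asList("{\"Context\":\"00AA00AA\",\"MessageType\":\"1010\",\"Module\":\"1200\"}"),
                        Encoders.STRING())
                .toDF("value");

        parse(df).show();
        spark.stop();
    }
}
```

The same `select` chain works unchanged on a Kafka source, after casting the `value` column from binary to string, e.g. `df.selectExpr("CAST(value AS STRING) AS value")`.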