I have this code, which works well in Scala:
val schema = StructType(Array(
  StructField("field1", StringType, true),
  StructField("field2", TimestampType, true),
  StructField("field3", DoubleType, true),
  StructField("field4", StringType, true),
  StructField("field5", StringType, true)
))

val df = spark.read
  // some options
  .schema(schema)
  .load(myEndpoint)
I want to do something similar in Java, so my code is the following:
final StructType schema = new StructType(new StructField[] {
    new StructField("field1", new StringType(), true, new Metadata()),
    new StructField("field2", new TimestampType(), true, new Metadata()),
    new StructField("field3", new StringType(), true, new Metadata()),
    new StructField("field4", new StringType(), true, new Metadata()),
    new StructField("field5", new StringType(), true, new Metadata())
});

Dataset<Row> df = spark.read()
    // some options
    .schema(schema)
    .load(myEndpoint);
But this gives me the following error:
Exception in thread "main" scala.MatchError: org.apache.spark.sql.types.StringType@37c5b8e8 (of class org.apache.spark.sql.types.StringType)
Nothing seems wrong with my schemas, so I don't really know what the problem is here.
spark.read().load(myEndpoint).printSchema();
root
|-- field5: string (nullable = true)
|-- field2: timestamp (nullable = true)
|-- field1: string (nullable = true)
|-- field4: string (nullable = true)
|-- field3: string (nullable = true)
schema.printTreeString();
root
|-- field1: string (nullable = true)
|-- field2: timestamp (nullable = true)
|-- field3: string (nullable = true)
|-- field4: string (nullable = true)
|-- field5: string (nullable = true)
EDIT:
Here is a data sample:
spark.read().load(myEndpoint).show(false);
+---------------------------------------------------------------+-------------------+-------------+--------------+---------+
|field5 |field2 |field1 |field4 |field3 |
+---------------------------------------------------------------+-------------------+-------------+--------------+---------+
|{"fieldA":"AAA","fieldB":"BBB","fieldC":"CCC","fieldD":"DDD"} |2018-01-20 16:54:50|SOME_VALUE |SOME_VALUE |0.0 |
|{"fieldA":"AAA","fieldB":"BBB","fieldC":"CCC","fieldD":"DDD"} |2018-01-20 16:58:50|SOME_VALUE |SOME_VALUE |50.0 |
|{"fieldA":"AAA","fieldB":"BBB","fieldC":"CCC","fieldD":"DDD"} |2018-01-20 17:00:50|SOME_VALUE |SOME_VALUE |20.0 |
|{"fieldA":"AAA","fieldB":"BBB","fieldC":"CCC","fieldD":"DDD"} |2018-01-20 18:04:50|SOME_VALUE |SOME_VALUE |10.0 |
...
+---------------------------------------------------------------+-------------------+-------------+--------------+---------+
Using the static methods and fields from the DataTypes class instead of the constructors worked for me in Spark 2.3.1. Spark pattern-matches column types against its singleton DataType instances, so the fresh objects created with new fall through the match, which is what raises the scala.MatchError:
StructType schema = DataTypes.createStructType(new StructField[] {
    DataTypes.createStructField("field1", DataTypes.StringType, true),
    DataTypes.createStructField("field2", DataTypes.TimestampType, true),
    DataTypes.createStructField("field3", DataTypes.StringType, true),
    DataTypes.createStructField("field4", DataTypes.StringType, true),
    DataTypes.createStructField("field5", DataTypes.StringType, true)
});
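
For completeness, here is a minimal, self-contained sketch of the full corrected read in Java. The class name, appName, and the literal "myEndpoint" are placeholders standing in for the question's elided options and endpoint:

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;

public class SchemaReadExample {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("SchemaReadExample") // hypothetical app name
                .getOrCreate();

        // DataTypes.StringType and friends expose the singleton instances that
        // Spark's internals pattern-match against; objects built with `new`
        // are distinct instances and fall through the match with a MatchError.
        StructType schema = DataTypes.createStructType(new StructField[] {
                DataTypes.createStructField("field1", DataTypes.StringType, true),
                DataTypes.createStructField("field2", DataTypes.TimestampType, true),
                DataTypes.createStructField("field3", DataTypes.StringType, true),
                DataTypes.createStructField("field4", DataTypes.StringType, true),
                DataTypes.createStructField("field5", DataTypes.StringType, true)
        });

        Dataset<Row> df = spark.read()
                // some options, as in the question
                .schema(schema)
                .load("myEndpoint"); // placeholder for the real endpoint

        df.printSchema(); // should now match schema.printTreeString()
        spark.stop();
    }
}

Going through the factory methods on DataTypes guarantees you always hand Spark the canonical singleton for each type, which is the contract its internal pattern matching relies on.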