Sorry if this sounds vague, but can someone explain the steps to write an existing DataFrame df into a MySQL table, say "product_mysql", and the other way around (reading that table back into a DataFrame)?
Please see this Databricks article: Connecting to SQL Databases using JDBC. In short:
import java.util.Properties
import org.apache.spark.sql.SaveMode

// Placeholders: fill in your own host, database, and credentials.
val jdbcUrl = "jdbc:mysql://<host>:3306/<database>"
val connectionProperties = new Properties()
connectionProperties.put("user", "<username>")
connectionProperties.put("password", "<password>")

val df = spark.table("...")
println(df.rdd.partitions.length)
// Each partition opens its own JDBC connection, so given the count printed above,
// reduce it with coalesce() or increase it with repartition() to manage the
// number of concurrent connections to MySQL.
df.repartition(10).write.mode(SaveMode.Append).jdbc(jdbcUrl, "product_mysql", connectionProperties)
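For the other direction, reading "product_mysql" back into a DataFrame, the same JDBC URL and connection properties can be reused with spark.read.jdbc. A minimal sketch (host, database, and credentials are placeholders you would substitute):

```scala
import java.util.Properties

// Placeholders: fill in your own host, database, and credentials.
val jdbcUrl = "jdbc:mysql://<host>:3306/<database>"
val connectionProperties = new Properties()
connectionProperties.put("user", "<username>")
connectionProperties.put("password", "<password>")

// Read the MySQL table into a DataFrame.
val productDf = spark.read.jdbc(jdbcUrl, "product_mysql", connectionProperties)
productDf.printSchema()
productDf.show()
```

By default this read uses a single connection and produces one partition; for large tables you can pass partitionColumn, lowerBound, upperBound, and numPartitions to the overloaded jdbc method to parallelize the read.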