I have two questions about Spark serialization that I simply cannot find answers to by googling.
I have the following code, which is supposed to use Kryo serialization. The memory size used for the DataFrame becomes 21 MB, which is a quarter of the size when I was just caching with no serialization; but when I remove the Kryo configuration, the size remains the same 21 MB. Does this mean Kryo was never used in the first place? Could it be that, because the records in the DataFrame are simply Rows, Java and Kryo serialization produce the same size?
import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession
import org.apache.spark.storage.StorageLevel

val conf = new SparkConf()
conf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
conf.set("spark.kryo.registrationRequired", "false")

val spark = SparkSession.builder.master("local[*]").config(conf)
  .appName("KryoWithRegistrationNOTRequired").getOrCreate

val df = spark.read.csv("09-MajesticMillion.csv")
df.persist(StorageLevel.MEMORY_ONLY_SER)
df.count() // run an action so the persisted data is actually materialized
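For what it's worth, a quick sanity check (a minimal sketch against the same spark session as above) shows the setting does at least reach the session, even if the cache ignores it:

println(spark.sparkContext.getConf.get("spark.serializer"))
// expected: org.apache.spark.serializer.KryoSerializer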
Does this mean Kryo was never used in the first place?
It means exactly that. Spark SQL (Dataset) uses its own columnar storage format for caching, so neither Java nor Kryo serialization is involved and spark.serializer has no impact at all.
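Where spark.serializer does take effect is plain RDD persistence. Here is a minimal sketch (the Person case class and the generated data are hypothetical, purely for illustration): persist an RDD with MEMORY_ONLY_SER and the configured serializer is actually used, so switching between Java and Kryo changes the size shown in the Storage tab of the Spark UI:

import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession
import org.apache.spark.storage.StorageLevel

// Hypothetical record type, used only for this illustration.
case class Person(name: String, age: Int)

val conf = new SparkConf()
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  // Registering classes lets Kryo write a small id instead of the full class name.
  .registerKryoClasses(Array(classOf[Person]))

val spark = SparkSession.builder.master("local[*]")
  .config(conf).appName("RddKryoDemo").getOrCreate

// RDD persistence goes through spark.serializer, unlike
// DataFrame/Dataset caching, which uses the columnar cache.
val rdd = spark.sparkContext
  .parallelize(1 to 1000000)
  .map(i => Person(s"name-$i", i))

rdd.persist(StorageLevel.MEMORY_ONLY_SER)
rdd.count() // materialize; compare Storage-tab sizes with and without Kryo

With a DataFrame, by contrast, the columnar cache applies its own compression to the data, which is why your 21 MB figure is identical with or without the Kryo configuration.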