 

Spark: Dataframe Serialization

I have two questions regarding Spark serialization that I simply can't find answers to by googling.

  1. How can I print out the name of the serializer currently in use? I want to know whether spark.serializer is Java or Kryo.
  2. I have the following code, which is supposed to use Kryo serialization. The memory size used for the DataFrame becomes 21 MB, which is a quarter of what it was when I cached with no serialization; but when I remove the Kryo configuration, the size remains the same 21 MB. Does this mean Kryo was never used in the first place? Could it be that, because the records in the DataFrame are simply rows, Java and Kryo serialization produce the same size?

    import org.apache.spark.SparkConf
    import org.apache.spark.sql.SparkSession
    import org.apache.spark.storage.StorageLevel

    val conf = new SparkConf()
    conf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    conf.set("spark.kryo.registrationRequired", "false")
    val spark = SparkSession.builder.master("local[*]").config(conf)
      .appName("KryoWithRegistrationNOTRequired").getOrCreate
    val df = spark.read.csv("09-MajesticMillion.csv")
    df.persist(StorageLevel.MEMORY_ONLY_SER)
    
asked Dec 26 '17 19:12 by user1888243

1 Answer

Does this mean Kryo was never used in the first place?

It means exactly that. Spark SQL (Dataset) uses its own columnar storage format for caching, so neither Java nor Kryo serialization is involved and spark.serializer has no impact at all here.
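As for question 1, a minimal sketch of one way to check (assuming Spark is on the classpath, and noting that `SerializerCheck` is just an illustrative object name): read spark.serializer back from the SparkContext's configuration. When the key was never set, Spark falls back to Java serialization for RDDs and shuffles, so supplying that default as the fallback mirrors Spark's own behavior.

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical helper object, not part of the original code.
object SerializerCheck extends App {
  val spark = SparkSession.builder
    .master("local[*]")
    .appName("SerializerCheck")
    .getOrCreate()

  // spark.serializer defaults to JavaSerializer when not set explicitly;
  // it only affects RDD/shuffle serialization, not Dataset columnar caching.
  val serializer = spark.sparkContext.getConf
    .get("spark.serializer", "org.apache.spark.serializer.JavaSerializer")
  println(s"spark.serializer = $serializer")

  spark.stop()
}
```

With the Kryo configuration from the question applied, this should print the KryoSerializer class name; without it, the JavaSerializer default. Either way, the cached DataFrame size stays the same because the columnar cache bypasses both.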

answered Oct 18 '22 02:10 by user9142754