Below is the code:
def findUniqueGroupInMetadata(sc: SparkContext): Unit = {
  val merchantGroup = sc.cassandraTable("local_pb", "merchant_metadata").select("group_name")
  try {
    val filterByWithGroup = merchantGroup.filter { row =>
      row.getStringOption("group_name") match {
        case Some(s: String) => true
        case None => false
      }
    }.map(row => row.getStringOption("group_name").get.capitalize)
    //filterByWithGroup.take(15).foreach(data => println("merchantGroup => " + data))
    filterByWithGroup.saveToCassandra("local_pb", "merchant_group", SomeColumns("group_name"))
  } catch {
    case e: Exception => e.printStackTrace()
  }
}
Exception:
java.lang.IllegalArgumentException: Multiple constructors with the same number of parameters not allowed.
at com.datastax.spark.connector.util.Reflect$.methodSymbol(Reflect.scala:16)
at com.datastax.spark.connector.util.ReflectionUtil$.constructorParams(ReflectionUtil.scala:63)
at com.datastax.spark.connector.mapper.DefaultColumnMapper.<init>(DefaultColumnMapper.scala:45)
at com.datastax.spark.connector.mapper.LowPriorityColumnMapper$class.defaultColumnMapper(ColumnMapper.scala:47)
at com.datastax.spark.connector.mapper.ColumnMapper$.defaultColumnMapper(ColumnMapper.scala:51)
I found the answer after looking into some blogs.
When I converted the RDD[String] to RDD[Tuple1[String]], everything went smoothly. So basically, in order to save data to Cassandra, the data needs to be of type RDD[TupleX[String]], where X can be 1, 2, 3, ..., or of type RDD[SomeCaseClass].
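The fix can be sketched as follows. This is a minimal sketch, not a definitive implementation: the Tuple1 wrapping is the essential change, the keyspace/table/column names are taken from the question, and the filter/map pair is collapsed into a flatMap (getStringOption already returns None for missing values, so the separate null check is redundant):

```scala
import com.datastax.spark.connector._
import org.apache.spark.SparkContext

def findUniqueGroupInMetadata(sc: SparkContext): Unit = {
  val merchantGroup = sc.cassandraTable("local_pb", "merchant_metadata").select("group_name")

  // Wrap each String in Tuple1 so the connector can map tuple element _1
  // onto the single target column "group_name". A bare RDD[String] has no
  // usable column mapping, which is what triggered the reflection error.
  val filterByWithGroup = merchantGroup
    .flatMap(_.getStringOption("group_name")) // drops rows with no value
    .map(name => Tuple1(name.capitalize))     // RDD[Tuple1[String]]

  filterByWithGroup.saveToCassandra("local_pb", "merchant_group", SomeColumns("group_name"))
}
```

Alternatively, the same save works with a case class such as `case class MerchantGroup(groupName: String)` mapped via `RDD[MerchantGroup]`, since the connector's DefaultColumnMapper can map case-class fields onto columns.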