
Spark Structured Streaming Checkpoint Compatibility

Am I safe to use Kafka and Spark Structured Streaming (SSS) (>= v2.2) with checkpointing on HDFS in cases where I have to upgrade the Spark library or change the query? I'd like to seamlessly continue with the offsets left behind even in those cases.
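
For context, here is a minimal sketch of the kind of job I mean (Scala; the broker address, topic name, and HDFS paths are placeholders):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("sss-checkpoint-demo").getOrCreate()

    // Kafka source; broker and topic are placeholders
    val kafkaDf = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "events")
      .load()

    // Sink with checkpointing on HDFS; offsets and state live under checkpointLocation
    val query = kafkaDf.selectExpr("CAST(value AS STRING) AS value")
      .writeStream
      .format("parquet")
      .option("path", "hdfs:///data/events")
      .option("checkpointLocation", "hdfs:///checkpoints/events-query")
      .start()

    query.awaitTermination()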

I've found different answers when searching the net for compatibility issues in SSS's (>= 2.2) checkpoint mechanism. Maybe someone out there can shed some light on the situation, in the best case backed up with facts/references or first-hand experience?

  1. In Spark's programming guide (current = v2.3) they just say the checkpoint location "...should be a directory in an HDFS-compatible file system", but don't say a single word about constraints in terms of compatibility. https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html
  2. Databricks at least gives some hints that this is an issue at all. https://docs.databricks.com/spark/latest/structured-streaming/production.html#recover-after-changes-in-a-streaming-query
  3. A Cloudera blog recommends storing the offsets in Zookeeper instead, but this actually refers to the "old" Spark Streaming implementation. Whether this applies to Structured Streaming, too, is unclear. https://blog.cloudera.com/blog/2017/06/offset-management-for-apache-kafka-with-apache-spark-streaming/
  4. Someone in this conversation claims that there is no problem in that regard anymore, but without pointing to any facts: How to get Kafka offsets for structured query for manual and reliable offset management?

Help is highly appreciated.

asked Oct 25 '18 by BigTuna


1 Answer

Checkpoints are great when you don't need to change the code; fire-and-forget procedures are the perfect use case.

I read the post from Databricks you linked; the truth is that you can't know what kind of changes you will be called on to make until you actually have to make them. I wonder how they can predict the future.

About the Cloudera link: yes, they are talking about the old procedure, but with Structured Streaming, code changes still void your checkpoints.

So, in my opinion, that much automation is only good for fire-and-forget procedures. If that is not your case, saving the Kafka offsets elsewhere is a good way to restart from where you left off last time (a sketch follows below); keep in mind that Kafka can contain a lot of data, so neither restarting from zero to avoid data loss nor accepting a restart from the latest offset is always acceptable.
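
Here is a hedged sketch of "saving the Kafka offsets elsewhere": a StreamingQueryListener receives the end offsets of every completed micro-batch, and you can persist them wherever you like. saveToExternalStore is a hypothetical helper, and the application name is a placeholder.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.streaming.StreamingQueryListener
    import org.apache.spark.sql.streaming.StreamingQueryListener.{QueryProgressEvent, QueryStartedEvent, QueryTerminatedEvent}

    val spark = SparkSession.builder().appName("offset-tracker").getOrCreate()

    // Hypothetical helper: persist (source description, offsets JSON) to a store
    // of your choice (a database, Zookeeper, a plain HDFS file, ...)
    def saveToExternalStore(source: String, offsetsJson: String): Unit =
      println(s"$source -> $offsetsJson")

    spark.streams.addListener(new StreamingQueryListener {
      override def onQueryStarted(event: QueryStartedEvent): Unit = {}
      override def onQueryTerminated(event: QueryTerminatedEvent): Unit = {}
      override def onQueryProgress(event: QueryProgressEvent): Unit =
        // endOffset is a JSON string such as {"events":{"0":42,"1":17}}
        event.progress.sources.foreach(s => saveToExternalStore(s.description, s.endOffset))
    })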

Remember: any change to the stream logic will be ignored as long as the checkpoints exist, so you can't change your job once it is deployed unless you accept throwing the checkpoints away. By throwing away the checkpoints you force the job either to reprocess the entire Kafka topic (earliest) or to start right at the end (latest), skipping data that was not yet processed.
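
A sketch of that choice, continuing the session from the listener sketch above (broker, topic, and offset values are placeholders); the JSON in the last variant is the kind of offsets you would have saved yourself:

    // reprocess the entire topic from the beginning
    val fromEarliest = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "events")
      .option("startingOffsets", "earliest")
      .load()

    // start right at the end, skipping anything not yet processed
    val fromLatest = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "events")
      .option("startingOffsets", "latest")
      .load()

    // or resume from offsets you stored externally, passed as JSON
    val fromSaved = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "events")
      .option("startingOffsets", """{"events":{"0":42,"1":17}}""")
      .load()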

It's great, is it not?

answered Sep 26 '22 by ozw1z5rd