
Spark-Streaming Kafka Direct Streaming API & Parallelism

I understand the automatic mapping that exists between a Kafka partition, a Spark RDD partition, and ultimately a Spark task. However, in order to properly size my executors (in number of cores), and therefore ultimately my nodes and cluster, I need to understand something that seems to be glossed over in the documentation.
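For context, here is a minimal sketch of the mapping in question, using the spark-streaming-kafka-0-10 direct API (the broker address, group id, and topic name are placeholders):

```scala
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent

val conf = new SparkConf().setAppName("direct-stream-mapping")
val ssc = new StreamingContext(conf, Seconds(5))

// Placeholder broker, deserializers, and group id.
val kafkaParams = Map[String, Object](
  "bootstrap.servers"  -> "localhost:9092",
  "key.deserializer"   -> classOf[StringDeserializer],
  "value.deserializer" -> classOf[StringDeserializer],
  "group.id"           -> "example-group"
)

val stream = KafkaUtils.createDirectStream[String, String](
  ssc, PreferConsistent, Subscribe[String, String](Seq("events"), kafkaParams))

// Each batch's RDD has exactly one partition per Kafka TopicPartition,
// so the RDD partition count mirrors the topic's partition count.
stream.foreachRDD(rdd => println(s"RDD partitions: ${rdd.getNumPartitions}"))
```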

In Spark Streaming, how exactly do data consumption, data processing, and task allocation interact? In other words:

  1. Does the Spark task corresponding to a Kafka partition both read and process the data altogether?
  • The rationale behind this question is that in the previous, receiver-based API, a task was dedicated to receiving the data, meaning that a number of task slots on your executors were reserved for data ingestion while the others were there for processing. This had an impact on how you sized your executors in terms of cores.

  • Take, for example, the advice on how to launch Spark Streaming with
    --master local. Everyone would tell you that for Spark Streaming one should use local[2] at a minimum, because one of the cores will be dedicated to running the long receiving task that never ends, and the other core will do the data processing (see the sketch after this list).

  • So if the answer is that in this case the task does both the reading and the processing at once, then the question that follows is: is that really smart? I mean, that does not sound asynchronous. We want to be able to fetch while we process, so that by the next processing step the data is already there. However, if there is only one core to both read the data and process it, how can both be done in parallel, and how does that make things faster in general?

  • My original understanding was that things would somehow have remained the same, in the sense that a task would be launched to read, but that the processing would be done in another task. That would mean that, if the processing task is not done yet, we could still keep reading, up to a certain memory limit.
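To illustrate the local[2] advice mentioned above: it comes from receiver-based sources, where one long-running receiver task pins a core for the lifetime of the application. A minimal sketch using the built-in socket receiver (host and port are placeholders); with local[1] the single core would be occupied by the receiver and no batch would ever get processed:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

// local[2]: one core ends up occupied by the long-running receiver
// task, leaving the second core free to process the batches.
val conf = new SparkConf().setMaster("local[2]").setAppName("receiver-example")
val ssc = new StreamingContext(conf, Seconds(1))

// socketTextStream is a receiver-based source: it schedules a
// receiver task that runs for the lifetime of the application.
val lines = ssc.socketTextStream("localhost", 9999)
lines.count().print()

ssc.start()
ssc.awaitTermination()
```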

Can someone outline with clarity what exactly is going on here?

EDIT1

We don't even need that memory-limit control. Just the mere fact of being able to fetch while the processing is going on, and stopping right there. In other words, the two processes should be asynchronous, with the limit simply being to stay one step ahead. To me, if somehow this is not happening, it would be extremely strange for Spark to implement something that breaks performance like that.
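As a side note on bounding ingestion: the direct API does expose configuration that caps how much each batch reads per Kafka partition, plus an optional backpressure mode that adapts the ingestion rate to the observed processing speed. A sketch of those settings (the cap value of 1000 is an arbitrary placeholder):

```scala
import org.apache.spark.SparkConf

val conf = new SparkConf()
  // Adapt the per-batch ingestion rate to observed processing speed.
  .set("spark.streaming.backpressure.enabled", "true")
  // Upper bound on records per second read from each Kafka partition.
  .set("spark.streaming.kafka.maxRatePerPartition", "1000")
```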

asked Aug 05 '17 by MaatDeamon



1 Answer

Does the Spark task corresponding to a Kafka partition both read and process the data altogether?

The relationship is very close to what you describe, if by "a task" we're referring to the part of the graph that reads from Kafka up until a shuffle operation. The flow of execution is as follows:

  1. The driver reads the offsets from all Kafka topics and partitions.
  2. The driver assigns each executor a topic and partition to read and process.
  3. Unless there is a shuffle boundary operation, it is likely that Spark will optimize the entire execution of the partition onto the same executor.

This means that a single executor will read a given TopicPartition and process the entire execution graph on it, unless we need to shuffle. Since a Kafka partition maps to a partition inside the RDD, we get that guarantee.
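A sketch of where that boundary falls, reusing the `stream` from the earlier sketch in the question: the narrow transformation runs inside the same task that reads the Kafka partition, and the shuffle only starts at `reduceByKey`:

```scala
stream
  // Narrow transformation: executed in the same task that reads the
  // Kafka TopicPartition, on the same executor.
  .map(record => (record.key, 1L))
  // Shuffle boundary: records are repartitioned by key, so data may
  // move to other executors from here on.
  .reduceByKey(_ + _)
  .print()
```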

Structured Streaming takes this even further. In Structured Streaming, there is stickiness between the TopicPartition and the worker/executor: if a given worker was assigned a TopicPartition, it is likely to continue processing it for the entire lifetime of the application.
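For reference, a minimal Structured Streaming sketch of the same Kafka source (broker and topic are placeholders); the source plans one input partition per TopicPartition, and, as described above, its assignment tends to stay with the same executor across micro-batches:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("structured-kafka").getOrCreate()

// One input partition per Kafka TopicPartition; the assignment is
// sticky across micro-batches.
val df = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092")
  .option("subscribe", "events")
  .load()

df.selectExpr("CAST(value AS STRING)")
  .writeStream
  .format("console")
  .start()
  .awaitTermination()
```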

answered Oct 13 '22 by Yuval Itzchakov