Regarding the Flink features that allow you to optimize resource usage in the cluster (as well as latency and throughput), i.e. slot sharing, task chaining, async I/O and dynamic scaling, I would like to ask the following questions (all in the stream processing context):
In which cases would someone want a TaskManager to have more slots than CPU cores?
In which cases should we prefer to split a pipeline of tasks over multiple slots (i.e. disable slot sharing) rather than increase the parallelism, in order for an application to keep up with the incoming data rate?
Is it possible that, even when using all the features above, the resources reserved for a slot are higher than what all the tasks in that slot require, leaving us with resources that are reserved for a slot but never used? Can such problems appear when an application has tasks with different latencies (or different parallelisms)? Or even when we perform multiple aggregations (that cannot be optimised using folds or reduces) on the same window?
Thanks in advance.
To control how many tasks a worker accepts, each worker (TaskManager) has so-called task slots (at least one). Each task slot represents a fixed subset of the TaskManager's resources. A TaskManager with three slots, for example, will dedicate 1/3 of its managed memory to each slot.
A Flink program consists of multiple tasks (transformations/operators, data sources, and sinks). A task is split into several parallel instances for execution and each parallel instance processes a subset of the task's input data. The number of parallel instances of a task is called its parallelism.
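The number of slots per worker is set with the taskmanager.numberOfTaskSlots configuration option, while the parallelism can be set for the whole job and overridden per operator. A minimal sketch of the latter (the source and transformation are just placeholders):

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ParallelismExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Default parallelism for all operators of this job.
        env.setParallelism(4);

        env.socketTextStream("localhost", 9999)   // the socket source always runs with parallelism 1
           .map(String::toUpperCase)              // placeholder transformation
           .setParallelism(2)                     // override the parallelism of this operator only
           .print();

        env.execute("Parallelism example");
    }
}
```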
The lazy evaluation lets you construct sophisticated programs that Flink executes as one holistically planned unit.
The JobManager has a number of responsibilities related to coordinating the distributed execution of Flink applications: it decides when to schedule the next task (or set of tasks), reacts to finished tasks or execution failures, coordinates checkpoints, and coordinates recovery on failures, among others.
Usually, it is recommended to reserve at least one CPU core for each slot. One reason why you might want to have more slots than cores is that your operators execute blocking operations; the additional slots let you keep all of your cores busy while some tasks are waiting.
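Related to that, blocking calls to external systems can often be made non-blocking with Flink's async I/O, so the task thread is not held up while waiting. Below is a minimal sketch; the external lookup and its client are invented for illustration:

```java
import java.util.Collections;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

import org.apache.flink.streaming.api.datastream.AsyncDataStream;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.async.ResultFuture;
import org.apache.flink.streaming.api.functions.async.RichAsyncFunction;

public class AsyncLookupExample {

    // Hypothetical enrichment: look up each record in an external store
    // without blocking the task thread.
    public static class AsyncLookupFunction extends RichAsyncFunction<String, String> {
        @Override
        public void asyncInvoke(String key, ResultFuture<String> resultFuture) {
            CompletableFuture
                .supplyAsync(() -> externalLookup(key))   // placeholder for a real async client call
                .thenAccept(value -> resultFuture.complete(Collections.singleton(key + " -> " + value)));
        }

        private String externalLookup(String key) {
            return "value-for-" + key;                    // stand-in for the blocking external call
        }
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        DataStream<String> input = env.socketTextStream("localhost", 9999);

        // At most 100 in-flight requests, each with a 1 second timeout.
        AsyncDataStream
            .unorderedWait(input, new AsyncLookupFunction(), 1000, TimeUnit.MILLISECONDS, 100)
            .print();

        env.execute("Async I/O example");
    }
}
```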
If you observe that your application cannot keep up with the incoming data rate, then it is usually best to increase the parallelism (given that the bottleneck is not an operator with parallelism 1 and that your data has enough key values).
If you have multiple compute-intensive operators in one pipeline (maybe even chained together) and fewer cores per slot than such operators, then it might make sense to split up the pipeline, as in the sketch below. That way the computation of these operators can run concurrently.
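One way to express such a split is to assign the heavy operators to different slot sharing groups, so they cannot be scheduled into the same slot. The operators here are stand-ins for your real CPU-heavy logic:

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class SlotSharingExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        env.socketTextStream("localhost", 9999)
           .map(SlotSharingExample::expensiveParse)   // first CPU-heavy step
           .slotSharingGroup("parse")                 // runs in its own slot sharing group
           .map(SlotSharingExample::expensiveEnrich)  // second CPU-heavy step
           .slotSharingGroup("enrich")                // forced into different slots than "parse"
           .print();

        env.execute("Slot sharing example");
    }

    // Hypothetical CPU-intensive transformations, stand-ins for real logic.
    private static String expensiveParse(String line) { return line.trim(); }
    private static String expensiveEnrich(String parsed) { return parsed.toUpperCase(); }
}
```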
Theoretically, it can happen that you assign more resources to a slot than are actually needed, e.g. a single operator runs in each slot but multiple cores are assigned to it. Also, when operators have different parallelisms, some slots may get more sub-tasks assigned than others. One thing you can always do is monitor the execution of your job to detect under- and over-provisioning.