Flink dynamic scaling

I am currently studying scalability in Flink. Dynamic rescaling was introduced starting from version 1.2.0. I am looking at scaling a long-running job that reads data from a Kafka source.

Questions regarding dynamic rescaling.

  1. To scale out my Flink application, for example by adding new task managers, must I restart the job / YARN session to use the newly added resources?
  2. I think it's possible to write a YARN client that deploys new task managers and makes them talk to the job manager. Is that already available in the existing Flink YARN client application?

Pardon me if these questions are too basic. I did go through the documentation, but I have to admit I have not been able to put the concepts together, even with some test deployments on YARN recently.

asked Apr 18 '17 by seriousmegalor


1 Answer

Currently (as of Flink 1.2), dynamic scaling means the ability to update an operator's parallelism, for both keyed state and non-keyed state.

  1. To scale out my Flink application, for example by adding new task managers, must I restart the job / YARN session to use the newly added resources? - Yes, the job has to be stopped first (typically by taking a savepoint), its parallelism updated, and then restarted from that savepoint (see the command sketch after this list). You do not have to worry about the state; Flink handles it, including repartitioning it across the new parallelism.

  2. I think it's possible to write a YARN client that deploys new task managers and makes them talk to the job manager. Is that already available in the existing Flink YARN client application? - No, you cannot do that currently; this feature seems likely to be added in a future release.

answered Oct 10 '22 by BrightFlow