
Airflow LocalExecutor high memory usage for running tasks in parallel: expected or fixable?

Tags:

airflow

Situation: Airflow 1.10.3 running on a Kubernetes pod, LocalExecutor, parallelism=25. Every night our DAGs start their scheduled runs, which means lots of tasks running in parallel. Each task is either a KubernetesPodOperator starting the actual work on another pod or an ExternalTaskSensor that waits for another task to be completed (in the ETL DAG).
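For context, a minimal sketch of what one of these DAGs roughly looks like (Airflow 1.10.x imports; the DAG id, image and task names are made up for illustration):

```python
from datetime import datetime

from airflow import DAG
from airflow.contrib.operators.kubernetes_pod_operator import KubernetesPodOperator
from airflow.sensors.external_task_sensor import ExternalTaskSensor

with DAG(dag_id="etl_example",                 # made-up DAG id
         start_date=datetime(2019, 1, 1),
         schedule_interval="@daily") as dag:

    # Waits for a task in another DAG to finish before this DAG continues.
    wait_for_upstream = ExternalTaskSensor(
        task_id="wait_for_upstream",
        external_dag_id="upstream_dag",        # made-up upstream DAG id
        external_task_id="final_task",         # made-up upstream task id
    )

    # Launches the actual work on a separate Kubernetes pod.
    do_work = KubernetesPodOperator(
        task_id="do_work",
        name="do-work",
        namespace="default",
        image="my-etl-image:latest",           # made-up image
        cmds=["python", "run_etl.py"],
    )

    wait_for_upstream >> do_work
```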

Problem: Each task that starts creates 2 more local processes (besides the worker process) that take up 70MB each. But all those processes do is wait, either for another pod (KubernetesPodOperator) or for another task (ExternalTaskSensor) to finish. This memory overhead seems excessive. We picked this setup explicitly to put the resource load elsewhere (Kubernetes) and keep Airflow lightweight: just for scheduling other pods. Our future growth means we'd like to scale up to dozens or even hundreds of parallel tasks on the Airflow pod, but that is not very feasible with these memory requirements.

Question: What can we do about this? Are there settings to lessen the memory overhead per parallel task? Maybe run the Operator inside the worker process? Any advice is welcome, thanks!
(Maybe the answer is: that's just the way Airflow works, in that case: any alternatives for a more lightweight scheduling solution?)

What we've tried:
- Use the sensors' 'reschedule' mode instead of 'poke' so they don't take up resources while waiting (see the sketch after this list). This resulted in tasks getting stuck in up_for_reschedule.
- Play with the parallelism settings, but in the end we'll need a lot of processes, so this value needs to be very high.
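For reference, the reschedule-mode sensor looked roughly like this (a sketch with made-up DAG/task ids; poke_interval is just an example value). The parallelism-related knobs live in the [core] section of airflow.cfg (parallelism, dag_concurrency), but raising them only allows more processes, it does not shrink the per-process footprint.

```python
from airflow.sensors.external_task_sensor import ExternalTaskSensor

# In 'reschedule' mode (available since Airflow 1.10.2) the sensor frees its
# worker slot between checks instead of blocking it the whole time.
wait_for_etl = ExternalTaskSensor(
    task_id="wait_for_etl",
    external_dag_id="etl_dag",      # made-up upstream DAG id
    external_task_id="load_done",   # made-up upstream task id
    mode="reschedule",
    poke_interval=300,              # check every 5 minutes
)
```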

P.S. This is my first question on SO, so improvements / requests for additional information are welcome, thanks!

Update
I understand that the LocalExecutor does not work well in production for a setup like this. And if you have resource-heavy tasks, as Airflow operators mostly are, it makes sense to switch to a distributed setup. But I keep thinking our setup has its charm as a pure workflow setup: just 1 Airflow pod which only schedules other pods and waits for them to finish. With a JVM setup that would mean a lot of threads being mostly idle, waiting for IO. And the overhead of a JVM thread would be about 1 MB per thread, whereas with Airflow we have to deal with 140MB per task! I might try to create something like a LocalThreadedExecutor that does not start extra processes (a rough skeleton of that idea is sketched below)...
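A very rough skeleton of what such an executor could look like, assuming the Airflow 1.10.x BaseExecutor interface. This is untested, the name LocalThreadedExecutor is made up, and the execute_async body below still shells out to the airflow CLI, so a real implementation would have to invoke the task runner in-process to actually avoid the extra processes:

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

from airflow.executors.base_executor import BaseExecutor
from airflow.utils.state import State


class LocalThreadedExecutor(BaseExecutor):
    """Sketch: run task commands from a thread pool instead of forked workers."""

    def start(self):
        # parallelism=0 means "unlimited" in Airflow, so fall back to the default.
        self.pool = ThreadPoolExecutor(max_workers=self.parallelism or None)
        self.futures = {}

    def execute_async(self, key, command, queue=None, executor_config=None):
        # `command` is the `airflow run ...` invocation Airflow builds for the
        # task instance (a string in older 1.10.x releases, a list in newer ones).
        self.futures[key] = self.pool.submit(
            subprocess.check_call, command, shell=isinstance(command, str))

    def sync(self):
        # Called periodically by the scheduler loop to report task states.
        for key, future in list(self.futures.items()):
            if future.done():
                state = State.SUCCESS if future.exception() is None else State.FAILED
                self.change_state(key, state)
                del self.futures[key]

    def end(self):
        self.pool.shutdown(wait=True)
```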

asked Oct 15 '22 by Erik Mulder


1 Answer

That is the inherent problem with the LocalExecutor: it is based on forking processes. Even if the tasks are just triggers that start another pod, Airflow will still fork a process for each task, which of course carries a high overhead.

My suggestion would be to move to the Kubernetes executor: https://airflow.apache.org/docs/1.10.1/kubernetes.html. Then each task will automatically run as a pod. You no longer need to explicitly use the KubernetesPodOperator and can just use regular Airflow operators, as they will be executed as pods in Kubernetes anyway. If this is a feasible approach for you, I think it will lead to the best results in the long run.
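As an illustration of this approach (a minimal sketch; the DAG id and callable are made up): with executor = KubernetesExecutor set in the [core] section of airflow.cfg, every task instance already gets its own pod, so a plain operator such as PythonOperator can replace the explicit KubernetesPodOperator:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python_operator import PythonOperator


def run_etl(**context):
    # The actual work; under the KubernetesExecutor this function runs inside
    # a pod that Airflow creates for this specific task instance.
    print("running ETL step")


with DAG(dag_id="etl_example_k8s_executor",   # made-up DAG id
         start_date=datetime(2019, 1, 1),
         schedule_interval="@daily") as dag:

    do_work = PythonOperator(
        task_id="do_work",
        python_callable=run_etl,
        provide_context=True,   # needed in 1.10.x to receive **context
    )
```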

answered Oct 21 '22 by Blokje5