I'm using a separate execution context to perform blocking actions in the background (calling a blocking API on an external Flume server).
flume {
  context = {
    fork-join-executor {
      parallelism-min = 300
      parallelism-max = 300
    }
  }
}
My problem is that the Flume server can sometimes crash; when it does, the number of waiting tasks in the Akka queue grows and causes memory issues. Is there a way to limit the queue size for this execution context?
Maybe something like this?
mailbox-capacity = 1000
Thanks
A solution is to replace the fork-join-executor with a thread-pool-executor:
flume {
  context = {
    thread-pool-executor {
      core-pool-size-min = 300
      core-pool-size-max = 300
      max-pool-size-min = 300
      max-pool-size-max = 300
      task-queue-size = 1000
      task-queue-type = "linked"
    }
  }
}
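Note that once the bounded task queue is full, the underlying java.util.concurrent.ThreadPoolExecutor that backs Akka's thread-pool-executor rejects further submissions instead of buffering them, which with the default AbortPolicy surfaces as a RejectedExecutionException. A minimal sketch of that behaviour using the plain JDK executor (a deliberately tiny pool and queue, chosen for illustration, not Akka itself):

```java
import java.util.concurrent.*;

public class BoundedQueueDemo {
    public static void main(String[] args) throws Exception {
        // A bounded LinkedBlockingQueue plays the role of task-queue-size with
        // task-queue-type = "linked": at most 2 tasks may wait for a thread.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1,                          // core and max pool size: one worker
                60, TimeUnit.SECONDS,
                new LinkedBlockingQueue<>(2)); // bounded queue of capacity 2

        CountDownLatch block = new CountDownLatch(1);
        pool.execute(() -> { try { block.await(); } catch (InterruptedException e) {} });
        pool.execute(() -> {}); // queued (1/2)
        pool.execute(() -> {}); // queued (2/2)

        boolean rejected = false;
        try {
            pool.execute(() -> {}); // queue full: default AbortPolicy rejects
        } catch (RejectedExecutionException e) {
            rejected = true;
        }
        System.out.println("rejected=" + rejected);

        block.countDown();
        pool.shutdown();
    }
}
```

So a bounded queue trades the unbounded memory growth for explicit task rejection, which the caller has to handle (e.g. by retrying or dropping the Flume event).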