I have a job like this:
//Run very intensive script that generates files
//Notify the user that the job is done
I know the script takes 4-5 minutes to run, since that is the time needed to generate all the files. However, after exactly 60 seconds the job is removed (i.e. I no longer see it in my jobs database table) and the user gets notified. Then, every 60 seconds until the script is done, the user is notified again that the job is done.
The job does not fail. The job is only present in the jobs table for the first 60 seconds, and the file-generating script runs only once.
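In code, the job is roughly this (the class name, script path, and notification class are placeholders, not my real names):

<?php

namespace App\Jobs;

use App\User;
use App\Notifications\FilesGenerated; // placeholder notification class
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;

class GenerateFiles implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    protected $user;

    public function __construct(User $user)
    {
        $this->user = $user;
    }

    public function handle()
    {
        // Run very intensive script that generates files (takes 4-5 minutes)
        shell_exec('/path/to/generate-files.sh'); // placeholder for the real script

        // Notify the user that the job is done
        $this->user->notify(new FilesGenerated());
    }
}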
I use Supervisor:
[program:queue]
process_name=%(program_name)s_%(process_num)02d
command=php artisan queue:work --timeout=600 --queue=high,low
user=forge
numprocs=8
directory=/home/forge/default
stdout_logfile=/home/forge/default/storage/logs/supervisor.log
redirect_stderr=true
Here's my database queue connection config:
'database' => [
    'driver' => 'database',
    'table' => 'jobs',
    'queue' => 'low',
    'expire' => 600,
],
The behaviour is the same if I use redis:
'redis' => [
    'driver' => 'redis',
    'connection' => 'default',
    'queue' => 'low',
    'expire' => 600,
],
Your configuration is slightly off. I'm not sure where expire came from, but I believe you meant retry_after. Since your configuration does not define a retry_after key, Laravel defaults the value to 60 seconds. So your queue is killing the job after it runs for 60 seconds and re-queuing it to try again.
Additionally, the following note is from the documentation:
The --timeout value should always be at least several seconds shorter than your retry_after configuration value. This will ensure that a worker processing a given job is always killed before the job is retried. If your --timeout option is longer than your retry_after configuration value, your jobs may be processed twice.
So, if your queue:work timeout is going to be 600, I'd suggest setting your retry_after to at least 610.
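In config/queue.php, that would look like this (apply the same rename to your redis connection as well):

'database' => [
    'driver' => 'database',
    'table' => 'jobs',
    'queue' => 'low',
    'retry_after' => 610, // must exceed the worker's --timeout of 600
],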