I have a job that works flawlessly locally, but in production it fails silently. I've wrapped the entire handle()
in a try/catch
and am not seeing anything logged to Bugsnag, despite many other exceptions elsewhere in the deployed app showing up there.
public function handle() {
    try {
        // do stuff
    } catch (\Exception $e) {
        Bugsnag::notifyException($e);
        throw $e;
    }
}
According to Laravel Horizon, this queue job runs for 0.0026001930236816406
seconds. I never see it actually do its work, and I never see any other errors for this job in the failed_jobs
table.
config/queue.php
'redis' => [
    'driver' => 'redis',
    'connection' => 'default',
    'queue' => 'default',
    'retry_after' => (60 * 10), // 10 minutes
    'block_for' => null,
],
config/horizon.php
'environments' => [
    'production' => [
        'supervisor' => [
            'connection' => 'redis',
            'queue' => [
                'default',
            ],
            'balance' => 'auto',
            'processes' => 10,
            'tries' => 3,
            // 10 seconds under the queue's retry_after to avoid overlap
            'timeout' => (60 * 10) - 10, // Just under 10 mins
        ],
    ],
],
If something is causing this job to retry over and over, how can I find out what it is? I'm at a loss.
Investigation thus far
SELECT DISTINCT exception, COUNT(id) as errors
FROM failed_jobs
WHERE payload LIKE '%[TAG-JOB-HAS]%'
GROUP BY exception;
I ran this hoping to see something more detailed than this one error message:
Job has been attempted too many times or run too long
but that's all I see.
Try catching the exception in the failed() method provided by Laravel:
/**
* The job failed to process.
*
* @param Exception $exception
* @return void
*/
public function failed(Exception $exception)
{
    // Send user notification of failure, etc...
}
Also check whether your default queue driver in local is sync; if it is, the difference between local and production is expected behavior, since sync runs the job inline rather than through a worker.
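If Bugsnag is the goal, a minimal sketch of reporting from failed() as well (assuming the bugsnag-laravel facade the question already uses); the worker calls failed() after the final attempt, including for failures raised outside handle()'s try/catch, such as the max-attempts error here:

use Bugsnag\BugsnagLaravel\Facades\Bugsnag;

/**
 * The job failed to process.
 *
 * @param  \Throwable  $exception
 * @return void
 */
public function failed(\Throwable $exception)
{
    // Runs when the worker marks the job as failed, so the report
    // reaches Bugsnag even when handle() never saw the exception.
    Bugsnag::notifyException($exception);
}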
I had the same problem.
I fixed it by increasing the 'retry_after' parameter in the config/queue.php file.
Make sure the retry_after value is greater than the time it takes a job to run:
'connections' => [

    'sync' => [
        'driver' => 'sync',
    ],

    'database' => [
        'driver' => 'database',
        'table' => 'jobs',
        'queue' => 'default',
        'retry_after' => 9000,
    ],
],
According to the documentation, you can handle job failures in two common ways: globally with Queue::failing(), or per job with a failed() method.
In the first case, you can handle all jobs using the Queue::failing()
method. You'll receive an Illuminate\Queue\Events\JobFailed
event as a parameter, and it contains the exception.
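A minimal sketch of that global listener, registered in a service provider's boot() method (AppServiceProvider is one reasonable place for it):

use Illuminate\Queue\Events\JobFailed;
use Illuminate\Support\Facades\Queue;

public function boot()
{
    Queue::failing(function (JobFailed $event) {
        // $event->connectionName, $event->job and $event->exception
        // are all available here for logging or alerting.
    });
}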
In the second case, you can use the failed()
method; it should be placed alongside your handle()
method on the job class. It receives the exception (Exception $exception)
as a parameter too.
Example:
public function failed(\Throwable $exception)
{
    // Log failure
}
Hope this helps.
If you've seen this MaxAttemptsExceededException
in your error logs or failed_jobs
table and you don't have a clue what happened to the job, let me try to explain what may have happened. It's either:
The job timed out and it can't be attempted again.
The job was released back to the queue and it can't be attempted again.
If your job's processing time exceeded the timeout configuration, the worker will check the maximum attempts allowed and the expiration date for the job and decide if it can be attempted again. If that's not possible, the worker will just mark the job as failed and throw that MaxAttemptsExceededException.
Also, if the job was released back to the queue and a worker picks it up, it'll first check whether the maximum attempts allowed were exceeded or the job has expired, and throw MaxAttemptsExceededException in that case.
https://divinglaravel.com/job-has-been-attempted-too-many-times-or-run-too-long
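To make those limits explicit on the job itself instead of relying on the worker or Horizon defaults, the usual knobs are the $tries and $timeout properties on the job class. A minimal sketch with placeholder values and a made-up class name:

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Queue\InteractsWithQueue;

class ExampleJob implements ShouldQueue
{
    use InteractsWithQueue, Queueable;

    // Attempts allowed before the job is marked as failed.
    public $tries = 3;

    // Seconds one attempt may run before the worker kills it; keep it
    // below the connection's retry_after so attempts never overlap.
    public $timeout = 120;

    public function handle()
    {
        // do stuff
    }
}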