I have a worker on AWS that handles queued Laravel notifications. Some of the notifications get sent out, but others get stuck in the queue and I don't know why.
I've looked at the logs in Elastic Beanstalk and see three different types of errors:
2020/11/03 09:22:34 [emerg] 10932#0: *30 malloc(4096) failed (12: Cannot allocate memory) while reading upstream, client: 127.0.0.1, server: , request: "POST /worker/queue HTTP/1.1", upstream: "fastcgi://unix:/run/php-fpm/www.sock:", host: "localhost"
I see an Out of Memory issue on Bugsnag too, but without any stacktrace.
Another error is this one:
2020/11/02 14:50:07 [error] 10241#0: *2623 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 127.0.0.1, server: , request: "POST /worker/queue HTTP/1.1", upstream: "fastcgi://unix:/run/php-fpm/www.sock", host: "localhost"
And this is the last one:
2020/11/02 15:00:24 [error] 10241#0: *2698 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 127.0.0.1, server: , request: "POST /worker/queue HTTP/1.1", upstream: "fastcgi://unix:/run/php-fpm/www.sock:", host: "localhost"
I don't really understand what I can do to resolve these errors. It's just a basic Laravel / Elastic Beanstalk / SQS setup, and the only thing the queue has to do is handle notifications, sometimes a couple dozen at a time. I'm running a t2.micro, and would assume that's enough to send a few e-mails? I've upped the environment to a t2.large, but to no avail.
I notice that messages end up in the queue and get the status 'Messages in flight', but then run into all sorts of trouble on the Laravel side. And I don't get any useful errors to work with.
The implementation code itself seems fine, because the first few notifications go out as expected, and if I don't queue at all, every notification is dispatched right away.
The queued notifications eventually generate two different exceptions: MaxAttemptsExceededException and an Out of Memory FatalError, but neither leads me to the actual underlying problem.
Where do I look further to debug?
UPDATE
See my answer for the problem and the solution: the database transaction hadn't been committed yet when the worker tried to send a notification for an object that was still being created.
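For anyone hitting the same thing, here is a minimal sketch of the fix (the Invoice model and InvoicePaid notification are hypothetical names, not from my actual code): queue the notification only after the transaction has committed, instead of from inside it.

use App\Models\Invoice;
use App\Notifications\InvoicePaid;
use Illuminate\Support\Facades\DB;

// Create the record inside the transaction...
$invoice = DB::transaction(function () use ($data) {
    return Invoice::create($data);
});

// ...and only notify once the transaction has committed, so the
// queue worker can actually find the row it needs.
$invoice->user->notify(new InvoicePaid($invoice));

On Laravel 8+ you can achieve the same by setting 'after_commit' => true on the queue connection in config/queue.php, or by calling ->afterCommit() on the queued notification.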
What is the current memory_limit assigned to PHP? You can determine this by running this command:
php -i | grep memory_limit
You can increase this by running something like:
sed -i -e 's/memory_limit = [current-limit]/memory_limit = [new-limit]/g' [full-path-to-php-ini]
Just replace [current-limit] with the value displayed by the first command, and [new-limit] with a new, reasonable value; this might require some trial and error. Replace [full-path-to-php-ini] with the full path to the php.ini used by the process that's failing. To find this, run:
php -i | grep php.ini
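For example, assuming the current limit is 128M and the ini file lives at /etc/php.ini (both values are illustrative — use whatever the two commands above report), the call would look like:

sed -i -e 's/memory_limit = 128M/memory_limit = 512M/g' /etc/php.ini

Remember to restart PHP-FPM afterwards (e.g. sudo systemctl restart php-fpm) so the new limit actually takes effect.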
First make sure that you increased max_execution_time and also memory_limit. Also make sure that you set the --timeout option on the queue worker, as in the example below.
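For example, a worker invocation might look like this (the values are illustrative, not prescriptive):

php artisan queue:work sqs --timeout=120 --tries=3

Keep the --timeout value at least several seconds shorter than the queue's visibility timeout, otherwise SQS can hand the same job to a second worker while the first attempt is still running — which is exactly the kind of thing that produces MaxAttemptsExceededException.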
Then make sure you follow the instructions for Amazon SQS from the Laravel docs:
The only queue connection which does not contain a retry_after value is Amazon SQS. SQS will retry the job based on the Default Visibility Timeout which is managed within the AWS console.
Job Expirations & Timeouts
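For completeness, the Default Visibility Timeout doesn't have to be changed through the console; it can also be set via the AWS CLI (the queue URL and the value below are placeholders):

aws sqs set-queue-attributes \
    --queue-url https://sqs.eu-west-1.amazonaws.com/123456789012/notifications \
    --attributes VisibilityTimeout=180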