I have a Django app on Heroku that calls a task using Celery's delay method, which should pass off extra processing to a worker. But when I make an http request to the corresponding view, the Heroku web dyno hangs and eventually causes the request to time out. Here's a test task (the app is called waittimes):
@task
def test_tasks(message, name='waittimes.tasks.test_tasks'):
    print message
And the test view:
class TaskTest(View):
    def get(self, request):
        print "about to call the task"
        test_tasks.delay("the task was successful!")
        return HttpResponse("view was successful")
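For context, delay is supposed to hand the message to the broker and return an AsyncResult almost immediately; the worker process does the actual printing. Here's a minimal in-process sketch of that fire-and-forget behavior, using a background thread as a stand-in for the worker (fake_delay is a hypothetical name, not part of Celery):

```python
import threading
import time

def fake_delay(func, *args):
    # Toy stand-in for Celery's .delay(): dispatch the work to a
    # background "worker" and hand control straight back to the caller.
    worker = threading.Thread(target=func, args=args)
    worker.start()
    return worker  # Celery would return an AsyncResult here

start = time.time()
worker = fake_delay(time.sleep, 0.2)   # the "task" takes 0.2 s
elapsed = time.time() - start

print(elapsed < 0.1)  # True: the view regains control immediately
worker.join()
```

That is the behavior the view above expects from delay: the HTTP response shouldn't wait on the task at all.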
If I make an HTTP request to this view, I expect "the task was successful!" to be printed to the worker's console and a response saying "view was successful". That is exactly what happens when I make a request to the development server on my own machine. It also works if I fire up a Django shell in my app's Heroku environment and use Django's test client to make the request. Here's the worker log from that case:
app[celeryd.1]: [2013-06-26 23:57:48,018: INFO/MainProcess] Got task from broker: waittimes.tasks.test_tasks[67036069-b49e-45ba-aef4-3c64d7161a67]
app[celeryd.1]: [2013-06-26 23:57:48,133: WARNING/PoolWorker-3] the task was successful!
app[celeryd.1]: [2013-06-26 23:57:48,200: INFO/MainProcess] Task waittimes.tasks.test_tasks[67036069-b49e-45ba-aef4-3c64d7161a67] succeeded in 0.09690284729s: None
But when I make a request directly to the Heroku url, the request hangs and I eventually get a dreaded H12 timeout error from Heroku.
heroku[router]: at=error code=H12 desc="Request timeout" method=GET path=/task/test/ dyno=web.1 connect=2ms service=30000ms status=503 bytes=0
I know calling the task is causing the problem, because "about to call the task" does get printed to the console. So the issue is that the process never completes the delay (or apply_async) call: it just hangs and never returns an async result object. And this only happens when the code runs in a web dyno process.
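If the publish to the broker is what's blocking, one cheap way to confirm it (an assumption on my part, not something from the question) is to cap socket timeouts in the view before calling delay, so the hang turns into a quick, loggable exception instead of a 30-second H12:

```python
import socket

# Cap blocking operations on every socket created after this point.
# If test_tasks.delay() then raises socket.timeout (or a broker error
# wrapping one), the web dyno can't reach the broker -- e.g. a wrong
# or unset broker URL in the web process's environment.
socket.setdefaulttimeout(5)  # seconds; well under Heroku's 30 s cutoff

print(socket.getdefaulttimeout())  # 5.0
```

This doesn't fix anything by itself, but it distinguishes "the broker is unreachable from the web dyno" from other hangs.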
So far these are my conclusions:
1) The task is properly registered and my Redis broker is working, because everything works when I call the view via the test client from the shell (though that runs in a separate one-off process on Heroku, not the web dyno that normally receives requests)
2) The system correctly routes and dispatches the handler for the request since "about to call the task" gets printed. It doesn't appear to be a problem with the Heroku router.
3) The problem is not related to a specific view because even a stripped down test case like this doesn't work
Aside from a straight solution, any advice on how to debug further is also appreciated.
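One concrete debugging step along those lines: configure Celery so that broker-connection problems fail fast instead of hanging silently. These are Celery's standard connection settings from the 3.x era this question uses; the values here are illustrative guesses, not known-good ones:

```python
# settings.py -- make a dead or unreachable broker fail fast and loudly
BROKER_CONNECTION_TIMEOUT = 5       # seconds to wait when connecting (default is 4)
BROKER_CONNECTION_RETRY = True      # retry establishing the connection
BROKER_CONNECTION_MAX_RETRIES = 3   # then give up, instead of the default 100
```

With these in place, a web dyno that can't reach the broker should raise an error within seconds rather than riding out the full H12 window.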
Alright, this may not be a direct answer, but given the age of this question and how long it has gone unanswered, I'll go ahead and share my insight for anyone else unfortunate enough to run across this issue as I did.
This particular problem seems poorly documented and is rather hard to search for, and I just ran into it while setting up a side project on Heroku on a lark.
There seems to be something endemic to Heroku where certain Python function calls behave differently on the platform as opposed to locally (or on any normal Python deployment).
In my case, the issue was that my Celery task was making a call to Python's time.sleep() function.
As a test case, I was using time.sleep(1) simply to demonstrate in the logs that the task was indeed being performed asynchronously. I had run this test successfully on normal infrastructure (including virtual machines) many times.
When I ported this test to Heroku, I experienced the exact same problem gentro did. The logs clearly indicated that Celery and my broker had initialized normally and were aware of my application; however, when I made the call via a Django view, my web dyno would mysteriously time out, with the H12 as the only log message.
When I commented out the sleep call, everything worked completely normally.
TL;DR - Check the call stack leading to your Celery task, and make sure you aren't leaving in any calls that might cause the Heroku dyno to seize up, such as sleep().
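A concrete version of that advice (my own sketch with hypothetical names, not the answerer's code): make sure no blocking call sits at module level in tasks.py, because Django imports that module in the web process too. Only code inside the task body is deferred to the worker:

```python
import time

# tasks.py is imported by the web process as well as by the worker, so
# any blocking call at module level stalls the web dyno at import time:
#
#     time.sleep(30)   # BAD here: runs in every process importing this file

def slow_report(message):
    # Blocking work belongs inside the task body, where only the worker
    # process (the one actually executing the task) pays for it.
    time.sleep(0.01)  # simulated slow work
    return "processed: %s" % message

print(slow_report("ping"))  # processed: ping
```

The same reasoning applies to anything else in the import path of your views: a blocking call there runs in the web dyno, no matter how asynchronous the task itself is.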
I'm not saying that this specifically is what caused the original asker's problem, but if you're seeing this behavior, this is absolutely one of the potential causes.