So, basically I want to build a long-polling application using RQ on Heroku. I have looked at this question, Flask: passing around background worker job (rq, redis), but it doesn't help.
This is basically what I'm doing.
@app.route('/do_something', methods=['POST'])
def get_keywords():
    data_json = json.loads(request.data)
    text = urllib.unquote(data_json["sentence"])
    job = q.enqueue(keyword_extraction.extract, text)
    return job.key
@app.route('/do_something/<job_id>', methods=['GET'])
def get_keywords_results(job_id):
    job = Job().fetch(job_id)
    if not job.is_finished:
        return "Not yet", 202
    else:
        return str(job.result)
Nothing fancy: when the POST request comes in, the app queues the job and returns the job id to the user immediately, and the user then uses that key to keep polling for the result. However, I can't get this to work, because the line Job().fetch(job_id) raises:
NoRedisConnectionException: Could not resolve a Redis connection.
Any help would be really appreciated.
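For reference, the client side of this long-polling scheme is just a retry loop. Here is a minimal sketch; the fetch callable, interval, and max_tries names are hypothetical, not part of the question's code:

```python
import time

def poll_until_done(fetch, interval=1.0, max_tries=30):
    # fetch() returns (body, status); keep polling while the server says 202
    for _ in range(max_tries):
        body, status = fetch()
        if status != 202:
            return body
        time.sleep(interval)
    raise TimeoutError("job did not finish in time")
```

In practice fetch would wrap a GET to /do_something/<job_id> and return the response body and status code.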
I found this out already, in case anybody is interested. It has to be this instead:
Job.fetch(job_id, connection=conn)
In RQ version 0.13.0, I found that after running:
j = q.enqueue(job_func)
j.key will be the job id preceded by rq:job:. Therefore, elsewhere in the framework, when fetching the job, I need to use:
j = q.fetch_job(key[7:])
where j.result will be None or the return value of job_func. Not sure if there's a better way to handle this...
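Rather than hard-coding key[7:], one option is to strip the rq:job: prefix explicitly; another is to skip the key entirely and hand out job.get_id(), which is already the bare id. A small sketch of the stripping helper (job_id_from_key is a hypothetical name, not an RQ function):

```python
RQ_JOB_PREFIX = "rq:job:"

def job_id_from_key(key):
    """Recover the bare job id from an RQ job key like 'rq:job:<id>'."""
    if key.startswith(RQ_JOB_PREFIX):
        return key[len(RQ_JOB_PREFIX):]
    return key  # already a bare id
```

With this, the POST handler could return job.get_id() instead of job.key, and the GET handler could pass that value to q.fetch_job() directly, with no slicing at all.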