 

Django: Machine learning model in server side?

I have a Word2Vec model (a machine learning model) and can load this pre-trained model from a file:

model = Word2Vec.load(fname)

So, I can get predictions using this model:

prediction = model.predict(X)

What I'm trying to do is to get a request (including a query word) from the user, run that query through my pre-trained model, and get a prediction so that the server can respond with this prediction data. This process should happen every time a user sends a query, so the pre-trained model should always stay in memory.

To implement this, I think I have to use something like Redis or Celery, but as far as I know, Celery works asynchronously with a Django web application, so it would not be suitable for what I want to do...

How can I implement this in my Django application?

Thanks.

user3595632 asked Jan 04 '23 06:01

1 Answer

You don't actually need Redis or Celery for this.

Before I post the solution using Django, I should mention that if you only need a web interface for your ML project, that is, you don't need Django's fancy ORM, admin, etc., you should go with Flask. It's perfect for your use case.


Solution using Flask:

It's very easy to store your trained model in memory using Flask:

# ...
# your Flask application code
# ...
# ...

if __name__ == '__main__':
    model = Word2Vec.load(fname)
    app.run()

If you're interested, the complete example is here.
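For reference, a fleshed-out sketch of that pattern, with a plain dict standing in for the gensim model and a hypothetical /predict route (both are illustrative stand-ins, not from the original answer):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Stand-in for Word2Vec.load(fname); a real app would load the gensim
# model here. Either way, loading happens once at startup, not per request.
model = {"king": ["queen", "prince"], "paris": ["london", "berlin"]}

@app.route("/predict")
def predict():
    # hypothetical query parameter; the dict lookup stands in for
    # something like model.most_similar(word)
    word = request.args.get("word", "")
    return jsonify(model.get(word, []))

if __name__ == "__main__":
    app.run()
```

Because the model lives at module level, every request handler sees the same in-memory object and no reload cost is paid per request.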


Solution using Django:

You can utilise Django's cache framework to store your model. First, activate the local memory cache backend. Instructions are here.
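Activating the local-memory backend is a settings change; a sketch of the relevant CACHES entry (the LOCATION value is an arbitrary name you choose):

```python
# settings.py -- enable the local-memory cache backend
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
        'LOCATION': 'unique-snowflake',
    }
}
```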

Now, you'll need to store your model in the cache.

from django.core.cache import cache

model_cache_key = 'model_cache' 
# this key is used to `set` and `get` 
# your trained model from the cache

model = cache.get(model_cache_key) # get model from cache

if model is None:
    # your model isn't in the cache
    # so `set` it
    model = Word2Vec.load(fname) # load model
    cache.set(model_cache_key, model, None) # save in the cache
    # in the above line, None is the timeout parameter; it means cache forever

# now predict
prediction = model.predict(...)

You can keep the above code in your views, but I'd rather you create a separate file for it and then import this file in your views.

You can find the full example on this blog.

xyres answered Jan 08 '23 12:01