 

Persisting data in sklearn

I'm using scikit-learn to cluster text documents. I'm using the classes CountVectorizer, TfidfTransformer and MiniBatchKMeans to help me do that. New text documents are added to the system all the time, which means that I need to use the classes above to transform the text and predict a cluster. My question is: how should I store the data on disk? Should I simply pickle the vectorizer, transformer and kmeans objects? Should I just save the data? If so, how do I add it back to the vectorizer, transformer and kmeans objects?
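For context, my pipeline currently looks roughly like this (variable names and parameters are just illustrative):

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.cluster import MiniBatchKMeans

documents = ["first text document", "second text document", "yet another document"]

vectorizer = CountVectorizer()
transformer = TfidfTransformer()
kmeans = MiniBatchKMeans(n_clusters=2)

# Fit everything on the current corpus
counts = vectorizer.fit_transform(documents)
tfidf = transformer.fit_transform(counts)
kmeans.fit(tfidf)

# Later, when a new document arrives, I need the *same* fitted objects:
new_tfidf = transformer.transform(vectorizer.transform(["a brand new document"]))
cluster_id = kmeans.predict(new_tfidf)[0]
```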

Any help would be greatly appreciated.

asked Jun 21 '12 by pnsilva


People also ask

What is model persistence?

Technical definition: Model persistence is the ability to save and load a machine learning model. It is desirable to have a way to persist the model for future use without having to retrain it. Pickle and joblib are the terms you will hear quite often during model persistence.
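For example, with the standard library's pickle (a minimal sketch; the data and file name are placeholders):

```python
import pickle
import numpy as np
from sklearn.cluster import MiniBatchKMeans

X = np.random.rand(100, 20)                      # stand-in for already-vectorized data
model = MiniBatchKMeans(n_clusters=5).fit(X)

# Save the fitted model to disk
with open("kmeans.pkl", "wb") as f:
    pickle.dump(model, f)

# Load it back later and reuse it without retraining
with open("kmeans.pkl", "rb") as f:
    restored = pickle.load(f)
print(restored.predict(X[:3]))
```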

Which is better, pickle or joblib?

TLDR: joblib is faster in saving/loading large NumPy arrays, whereas pickle is faster with large collections of Python objects. Therefore, if your model contains large NumPy arrays (as the majority of models does), joblib should be faster.
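A rough equivalent with joblib (file name arbitrary; the estimator is just an example):

```python
import joblib
import numpy as np
from sklearn.cluster import MiniBatchKMeans

model = MiniBatchKMeans(n_clusters=5).fit(np.random.rand(100, 20))

joblib.dump(model, "kmeans.joblib")   # efficient for the large NumPy arrays inside the estimator
restored = joblib.load("kmeans.joblib")
```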

What is Joblib Sklearn?

You can use the Dask backend to joblib to delegate the distributed training of your scikit-learn model (for example, a random forest) to a Dask cluster of virtual machines in the cloud. This can be especially useful if you're running heavy grid-search cross-validation or other forms of hyperparameter tuning.
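A minimal sketch of that pattern, assuming a Dask cluster is available (here a local one); the estimator, data and parameter grid are placeholders:

```python
import joblib
from dask.distributed import Client
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=500, n_features=20)
client = Client()                                   # connect to a (local) Dask cluster

search = GridSearchCV(RandomForestClassifier(),
                      param_grid={"n_estimators": [50, 100], "max_depth": [None, 10]})

# Delegate joblib's parallel work (the cross-validation fits) to the Dask cluster
with joblib.parallel_backend("dask"):
    search.fit(X, y)
```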


1 Answer

It depends on what you want to do.

If you want to find some fixed cluster centers on a training set and then reuse them later to compute cluster assignments for new data, then pickling the models (or just saving the vectorizer's vocabulary, the other models' constructor parameters and the cluster center positions) is fine.
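A minimal sketch of that first option, pickling the three fitted objects and reloading them to assign clusters to new documents (file name and parameters are just illustrative):

```python
import pickle
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.cluster import MiniBatchKMeans

train_docs = ["some text", "more text", "other text about something else"]

vectorizer = CountVectorizer()
transformer = TfidfTransformer()
kmeans = MiniBatchKMeans(n_clusters=2)
kmeans.fit(transformer.fit_transform(vectorizer.fit_transform(train_docs)))

# Persist the fitted objects together
with open("text_clustering.pkl", "wb") as f:
    pickle.dump((vectorizer, transformer, kmeans), f)

# Later: load them back and assign a cluster to a new document
with open("text_clustering.pkl", "rb") as f:
    vectorizer, transformer, kmeans = pickle.load(f)

new_doc_tfidf = transformer.transform(vectorizer.transform(["a brand new text"]))
print(kmeans.predict(new_doc_tfidf))
```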

If what you want is to cluster the new data as well, you might want to retrain the whole pipeline on the union of the new data and the old data. That makes it possible for the vectorizer's vocabulary to build new features (dimensions) for the new words, and lets the clustering algorithm find cluster centers that better match the structure of the complete dataset.
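A sketch of that second option, refitting everything on the combined corpus (all names illustrative):

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.cluster import MiniBatchKMeans

old_docs = ["existing document one", "existing document two"]
new_docs = ["a newly added document with new words"]
all_docs = old_docs + new_docs                      # union of old + new data

vectorizer = CountVectorizer()                      # fresh objects so the vocabulary can grow
transformer = TfidfTransformer()
kmeans = MiniBatchKMeans(n_clusters=2)

kmeans.fit(transformer.fit_transform(vectorizer.fit_transform(all_docs)))
```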

Note that in the future we will provide hashing vectorizers (see for instance this pull request on hashing transformers as a first building block), hence storing the vocabulary won't be necessary any more (but you will lose the ability to introspect the "meaning" of the feature dimensions).
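For reference, recent scikit-learn releases do ship a HashingVectorizer; since it is stateless there is no vocabulary to persist for the vectorizer itself. A minimal sketch:

```python
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.cluster import MiniBatchKMeans

docs = ["some text", "more text", "entirely different words"]

# Stateless: no vocabulary is learned, so only the clustering model needs to be saved
hasher = HashingVectorizer(n_features=2**18)
kmeans = MiniBatchKMeans(n_clusters=2).fit(hasher.transform(docs))
```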

As for pickling the models vs. using your own representation for their parameters, I have answered that part in your previous question here: Persist Tf-Idf data

answered Oct 20 '22 by ogrisel