
How to efficiently serialize a scikit-learn classifier

What's the most efficient way to serialize a scikit-learn classifier?

I'm currently using Python's standard Pickle module to serialize a text classifier, but this results in a monstrously large pickle. The serialized object can be 100MB or more, which seems excessive and takes a while to generate and store. I've done similar work with Weka, and the equivalent serialized classifier is usually just a couple of MBs.

Is scikit-learn possibly caching the training data, or other extraneous info, in the pickle? If so, how can I speed up and reduce the size of serialized scikit-learn classifiers?

from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC

classifier = Pipeline([
    ('vectorizer', CountVectorizer(ngram_range=(1, 4))),
    ('tfidf', TfidfTransformer()),
    ('clf', OneVsRestClassifier(LinearSVC())),
])
asked Jul 11 '13 by Cerin

2 Answers

For large text datasets, use the hashing trick: replace the vocabulary-based vectorizer with a HashingVectorizer (potentially stacked with a TfidfTransformer in the pipeline). It will be much faster to pickle because you no longer have to store the vocabulary dict, as discussed recently in this question:

How can i reduce memory usage of Scikit-Learn Vectorizers?
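A minimal sketch of the asker's pipeline with the vectorizer swapped for a HashingVectorizer; the n_features value here is just an illustrative choice, not a recommendation:

    from sklearn.pipeline import Pipeline
    from sklearn.feature_extraction.text import HashingVectorizer, TfidfTransformer
    from sklearn.multiclass import OneVsRestClassifier
    from sklearn.svm import LinearSVC

    # HashingVectorizer is stateless: there is no vocabulary_ dict to pickle,
    # so the serialized pipeline stays small regardless of corpus size.
    classifier = Pipeline([
        ('vectorizer', HashingVectorizer(ngram_range=(1, 4), n_features=2 ** 20)),
        ('tfidf', TfidfTransformer()),
        ('clf', OneVsRestClassifier(LinearSVC())),
    ])

The trade-off is that hashing is one-way, so you lose the ability to map feature indices back to the original n-grams.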

answered by ogrisel

You can also use joblib.dump and pass in a compression level. I noticed my classifier pickle dumps shrinking by a factor of ~16 with compress=3.
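A short sketch of that approach, assuming `classifier` has already been fitted; the filename is arbitrary:

    import joblib

    # compress=3 trades some dump/load time for a much smaller file on disk.
    joblib.dump(classifier, 'classifier.joblib', compress=3)

    # Later, restore the fitted pipeline:
    classifier = joblib.load('classifier.joblib')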

answered by Shayan Masood