What's the most efficient way to serialize a scikit-learn classifier?
I'm currently using Python's standard Pickle module to serialize a text classifier, but this results in a monstrously large pickle. The serialized object can be 100MB or more, which seems excessive and takes a while to generate and store. I've done similar work with Weka, and the equivalent serialized classifier is usually just a couple of MBs.
Is scikit-learn possibly caching the training data, or other extraneous info, in the pickle? If so, how can I speed up and reduce the size of serialized scikit-learn classifiers?
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC

classifier = Pipeline([
    ('vectorizer', CountVectorizer(ngram_range=(1, 4))),
    ('tfidf', TfidfTransformer()),
    ('clf', OneVsRestClassifier(LinearSVC())),
])
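For context, here is a minimal sketch of the serialization step described above, assuming the pipeline has already been fitted on the text data; the file name is illustrative.

import pickle

# Serialize the fitted pipeline with the standard library pickle module.
# The resulting file can exceed 100 MB, largely because the CountVectorizer's
# vocabulary_ dict (every n-gram up to length 4 seen in training) is stored
# inside the pipeline object.
with open('classifier.pkl', 'wb') as f:
    pickle.dump(classifier, f)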
For large text datasets, use the hashing trick: replace the CountVectorizer (or a TfidfVectorizer) with a HashingVectorizer, potentially stacked with a TfidfTransformer in the pipeline. It will be much faster to pickle, since you no longer have to store the vocabulary dict, as discussed recently in this question: How can I reduce memory usage of Scikit-Learn Vectorizers?
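As a sketch, the pipeline from the question rewritten with the hashing trick might look like the following; n_features=2 ** 20 is just the default shown explicitly, not a tuned value.

from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import HashingVectorizer, TfidfTransformer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC

# HashingVectorizer is stateless: it hashes n-grams into a fixed number of
# columns, so there is no vocabulary_ dict to store in the pickle.
classifier = Pipeline([
    ('vectorizer', HashingVectorizer(ngram_range=(1, 4), n_features=2 ** 20)),
    ('tfidf', TfidfTransformer()),
    ('clf', OneVsRestClassifier(LinearSVC())),
])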
You can also use joblib.dump and pass in a compression level. I noticed my classifier pickle dumps shrinking by a factor of ~16 with compress=3.
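A short sketch of that approach, assuming a fitted pipeline named classifier; the file name and compression level are illustrative.

import joblib

# compress accepts an integer from 0 to 9 (or a (method, level) tuple);
# 3 is a reasonable trade-off between file size and dump/load speed.
joblib.dump(classifier, 'classifier.joblib', compress=3)

# Later, load the compressed dump back the same way:
classifier = joblib.load('classifier.joblib')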