I'm unable to run FastText quantization as documented, specifically the command shown at the bottom of the cheat sheet page:
https://fasttext.cc/docs/en/cheatsheet.html
When I attempt to run quantization on my trained model "model.bin":
./fasttext quantize -output model
the following error is printed to the shell:
Empty input or output path.
I've reproduced this problem with builds from the latest code (September 14, 2018) and from older code (June 21, 2018). Since the documented command syntax isn't working, I tried adding an input argument:
./fasttext quantize -input [file] -output model
where [file] is either my training data or my trained model. Unfortunately, both attempts resulted in a segmentation fault, with no error message from FastText.
What is the correct command syntax to quantize a FastText model? Also, is it possible to both train and quantize a model in a single run of FastText?
Solution in Python: the official fasttext bindings expose quantization as a method on a trained model object, so you can train, quantize, and save in a single script:
import fasttext

train_data = "train.txt"  # path to your labeled training data (placeholder)
model = fasttext.train_supervised(input=train_data)
# Quantize with retraining; cutoff limits the number of retained words/ngrams
model.quantize(input=train_data, qnorm=True, retrain=True, cutoff=200000)
# Save the quantized model
model.save_model("model_quantized.bin")
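For completeness, here is a minimal sketch of loading the saved quantized model back and running a prediction (file name as above; the example text is a placeholder):

import os
import fasttext

# Load the quantized model and run a sanity-check prediction
q_model = fasttext.load_model("model_quantized.bin")
print(q_model.predict("some example text"))  # returns (labels, probabilities)
# The quantized file should be substantially smaller than the original model.bin
print(os.path.getsize("model_quantized.bin"), "bytes")

As for the command-line syntax: the supervised-learning tutorial on fasttext.cc quantizes with both paths supplied, along the lines of ./fasttext quantize -output model -input train.txt -qnorm -retrain -cutoff 100000 (with model.bin already present for the given -output prefix), which mirrors the Python call above and suggests that -input must point at the original training data rather than at the model file.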