
Calculate Confusion Matrix of a FastText Classifier model

I'm calculating the confusion matrix for a Facebook FastText classifier model in this way:

#!/usr/local/bin/python3

import argparse
import numpy as np
from sklearn.metrics import confusion_matrix


def parse_labels(path):
    with open(path, 'r') as f:
        # strip the 9-character "__label__" prefix and cast the remainder to int
        return np.array(list(map(lambda x: int(x[9:]), f.read().split())))


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description='Display confusion matrix.')
    parser.add_argument('test', help='Path to test labels')
    parser.add_argument('predict', help='Path to predictions')
    args = parser.parse_args()
    test_labels = parse_labels(args.test)
    pred_labels = parse_labels(args.predict)

    print(test_labels)
    print(pred_labels)

    eq = test_labels == pred_labels
    print("Accuracy: " + str(eq.sum() / len(test_labels)))
    print(confusion_matrix(test_labels, pred_labels))

My predictions and test set look like this:

$ head -n10 /root/pexp 
__label__spam
__label__verified
__label__verified
__label__spam
__label__verified
__label__verified
__label__verified
__label__verified
__label__verified
__label__verified

$ head -n10 /root/dataset_test.csv 
__label__spam
__label__verified
__label__verified
__label__spam
__label__verified
__label__verified
__label__verified
__label__verified
__label__verified
__label__verified

The model's predictions have been calculated over the test set in this way:

./fasttext predict /root/my_model.bin /root/dataset_test.csv > /root/pexp

I then run the script to calculate the FastText confusion matrix:

$ ./confusion.py /root/dataset_test.csv /root/pexp

but I'm stuck with this error:

Traceback (most recent call last):
  File "./confusion.py", line 18, in <module>
    test_labels = parse_labels(args.test)
  File "./confusion.py", line 10, in parse_labels
    return np.array(list(map(lambda x: int(x[9:]), f.read().split())))
  File "./confusion.py", line 10, in <lambda>
    return np.array(list(map(lambda x: int(x[9:]), f.read().split())))
ValueError: invalid literal for int() with base 10: 'spam'

I have fixed the script as suggested to handle non-numeric labels:

def parse_labels(path):
    with open(path, 'r') as f:
        # keep the label as a string instead of casting it to int
        return np.array(list(map(lambda x: x[9:], f.read().split())))
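As a quick self-contained check of the fixed parser (the sample file name is mine, just for illustration):

```python
import numpy as np

def parse_labels(path):
    # x[9:] strips the 9-character "__label__" prefix, keeping the label string
    with open(path, 'r') as f:
        return np.array([x[9:] for x in f.read().split()])

# write a tiny sample file and parse it back
with open('sample_labels.txt', 'w') as f:
    f.write('__label__spam\n__label__verified\n')

print(parse_labels('sample_labels.txt'))  # ['spam' 'verified']
```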

Also, in the case of FastText it's possible that the test set will have normalized labels (without the __label__ prefix) at some point, so to add the prefix back you can do something like:

awk 'BEGIN{FS=OFS="\t"}{ $1 = "__label__" tolower($1) }1' /root/dataset_test.csv  > /root/dataset_test_norm.csv 

See here about this.
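For reference, the same normalization can be sketched in plain Python (the helper name is mine, just for illustration):

```python
def add_label_prefix(line):
    # re-add the "__label__" prefix to the first tab-separated field,
    # lowercasing it, as the awk one-liner above does
    fields = line.rstrip('\n').split('\t')
    fields[0] = '__label__' + fields[0].lower()
    return '\t'.join(fields)

print(add_label_prefix('SPAM\tsome message text'))  # __label__spam<TAB>some message text
```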

Also, the input test file must be stripped of all columns other than the label column:

cut -f 1 -d$'\t' /root/dataset_test_norm.csv > /root/dataset_test_norm_label.csv
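The cut step, sketched in Python for completeness (again, the helper name is mine):

```python
def label_column(line):
    # keep only the first tab-separated column (the label),
    # like `cut -f 1 -d$'\t'` does
    return line.rstrip('\n').split('\t')[0]

print(label_column('__label__spam\tsome message text'))  # __label__spam
```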

So finally we get the Confusion Matrix:

$ ./confusion.py /root/dataset_test_norm_label.csv /root/pexp
Accuracy: 0.998852852227
[[9432    21]
 [    3 14543]]

My final solution is here.

[UPDATE]

The script is now working fine. I have added the confusion matrix calculation script directly to my FastText Node.js implementation, FastText.js, here.

asked Oct 30 '25 21:10 by loretoparisi

1 Answer

from sklearn.metrics import confusion_matrix

# df is assumed to be a pandas DataFrame with a "text" column and a
# "labeled" column holding the true labels; model is a loaded FastText model.
# model.predict returns (labels, probabilities), so [0][0] takes the top label.
df["predicted"] = df["text"].apply(lambda x: model.predict(x)[0][0])

# Create the confusion matrix
confusion_matrix(df["labeled"], df["predicted"])


## Output:
# array([[5823,    8,  155,    1],
#        [ 199,   51,   22,    0],
#        [ 561,    2,  764,    0],
#        [  48,    0,    4,    4]], dtype=int64)
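Note that sklearn's confusion_matrix accepts string labels directly; a minimal sketch with made-up labels (rows are true classes, columns are predicted classes, in sorted label order):

```python
from sklearn.metrics import confusion_matrix

# hypothetical true and predicted labels, for illustration only
true_labels = ['__label__spam', '__label__verified', '__label__spam', '__label__verified']
pred_labels = ['__label__spam', '__label__verified', '__label__verified', '__label__verified']

cm = confusion_matrix(true_labels, pred_labels)
print(cm)
# [[1 1]
#  [0 2]]
```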
answered Nov 02 '25 06:11 by Ramkrishan Sahu


