Training a sklearn LogisticRegression classifier without all possible labels

I am trying to use scikit-learn 0.12.1 to:

  1. train a LogisticRegression classifier
  2. evaluate the classifier on held-out validation data
  3. feed new data to this classifier and retrieve the 5 most probable labels for each observation

Sklearn makes all of this very easy except for one peculiarity. There is no guarantee that every possible label will occur in the data used to fit my classifier. There are hundreds of possible labels and some of them have not occurred in the training data available.

This results in 2 problems:

  1. The label vectorizer doesn't recognize previously unseen labels when they occur in the validation data. This is easily fixed by fitting the labeler to the full set of possible labels, but it exacerbates problem 2.
  2. The output of the predict_proba method of the LogisticRegression classifier is an [n_samples, n_classes] array, where n_classes consists only of the classes seen in the training data. This means running argsort on the predict_proba array no longer provides values that directly map to the label vectorizer's vocabulary.

My question is: what's the best way to force the classifier to recognize the full set of possible classes, even when some of them don't occur in the training data? Obviously it will have trouble learning about labels it has never seen data for, but probabilities of 0 are perfectly usable in my situation.
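To make the mismatch concrete, here is a minimal, self-contained sketch (the labels, features, and use of LabelEncoder are invented purely for illustration):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import LabelEncoder

# Illustration only: the full label set has 4 classes,
# but the training data contains just 2 of them.
all_labels = ["cat", "dog", "fish", "horse"]
le = LabelEncoder().fit(all_labels)              # knows all 4 labels

X_train = np.array([[0.0], [1.0], [0.1], [0.9]])
y_train = le.transform(["cat", "dog", "cat", "dog"])

clf = LogisticRegression().fit(X_train, y_train)

prob = clf.predict_proba(np.array([[0.5]]))
print(prob.shape)        # (1, 2): only the 2 classes seen during training
print(len(le.classes_))  # 4: argsort indices into prob no longer map to le.classes_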

Asked by Alexander Measure

2 Answers

Here's a workaround. Make sure you have a list of all classes called all_classes. Then, if clf is your LogisticRegression classifier,

from itertools import repeat

# determine the classes that were not present in the training set;
# the ones that were are listed in clf.classes_.
classes_not_trained = set(clf.classes_).symmetric_difference(all_classes)

# the order of classes in predict_proba's output matches that in clf.classes_.
prob = clf.predict_proba(test_samples)
for row in prob:
    prob_per_class = (list(zip(clf.classes_, row))
                      + list(zip(classes_not_trained, repeat(0.))))

This produces, for each row, a list of (class, probability) pairs covering every class in all_classes, with a probability of 0 for the classes that were absent from the training data.
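To get back to the original goal of the 5 most probable labels per observation, one possible follow-up is to sort each row's pairs by probability. This is only a sketch building on the variables above (all_classes, clf, test_samples):

# Keep the 5 most probable labels for each row.
top5_per_row = []
for row in prob:
    prob_per_class = (list(zip(clf.classes_, row))
                      + list(zip(classes_not_trained, repeat(0.))))
    prob_per_class.sort(key=lambda pair: pair[1], reverse=True)
    top5_per_row.append([cls for cls, p in prob_per_class[:5]])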

Answered by Fred Foo

If what you want is an array like that returned by predict_proba, but with columns corresponding to sorted all_classes, how about:

all_classes = numpy.array(sorted(all_classes))
# Get the probabilities for learnt classes
prob = clf.predict_proba(test_samples)
# Create the result matrix, where all values are initially zero
new_prob = numpy.zeros((prob.shape[0], all_classes.size))
# Set the columns corresponding to clf.classes_
new_prob[:, all_classes.searchsorted(clf.classes_)] = prob
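As a sketch of how this connects back to the original goal (assuming new_prob and all_classes from the snippet above), the 5 most probable labels per sample can then be read off with argsort:

# Top 5 labels per sample from the padded probability matrix.
top5_idx = numpy.argsort(new_prob, axis=1)[:, ::-1][:, :5]
top5_labels = all_classes[top5_idx]   # shape (n_samples, 5)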
Answered by joeln