I'm trying to implement a custom scoring function for RidgeClassifierCV in scikit-learn, passing it as the score_func argument when initializing the RidgeClassifierCV object. I expected score_func to receive categorical class labels as y_true and y_pred. Instead, floating-point values are passed in, and each y vector has length equal to the number of classes times the number of training examples, rather than simply the number of training examples.
Can I somehow force categorical predictions to be passed into the custom scoring function, or do I have to deal with the raw weights? If I do have to deal directly with the raw weights, is the index of the maximum value in a slice of the vector of outputs equivalent to the predicted class?
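On the second part of the question: yes, for RidgeClassifierCV the predicted class should correspond to the argmax over the per-class decision values. A minimal sketch (using the iris dataset purely as an illustrative example) checking that taking the column-wise argmax of decision_function reproduces predict:

```python
# Sketch: recovering class labels from RidgeClassifierCV's raw per-class
# decision values via argmax (iris is just a stand-in multi-class dataset).
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import RidgeClassifierCV

X, y = load_iris(return_X_y=True)
clf = RidgeClassifierCV(alphas=[0.1, 1.0, 10.0]).fit(X, y)

# For multi-class problems, decision_function returns an array of shape
# (n_samples, n_classes); the column index of the maximum per row is the
# predicted class index.
scores = clf.decision_function(X)
pred_from_argmax = clf.classes_[np.argmax(scores, axis=1)]
assert np.array_equal(pred_from_argmax, clf.predict(X))
```

So if you do end up handling the raw continuous outputs, reshaping them to (n_samples, n_classes) and taking the argmax along the class axis should recover the categorical predictions.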
Ridge classifier with built-in cross-validation. For multi-class classification, n_class classifiers are trained in a one-versus-all approach. Concretely, this is implemented by taking advantage of the multi-variate response support in Ridge.
Yes, ridge regression can be used as a classifier, just code the response labels as -1 and +1 and fit the regression model as normal.
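A small sketch of that idea, using a synthetic dataset for illustration: recode the binary labels as -1/+1, fit a plain Ridge regression, and threshold its continuous output at zero to get class predictions.

```python
# Sketch: plain ridge regression used as a binary classifier by coding
# the labels as -1/+1 and thresholding the regression output at zero.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import Ridge

X, y = make_classification(n_samples=200, random_state=0)
y_pm = np.where(y == 1, 1, -1)  # recode {0, 1} labels as {-1, +1}

reg = Ridge(alpha=1.0).fit(X, y_pm)
pred = np.where(reg.predict(X) >= 0, 1, -1)  # sign of the output is the class
accuracy = np.mean(pred == y_pm)
```

This is essentially what RidgeClassifier does internally, which is also why its raw outputs are continuous values rather than class labels.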
RidgeCV is the cross-validated version of ridge regression. Ridge regression is a special case of regression normally used on datasets that have multicollinearity. Source: Scikit Learn.
Logistic Regression CV (aka logit, MaxEnt) classifier. See the glossary entry for cross-validation estimator. This class implements logistic regression using the liblinear, newton-cg, sag, or lbfgs optimizers. The newton-cg, sag, and lbfgs solvers support only L2 regularization with primal formulation.
This is a bug that has been fixed.
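Assuming this refers to the scoring behavior described in the question: in recent scikit-learn versions, RidgeClassifierCV accepts a scoring parameter, and a scorer built with make_scorer calls the estimator's predict(), so the custom metric should receive discrete class labels rather than raw decision values. A hedged sketch (the metric name my_metric and the iris dataset are illustrative, not from the original question):

```python
# Sketch: passing a custom metric to RidgeClassifierCV via `scoring`
# (supported in recent scikit-learn versions); my_metric is a made-up
# example metric that receives class labels, not raw weights.
from sklearn.datasets import load_iris
from sklearn.linear_model import RidgeClassifierCV
from sklearn.metrics import make_scorer


def my_metric(y_true, y_pred):
    # Both arguments arrive here as arrays of class labels.
    return (y_true == y_pred).mean()


X, y = load_iris(return_X_y=True)
clf = RidgeClassifierCV(alphas=[0.1, 1.0, 10.0],
                        scoring=make_scorer(my_metric)).fit(X, y)
```

After fitting, clf.alpha_ holds the regularization strength that scored best under the custom metric.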