I have an issue running the evaluate!() function from MLJ. I am using the Multinomial Naive Bayes classifier to classify sentiment in a set of tweets.
When I run a single train/test split like this, it works fine:
using MLJ
X = coerce(wordCountVec, Count)
y = coerce(data.sentiment_labels, Multiclass)
train_idx, test_idx = partition(eachindex(y), 0.7, shuffle = true)
nb_m = @load MultinomialNBClassifier pkg = "NaiveBayes"
mach = machine(nb_m,X,y)
MLJ.fit!(mach, rows = train_idx)
yhat = MLJ.predict_mode(mach, rows = test_idx)
micro_f1score(yhat, y[test_idx])
#out > 0.68
However, when I try to use evaluate!() to perform cross-validation:
evaluate!(
mach,
resampling = CV(nfolds = 3),
measure = micro_f1score
)
I get the following error:
ArgumentError:
MultinomialNBClassifier @358 <: Probabilistic but prediction_type(MulticlassFScore{Float64,…} @139) = :deterministic.
To override measure checks, set check_measure=false.
My guess is that in the first case (without evaluate!()) I call predict_mode rather than predict, so I end up with deterministic (point) predictions rather than probabilities. Any ideas how I can fix this when using evaluate!(), or is this some other error?
You can specify the operation argument to evaluate!, like this:
evaluate!(
mach,
resampling = CV(nfolds = 3),
measure = micro_f1score,
operation = predict_mode
)
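For context: predict on a Probabilistic model returns a class distribution per observation, while predict_mode collapses each distribution to its most likely class, which is what a deterministic measure like micro_f1score expects. As an alternative sketch (reusing the same mach from the question), you could keep the probabilistic predictions and evaluate with a probabilistic measure such as log_loss instead of overriding the operation:

```julia
# Alternative: use a probabilistic measure, so the default
# `predict` operation (class distributions) is valid as-is.
evaluate!(
    mach,
    resampling = CV(nfolds = 3),
    measure = log_loss
)
```

Either approach resolves the measure check; which to use depends on whether you want to score point predictions (F1) or the predicted probabilities themselves (log loss).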