I'm solving a multi-class classification problem using XGBoost, but warnings occurred when fitting the model. My code is as follows; I'm using xgboost 1.4.0.
import time
import numpy as np
import xgboost
from sklearn.metrics import classification_report

# x_train, y_train, x_test, y_test come from my train/test split
start = time.time()
xgb_model = xgboost.XGBClassifier(tree_method='gpu_hist', eta=0.2, nrounds=1000,
                                  colsample_bytree=0.5,
                                  metric='multi:softmax')
hr_pred = xgb_model.fit(x_train, np.ravel(y_train, order='C')).predict(x_test)
print(classification_report(y_test, hr_pred))
print(time.time() - start)
The result comes out well, but these warnings pop up:
Parameters: { "metric", "nrounds" } might not be used.
This may not be accurate due to some parameters are only used in language bindings but
passed down to XGBoost core. Or some parameters are not used but slip through this
verification. Please open an issue if you find above cases.
UserWarning: Use subset (sliced data) of np.ndarray is not recommended because it will generate extra copies and increase memory consumption
"because it will generate extra copies and increase " +
Personally, I find the solution of downgrading rather risky. Instead, you can suppress this particular warning quite easily with warnings.filterwarnings(action='ignore', category=UserWarning).
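For example, a minimal sketch of where to place the filter (note that this silences every UserWarning in the process, not only the XGBoost one):

import warnings

# Suppress UserWarning messages, including the sliced-ndarray warning from XGBoost.
warnings.filterwarnings(action='ignore', category=UserWarning)

# ...then fit and predict exactly as before; the warning is no longer printed.

If you only want to hide this specific message, filterwarnings also accepts a message= regex, which keeps other UserWarnings visible.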
Unfortunately, according to the devs, this is expected behaviour. https://github.com/dmlc/xgboost/issues/6908
I realize this is a temporary fix, but downgrading worked for me:
pip uninstall xgboost
pip install xgboost==1.3.3
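If you go this route, a quick sanity check that the downgrade took effect (a minimal sketch):

import xgboost

# Confirm the interpreter now picks up the downgraded package.
print(xgboost.__version__)   # expected to print 1.3.3 after the reinstall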