It appears that although XGBoost is compiled to run on the GPU, it doesn't seem to use the GPU when called via the scikit-learn API.
Please advise whether this is expected behaviour.
Enabling training of an XGBoost model on the GPU is straightforward: just set the hyperparameter tree_method to "gpu_hist".
Most of the objective functions implemented in XGBoost can run on the GPU. An objective will run on the GPU if a GPU updater (gpu_hist) is selected; otherwise it runs on the CPU by default. The XGBoost documentation has a table of the current support status for each objective.
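For example, here is a minimal sketch using the scikit-learn wrapper, assuming an XGBoost build with GPU support and a version that accepts tree_method="gpu_hist"; the synthetic data from sklearn.datasets.make_classification is only illustrative:
from sklearn.datasets import make_classification
from xgboost import XGBClassifier

# Illustrative data; any numeric feature matrix and label vector will do.
X, y = make_classification(n_samples=10000, n_features=20, random_state=0)

# tree_method="gpu_hist" asks XGBoost to build the trees on the GPU.
clf = XGBClassifier(tree_method="gpu_hist", objective="binary:logistic", n_estimators=100)
clf.fit(X, y)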
scikit-learn is designed to be easy to install on a wide variety of platforms. Outside of neural networks, GPUs don't play a large role in machine learning today, and much larger gains in speed can often be achieved by a careful choice of algorithms. NVIDIA has released its own GPU-accelerated version of scikit-learn.
In one benchmark, running XGBoost on the GPU was 4.4 times faster than on the CPU.
As far as I can tell, the scikit-learn API does not currently support GPU training. You need to use the learning API instead (e.g. xgboost.train(...)). This also requires you to first convert your data into an xgboost DMatrix.
Example:
params = {"updater":"grow_gpu"}
train = xgboost.DMatrix(x_train, label=y_train)
clf = xgboost.train(params, train, num_boost_round=10)
UPDATE:
The scikit-learn API now supports GPU training via the **kwargs argument: http://xgboost.readthedocs.io/en/latest/python/python_api.html#id1
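For instance, here is a minimal sketch assuming an XGBoost version whose scikit-learn wrapper forwards extra keyword arguments to the underlying booster as parameters (x_train and y_train are the same placeholders as in the example above):
import xgboost

# Keyword arguments such as tree_method are passed through to the booster,
# so the GPU histogram method can be requested directly from the scikit-learn wrapper.
clf = xgboost.XGBClassifier(n_estimators=10, tree_method="gpu_hist")
clf.fit(x_train, y_train)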