The error message is as follows:
RuntimeError                              Traceback (most recent call last)
<ipython-input-24-06e96beb03a5> in <module>()
     11
     12 x_test = np.array(test_features)
---> 13 x_test_cuda = torch.tensor(x_test, dtype=torch.float).cuda()
     14 test = torch.utils.data.TensorDataset(x_test_cuda)
     15 test_loader = torch.utils.data.DataLoader(test, batch_size=batch_size, shuffle=False)

/usr/local/lib/python3.6/dist-packages/torch/cuda/__init__.py in _lazy_init()
    160 class CudaError(RuntimeError):
    161     def __init__(self, code):
--> 162         msg = cudart().cudaGetErrorString(code).decode('utf-8')
    163         super(CudaError, self).__init__('{0} ({1})'.format(msg, code))
    164

RuntimeError: cuda runtime error (38) : no CUDA-capable device is detected at /pytorch/aten/src/THC/THCGeneral.cpp:51
PyTorch and Google Colab are powerful tools for developing neural networks. PyTorch was developed by Facebook and has become popular in the deep-learning research community. It supports parallel processing and has a readable syntax, which has driven its adoption.
This error means the current Colab runtime does not have a GPU attached, so any call to .cuda() fails. You can enable a GPU by clicking "Change runtime type" under the "Runtime" menu. TPU support is also available these days.
You can check the imported version of PyTorch with print(torch.__version__), if that's what you are looking for.
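For example, a quick sanity check (a minimal sketch; it only prints the installed version and whether PyTorch can see a GPU):

import torch

# Print the installed PyTorch version
print(torch.__version__)

# True only when a CUDA-capable GPU is visible to PyTorch
print(torch.cuda.is_available())

To actually fix the error, enable the GPU runtime as follows.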
Click on Runtime and select Change runtime type.
Now, under Hardware accelerator, select GPU and hit Save.
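After the runtime restarts, .cuda() calls should succeed. If you prefer code that also runs when no GPU is enabled, a common pattern is to pick the device at runtime. Below is a minimal sketch of the failing snippet rewritten this way; test_features and batch_size are assumed to be defined earlier in the notebook, as in the traceback above.

import numpy as np
import torch

# Fall back to the CPU when no CUDA-capable device is detected
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

x_test = np.array(test_features)
x_test_tensor = torch.tensor(x_test, dtype=torch.float).to(device)
test = torch.utils.data.TensorDataset(x_test_tensor)
test_loader = torch.utils.data.DataLoader(test, batch_size=batch_size, shuffle=False)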