My computer has only 1 GPU.
Below is the result I get from running someone else's device-listing code:
[name: "/device:CPU:0"
device_type: "CPU"
memory_limit: 268435456
locality {}
incarnation: 16894043898758027805,
name: "/device:GPU:0"
device_type: "GPU"
memory_limit: 10088284160
locality {bus_id: 1 links {}}
incarnation: 17925533084010082620
physical_device_desc: "device: 0, name: GeForce RTX 3060, pci bus id: 0000:17:00.0, compute capability: 8.6"]
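For reference, a listing like the one above is the kind of output produced by TensorFlow's internal device_lib module (an assumption about which snippet was used; the public `tf.config.list_physical_devices()` API is the newer alternative):

```python
import tensorflow as tf
from tensorflow.python.client import device_lib

# Prints every device TensorFlow can see (CPU and GPU),
# including the memory_limit and physical_device_desc fields
print(device_lib.list_local_devices())

# Newer public API that lists devices without the internal module
print(tf.config.list_physical_devices())
```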
I use Jupyter Notebook and am currently running 2 kernels (TensorFlow 2.6.0, with CUDA and cuDNN installed per the TensorFlow guide).
The first kernel runs my Sequential model from Keras with no problem.
But when I run the same code in the second kernel, I get the error below.
Attempting to perform BLAS operation using StreamExecutor without BLAS support [[node sequential_3/dense_21/MatMul (defined at \AppData\Local\Temp/ipykernel_14764/3692363323.py:1) ]] [Op:__inference_train_function_7682]
Function call stack: train_function
How can I run multiple kernels that share a single GPU without any problem?
(I am not familiar with TensorFlow 1.x.)
I just solved this problem as shown below. It happens because when Keras runs on the GPU, it allocates almost all of the VRAM by default, so I needed to set a memory_limit for each notebook. Here is the code that solved it for me; just change the memory_limit value to suit your GPU.
import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    try:
        # Cap this process at 5120 MB so another kernel can use the rest
        tf.config.experimental.set_virtual_device_configuration(
            gpus[0],
            [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=5120)])
    except RuntimeError as e:
        # Virtual devices must be set before the GPU has been initialized
        print(e)
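An alternative worth trying (not part of the original answer) is to enable memory growth, so each process allocates VRAM on demand instead of grabbing nearly all of it up front. Like the virtual-device approach, this must run before the GPU is first used:

```python
import tensorflow as tf

# Enable on-demand allocation so this kernel only takes the VRAM it
# actually needs, leaving the rest for other kernels on the same GPU.
# Must be called before any op has initialized the GPU.
gpus = tf.config.list_physical_devices('GPU')
for gpu in gpus:
    try:
        tf.config.experimental.set_memory_growth(gpu, True)
    except RuntimeError as e:
        # Raised if the GPU was already initialized in this process
        print(e)
```

Unlike a fixed memory_limit, this does not guarantee each kernel a share of VRAM, so two heavy kernels can still collide; the memory_limit approach gives a hard cap.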
I had this error when trying to run a Python script while a Jupyter notebook was open. Killing the notebook kernel before running the script did the trick. It seems that, by default, only one program can use the GPU at the same time.
For the benefit of the community, providing the solution here:
This problem occurs because when Keras runs on the GPU, it uses almost all of the VRAM, so we need to set a memory_limit for each notebook, as shown below:

gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    try:
        tf.config.experimental.set_virtual_device_configuration(
            gpus[0],
            [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=5120)])
    except RuntimeError as e:
        print(e)

(Paraphrased from MCPMH)