
NVIDIA Cuda error "all CUDA-capable devices are busy or unavailable" on OSX

Quite often, the CUDA library fails completely and returns error 46 ("all CUDA-capable devices are busy or unavailable"), even for simple calls like cudaMalloc. The code runs successfully if I restart the computer, but that is far from ideal. This problem is apparently quite common.

My setup is the following:

  • OSX 10.6.8
  • NVIDIA CUDA drivers : CUDA Driver Version: 4.0.31 (latest)
  • GPU Driver Version: 1.6.36.10 (256.00.35f11)

I tried many solutions from the NVIDIA forum, but none of them worked. I don't want to reboot every time this happens. I also tried to unload and reload the driver with a procedure I assume to be correct (it may not be):

kextunload -b com.nvidia.CUDA
kextload -b com.nvidia.CUDA

But it still does not work. How can I kick the GPU (or CUDA) back into sanity?

This is the device query result:

 CUDA Device Query (Runtime API) version (CUDART static linking)

Found 1 CUDA Capable device(s)

Device 0: "GeForce 9400M"
  CUDA Driver Version / Runtime Version          4.0 / 4.0
  CUDA Capability Major/Minor version number:    1.1
  Total amount of global memory:                 254 MBytes (265945088 bytes)
  ( 2) Multiprocessors x ( 8) CUDA Cores/MP:     16 CUDA Cores
  GPU Clock Speed:                               1.10 GHz
  Memory Clock rate:                             1075.00 Mhz
  Memory Bus Width:                              128-bit
  Max Texture Dimension Size (x,y,z)             1D=(8192), 2D=(65536,32768), 3D=(2048,2048,2048)
  Max Layered Texture Size (dim) x layers        1D=(8192) x 512, 2D=(8192,8192) x 512
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       16384 bytes
  Total number of registers available per block: 8192
  Warp size:                                     32
  Maximum number of threads per block:           512
  Maximum sizes of each dimension of a block:    512 x 512 x 64
  Maximum sizes of each dimension of a grid:     65535 x 65535 x 1
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             256 bytes
  Concurrent copy and execution:                 No with 0 copy engine(s)
  Run time limit on kernels:                     Yes
  Integrated GPU sharing Host Memory:            Yes
  Support host page-locked memory mapping:       Yes
  Concurrent kernel execution:                   No
  Alignment requirement for Surfaces:            Yes
  Device has ECC support enabled:                No
  Device is using TCC driver mode:               No
  Device supports Unified Addressing (UVA):      No
  Device PCI Bus ID / PCI location ID:           2 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 4.0, CUDA Runtime Version = 4.0, NumDevs = 1, Device = GeForce 9400M
[deviceQuery] test results...
PASSED

This is an example of code that may fail (although under normal conditions it does not):

#include <stdio.h>

__global__ void add(int a, int b, int *c) {
    *c = a + b;
}

int main(void) {
    int c;
    int *dev_c;

    cudaMalloc( (void **) &dev_c, sizeof(int)); // fails here, returning 46

    add<<<1,1>>>(2,7,dev_c);
    cudaMemcpy(&c, dev_c, sizeof(int), cudaMemcpyDeviceToHost);
    printf("hello world, %d\n",c);
    cudaFree( dev_c);
    return 0;
}
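Since the failure first shows up as a return code from cudaMalloc, it helps to check every runtime call explicitly rather than letting the error propagate silently. A minimal sketch of that pattern, assuming the standard CUDA runtime API (the CUDA_CHECK macro is my own helper, not part of CUDA, and this obviously needs an NVIDIA GPU to run):

```cuda
#include <stdio.h>
#include <cuda_runtime.h>

// Report a failed runtime call with a readable message, then bail out.
#define CUDA_CHECK(call)                                           \
    do {                                                           \
        cudaError_t err = (call);                                  \
        if (err != cudaSuccess) {                                  \
            fprintf(stderr, "%s failed: %s (code %d)\n",           \
                    #call, cudaGetErrorString(err), (int)err);     \
            return 1;                                              \
        }                                                          \
    } while (0)

int main(void) {
    int *dev_c = NULL;
    // Error 46 (cudaErrorDevicesUnavailable) would be reported here
    // with its error string instead of a silent bad allocation.
    CUDA_CHECK(cudaMalloc((void **)&dev_c, sizeof(int)));
    CUDA_CHECK(cudaFree(dev_c));
    printf("CUDA device is available\n");
    return 0;
}
```

Running this before the real program makes it easy to tell whether the device is usable at all, without involving a kernel launch.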

I also found out that occasionally the system reverts to sane behavior without a reboot. I still don't know what triggers it.

Asked Aug 06 '11 by Stefano Borini


1 Answer

I can confirm the statement made by the commenters on my post: the GPU may not work if other applications are taking control of it. In my case, the Flash player plugin in Firefox was apparently occupying all the available resources on the card. I killed Firefox's Flash plugin process and the card immediately started working again.
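For anyone hitting the same thing, a quick way to check for such a process before resorting to a reboot is to scan the process list. A hypothetical helper sketch; the name to search for ("flash" here) depends on your browser and plugin:

```shell
#!/bin/sh
# Find processes whose command line mentions "flash" (the [f] bracket
# trick keeps grep from matching its own process entry).
pids=$(ps aux | grep -i "[f]lash" | awk '{print $2}')
if [ -n "$pids" ]; then
    msg="Possible GPU-holding plugin processes: $pids"
    # kill $pids   # uncomment to actually terminate them
else
    msg="No matching plugin processes found"
fi
echo "$msg"
```

After killing the offending process, rerun the CUDA program; in my case no driver reload was needed.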

Answered Nov 20 '22 by Stefano Borini