
Confusion about CUDA unified virtual memory

Tags:

c++

c

cuda

I have some confusion about unified virtual memory.

The documentation at this link (http://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#unified-virtual-address-space) says it can be used when:

When the application is run as a 64-bit process, a single address space is used for the host and all the devices of compute capability 2.0 and higher.

But this link (http://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#um-requirements) says it needs:

a GPU with SM architecture 3.0 or higher (Kepler class or newer)

Furthermore, the first link says that I can use cudaHostAlloc, while the second one uses cudaMallocManaged.
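The two calls mentioned in the question allocate different kinds of memory, though both end up in the single address space that UVA provides on 64-bit processes. A minimal sketch contrasting them (names and the `attr.type` field follow the CUDA Runtime API; error checking is omitted for brevity):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    // cudaHostAlloc: pinned *host* memory. With cudaHostAllocMapped it is
    // also mapped into the device's address range (zero-copy access).
    float *pinned = nullptr;
    cudaHostAlloc(&pinned, 4 * sizeof(float), cudaHostAllocMapped);

    // cudaMallocManaged: *managed* memory that the runtime migrates
    // between host and device; requires compute capability 3.0+ (Kepler).
    float *managed = nullptr;
    cudaMallocManaged(&managed, 4 * sizeof(float));

    // Under UVA, any pointer can be classified by the runtime.
    cudaPointerAttributes attr;
    cudaPointerGetAttributes(&attr, managed);
    printf("managed pointer type: %d\n", (int)attr.type);

    cudaFreeHost(pinned);
    cudaFree(managed);
    return 0;
}
```

So `cudaHostAlloc` gives you host memory the GPU can reach, whereas `cudaMallocManaged` gives you a single pointer whose backing pages migrate automatically.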

Are these two different things behind the 'Unified' term, or is the documentation just a bit incoherent?

Michael asked Oct 24 '25 13:10


1 Answer

You are referring to Unified Virtual Addressing, which is not the same as Unified Memory. Unified Memory was introduced in CUDA 6.0 for architectures of compute capability 3.0 or higher, and it eliminates the need for explicit data transfers between host and device.

(two Unified Memory illustrations)
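To illustrate the "no explicit transfer" point: with Unified Memory there is no `cudaMemcpy` round-trip, because the runtime migrates pages on demand. A minimal sketch (assumes a device of compute capability 3.0+ and CUDA 6.0+; error checking omitted):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

__global__ void addOne(int *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1;
}

int main() {
    const int n = 4;
    int *data = nullptr;
    cudaMallocManaged(&data, n * sizeof(int));  // one allocation, visible to both sides

    for (int i = 0; i < n; ++i) data[i] = i;    // host writes through the same pointer

    addOne<<<1, n>>>(data, n);                  // kernel uses the same pointer directly
    cudaDeviceSynchronize();                    // wait before the host touches data again

    for (int i = 0; i < n; ++i) printf("%d ", data[i]);  // prints "1 2 3 4"
    cudaFree(data);
    return 0;
}
```

With plain `cudaMalloc`, the same program would need two explicit `cudaMemcpy` calls around the kernel launch.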

See also: here and here.

George answered Oct 27 '25 02:10