According to the CUDA Toolkit documentation:
https://docs.nvidia.com/cuda/cuda-c-programming-guide/
Device memory can be allocated either as linear memory or as CUDA arrays.
Does this mean that the CUDA arrays are not stored linearly in GPU memory?
In my experiment, I successfully dumped my data from GPU memory using the `cudaMemcpy` function. If my data is allocated with `cudaMallocArray`, does that mean the data is not laid out linearly in GPU memory and must be extracted through a different API?
CUDA arrays are indeed stored in GPU device memory ("global" memory), but the bytes are not physically linear. They use an opaque layout optimized for multichannel, multidimensional texture access and texture filtering. The layout is undocumented, since it may change between GPU architectures.
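So yes, you need different APIs: you cannot reinterpret a CUDA array's backing bytes with a plain `cudaMemcpy`. The runtime provides dedicated copy routines (`cudaMemcpy2DToArray` / `cudaMemcpy2DFromArray`) that translate between linear memory and the opaque array layout. A minimal round-trip sketch (dimensions and values are arbitrary, error handling elided):

```cuda
#include <cuda_runtime.h>
#include <stdio.h>

int main(void) {
    const int width = 64, height = 64;
    static float host_in[64 * 64], host_out[64 * 64];
    for (int i = 0; i < width * height; ++i) host_in[i] = (float)i;

    // Describe one 32-bit float channel per element.
    cudaChannelFormatDesc desc = cudaCreateChannelDesc<float>();
    cudaArray_t arr;
    cudaMallocArray(&arr, &desc, width, height);

    // Linear host buffer -> opaque CUDA array layout.
    cudaMemcpy2DToArray(arr, 0, 0, host_in, width * sizeof(float),
                        width * sizeof(float), height,
                        cudaMemcpyHostToDevice);

    // Opaque CUDA array layout -> linear host buffer.
    cudaMemcpy2DFromArray(host_out, width * sizeof(float), arr, 0, 0,
                          width * sizeof(float), height,
                          cudaMemcpyDeviceToHost);

    printf("round trip ok: %d\n", host_out[123] == host_in[123]);
    cudaFreeArray(arr);
    return 0;
}
```

Compile with `nvcc`. Note that the row pitch and copy widths are given in bytes, while `cudaMallocArray` takes the width in elements of the channel format; the copy functions handle the translation to the internal layout for you.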