CUDA beginner here.
In my code I am currently launching kernels many times in a loop in the host code (because I need synchronization between blocks), so I wondered whether I could optimize the kernel launch.
My kernel launches look something like this:
MyKernel<<<blocks,threadsperblock>>>(double_ptr, double_ptr, int N, double x);
To launch a kernel, some signal obviously has to go from the CPU to the GPU, but I'm wondering whether passing the arguments makes this process noticeably slower.
The arguments to the kernel are the same every single time, so perhaps I could save time by copying them once and accessing them in the kernel by a name defined by
__device__ int N;
<and somehow (how?) copy the value to this name N on the GPU once>
and then simply launch the kernel with no arguments, like so:
MyKernel<<<blocks,threadsperblock>>>();
Will this make my program any faster? What is the best way of doing this? AFAIK kernel arguments are stored in some constant global memory. How can I make sure that the manually transferred values end up in memory that is as fast or faster?
Thanks in advance for any help.
I would expect the benefits of such an optimization to be rather small. On sane platforms (i.e. anything other than WDDM), kernel launch overhead is only on the order of 10-20 microseconds, so there probably isn't much scope for improvement.
Having said that, if you want to try it, the logical way to do this is with constant memory. Define each argument as a __constant__ symbol at translation-unit scope, then use the cudaMemcpyToSymbol function to copy the values from the host into device constant memory.
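A minimal sketch of what that could look like (the kernel body and the host-side names d_in, d_out, N, x, iterations are illustrative, not taken from the question):

__constant__ double *dIn;   // replaces the input pointer argument
__constant__ double *dOut;  // replaces the output pointer argument
__constant__ int dN;        // replaces the int N argument
__constant__ double dX;     // replaces the double x argument

__global__ void MyKernel()
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < dN)
        dOut[i] = dX * dIn[i];   // reads come from cached constant memory
}

// Host side: copy each value once, before the launch loop.
cudaMemcpyToSymbol(dIn,  &d_in,  sizeof(d_in));   // d_in/d_out are device pointers from cudaMalloc
cudaMemcpyToSymbol(dOut, &d_out, sizeof(d_out));
cudaMemcpyToSymbol(dN,   &N,     sizeof(N));
cudaMemcpyToSymbol(dX,   &x,     sizeof(x));

for (int iter = 0; iter < iterations; ++iter)
    MyKernel<<<blocks, threadsperblock>>>();      // argument-free launch

Constant memory is cached and broadcast to all threads in a warp when they read the same address, so it is at least as fast as the driver-managed space where kernel arguments normally live.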
Simple answer: no.
To be more elaborate: you need to send some signals from the host to the GPU anyway to launch the kernel itself. At that point, a few more bytes of parameter data do not matter anymore.
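If you want to convince yourself, here is a rough sketch (not from the original answer) of how you could time back-to-back launches with and without arguments using CUDA events; kernel names and the trial count are made up for illustration:

#include <cstdio>
#include <cuda_runtime.h>

__global__ void NoArgs() {}
__global__ void WithArgs(double *a, double *b, int n, double x) {}

int main()
{
    const int trials = 10000;
    double *d = nullptr;
    cudaMalloc(&d, sizeof(double));

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    // Time many argument-free launches.
    cudaEventRecord(start);
    for (int i = 0; i < trials; ++i) NoArgs<<<1, 1>>>();
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);
    float msNoArgs;
    cudaEventElapsedTime(&msNoArgs, start, stop);

    // Time the same number of launches with four arguments.
    cudaEventRecord(start);
    for (int i = 0; i < trials; ++i) WithArgs<<<1, 1>>>(d, d, 1, 1.0);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);
    float msWithArgs;
    cudaEventElapsedTime(&msWithArgs, start, stop);

    printf("no args: %.3f us/launch, with args: %.3f us/launch\n",
           msNoArgs * 1000.0f / trials, msWithArgs * 1000.0f / trials);

    cudaFree(d);
    return 0;
}

On most setups the two per-launch times come out essentially identical, which is the point: the launch itself dominates, not the argument bytes.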