I tried to measure elapsed time on a Tesla (T10 processor) and cudaEventElapsedTime returns a "device not ready" error. But when I tested the same code on Fermi (Tesla M2090), it gave me the result.
Can anyone tell me what is happening?
Here is my code
cudaError_t err;
cudaEvent_t start, stop;
float elapsed_time;

cudaEventCreate(&start);
cudaEventCreate(&stop);

err = cudaEventRecord(start, 0);
if (err != cudaSuccess) {
    printf("\n\n 1. Error: %s\n\n", cudaGetErrorString(err));
    exit(1);
}

// actual code

cudaThreadSynchronize();
err = cudaEventRecord(stop, 0);
if (err != cudaSuccess) {
    printf("\n\n 2. Error: %s\n\n", cudaGetErrorString(err));
    exit(1);
}

err = cudaEventElapsedTime(&elapsed_time, start, stop);
if (err != cudaSuccess) {
    printf("\n\n 3. Error: %s\n\n", cudaGetErrorString(err));
    exit(1);
}
It's because cudaEventRecord is asynchronous: it returns immediately, regardless of how far the GPU has actually gotten. Asynchronous functions simply place an order on a CUDA execution queue; when the GPU finishes its current assignment, it pops the next order and executes it. This all happens in the background, handled by the CUDA driver, separately from your program's host thread.
cudaEventRecord is an order that says, more or less: "when you are done with all previous work, flag me in this variable".
If your host thread then calls cudaEventElapsedTime, but the GPU hasn't finished its work yet, it reports "not ready yet!". cudaEventSynchronize() stalls the current host thread until the GPU reaches the cudaEventRecord order that you placed earlier. After that you are guaranteed that cudaEventElapsedTime will give you a meaningful answer.
cudaThreadSynchronize() is just a stronger tool: it stalls the current thread until the GPU finishes all assigned tasks, not just those up to the event.
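Putting that together, a corrected version of the timing pattern would call cudaEventSynchronize(stop) before querying the elapsed time. Below is a minimal sketch, not the asker's exact program: dummyKernel is a placeholder standing in for the "actual code", and the structure otherwise mirrors the question.

```cuda
#include <stdio.h>
#include <cuda_runtime.h>

// Hypothetical placeholder for the work being timed.
__global__ void dummyKernel() {}

int main(void) {
    cudaEvent_t start, stop;
    float elapsed_time = 0.0f;

    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start, 0);   // enqueue the "start" marker
    dummyKernel<<<1, 1>>>();     // work to be timed
    cudaEventRecord(stop, 0);    // enqueue the "stop" marker

    // Block the host until the GPU has actually reached "stop".
    // Without this, cudaEventElapsedTime may return cudaErrorNotReady.
    cudaError_t err = cudaEventSynchronize(stop);
    if (err != cudaSuccess) {
        printf("Sync error: %s\n", cudaGetErrorString(err));
        return 1;
    }

    err = cudaEventElapsedTime(&elapsed_time, start, stop);
    if (err != cudaSuccess) {
        printf("Timing error: %s\n", cudaGetErrorString(err));
        return 1;
    }
    printf("Elapsed: %f ms\n", elapsed_time);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    return 0;
}
```

Synchronizing on the stop event alone is enough here, since reaching it guarantees the start event was recorded earlier in the same stream.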