 

Does CUDA quietly downcast double to float?

I'm looking at the CUDA header file cuda/6.5.14/RHEL6.x/include/math_functions_dbl_ptx1.h and see that every arithmetic function that takes a double argument casts it to float:

static __forceinline__ double fabs(double a)
{
  return (double)fabsf((float)a);
}

...

static __forceinline__ double floor(double a)
{
  return (double)floorf((float)a);
}

Since I rely in an essential way on double-precision floating point (there are quite a few potentially catastrophic cancellations in the code), I have trouble believing my own eyes.
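
To illustrate why this worries me, here is a minimal host-side sketch (the values are made up purely for illustration) of the kind of cancellation that double absorbs but float does not:

#include <stdio.h>

/* Illustration only: subtracting two nearly equal numbers loses the
   entire difference in float, while double keeps it. */
int main(void)
{
    double a = 1.0e8 + 1.0;   /* exactly representable in double      */
    double b = 1.0e8;

    float  af = (float)a;     /* 1e8 + 1 rounds back to 1e8 in float  */
    float  bf = (float)b;

    printf("double: %f\n", a - b);    /* prints 1.000000               */
    printf("float : %f\n", af - bf);  /* prints 0.000000 - the 1 is lost */
    return 0;
}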

Could you explain what's going on here?

asked Dec 06 '25 by Michael

1 Answer

What you're looking at is a file used (on CUDA 6.5) when compiling for a cc 1.1 or cc 1.2 device, which did not have native support for double arithmetic, and yes, in that case CUDA would "quietly" demote double to float. (The compiler would emit a warning when this was occurring.)

This behavior did not manifest itself on devices of compute capability 1.3 and higher, all of which have native support for double arithmetic.
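
If you want to check what your own GPU supports, a minimal sketch using the runtime API's cudaGetDeviceProperties (device 0 is assumed here for illustration) would be:

#include <stdio.h>
#include <cuda_runtime.h>

/* Sketch: query the compute capability of device 0 at runtime.
   Devices reporting 1.3 or higher have native double arithmetic,
   so no demotion takes place for them. */
int main(void)
{
    cudaDeviceProp prop;
    cudaError_t err = cudaGetDeviceProperties(&prop, 0);
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaGetDeviceProperties failed: %s\n",
                cudaGetErrorString(err));
        return 1;
    }
    printf("Device 0: compute capability %d.%d\n", prop.major, prop.minor);
    printf("Native double support: %s\n",
           (prop.major > 1 || (prop.major == 1 && prop.minor >= 3)) ? "yes" : "no");
    return 0;
}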

CUDA 7 and 7.5 no longer support devices with a compute capability below 2.0, so this particular behavior can no longer occur there; it is only of historical interest on newer CUDA toolkits. (The file in question has been removed from those toolkits.)

For reference, when this "demotion" was occurring, the compiler would emit a warning of the following form:

ptxas /tmp/tmpxft_00000949_00000000-2_samplefilename.ptx, line 65; warning : Double is not supported. Demoting to float

If you don't see that warning in your compile output, the demotion is not occurring.
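
As a sketch, assuming a hypothetical source file kernel.cu and a CUDA 6.5 toolchain, the demotion follows the target architecture you pass to nvcc:

# targets a cc 1.1 device: doubles are demoted, ptxas prints the warning above
nvcc -arch=sm_11 kernel.cu -o kernel_sm11

# targets cc 1.3 (or newer, e.g. sm_20): native double arithmetic, no warning
nvcc -arch=sm_13 kernel.cu -o kernel_sm13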

answered Dec 09 '25 by Robert Crovella