 

Double precision floating point in CUDA

Does CUDA support double precision floating point numbers?

Also, what are the reasons behind this?

asked May 12 '10 by cuda-dev


People also ask

Does Cuda support double precision?

Devices of compute capability 2.0 and later are capable of single and double precision arithmetic following the IEEE 754 standard, and have hardware units for performing fused multiply-add in both single and double precision. Take advantage of the CUDA math library functions.
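For illustration, here is a minimal sketch of a kernel that does double-precision work through the math library's fma(), which maps to the hardware fused multiply-add on these devices. The kernel name and parameters are placeholders, not from the original question.

```cuda
// Minimal sketch: double-precision AXPY using the CUDA math library's fma(),
// which computes a*x + y with a single rounding step on CC 2.0+ hardware.
__global__ void axpy_fma(const double *x, const double *y, double *out,
                         double a, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = fma(a, x[i], y[i]);  // fused multiply-add in double precision
}
```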

What is meant by double precision floating point?

Double-precision floating-point format (sometimes called FP64 or float64) is a computer number format, usually occupying 64 bits in computer memory; it represents a wide dynamic range of numeric values by using a floating radix point.
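As a concrete look at that layout (1 sign bit, 11 exponent bits, 52 fraction bits in IEEE 754 binary64), a small host-side sketch can pull the fields apart. It assumes the host represents double as IEEE 754 binary64, which is the case on typical CUDA platforms; the value chosen is arbitrary.

```cuda
#include <cstdio>
#include <cstring>
#include <cstdint>

// Host-side sketch: decompose an FP64 value into its sign, exponent,
// and fraction fields (IEEE 754 binary64).
int main()
{
    double d = -6.25;
    uint64_t bits;
    std::memcpy(&bits, &d, sizeof bits);  // reinterpret the 64-bit pattern

    uint64_t sign     = bits >> 63;
    uint64_t exponent = (bits >> 52) & 0x7FF;        // biased by 1023
    uint64_t fraction = bits & 0xFFFFFFFFFFFFFull;   // implicit leading 1

    printf("sign=%llu exponent=%llu fraction=0x%llx\n",
           (unsigned long long)sign, (unsigned long long)exponent,
           (unsigned long long)fraction);
    return 0;
}
```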

How do you calculate a double precision floating point?

Short answer: the max value for a double-precision value (assuming IEEE 754 floating-point) is exactly 2^1024 * (1 - 2^-53). For a single-precision value it's 2^128 * (1 - 2^-24).
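To sanity-check that formula, a short host-side sketch can reconstruct DBL_MAX from it. Note that 2^1024 * (1 - 2^-53) is rewritten as (2 - 2^-52) * 2^1023, because 2^1024 on its own overflows a double.

```cuda
#include <cstdio>
#include <cfloat>
#include <cmath>

// Host-side check: DBL_MAX should equal 2^1024 * (1 - 2^-53),
// computed here in the overflow-safe form (2 - 2^-52) * 2^1023.
int main()
{
    double computed = std::ldexp(2.0 - std::ldexp(1.0, -52), 1023);
    printf("computed = %.17g\nDBL_MAX  = %.17g\nequal: %d\n",
           computed, DBL_MAX, computed == DBL_MAX);
    return 0;
}
```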

What is GPU double precision?

Many applications require higher-accuracy mathematical calculations. In these applications, data is represented by values that are twice as large (using 64 binary bits instead of 32 bits). These larger values are called double-precision (64-bit). Less accurate values are called single-precision (32-bit).


1 Answer

If your GPU has compute capability 1.3 or higher, then you can do double precision. You should be aware though that 1.3 hardware has only one double-precision FP unit per multiprocessor (MP), which has to be shared by all the threads on that MP, whereas there are 8 single-precision FPUs, so each active thread has its own single-precision FPU. In other words, you may well see roughly 8x worse performance with double precision than with single precision.
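As a rough sketch of how one might check this at runtime, the snippet below queries the device's compute capability with cudaGetDeviceProperties; device index 0 is assumed for illustration.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Sketch: query compute capability to decide whether double precision
// is available (>= 1.3 on this hardware generation; all newer devices qualify).
int main()
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);  // assumes device 0

    bool has_fp64 = (prop.major > 1) || (prop.major == 1 && prop.minor >= 3);
    printf("Device: %s (compute capability %d.%d)\n",
           prop.name, prop.major, prop.minor);
    printf("Double precision supported: %s\n", has_fp64 ? "yes" : "no");
    return 0;
}
```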

answered Dec 02 '22 by Paul R