For my work it's particularly interesting to do integer calculations, which obviously are not what GPUs were made for. My question is: Do modern GPUs support efficient integer operations? I realize this should be easy to figure out for myself, but I find conflicting answers (for example yes vs no), so I thought it best to ask.
Also, are there any libraries/techniques for arbitrary precision integers on GPUs?
First, you need to consider the hardware you're using: GPU performance varies widely from one vendor to another.
Second, it also depends on the operations involved: for example, additions may be faster than multiplications.
In my case, I'm only using NVIDIA devices. For this kind of hardware, the official documentation states that the newer architecture (Fermi) provides equivalent throughput for 32-bit integer and 32-bit single-precision floating-point operations. The previous architecture (Tesla) offered equivalent throughput for 32-bit integers and floats only for additions and logical operations.
But once again, this may not be true depending on the device and instructions you use.
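In practice, the simplest way to check is to time a trivial kernel on your own card. Below is a minimal sketch (the kernel name int_ops, the array names, and the launch configuration are placeholders I made up) that performs 32-bit integer adds and multiplies; you can profile it with nvprof or Nsight and compare against the float equivalent on your device.

```cuda
// Minimal sketch of plain 32-bit integer arithmetic in a CUDA kernel.
// Names and sizes are illustrative only.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void int_ops(const int *a, const int *b, int *sum, int *prod, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        sum[i]  = a[i] + b[i];   // 32-bit integer add
        prod[i] = a[i] * b[i];   // 32-bit integer multiply
    }
}

int main()
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(int);

    // Host buffers with some arbitrary test data.
    int *ha = (int *)malloc(bytes), *hb = (int *)malloc(bytes);
    int *hsum = (int *)malloc(bytes), *hprod = (int *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = i; hb[i] = 2 * i; }

    // Device buffers.
    int *da, *db, *dsum, *dprod;
    cudaMalloc((void **)&da, bytes);
    cudaMalloc((void **)&db, bytes);
    cudaMalloc((void **)&dsum, bytes);
    cudaMalloc((void **)&dprod, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch one thread per element.
    int_ops<<<(n + 255) / 256, 256>>>(da, db, dsum, dprod, n);
    cudaDeviceSynchronize();

    cudaMemcpy(hsum, dsum, bytes, cudaMemcpyDeviceToHost);
    cudaMemcpy(hprod, dprod, bytes, cudaMemcpyDeviceToHost);
    printf("sum[10] = %d, prod[10] = %d\n", hsum[10], hprod[10]);

    cudaFree(da); cudaFree(db); cudaFree(dsum); cudaFree(dprod);
    free(ha); free(hb); free(hsum); free(hprod);
    return 0;
}
```

Swapping int for float in the kernel gives a direct comparison of integer and single-precision throughput on your particular architecture.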