An electrical engineer recently cautioned me against using GPUs for scientific computing (e.g. where accuracy really matters) on the basis that there are no hardware safeguards like there are in a CPU. Is this true, and if so how common/substantial is the problem in typical hardware?
What CPU is best for scientific computing? There are two main choices: Intel Xeon (single or dual socket) and AMD Threadripper Pro / EPYC (both built on the same underlying architecture). For the majority of cases, our recommendation is a single-socket processor such as Xeon W or Threadripper Pro.
GPU computing is the use of a GPU (graphics processing unit) as a co-processor to accelerate a CPU for general-purpose scientific and engineering computing. The GPU accelerates applications running on the CPU by offloading some of the compute-intensive and time-consuming portions of the code.
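For illustration, here is a minimal CUDA sketch of that offload model (the kernel, array size, and constants are made up for this example, not taken from the original text): the host (CPU) prepares the data, the device (GPU) runs the compute-intensive loop, and the result is copied back for the rest of the application.

```
#include <cstdio>
#include <cuda_runtime.h>

// Device kernel: each thread handles one element of the vector update y = a*x + y.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    // Host (CPU) side: prepare the input data.
    float *hx = (float *)malloc(bytes), *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Device (GPU) side: allocate memory and copy the inputs over.
    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // Offload the compute-intensive loop to the GPU.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    saxpy<<<blocks, threads>>>(n, 3.0f, dx, dy);

    // Copy the result back to the CPU for the rest of the application.
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f (expected 5.0)\n", hy[0]);

    cudaFree(dx); cudaFree(dy); free(hx); free(hy);
    return 0;
}
```

The launch geometry (256 threads per block) is just a common default; real codes tune block size and data movement per device, and the transfers themselves are part of what makes only the compute-heavy portions worth offloading.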
A GPU is a processor that excels at specialized, highly parallel computations. We can contrast this with the central processing unit (CPU), which is designed for general-purpose computation; CPUs power most of the work performed on the devices we use daily. For workloads that fit its parallel model, a GPU can complete tasks much faster than a CPU.
Traditionally, GPUs have been used to accelerate computationally demanding graphics workloads such as image rendering and video decoding.
Actually, modern GPUs are an extremely good fit for scientific computing, and many HPC applications are being at least partially ported to run on GPUs for the sake of performance and energy efficiency. Unlike older GPUs, modern ones (take NVIDIA's Fermi or Kepler architectures, for example) provide fully IEEE 754-compliant arithmetic in both single and double precision, so you should be able to use these just as you do on a modern CPU.
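As a rough illustration of that point (a minimal sketch, not a conformance test; the expression and input value are invented for this example), the same double-precision expression can be evaluated on the device and on the host and the results compared. Whether the two are bit-identical also depends on compiler settings such as FMA contraction on either side, but the basic double-precision operations themselves follow IEEE 754 rounding on Fermi-class and later GPUs.

```
#include <cstdio>
#include <cmath>
#include <cuda_runtime.h>

// Evaluate a double-precision expression on the GPU. The basic operations
// (+, -, *, /, sqrt) in double precision are IEEE 754 correctly rounded on
// compute-capability 2.0+ (Fermi and later) devices.
__global__ void eval_on_gpu(double x, double *out) {
    *out = sqrt(x) * x + 1.0 / x;
}

int main() {
    double x = 3.141592653589793;

    double *d_out, gpu_result;
    cudaMalloc(&d_out, sizeof(double));
    eval_on_gpu<<<1, 1>>>(x, d_out);
    cudaMemcpy(&gpu_result, d_out, sizeof(double), cudaMemcpyDeviceToHost);
    cudaFree(d_out);

    // Same expression on the CPU; differences, if any, typically come from
    // compiler choices (e.g. FMA contraction), not from non-standard GPU formats.
    double cpu_result = sqrt(x) * x + 1.0 / x;

    printf("GPU: %.17g\nCPU: %.17g\nbit-identical: %s\n",
           gpu_result, cpu_result,
           gpu_result == cpu_result ? "yes" : "no");
    return 0;
}
```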