I'm planning to buy a serious GPU for running a parallel algorithm (budget 2k-4k). Everywhere I look, supercomputers feature Nvidia Tesla GPU cards "made especially for GPGPU".
While this looks very attractive at first sight, a closer reading gives me serious second thoughts: compared to e.g. a Radeon HD 7970, the Tesla's performance (in terms of FLOPS) is significantly lower, its price is significantly higher, and I can't seem to find any benchmark comparisons between Tesla cards and normal gaming GPUs.
I have found that the Tesla features ECC memory. Is this the only difference, or am I missing a deeper architectural difference between the two? Possibly relevant: I will be using OpenCL, not CUDA.
There are two technical differences I know of between the brands when comparing similar cards.
1) Nvidia cards tend to have better double precision FLOPS than AMD - sometimes by a factor of 2. AMD usually does better for single precision FLOPS.
2) ECC memory is available on the GDDR5 memory of both brands. The difference is that Nvidia also uses ECC on internal memory (registers and such), whereas AMD does not. A quick way to check both of these from OpenCL is sketched below.
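Since you will be using OpenCL, here is a minimal sketch (my own illustration, not from any benchmark) of how you could enumerate the GPUs in a machine and check both of these properties - the cl_khr_fp64 extension for double precision support, and CL_DEVICE_ERROR_CORRECTION_SUPPORT for ECC. Error handling is largely omitted.

```c
#include <stdio.h>
#include <string.h>
#include <CL/cl.h>

int main(void) {
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;
    clGetPlatformIDs(8, platforms, &num_platforms);

    for (cl_uint p = 0; p < num_platforms; ++p) {
        cl_device_id devices[8];
        cl_uint num_devices = 0;
        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU, 8,
                           devices, &num_devices) != CL_SUCCESS)
            continue;  /* this platform has no GPU devices */

        for (cl_uint d = 0; d < num_devices; ++d) {
            char name[256], extensions[4096];
            cl_bool ecc = CL_FALSE;
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME,
                            sizeof(name), name, NULL);
            clGetDeviceInfo(devices[d], CL_DEVICE_EXTENSIONS,
                            sizeof(extensions), extensions, NULL);
            clGetDeviceInfo(devices[d], CL_DEVICE_ERROR_CORRECTION_SUPPORT,
                            sizeof(ecc), &ecc, NULL);

            printf("%s\n", name);
            printf("  double precision (cl_khr_fp64): %s\n",
                   strstr(extensions, "cl_khr_fp64") ? "yes" : "no");
            printf("  ECC memory:                     %s\n",
                   ecc ? "yes" : "no");
        }
    }
    return 0;
}
```

Compile with something like `gcc query.c -lOpenCL`. Note that CL_DEVICE_ERROR_CORRECTION_SUPPORT only reports whether ECC is enabled for device memory; it doesn't tell you how far the protection extends into caches and registers.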
In my opinion, choose the card based on your application. If you use more single than double precision, go AMD; otherwise Nvidia. If you need ECC for high fault tolerance, Nvidia is probably your best choice. Sometimes many cheaper cards do better than one or two top-of-the-line cards - think of PCI-e bandwidth. Read up on benchmarks and try to determine which card is best suited for your needs.
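If PCI-e bandwidth might be the bottleneck for your algorithm, you can measure it on whatever hardware you get your hands on. Below is a rough sketch (again my own, with an assumed 256 MiB buffer and no error checking) that times a single host-to-device transfer using OpenCL event profiling:

```c
#include <stdio.h>
#include <stdlib.h>
#include <CL/cl.h>

int main(void) {
    const size_t nbytes = 256 * 1024 * 1024;  /* 256 MiB test buffer */
    void *host = malloc(nbytes);

    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
    cl_command_queue queue =
        clCreateCommandQueue(ctx, device, CL_QUEUE_PROFILING_ENABLE, NULL);
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE, nbytes, NULL, NULL);

    /* one blocking host -> device copy, timed with the profiling counters */
    cl_event ev;
    clEnqueueWriteBuffer(queue, buf, CL_TRUE, 0, nbytes, host, 0, NULL, &ev);
    clWaitForEvents(1, &ev);

    cl_ulong start, end;
    clGetEventProfilingInfo(ev, CL_PROFILING_COMMAND_START,
                            sizeof(start), &start, NULL);
    clGetEventProfilingInfo(ev, CL_PROFILING_COMMAND_END,
                            sizeof(end), &end, NULL);

    double seconds = (end - start) * 1e-9;  /* profiling times are nanoseconds */
    printf("host -> device: %.2f GB/s\n", nbytes / seconds / 1e9);

    clReleaseMemObject(buf);
    clReleaseCommandQueue(queue);
    clReleaseContext(ctx);
    free(host);
    return 0;
}
```

Run it a few times and ignore the first result; you can measure the other direction with clEnqueueReadBuffer. Comparing that number against the arithmetic throughput your kernel needs tells you whether splitting the work across several cheaper cards will starve on the bus.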