I am doing research on CUDA programming.
I have the option to buy either a single NVIDIA Tesla or around 4-5 NVIDIA GTX 480s.
What do you recommend?
A study that directly compared CUDA and OpenCL programs on NVIDIA GPUs found CUDA to be about 30% faster than OpenCL. OpenCL is rarely used for machine learning; as a result, its community is small, with few libraries and tutorials available.
To run a CUDA application, the system needs a CUDA-enabled GPU and an NVIDIA display driver compatible with the CUDA Toolkit that was used to build the application.
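A quick way to verify that the driver and toolkit are compatible is to query both versions and enumerate the available devices. This is a minimal sketch using the standard CUDA runtime API (compile with `nvcc`); it assumes nothing beyond a working driver install:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    // Driver API version comes from the installed display driver;
    // runtime API version comes from the toolkit the app was built with.
    int driverVersion = 0, runtimeVersion = 0;
    cudaDriverGetVersion(&driverVersion);
    cudaRuntimeGetVersion(&runtimeVersion);
    printf("Driver API version:  %d.%d\n",
           driverVersion / 1000, (driverVersion % 100) / 10);
    printf("Runtime API version: %d.%d\n",
           runtimeVersion / 1000, (runtimeVersion % 100) / 10);

    int deviceCount = 0;
    if (cudaGetDeviceCount(&deviceCount) != cudaSuccess || deviceCount == 0) {
        printf("No CUDA-enabled GPU found.\n");
        return 1;
    }
    for (int i = 0; i < deviceCount; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("Device %d: %s, compute capability %d.%d\n",
               i, prop.name, prop.major, prop.minor);
    }
    return 0;
}
```

If the driver version reported here is older than the runtime version, the application will fail to launch kernels, which is the compatibility requirement described above.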
The T4's performance was compared to the V100-PCIe using the same server and software. Overall, the V100-PCIe is 2.2x-3.6x faster than the T4, depending on the characteristics of each benchmark.
NVIDIA K80 overview: the K80 is a dual-GPU card built from two GK210 chips. As a unit it offers a total of 4992 CUDA cores clocked at 560 MHz, coupled to 24 GB of GDDR5 VRAM on a 384-bit memory interface with 480 GB/s of bandwidth.
Teslas are aimed at enterprise deployments (where you can expect the Tesla hardware to remain available for a long time), whereas the 480s will be here today and out of stock within a year (the GTX 295, for example, is already out of stock). 4-5 480s have more raw horsepower than one Tesla, but that is only beneficial if you can actually leverage the multiple GPUs simultaneously and efficiently.
I work on Jacket, the GPU engine for MATLAB. Jacket has multi-GPU support and would run some problems (say, a bunch of independent for-loops) better on multiple 480s. Other problems, where multiple GPUs don't help, will run better on the Tesla, which has more memory and higher single-card throughput.
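The "leverage multiple GPUs" case above boils down to splitting independent work across devices. Here is a minimal sketch of that pattern using the CUDA runtime API; the `scale` kernel is a hypothetical stand-in for whatever per-chunk work your problem does:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical workload: each GPU scales its own slice of the data.
__global__ void scale(float *x, int n, float a) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= a;
}

int main() {
    const int N = 1 << 20;
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);
    if (deviceCount == 0) return 1;

    int chunk = (N + deviceCount - 1) / deviceCount;
    float *dev[8] = {nullptr};  // assumes at most 8 GPUs for brevity

    // Kernel launches are asynchronous, so issuing one chunk per GPU
    // lets all cards compute in parallel.
    for (int d = 0; d < deviceCount; ++d) {
        cudaSetDevice(d);
        int n = (d == deviceCount - 1) ? N - d * chunk : chunk;
        cudaMalloc(&dev[d], n * sizeof(float));
        scale<<<(n + 255) / 256, 256>>>(dev[d], n, 2.0f);
    }
    // Wait for every GPU to finish, then clean up.
    for (int d = 0; d < deviceCount; ++d) {
        cudaSetDevice(d);
        cudaDeviceSynchronize();
        cudaFree(dev[d]);
    }
    return 0;
}
```

This only pays off when the chunks are genuinely independent; problems with heavy cross-chunk communication lose the advantage to PCIe transfer costs, which is why the single larger Tesla can win there.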
Lots of parameters to consider... good luck!