I'm writing an application that requires additions of 5000-element float vectors many times a second. Is it possible to make the GPU perform the calculations, and how would that be done? I need it to run on both Windows and Linux (and later a Raspberry Pi), so CUDA is out of the question as I don't have an Nvidia graphics card.
You can't directly talk to Nvidia GPUs from Go; you'd need to use cgo to call a C library from Go. See slide #8 in this presentation for one example (also see the full talk).
There are some Go packages that wrap the cgo part I mentioned above into a Go library; mumax is one such package.
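To give a sense of what the cgo approach looks like, here's a minimal sketch. The C function `add_vectors` is a hypothetical name and is just a plain CPU loop standing in for whatever GPU library (an OpenCL wrapper, for example) you would actually link against; the point is only to show how Go hands float slices across the cgo boundary.

```go
package main

/*
// Stand-in for a GPU library call: a real wrapper would copy the
// buffers to the device and launch a kernel instead of looping here.
void add_vectors(const float *a, const float *b, float *out, int n) {
    for (int i = 0; i < n; i++) {
        out[i] = a[i] + b[i];
    }
}
*/
import "C"

import (
	"fmt"
	"unsafe"
)

func main() {
	const n = 5000
	a := make([]float32, n)
	b := make([]float32, n)
	out := make([]float32, n)
	for i := range a {
		a[i] = float32(i)
		b[i] = float32(2 * i)
	}

	// Pass the Go slices to the C function as raw float pointers.
	C.add_vectors(
		(*C.float)(unsafe.Pointer(&a[0])),
		(*C.float)(unsafe.Pointer(&b[0])),
		(*C.float)(unsafe.Pointer(&out[0])),
		C.int(n),
	)

	fmt.Println(out[:5]) // [0 3 6 9 12]
}
```

Whatever GPU backend you choose, the Go side stays roughly this shape: allocate slices, pass pointers and a length through cgo, and read the result back from the output slice.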