I have an M*N integer matrix, which I need to traverse and compute this for every element M[i][j]:
the integer that appears most often in the submatrix from (i-k, j-k) to (i+k, j+k).
So the result is a matrix in which each cell holds the dominant number around [i,j] in the original matrix.
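For clarity, this is the per-cell computation I have in mind (a naive Swift sketch; clamping the window at the matrix edges is just my assumption about boundary handling):

```swift
// Naive reference for one output cell: count occurrences in the
// (2k+1) x (2k+1) window around (i, j), clamped to the matrix bounds.
func dominantValue(_ m: [[Int]], i: Int, j: Int, k: Int) -> Int {
    var counts: [Int: Int] = [:]
    for r in max(0, i - k)...min(m.count - 1, i + k) {
        for c in max(0, j - k)...min(m[r].count - 1, j + k) {
            counts[m[r][c], default: 0] += 1
        }
    }
    // Pick the value with the highest count (ties broken arbitrarily).
    return counts.max(by: { $0.value < $1.value })!.key
}
```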
The matrix can be very large, and I need to do this operation in a tight loop, so I want to minimize the running time with parallel computing.
I know GPUs are good at matrix multiplication, but it looks like this cannot be reduced to a simple matrix multiplication (or can it?).
Is it possible to compute each cell in parallel on the GPU? And if it is, since I want to implement this on iOS, what programming interface should I use: Metal or OpenGL?
Yes, you can do this computation on the GPU: each output cell depends only on a read-only window of the input, so every cell can be computed independently by its own thread.
Metal is designed for both graphics and general-purpose computation, so you should be able to use it for your needs (here is an article introducing data-parallel programming with Metal and Swift: http://memkite.com/blog/2014/12/15/data-parallel-programming-with-metal-and-swift-for-iphoneipad-gpu/).
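As a rough illustration, the host-side dispatch in Swift could look something like the sketch below. The kernel name `dominantValue`, the buffer layout, and the `Int32` element type are placeholders you would adapt; you would still need to write the corresponding compute kernel in Metal Shading Language.

```swift
import Metal

// Sketch of host-side setup, assuming a compute kernel named
// "dominantValue" (hypothetical) exists in the app's default .metal library.
func runDominantFilter(input: [Int32], rows: Int, cols: Int, k: Int) throws -> [Int32] {
    guard let device = MTLCreateSystemDefaultDevice(),
          let library = device.makeDefaultLibrary(),
          let function = library.makeFunction(name: "dominantValue"),
          let queue = device.makeCommandQueue() else {
        fatalError("Metal is not available on this device")
    }
    let pipeline = try device.makeComputePipelineState(function: function)

    let byteCount = rows * cols * MemoryLayout<Int32>.stride
    let inBuffer = device.makeBuffer(bytes: input, length: byteCount)!
    let outBuffer = device.makeBuffer(length: byteCount)!
    var params = SIMD3<Int32>(Int32(rows), Int32(cols), Int32(k))

    let commandBuffer = queue.makeCommandBuffer()!
    let encoder = commandBuffer.makeComputeCommandEncoder()!
    encoder.setComputePipelineState(pipeline)
    encoder.setBuffer(inBuffer, offset: 0, index: 0)
    encoder.setBuffer(outBuffer, offset: 0, index: 1)
    encoder.setBytes(&params, length: MemoryLayout<SIMD3<Int32>>.stride, index: 2)

    // One thread per output cell; the kernel itself must bounds-check
    // indices at the matrix edges. Note dispatchThreads needs non-uniform
    // threadgroup support (A11 or newer); older devices would use
    // dispatchThreadgroups instead.
    let threadsPerGrid = MTLSize(width: cols, height: rows, depth: 1)
    let w = pipeline.threadExecutionWidth
    let h = pipeline.maxTotalThreadsPerThreadgroup / w
    let threadsPerGroup = MTLSize(width: w, height: h, depth: 1)
    encoder.dispatchThreads(threadsPerGrid, threadsPerThreadgroup: threadsPerGroup)
    encoder.endEncoding()
    commandBuffer.commit()
    commandBuffer.waitUntilCompleted()

    let outPointer = outBuffer.contents().bindMemory(to: Int32.self, capacity: rows * cols)
    return Array(UnsafeBufferPointer(start: outPointer, count: rows * cols))
}
```

The natural mapping is one GPU thread per output cell; if k is large, having each threadgroup load its tile of the input into threadgroup memory first can cut down on redundant global reads.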
Accelerate could also fit your needs if you stay on the CPU, although as far as I know it does not ship a sliding-window mode filter, so you would have to assemble one from its lower-level routines.
Hope this helps.