I've figured out how to translate the higher-order functions map and filter into OpenGL code by using transform feedback (or by rendering to a texture). I'd also like to be able to use fold, but I have no idea how this would work. Let's assume that the operation is associative, so I don't care whether it's a left fold, a right fold, or some nondeterministic mix.
Examples of fold operations would be things like summing all the values in a buffer or finding their maximum.
Or is this not feasible without OpenCL or CUDA?
If your operation is associative, you can reduce/fold your data by repeatedly rendering to smaller and smaller textures. In each pass you combine a number of texels from the previous pass; you read the previous pass's data by binding its texture in your fragment shader.
For example, if you wanted to compute the average of a 128x128 image, you could first render to a 64x64 texture where each target texel averages a 2x2 block of four texels from the input texture. Repeating this gives 32x32, 16x16, and so on, until a single 1x1 texel holds the overall average.
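As a rough sketch, one reduction pass could use a fragment shader like the one below (GLSL 3.30; the uniform name `prevPass` and the averaging operation are just illustrations, and the target framebuffer is assumed to be half the size of the source region in each dimension):

```glsl
#version 330 core

// Texture holding the result of the previous (larger) pass.
uniform sampler2D prevPass;

out vec4 fragColor;

void main()
{
    // Each output texel corresponds to a 2x2 block of the input.
    ivec2 dst = ivec2(gl_FragCoord.xy);
    ivec2 src = dst * 2;

    // Fetch the four input texels and combine them with the
    // (associative) operation -- here an average.
    vec4 a = texelFetch(prevPass, src,               0);
    vec4 b = texelFetch(prevPass, src + ivec2(1, 0), 0);
    vec4 c = texelFetch(prevPass, src + ivec2(0, 1), 0);
    vec4 d = texelFetch(prevPass, src + ivec2(1, 1), 0);

    fragColor = (a + b + c + d) * 0.25;
}
```

Swapping the averaging expression for something like `max(max(a, b), max(c, d))`, or any other associative combine, gives you a different fold with the same structure.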
A typical example of this kind of multi-pass, render-to-texture processing is blurring an image.
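On the host side you just ping-pong between two textures, halving the viewport each pass until you reach 1x1. A minimal sketch of that loop follows; it assumes `fbo[i]` has `tex[i]` attached as its color buffer, `tex[0]` starts out holding the 128x128 source data, `tex[1]` is a same-sized scratch texture, and `reduceProgram` / `fullscreenQuadVao` are the reduction shader and a full-screen quad set up elsewhere (all of these names are placeholders, not a specific API):

```c
/* Sketch: repeatedly render into a half-sized lower-left region of the
   other texture until only a single texel is written.  Note that the
   lower-left corner of tex[0] gets overwritten in later passes. */
int src = 0, dst = 1;
for (int size = 128 / 2; size >= 1; size /= 2) {
    glBindFramebuffer(GL_FRAMEBUFFER, fbo[dst]);   /* render into the smaller target */
    glViewport(0, 0, size, size);

    glUseProgram(reduceProgram);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, tex[src]);        /* previous pass as input */

    glBindVertexArray(fullscreenQuadVao);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);         /* full-screen quad */

    /* Swap roles: this pass's output becomes the next pass's input. */
    int tmp = src; src = dst; dst = tmp;
}
/* tex[src] now contains the folded result in its (0,0) texel; read it back
   with glReadPixels or glGetTexImage if the CPU needs the value. */
```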