Is it possible to use a shader to calculate some values and then return them to the CPU for further use?
For example, could I send a mesh down to the GPU along with parameters describing how it should be modified (changing the positions of vertices), and then take back the resulting mesh? That seems rather impossible to me, because I haven't seen any variable for communication from shaders back to the CPU. I'm using GLSL, so there are just uniforms, attributes and varyings. Should I use an attribute or a uniform, and would they still be valid after rendering? Can I change the values of those variables and read them back on the CPU? There are methods for mapping data on the GPU, but would those be changed and still valid?
This is the way I'm thinking about it, though there could be another approach that is unknown to me. I would be glad if someone could explain this to me, as I've just read some books about GLSL and now I would like to program more complex shaders, but I wouldn't like to rely on methods that are impossible at this time.
Thanks
Great question! Welcome to the brave new world of General-Purpose Computing on Graphics Processing Units (GPGPU).
What you want to do is possible with pixel shaders. You load a texture (that is, your data), apply a shader that performs the desired computation, and then use render-to-texture to capture the result, which you can read back from the GPU into main memory (RAM).
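To make that concrete, here is a minimal sketch in C of that render-to-texture pattern. It assumes you already have a valid OpenGL context, an extension loader such as GLEW, an FBO-capable driver (OpenGL 3.0+ or the ARB framebuffer/float-texture extensions), and an already compiled fragment-shader program; the names `computeProgram`, `WIDTH` and `HEIGHT` are placeholders of mine, and the quad-drawing step is elided:

```c
/* Sketch: GPGPU via render-to-texture. Assumes an existing GL context,
 * GLEW (or another loader), and a fragment shader "computeProgram" that
 * writes its results to the output color. WIDTH/HEIGHT are placeholders. */
#include <GL/glew.h>

#define WIDTH  256
#define HEIGHT 256

float results[WIDTH * HEIGHT * 4];   /* destination in main memory */

void run_gpgpu_pass(GLuint computeProgram)
{
    /* 1. Create a floating-point texture that will hold the output. */
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, WIDTH, HEIGHT, 0,
                 GL_RGBA, GL_FLOAT, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    /* 2. Attach it to a framebuffer object so the shader renders into it. */
    GLuint fbo;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0);

    /* 3. Run the "computation": draw a full-screen quad with the shader. */
    glViewport(0, 0, WIDTH, HEIGHT);
    glUseProgram(computeProgram);
    /* ... bind input textures / set uniforms and draw the quad here ... */

    /* 4. Read the rendered texture back into CPU memory. */
    glReadPixels(0, 0, WIDTH, HEIGHT, GL_RGBA, GL_FLOAT, results);

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glDeleteFramebuffers(1, &fbo);
    glDeleteTextures(1, &tex);
}
```

The key point is that the shader's output lands in a texture attached to the FBO, and glReadPixels is what finally brings the data back to the CPU.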
There are tools created for exactly this purpose, most notably OpenCL and CUDA. They greatly aid GPGPU, so that this sort of programming starts to look almost like ordinary CPU programming.
They do not require any 3D graphics experience (although it is still helpful :)). You don't need to do tricks with textures; you just load arrays into GPU memory. The processing algorithms are written in a slightly modified version of C, and the latest version of CUDA supports C++.
I recommend starting with CUDA, since it is the most mature of the two: http://www.nvidia.com/object/cuda_home_new.html
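As a taste of how this looks in practice, here is a minimal sketch of my own in CUDA's C dialect that does roughly what the question asks: it uploads an array of vertex positions, displaces them in a kernel, and copies the modified array back to the CPU. Error checking is omitted for brevity:

```cuda
// Sketch: modify vertex positions on the GPU and read them back.
// Compile with nvcc (e.g. "nvcc displace.cu -o displace").
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

// One thread per vertex: add "offset" to the y coordinate.
__global__ void displaceVertices(float3 *verts, int count, float offset)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < count)
        verts[i].y += offset;
}

int main(void)
{
    const int count = 1024;
    const size_t bytes = count * sizeof(float3);

    // Build a dummy mesh on the CPU.
    float3 *hostVerts = (float3 *)malloc(bytes);
    for (int i = 0; i < count; ++i)
        hostVerts[i] = make_float3((float)i, 0.0f, 0.0f);

    // Upload it to GPU memory.
    float3 *devVerts;
    cudaMalloc((void **)&devVerts, bytes);
    cudaMemcpy(devVerts, hostVerts, bytes, cudaMemcpyHostToDevice);

    // Run the computation: one thread per vertex.
    int threads = 256;
    int blocks  = (count + threads - 1) / threads;
    displaceVertices<<<blocks, threads>>>(devVerts, count, 1.5f);

    // Copy the modified vertices back for further use on the CPU.
    cudaMemcpy(hostVerts, devVerts, bytes, cudaMemcpyDeviceToHost);
    printf("vertex 0 after displacement: y = %f\n", hostVerts[0].y);

    cudaFree(devVerts);
    free(hostVerts);
    return 0;
}
```

Launching one thread per vertex is exactly the kind of data-parallel work GPUs are built for, and no textures or render targets are involved at all.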
This is easily possible on modern graphics cards using OpenCL, Microsoft DirectCompute (part of DirectX 11) or CUDA. DirectCompute reuses the normal shader language (HLSL), while OpenCL and CUDA come with C-like kernel languages of their own rather than GLSL or HLSL. The first two work on both Nvidia and ATI graphics cards; CUDA is Nvidia-exclusive.
These are special libraries for doing general computation on the graphics card. I wouldn't use a normal 3D API for this, although it is possible with some workarounds.