We can already get good speed with plain OpenGL, since it uses texture memory and many built-in graphics functions (blending, mipmapping, etc.).
So why do we need OpenCL (slow because of OpenCL buffers) interoperability with OpenGL? Is it just so we can combine rendering with computation, or are there real advantages such as performance?
I just want to know the main advantage of this. Are there any published papers showing a performance increase from using OpenGL interoperability with OpenCL, or any other proof of improvement in terms of speed and quality?
What is the advantage of using OpenCL/OpenGL interoperability? It is fast: you take advantage of GPU parallelism, and the data never leaves GPU memory. A sphere, for instance, can be represented as just four numbers (its center and radius).
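To make the "data never leaves GPU memory" point concrete, here is a minimal sketch of sharing a vertex buffer between the two APIs. It assumes an OpenGL context and an OpenCL context already created with GL sharing enabled (the CL_GL_CONTEXT_KHR path); share_vbo_with_opencl is a hypothetical helper name.

    /* Wrap an existing GL buffer as a CL memory object; no copy is made,
     * both APIs now reference the same GPU allocation. */
    #include <CL/cl.h>
    #include <CL/cl_gl.h>

    cl_mem share_vbo_with_opencl(cl_context ctx, cl_GLuint vbo)
    {
        cl_int err;
        cl_mem shared = clCreateFromGLBuffer(ctx, CL_MEM_READ_WRITE, vbo, &err);
        return (err == CL_SUCCESS) ? shared : NULL;
    }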
The main difference between OpenGL and OpenCL is that OpenGL is used for graphics programming while OpenCL is used for heterogeneous computing. OpenGL is used in video game design, simulation, and so on, and it improves performance by exploiting the GPU's parallel hardware.
Key difference between OpenGL vs OpenCL: OpenGL lets programs perform graphical operations, while OpenCL lets programs run computations across multiple processors. Applications: OpenGL is used to build UI animations, to handle embedded video, or to draw vector graphics.
The architecture of OpenGL is based on a client-server model. An application program written to use the OpenGL API is the "client" and runs on the CPU. The implementation of the OpenGL graphics engine (including the GLSL shader programs you will write) is the "server" and runs on the GPU.
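As a small illustration of that client-server split, the sketch below shows the client (C code on the CPU) handing GLSL source to the server side for compilation. It assumes a current GL 3.3+ context with function pointers already loaded (e.g., via GLAD or GLEW), and the shader itself is a trivial placeholder.

    /* The client ships shader source text to the OpenGL "server",
     * which compiles and runs it on the GPU. */
    static const char *vs_src =
        "#version 330 core\n"
        "layout(location = 0) in vec3 pos;\n"
        "void main() { gl_Position = vec4(pos, 1.0); }\n";

    GLuint compile_vertex_shader(void)
    {
        GLuint vs = glCreateShader(GL_VERTEX_SHADER);
        glShaderSource(vs, 1, &vs_src, NULL); /* source crosses to the server */
        glCompileShader(vs);                  /* compiled for the GPU */
        return vs;  /* check GL_COMPILE_STATUS here in real code */
    }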
OpenGL is just about real-time rasterized graphics. Because its scope is limited, it can be heavily optimized for that one task, and most GPU hardware is designed around it.
OpenCL is about general computing: folding proteins, weather prediction, high-frequency trading, simulating neurons, machine learning, SETI, signal processing, Bitcoin mining, and so on.
But there are plenty of crossover areas in between.
Firstly, a lot of that work has a visual component. Scientists might want to see and interact with the folded protein, for example, without having to copy all that data from GPU RAM into CPU memory, process it into a visual format, and then send it back to the GPU. With interop you can skip that round trip, as in the sketch below.
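A hedged sketch of that per-frame pattern, assuming queue, kernel, and the shared buffer from earlier were set up elsewhere and the kernel's arguments are already bound; simulate_then_draw is a hypothetical name:

    /* Update the shared buffer with OpenCL, then let OpenGL draw it.
     * The data stays in VRAM for the whole frame. */
    void simulate_then_draw(cl_command_queue queue, cl_kernel kernel,
                            cl_mem shared, size_t n_points)
    {
        glFinish();                            /* GL must stop using the buffer */
        clEnqueueAcquireGLObjects(queue, 1, &shared, 0, NULL, NULL);
        clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &n_points, NULL,
                               0, NULL, NULL); /* update the data in place */
        clEnqueueReleaseGLObjects(queue, 1, &shared, 0, NULL, NULL);
        clFinish(queue);                       /* hand the buffer back to GL */
        /* ...normal OpenGL draw calls using the same VBO go here... */
    }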
Games can use OpenCL too. Take something like Minecraft. If you were making Minecraft with modern OpenGL (rather than OpenGL 1.3, which is what Minecraft actually uses), you would want to upload the raw map data to the GPU, use a single pass with a geometry/tessellation shader to turn that data into cubes and other shapes, and then use transform feedback to capture the result (you only have to run it once); a rough sketch follows.
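One possible shape of that one-shot pass, assuming expand_program is a hypothetical program whose geometry shader emits cube geometry for each input point and declares an out_position varying:

    /* Capture expanded geometry into cube_vbo without drawing anything. */
    void bake_cubes(GLuint expand_program, GLuint cube_vbo, GLsizei n_voxels)
    {
        const char *varyings[] = { "out_position" };
        glTransformFeedbackVaryings(expand_program, 1, varyings,
                                    GL_INTERLEAVED_ATTRIBS);
        glLinkProgram(expand_program);   /* relink after declaring varyings */
        glUseProgram(expand_program);

        glEnable(GL_RASTERIZER_DISCARD); /* skip rasterization entirely */
        glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, cube_vbo);
        glBeginTransformFeedback(GL_TRIANGLES);
        glDrawArrays(GL_POINTS, 0, n_voxels); /* one pass over the raw map */
        glEndTransformFeedback();
        glDisable(GL_RASTERIZER_DISCARD);
    }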
But Minecraft also has all kinds of rules about how to update the map: calculate lighting, grow trees, make water flow, explode TNT, and so on. That is stuff you would not really be able to do with shaders (or at least not in a single pass). You can do it on the CPU (which Minecraft does), but if you have seen those videos of people setting off thousands of TNT blocks at once, you have seen it lag massively, and flowing water can cause lag for everyone on a large server. You could do it with OpenCL and then hand the results to OpenGL, and if the two share memory it is more efficient. One such rule might look like the kernel below.
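For illustration only, here is a hypothetical OpenCL kernel for a crude water-spreading rule over a 2D map; the cell encoding (0 = empty, 1 = water) is invented for the example:

    /* One double-buffered update step: an empty cell becomes water
     * if any of its four neighbours holds water. */
    __kernel void flow_step(__global const uchar *src,
                            __global uchar *dst,
                            const int width, const int height)
    {
        int x = get_global_id(0);
        int y = get_global_id(1);
        if (x >= width || y >= height) return;

        int i = y * width + x;
        uchar c = src[i];
        if (c == 0 &&
            ((x > 0          && src[i - 1]     == 1) ||
             (x < width - 1  && src[i + 1]     == 1) ||
             (y > 0          && src[i - width] == 1) ||
             (y < height - 1 && src[i + width] == 1)))
            c = 1;
        dst[i] = c;
    }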
You could also use OpenCL to fake OpenGL features you don't have. For example, if you had OpenCL but no geometry/tessellation shaders, you could implement them in OpenCL (though in practice this is not that useful at the moment, since most systems with outdated OpenGL implementations also lack OpenCL support). And are there stages like tessellation shaders that would be useful but simply don't exist yet?
There are also plenty of visual effects that you can't accomplish within the OpenGL pipeline, at least not efficiently: real-time ray tracing, particle simulations, procedural/fractal terrain generation, and so on. A toy example of the first follows.
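To connect this back to the "sphere as four numbers" remark: a toy OpenCL kernel that fires one orthographic ray per pixel at a float4 sphere (xyz = center, w = radius) and writes a flat color into a GL-shared image. The camera model and colors are invented for the sketch:

    __kernel void trace_sphere(write_only image2d_t out,
                               const float4 sphere) /* xyz = center, w = radius */
    {
        int2 p   = (int2)(get_global_id(0), get_global_id(1));
        int2 dim = get_image_dim(out);

        /* Orthographic rays marching along -z through the pixel grid. */
        float3 o = (float3)((float)p.x / dim.x - 0.5f,
                            (float)p.y / dim.y - 0.5f, 1.0f);
        float3 d = (float3)(0.0f, 0.0f, -1.0f);

        /* Ray-sphere test: discriminant of |o + t*d - center|^2 = r^2. */
        float3 oc   = o - sphere.xyz;
        float  b    = dot(oc, d);
        float  c    = dot(oc, oc) - sphere.w * sphere.w;
        float  disc = b * b - c;        /* < 0 means the ray misses */

        float4 col = disc >= 0.0f ? (float4)(1.0f, 0.3f, 0.2f, 1.0f)
                                  : (float4)(0.0f, 0.0f, 0.0f, 1.0f);
        write_imagef(out, p, col);
    }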
There are also other things that help games and real-time graphics more generally. Physics simulations can be moved off the CPU, for instance (see the sketch below). It would also be great if the rendering pipeline/scenegraph itself could be moved to the GPU; that would significantly reduce the number of calls between the CPU and the GPU and make things far more parallel. There are plenty more candidates: raycasting, AI, pathfinding, and so on. Basically, anything you calculate only in order to see it.
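As one sketch of moving physics off the CPU: a hypothetical OpenCL kernel doing a naive Euler step on particle positions that live in a GL-shared vertex buffer, so the results can be drawn with no copy; the constants are arbitrary:

    __kernel void integrate(__global float4 *pos, /* shared with a GL VBO */
                            __global float4 *vel,
                            const float dt)
    {
        int i = get_global_id(0);
        vel[i].y -= 9.81f * dt;        /* gravity */
        pos[i]   += vel[i] * dt;       /* Euler step */
        if (pos[i].y < 0.0f) {         /* bounce off the floor plane */
            pos[i].y = 0.0f;
            vel[i].y = -0.5f * vel[i].y;
        }
    }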
Finally, I'm sure there is plenty of other stuff that no one has even thought of yet but that is now possible. We are increasingly pushing toward parallel algorithms as the hardware heads in that direction.