I have integrated bloom HDR rendering using OpenGL and GLSL... At least I think! I'm not really sure about the result.
I followed a tutorial from the Intel website:
https://software.intel.com/en-us/articles/compute-shader-hdr-and-bloom
As for the Gaussian blur effect, I scrupulously followed all the advice concerning performance from the following website:
https://software.intel.com/en-us/blogs/2014/07/15/an-investigation-of-fast-real-time-gpu-based-image-blur-algorithms
According to the first website:
"The bright pass output is then downscaled by half 4 times. Each of the downscaled bright pass outputs are blurred with a separable Gaussian filter and then added to the next higher resolution bright pass output. The final output is a ¼ size bloom which is up sampled and added to the HDR output before tone mapping."
Here's the bloom pipeline (the pictures were taken with the NVIDIA Nsight debugger).
The resolution of the window in my test is 1024x720 (for the needs of this algorithm, this resolution is downscaled by half 4 times).
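To double-check the chain of resolutions, here is a quick C++ sketch of the repeated halving (plain integer division, nothing OpenGL-specific; `mipDim` is just a helper name I made up):

```cpp
#include <cassert>

// Half the resolution `level` times, as the Intel article describes.
// Integer division matches the texture sizes for these dimensions:
// 1024x720 -> 512x360 -> 256x180 -> 128x90 -> 64x45
int mipDim(int base, int level) {
    for (int i = 0; i < level; ++i) base /= 2;
    return base;
}
```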
Step 1:
Lighting pass (blending of material pass + shadow mask pass + skybox pass):
Step 2:
Extracting the highlight information into a bright pass (to be precise, 4 mipmap textures are generated ("The bright pass output is then downscaled by half 4 times" -> 1/2, 1/4, 1/8 and finally 1/16)):
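For clarity, here is a CPU sketch of the kind of bright-pass extraction described; the threshold value of 1.0 and the Rec.709 luminance weights are placeholders of mine, not necessarily what the Intel sample (or my shader) actually uses:

```cpp
#include <algorithm>
#include <cassert>

struct RGB { float r, g, b; };

// Hypothetical bright pass: keep only the portion of the color whose
// luminance exceeds `threshold`; anything below it goes to black.
RGB brightPass(RGB c, float threshold) {
    float luma = 0.2126f * c.r + 0.7152f * c.g + 0.0722f * c.b; // Rec.709
    float scale = std::max(luma - threshold, 0.0f) / std::max(luma, 1e-5f);
    return { c.r * scale, c.g * scale, c.b * scale };
}
```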
Step 3:
"Each of the downscaled bright pass outputs are blurred with a separable Gaussian filter and then added to the next higher resolution bright pass output."
I want to point out that bilinear filtering is enabled (GL_LINEAR); the pixelation in the pictures is the result of resizing the textures to fit the Nsight debugger window (1024x720).
a) Resolution 1/16x1/16 (64x45)
"1/16x1/16 blurred output"
b) Resolution 1/8x1/8 (128x90)
"1/8x1/8 downscaled bright pass, combined with 1/16x1/16 blurred output"
"1/8x1/8 blurred output"
c) Resolution 1/4x1/4 (256x180)
"1/4x1/4 downscaled bright pass, combined with 1/8x1/8 blurred output"
" 1/4x1/4 blurred output"
d) Resolution 1/2x1/2 (512x360)
"1/2x1/2 downscaled bright pass, combined with 1/4x1/4 blurred output"
"1/2x1/2 blurred output"
To target the desired mipmap level I resize the FBO (but maybe it would be smarter to use separate FBOs, each sized once at initialization, rather than resizing the same one several times. What do you think of this idea?).
Step 4:
Tone mapping render pass:
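For completeness, here is a minimal sketch of the kind of operator used at this stage; simple Reinhard plus a 2.2 gamma are placeholder choices of mine, not necessarily what my shader does:

```cpp
#include <cassert>
#include <cmath>

struct RGB { float r, g, b; };

// Simple Reinhard operator: maps HDR [0, inf) into [0, 1).
float reinhard(float c) { return c / (1.0f + c); }

// Tone map then apply gamma correction (gamma 2.2 assumed here).
RGB toneMap(RGB hdr) {
    const float g = 1.0f / 2.2f;
    return { std::pow(reinhard(hdr.r), g),
             std::pow(reinhard(hdr.g), g),
             std::pow(reinhard(hdr.b), g) };
}
```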
Up to this point, I would like an external opinion on my work. Is it correct or not? I'm not really sure about the result, especially about step 3 (the downscaling and blurring part).
I think the blurring effect is not very pronounced! Yet I use a 35x35 convolution kernel (that should be sufficient, I think :)).
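For scale, here is how far a 35-tap blur should reach in final-resolution pixels at each mip level, assuming the half-kernel radius simply scales with the upsampling factor (the function name is mine):

```cpp
#include <cassert>

// Radius, in full-resolution pixels, that a `kernelSize`-tap blur
// covers once a 1/`downscale` mip is upsampled back to full size.
// e.g. 35 taps: radius 17 texels -> 34 px at 1/2, 272 px at 1/16.
int effectiveRadius(int kernelSize, int downscale) {
    return (kernelSize / 2) * downscale;
}
```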
But I'm really intrigued by an article in a PDF. Here's its presentation of the bloom pipeline (much the same as the one I applied).
Link:
https://transporter-game.googlecode.com/files/RealtimeHDRImageBasedLighting.pdf
As you can see in the picture, the blur bleeding effect is much stronger than mine! Do you think the author uses several convolution kernels (at higher resolutions)?
The first thing I don't understand is how the Gaussian blur algorithm makes colors other than white (grayscale values) appear in the third picture. I looked very closely (at high zoom) at the bright picture (the second one) and all the pixels seem to be white or close to white (grayscale). One thing is sure: there are no blue or orange pixels in the bright texture. So how can we explain such a transition from picture 2 to picture 3? It's very strange to me.
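One possible explanation I can think of: since the bright texture is RGBA16F, a pixel can store a tinted value far above 1.0 that previews as pure white once the debugger clamps it for display; only after the blur attenuates it does the tint become visible. A small sketch of that idea (the color values are made up):

```cpp
#include <algorithm>
#include <cassert>

struct RGB { float r, g, b; };

// What a debugger preview shows: HDR values clamped per channel to [0,1].
RGB displayClamp(RGB c) {
    return { std::min(c.r, 1.0f), std::min(c.g, 1.0f), std::min(c.b, 1.0f) };
}

// A blurred neighbour only receives a fraction `k` of the pixel's energy.
RGB attenuate(RGB c, float k) { return { c.r * k, c.g * k, c.b * k }; }
```

A hypothetical warm highlight (3.0, 2.4, 1.5) clamps to (1, 1, 1) in the preview, i.e. pure white; at 30% contribution it becomes (0.9, 0.72, 0.45), a visibly orange halo.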
The second thing I don't understand is the large difference in blur bleeding between pictures 3, 4, 5 and 6! In my presentation I use a 35x35 convolution kernel, and my final result is close to the third picture here.
How can you explain such a difference?
PS: Note that I use the GL_HALF_FLOAT data type and the GL_RGBA16F internal format to initialize the bloom render pass texture (all the other render passes use GL_RGBA and GL_FLOAT).
Is something wrong with my program?
Thank you very much for your help!
Your blurred small-res textures don't seem blurred enough. I think there is a problem somewhere with the width of the filter (not the number of samples, but the distance between samples) or with the framebuffer size.
Let's say you have a 150x150 original FBO, a 15x15 downscaled version for bloom, and a 15x15 blur filter.
Blurring the high-res version would affect a 7px stroke around the bright parts. But while blurring the low-res image, the width of the kernel would practically cover the entire image: at low res, a 7px stroke spans the whole image area. So every pixel of the blurred low-res version would contribute something to the final composed image. In other words, the high-res blurred image contributes its blur only in a 7px stroke around the bright parts, while the low-res blurred image makes a significant difference over the entire image.
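To make the point concrete: the offset between adjacent blur taps in UV space must be derived from the size of the texture actually being blurred, not from the window size. A tiny sketch (the function name is mine):

```cpp
#include <cassert>

// UV-space offset between adjacent blur taps. Dividing by the wrong
// width (e.g. the 1024px window while blurring a 64px-wide mip) makes
// every tap land 16x too close, and the blur collapses.
float texelStep(int textureWidth) { return 1.0f / float(textureWidth); }
```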
Your low-res images just don't seem well blurred, because their contribution still remains within a 35/2 px stroke around the bright parts, which is wrong.
I hope I managed to explain what is wrong. As for what to change exactly: probably the viewport size while blurring the low-res images, but I simply can't be 100% sure.