OpenGL Dynamic Object Motion Blur

I've been following the GPU Gems 3 tutorial on blurring based on camera movement. However, I want to implement a blur based on object movement too. The solution is presented in the article (see the quote below), but I'm curious how exactly to implement it.

At the moment I'm multiplying the object's matrix by the view-projection, and then separately by the previous frame's view-projection, and passing both results into the pixel shader to calculate the velocity there, instead of passing just the view-projection matrices.

If that is in fact the correct method, why am I not simply able to pass in the model-view-projection? I would have assumed they would be the same value.

GPU Gems 3 Motion Blur

To generate a velocity texture for rigid dynamic objects, transform the object by using the current frame's view-projection matrix and the last frame's view-projection matrix, and then compute the difference in viewport positions the same way as for the post-processing pass. This velocity should be computed per-pixel by passing both transformed positions into the pixel shader and computing the velocity there.

asked by Aequitas, Aug 17 '12

1 Answer

Check out my research I did a few months ago on this topic: https://slu-files.s3.us-east-1.amazonaws.com/Fragment_shader_dynamic_blur.pdf

[Image: standard rendering of tower smash] (source: stevenlu.net)

[Image: blur rendering of tower smash] (source: stevenlu.net)

Sadly I did not implement textured objects when producing this material, but do use your imagination. It primarily addresses how to implement this effect in 2D, and in cases where objects do not overlap. I am working on a game engine, so when that finally sees the light of day in the form of a game, you can be sure that I'll come back and post breadcrumbs here.

There is not really a good way to use a fragment shader to "sweep" samples in order to generate "accurate" blur: while the effect approaches pixel-perfection as the sample count is cranked up, the geometry that must be generated to cover the swept area has to be assembled manually using some "ugly" techniques.

In full 3D it's a rather difficult problem to know which pixels a dynamic object will sweep over during the course of a frame. Even with static geometry and a moving camera, the solution proposed by the GPU Gems article is incorrect when moving past things quickly, because it does not blend across the area swept out by the moving content...

That said, if this approximation, which neglects the sweep, is sufficient (and it may be), then the way to extend it to dynamic objects is to take their motion into account. You'll need to work out the details, of course, but look at lines 2 and 5 in the second code block of the article you linked: they are the current and previous screen-space "positions" of the pixel. You simply have to pass in the matrices that let you compute the previous position of each pixel, taking the dynamic motion of your object into account.
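
For concreteness, here is a minimal GLSL sketch of that per-pixel computation; the varying names (vCurrClipPos, vPrevClipPos) are my own, not the article's:

    // Fragment shader: write per-pixel velocity for a rigid dynamic object.
    // Assumes the vertex shader passed down both clip-space positions;
    // interpolation across the triangle makes the result per-pixel.
    #version 330 core

    in vec4 vCurrClipPos;  // position under the current  frame's transforms
    in vec4 vPrevClipPos;  // position under the previous frame's transforms

    out vec2 outVelocity;  // written to a velocity texture (e.g. GL_RG16F)

    void main()
    {
        // Perspective divide to get NDC, then take the same screen-space
        // difference the article's post-processing pass uses.
        vec2 currNdc = vCurrClipPos.xy / vCurrClipPos.w;
        vec2 prevNdc = vPrevClipPos.xy / vPrevClipPos.w;
        outVelocity  = (currNdc - prevNdc) * 0.5;  // scale to taste for the blur pass
    }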

It shouldn't be too bad: in the pass where you render your dynamic object, you send in an extra matrix that represents its motion over the last frame.
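
As a sketch of what that could look like (uModel, uPrevModel and the other uniform names are my own; uPrevModel is simply the model matrix you cached for this object last frame):

    // Vertex shader: output both the current and the previous clip-space
    // position of each vertex, so the fragment shader above can take the
    // difference. The object's own motion enters through uPrevModel.
    #version 330 core

    layout(location = 0) in vec3 aPosition;

    uniform mat4 uModel;         // this frame's model matrix
    uniform mat4 uPrevModel;     // last frame's model matrix (the "extra matrix")
    uniform mat4 uViewProj;      // this frame's view-projection
    uniform mat4 uPrevViewProj;  // last frame's view-projection

    out vec4 vCurrClipPos;
    out vec4 vPrevClipPos;

    void main()
    {
        vCurrClipPos = uViewProj     * uModel     * vec4(aPosition, 1.0);
        vPrevClipPos = uPrevViewProj * uPrevModel * vec4(aPosition, 1.0);
        gl_Position  = vCurrClipPos;
    }

Note that collapsing everything into a single model-view-projection would throw away last frame's object transform, which is exactly the part that encodes the object's own motion.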

Update: I found that this paper describes an elegant and performant approach that provides reasonably high-quality, physically correct blurring for a 3D pipeline. It will be hard to do much better than this within the constraint, imposed for performance reasons, of rendering the full scene no more than once.

I noticed that in some of the examples the quality of the velocity buffer could be better; for example, a rotating wheel should show some curvature in velocity space. I believe that if the velocities can be set properly (which may require custom fragment shaders to render the velocity out...), the results will look intuitively correct, like the spinning cube seen above from my 2D exploration into dynamic motion blurring.

answered by Steven Lu, Oct 27 '22