I'm trying to make my 3D object have blurred edges using GLSL vertex & fragment shaders. I figured out that I need to use a Gaussian blur for this.
Currently it's just a TGA texture with a semi-transparent fill using blendFunc blend and rgbGen identity. The texture is also not affected by light (surfaceparm nolightmap). And, yes, it's a Quake 3 BSP in the Irrlicht engine. :)
Here's an image of how I'm trying to make it look (this is actually not a light, just a brush to simulate light rays):
I've tried applying numerous shaders, but none of them work. Any ideas on where I can find something similar implemented? Or maybe there's another way to do this?
P.S. Maybe there is some other smart method to draw light rays?
Some ideas that come to mind...
I don't have any experience with the engine, but one way might be to compute the silhouette of the object and extrude it, applying a gradient to the extruded part. You could leave the interpolation linear, interpolated from each vertex, or use a fragment shader for a more Gaussian-like falloff. The closest thing to extruding I've done is conservative rasterization, which is a little similar. Silhouettes are needed for stencil shadows/shadow volumes, although here I'm thinking of extruding the silhouette from the camera's point of view, not the light's.
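For the falloff itself, something along these lines could be the fragment shader on the extruded fin. This is a minimal sketch: the v_fade varying, u_glowColour uniform and sigma value are all my assumptions, with v_fade assumed to be written by the vertex shader as 0 at the original silhouette and 1 at the outer edge of the extrusion:

```glsl
// Hypothetical fragment shader for the extruded silhouette geometry.
// v_fade is assumed to be 0.0 at the original silhouette edge and
// 1.0 at the outer edge of the extrusion.
varying float v_fade;
uniform vec4 u_glowColour;   // assumed uniform holding the glow colour

void main()
{
    // Gaussian-like falloff rather than a plain linear gradient.
    float sigma = 0.4;       // width of the falloff, tweak to taste
    float g = exp(-(v_fade * v_fade) / (2.0 * sigma * sigma));
    gl_FragColor = vec4(u_glowColour.rgb, u_glowColour.a * g);
}
```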
Another way is to render to a texture first, blur it, and then add it back into the scene. Handling depth testing might be tricky here; perhaps composite it back into the scene using a billboard.
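For the blur itself, a standard separable Gaussian works: blur the offscreen texture horizontally, then vertically. A minimal sketch of one pass over a full-screen quad, with made-up uniform/varying names:

```glsl
// One pass of a separable Gaussian blur over the offscreen texture.
// Run twice: once with u_direction = vec2(1,0)/width, then vec2(0,1)/height.
uniform sampler2D u_texture;
uniform vec2 u_direction;    // one texel step along the blur axis
varying vec2 v_uv;

void main()
{
    // 5-tap binomial approximation of a Gaussian (weights sum to 1).
    float w[3];
    w[0] = 0.375; w[1] = 0.25; w[2] = 0.0625;
    vec4 sum = texture2D(u_texture, v_uv) * w[0];
    for (int i = 1; i < 3; ++i)
    {
        sum += texture2D(u_texture, v_uv + u_direction * float(i)) * w[i];
        sum += texture2D(u_texture, v_uv - u_direction * float(i)) * w[i];
    }
    gl_FragColor = sum;
}
```

Running both passes at half resolution is a common way to make this cheaper and blurrier at the same time.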
A very cheap way might be to just use a pre-blurred texture billboard, or maybe a few layers of criss-crossing fixed geometry with a blurry texture and additive blending. For additive blending, use a bright light colour that fades to black at the edges; alpha is not necessary. Something like this?
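If you'd rather generate the falloff in a shader than paint a pre-blurred texture, a sketch like this would do (names are assumptions). Because the blending is additive (GL_ONE, GL_ONE), black contributes nothing, which is why no alpha channel is needed:

```glsl
// Radial fade-to-black for an additively blended billboard quad.
uniform vec3 u_lightColour;  // assumed uniform
varying vec2 v_uv;           // quad UVs in [0,1]

void main()
{
    float d = distance(v_uv, vec2(0.5));       // distance from billboard centre
    float fade = clamp(1.0 - d * 2.0, 0.0, 1.0);
    fade *= fade;                              // softer, roughly quadratic falloff
    gl_FragColor = vec4(u_lightColour * fade, 1.0);
}
```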
I just found this approach too: static geometry with additive blending again, but with the softer edges you're after. The normal of the cone geometry and the distance from the tip are used to guess the thickness. The depth buffer is also used to bound the thickness in the event that objects are inside the cone.
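A rough GLSL sketch of that idea, leaving out the depth-buffer bound for brevity; every varying/uniform name here is an assumption:

```glsl
// Hedged sketch of the soft cone: brightness guessed from how directly the
// cone surface faces the viewer (the beam is thickest through the middle)
// and faded out with distance from the tip where the light originates.
varying vec3 v_normal;        // eye-space cone surface normal
varying vec3 v_viewDir;       // eye-space surface-to-camera direction
varying float v_tipDistance;  // assumed 0.0 at the cone tip, 1.0 at the base
uniform vec3 u_lightColour;

void main()
{
    float facing = abs(dot(normalize(v_normal), normalize(v_viewDir)));
    float thickness = facing * (1.0 - v_tipDistance);
    gl_FragColor = vec4(u_lightColour * thickness, 1.0);
}
```

The depth-buffer part would sample the scene depth and shrink the thickness where geometry cuts into the cone, much like soft particles.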
Moving towards a more physically based lighting model, you could draw some bounding geometry and, for each pixel, work out the distance the viewing ray travels through the light's cone with a ray-cone intersection test, then add light based on that distance. A simple addition scaled by distance would probably suffice without calculating multiple-scattering and absorption approximations. This is quite close to deferred shading, spotlights in particular.
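The intersection test reduces to a quadratic. Here's a sketch of a distance-through-cone helper, assuming an infinite cone defined by apex, unit axis and half-angle; a robust version would also reject the mirror cone behind the apex and clamp against the cone's far cap and the scene depth:

```glsl
// Distance a view ray travels through a cone, for scaling the added light.
// ro/rd: ray origin and normalized direction; apex/axis: cone apex and
// normalized axis; cosTheta: cosine of the cone's half-angle.
float rayConeDistance(vec3 ro, vec3 rd, vec3 apex, vec3 axis, float cosTheta)
{
    vec3 co = ro - apex;
    float cos2 = cosTheta * cosTheta;
    float dv = dot(rd, axis);
    float cv = dot(co, axis);
    // Quadratic a*t^2 + 2*b*t + c = 0 derived from
    // dot(p - apex, axis)^2 = cos^2(theta) * |p - apex|^2
    float a = dv * dv - cos2;
    float b = dv * cv - cos2 * dot(rd, co);
    float c = cv * cv - cos2 * dot(co, co);
    float disc = b * b - a * c;
    if (disc < 0.0) return 0.0;               // ray misses the cone entirely
    float s = sqrt(disc);
    float tNear = min((-b - s) / a, (-b + s) / a);
    float tFar  = max((-b - s) / a, (-b + s) / a);
    // Only count the part of the path in front of the camera.
    return max(tFar - max(tNear, 0.0), 0.0);
}
```

The fragment colour is then just something like u_lightColour * density * rayConeDistance(...), added into the framebuffer.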
Incidentally, if you want to work out per-pixel distance through a model, I remember reading a nice way of doing it, I think in GPU Gems or one of NVIDIA's demo papers: simply render the eye-space depth of all back faces with additive blending to a texture, then use subtractive blending with front faces only. The texture will then contain the per-pixel distance through the model.
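The shader side of that trick is trivial; the work is in the blend state. A sketch, assuming a float render target (so sums don't clamp at 1.0) and a v_eyeDepth varying written by the vertex shader:

```glsl
// Pass 1: draw back faces only with glBlendEquation(GL_FUNC_ADD).
// Pass 2: draw front faces only with glBlendEquation(GL_FUNC_REVERSE_SUBTRACT).
// Both passes use this shader, glBlendFunc(GL_ONE, GL_ONE), and no depth write.
varying float v_eyeDepth;    // assumed: -eyeSpacePosition.z from the vertex shader

void main()
{
    // Accumulated back-face depths minus front-face depths equals the
    // thickness along the view ray, provided the mesh is closed.
    gl_FragColor = vec4(v_eyeDepth);
}
```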
Extending the above, go the whole way and step through a shadow map, accumulating light. 1D Min-Max Mipmaps looks like an impressive way to speed it up.
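A brute-force sketch of that marching loop, before any min-max-mipmap acceleration; the matrix, sampler and step count are assumptions, and a real version would jitter the sample positions to hide banding:

```glsl
// March from where the view ray enters the light volume to where it leaves,
// counting how many samples the shadow map says are lit.
uniform sampler2D u_shadowMap;
uniform mat4 u_shadowMatrix;  // eye space -> shadow-map clip space
uniform vec3 u_lightColour;
varying vec3 v_rayStart;      // eye-space entry point into the light volume
varying vec3 v_rayEnd;        // eye-space exit point

void main()
{
    const int STEPS = 32;
    vec3 stepVec = (v_rayEnd - v_rayStart) / float(STEPS);
    float lit = 0.0;
    for (int i = 0; i < STEPS; ++i)
    {
        vec3 p = v_rayStart + stepVec * (float(i) + 0.5);
        vec4 sc = u_shadowMatrix * vec4(p, 1.0);
        vec3 ndc = sc.xyz / sc.w;
        // The sample counts as lit if it is nearer the light than the occluder.
        if (texture2D(u_shadowMap, ndc.xy * 0.5 + 0.5).r > ndc.z * 0.5 + 0.5)
            lit += 1.0;
    }
    float path = length(v_rayEnd - v_rayStart);
    gl_FragColor = vec4(u_lightColour * (lit / float(STEPS)) * path, 1.0);
}
```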
Just for reference, because I don't think it'd work well in the case of a lamp: Volumetric Light Scattering as a Post-Process
(Links and images are from random Google searches, not definitive sources.)