
Overview/showcase of shader techniques/uses for games

I am looking for resources that can give me a better understanding of what shaders are used for in games, what they can do, and, maybe even more importantly, what they cannot. I understand how the graphics pipeline works, and I have written some very basic shaders in GLSL (mostly just to replicate fixed-function pipeline functionality), but I don't yet fully understand which things are only possible with custom shaders, which things can simply be done more efficiently with them, etc. I have been able to find examples of certain techniques, most notably lighting, but I am looking for a higher-level overview of their usage.

Links to and explanations of certain interesting techniques, as opposed to an overview, are also appreciated (but less than an overview ;) ), preferably in GLSL or pseudocode.

Jonesy asked Apr 10 '11

1 Answer

Well, considering that DirectX and OpenGL have both moved toward shader-only (i.e. no fixed-function) systems, the answer to your "which effects are only possible with shaders" question could be "everything".

That said, some techniques that I believe were not possible or feasible without programmable shaders (or only through very specific black-box APIs) are:

  1. Per-pixel lighting.
  2. Shadow mapping.
  3. GPU skinning (i.e. matrix palette skinning) for animated meshes.
  4. Any of the various post-process effects that are common today: bloom, SSAO, depth of field, etc.
  5. Deferred shading.
  6. Implementing arbitrary/"other" lighting models like Oren–Nayar, Cook–Torrance, rim lighting, etc.

The list could go on, and I'm sure some people will disagree with my assessment that these couldn't be achieved with fixed-function hardware (through hacks or various fixed-function extensions).
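To make item 1 concrete, here is a minimal sketch of a per-fragment Blinn–Phong shader in GLSL. This is not from the answer itself; all uniform and varying names (uLightDir, vNormal, etc.) are illustrative, and a real renderer would supply them from its own vertex shader and uniforms.

```glsl
#version 130
// Per-fragment (per-pixel) Blinn-Phong lighting sketch.
// The fixed-function pipeline could only evaluate lighting per vertex;
// evaluating it here, per fragment, is what gives smooth highlights.

in vec3 vNormal;    // interpolated surface normal (view space)
in vec3 vViewDir;   // direction from the fragment toward the camera

uniform vec3 uLightDir;     // normalized direction toward the light
uniform vec3 uLightColor;   // light color/intensity
uniform vec3 uAlbedo;       // surface diffuse color
uniform float uShininess;   // specular exponent

out vec4 fragColor;

void main() {
    // Re-normalize: interpolation across the triangle denormalizes the normal.
    vec3 N = normalize(vNormal);
    vec3 V = normalize(vViewDir);

    float diff = max(dot(N, uLightDir), 0.0);       // Lambert diffuse term
    vec3 H = normalize(uLightDir + V);              // Blinn half-vector
    float spec = pow(max(dot(N, H), 0.0), uShininess);

    fragColor = vec4(uAlbedo * uLightColor * diff + uLightColor * spec, 1.0);
}
```

The key point is that the dot products are computed for every fragment rather than interpolated from the vertices, which is exactly the kind of per-pixel control the fixed-function path couldn't express.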

What it boils down to is this: before programmable shaders, a given effect had to be implemented in the hardware/driver by the vendor, and it had to be something that could be reasonably expressed through the API. Now you can execute effectively any user-defined code you want (within the constraints of the different shader stages and other limitations of the hardware), so you have the flexibility to greatly customize your rendering pipeline and invent new techniques as you see fit.
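As an example of that "arbitrary user-defined code" point, here is a sketch of a rim-lighting term (item 6 above), a model the fixed-function pipeline simply had no knob for. Again, the names are illustrative assumptions, not part of any particular API.

```glsl
#version 130
// Rim-lighting sketch: brightens silhouette edges, where the surface
// normal points away from the viewer. There was no fixed-function state
// for this; with shaders it is three lines of math.

in vec3 vNormal;    // interpolated surface normal (view space)
in vec3 vViewDir;   // direction from the fragment toward the camera

uniform vec3 uRimColor;   // color of the rim glow
uniform float uRimPower;  // higher values give a thinner rim

out vec4 fragColor;

void main() {
    vec3 N = normalize(vNormal);
    vec3 V = normalize(vViewDir);

    // dot(N, V) is ~1 facing the camera and ~0 at the silhouette,
    // so (1 - dot) peaks at the edges; the exponent tightens the falloff.
    float rim = pow(1.0 - max(dot(N, V), 0.0), uRimPower);

    fragColor = vec4(uRimColor * rim, 1.0);
}
```

In practice a term like this would be added onto a base lighting model rather than used alone, but it shows how easily a custom model drops into a fragment shader.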

Take a look at the first couple of GPU Gems books (which can be read for free on NVIDIA's website) to get a feel for the types of techniques that were showing up once programmable hardware became available.

eodabash answered Sep 28 '22