
Multiple shaders vs multiple techniques in DirectX

I'm going through all the Rastertek DirectX tutorials (which, by the way, are very good), and the author tends to use multiple shaders for different things. In one of the later tutorials he even introduces a shader manager class.

Based on some other sources, though, I believe it would be more efficient to use a single shader with multiple techniques instead. Are multiple shaders used in the tutorials for simplicity, or are there scenarios where using multiple shaders would be better than a single big one?

asked Jan 11 '13 by jaho

2 Answers

I guess in the tutorials they use them for simplicity.

Grouping shaders into techniques or keeping them separate is a design decision. There are scenarios where having multiple shaders is beneficial, as you can combine them however you like.

As of DirectX 11 on Windows 8, the D3DX library is deprecated, so you will see this change. You can see an example in the source code of the DirectX Tool Kit (http://directxtk.codeplex.com/) and how it handles its effects.

Normally you will have separate vertex shaders, pixel shaders, etc. in memory; a technique joins them together, so when you compile the shader file, a specific vertex and pixel shader pair is compiled for that technique. Your effect object then handles which vertex/pixel shaders are set on the device when a given technique with a given pass is chosen.
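To make that concrete, here is a minimal sketch of an effect file (all names here are illustrative, not taken from the tutorials): the vertex and pixel shaders are ordinary HLSL functions, and the technique/pass block is what ties them into one pipeline configuration at compile time.

```hlsl
// Illustrative effect-file sketch. Compiling this file produces separate
// VS/PS bytecode; the technique groups them as one configuration.
float4x4 WorldViewProj;

struct VSInput  { float3 pos : POSITION; };
struct VSOutput { float4 pos : SV_Position; };

VSOutput MainVS(VSInput input)
{
    VSOutput output;
    output.pos = mul(float4(input.pos, 1.0f), WorldViewProj);
    return output;
}

float4 MainPS(VSOutput input) : SV_Target
{
    return float4(1.0f, 1.0f, 1.0f, 1.0f);
}

technique11 Render
{
    pass P0
    {
        SetVertexShader(CompileShader(vs_5_0, MainVS()));
        SetPixelShader(CompileShader(ps_5_0, MainPS()));
    }
}
```

When the effect object applies pass `P0`, it is the one setting `MainVS` and `MainPS` on the device for you.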

You could also do this manually; for example, compile only the pixel shader and set it on the device yourself.
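For contrast, a sketch of the manual route in plain Direct3D 11, with no effect or technique objects; the file name and entry point are assumptions, and error handling is trimmed for brevity. `D3DCompileFromFile` and `PSSetShader` are the standard D3D11 calls for this:

```cpp
#include <d3d11.h>
#include <d3dcompiler.h>

// Compile just the pixel shader from an HLSL file and bind it to the
// device context yourself. "shader.hlsl" and "MainPS" are placeholders.
ID3D11PixelShader* CreateAndBindPixelShader(ID3D11Device* device,
                                            ID3D11DeviceContext* context)
{
    ID3DBlob* blob = nullptr;
    ID3DBlob* errors = nullptr;
    D3DCompileFromFile(L"shader.hlsl", nullptr, nullptr,
                       "MainPS", "ps_5_0", 0, 0, &blob, &errors);

    ID3D11PixelShader* ps = nullptr;
    device->CreatePixelShader(blob->GetBufferPointer(),
                              blob->GetBufferSize(), nullptr, &ps);

    // Binding the shader is a single explicit call; every other stage
    // (vertex shader, states, constant buffers) stays as it was.
    context->PSSetShader(ps, nullptr, 0);

    blob->Release();
    return ps;
}
```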

answered Nov 12 '22 by Carlos Ch

Mostly the answer would be: it depends.

The Effects framework gives you the big advantage that you can set your whole pipeline in one go using Pass->Apply, which can make things really easy. But it can lead to pretty slow code if not used properly, which is probably why Microsoft decided to deprecate it. That said, you can do as badly or even worse using multiple shaders; DirectXTK is actually a pretty good example of that (it's OK only for phone development).

In most cases the Effects framework will incur a few extra API calls that you could avoid using separate shaders (which, I agree, can be significant if you're draw-call bound, but then you should look at optimizing that part with culling/instancing techniques). Using separate shaders, you have to handle all state/constant-buffer management yourself, and you can probably do it more efficiently if you know what you are doing.

What I really like about the fx framework is the very nice reflection and the use of semantics, which can be really useful at the design stage (for example, if you declare float4x4 tP : PROJECTION, your engine can automatically bind the camera projection to the shader).
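A sketch of what that semantic-based binding could look like with Effects 11 (the function and variable names are assumptions); `GetVariableBySemantic` is the reflection call that makes it possible:

```cpp
#include <d3dx11effect.h>  // Effects 11 (deprecated, but illustrative)

// Look the variable up by its semantic rather than by name, so any shader
// that declares "float4x4 tP : PROJECTION" gets bound the same way.
void BindProjection(ID3DX11Effect* effect, const float* cameraProjection)
{
    ID3DX11EffectMatrixVariable* proj =
        effect->GetVariableBySemantic("PROJECTION")->AsMatrix();
    proj->SetMatrix(cameraProjection);
}
```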

The fx framework's layout validation between shader stages at compile time is also really handy for authoring.

One big advantage of separate shaders is that you can easily swap only the stages you need, so you can save a decent number of permutations without touching the rest of the pipeline.
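As a sketch of that swap with raw D3D11 calls (the shader variables are assumptions, created elsewhere): only the pixel-shader stage changes between draws, while the vertex shader and the rest of the pipeline stay bound.

```cpp
#include <d3d11.h>

// Swap only the pixel-shader stage between draws; no full effect/pass
// re-apply is needed, and the shared vertex shader is set once.
void DrawTwoMaterials(ID3D11DeviceContext* context,
                      ID3D11VertexShader* vs,
                      ID3D11PixelShader* psLit,
                      ID3D11PixelShader* psUnlit)
{
    context->VSSetShader(vs, nullptr, 0);

    context->PSSetShader(psLit, nullptr, 0);
    // ... issue draw calls for lit geometry ...

    context->PSSetShader(psUnlit, nullptr, 0);
    // ... issue draw calls for unlit geometry ...
}
```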

answered Nov 12 '22 by mrvux