 

Why does HLSL have semantics?

Tags:

glsl

hlsl

In HLSL I must use semantics to pass info from a vertex shader to a fragment shader. In GLSL no semantics are needed. What is an objective benefit of semantics?

Example: GLSL

vertex shader

varying vec4 foo;
varying vec4 bar;

void main() {
  ...
  foo = ...
  bar = ...
}

fragment shader

varying vec4 foo;
varying vec4 bar;

void main() {
  gl_FragColor = foo * bar;
}

Example: HLSL

vertex shader

struct VS_OUTPUT
{
   float4  foo : TEXCOORD3;
   float4  bar : COLOR2;
};

VS_OUTPUT whatever()
{
  VS_OUTPUT output;

  output.foo = ...
  output.bar = ...

  return output;
}

pixel shader

float4 main(float4 foo : TEXCOORD3,
          float4 bar : COLOR2) : COLOR
{
    return foo * bar;
}

I see how foo and bar in the VS_OUTPUT get connected to foo and bar in main in the fragment shader. What I don't get is why I have to manually choose semantics to carry the data. Why can't DirectX, like GLSL, just figure out where to put the data and connect it when linking the shaders?

Is there some more concrete advantage to manually specifying semantics, or is it just left over from the assembly-language shader days? Is there some speed advantage to choosing, say, TEXCOORD4 over COLOR2 or BINORMAL1?

I get that semantics can imply meaning (there's no meaning to foo or bar), but they can also obscure meaning just as well if foo is not a TEXCOORD and bar is not a COLOR. We don't put semantics on C#, C++, or JavaScript variables, so why are they needed for HLSL?

asked Feb 27 '14 by gman

People also ask

What is semantics HLSL?

A semantic is a string attached to a shader input or output that conveys information about the intended use of a parameter. Semantics are required on all variables passed between shader stages. The syntax for adding a semantic to a shader variable is shown here (Variable Syntax (DirectX HLSL)).
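For instance, a minimal sketch of that syntax (the struct and member names here are invented for illustration):

struct VertexIn
{
    // name : semantic; the semantic tells the pipeline what the value is for
    float3 position : POSITION;
    float3 normal   : NORMAL;
    float2 uv       : TEXCOORD0;
};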

What is HLSL written in?

HLSL is the C-like high-level shader language that you use with programmable shaders in DirectX.

What is the difference between GLSL and HLSL?

In GLSL, you apply modifiers (qualifiers) to a global shader variable declaration to give that variable a specific behavior in your shaders. In HLSL, you don't need these modifiers because you define the flow of the shader with the arguments that you pass to your shader and that you return from your shader.

What does SV_Target mean?

SV_Target is the semantic used by DX10+ for fragment shader color output. COLOR is used by DX9 as the fragment output semantic, but is also used by several shader languages for mesh data and vertex output semantics.
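As a small illustration (the function name is invented), a DX10+ pixel shader writes its color through SV_Target rather than COLOR:

float4 PSMain(float4 screenPos : SV_Position) : SV_Target
{
    // the return value goes to render target 0
    return float4(1.0, 0.0, 0.0, 1.0);
}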


1 Answer

Simply put, (old) GLSL matched these varying variables by name between stages (note that varying is now deprecated).

An obvious benefit of semantics is that you don't need the same variable names between stages: the DirectX pipeline matches them via semantics instead of variable names, and rearranges the data as long as you have a compatible layout.

If you rename foo to foo2 in GLSL, you need to replace that name in potentially every shader that uses it (and possibly subsequent ones). With semantics you don't need to.
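A minimal sketch of that (the names here are invented for illustration): the vertex shader writes the value under one name, the pixel shader reads it under another, and only the TEXCOORD3 semantic has to match:

struct VS_OUT
{
    float4 pos         : SV_POSITION;
    float4 worldNormal : TEXCOORD3;   // name used in the vertex shader
};

// The pixel shader can pick any parameter name; linkage is by semantic.
float4 PS(float4 pos : SV_POSITION,
          float4 n   : TEXCOORD3) : SV_Target
{
    return n;   // receives whatever the vertex shader wrote to TEXCOORD3
}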

Also, since you don't need an exact match, semantics allow easier separation between shader stages.

For example:

you can have a vertex shader like this:

struct vsInput
{
    float4 PosO  : POSITION;
    float3 Norm  : NORMAL;
    float4 TexCd : TEXCOORD0;
};

struct vsOut
{
    float4 PosWVP   : SV_POSITION;
    float4 TexCd    : TEXCOORD0;
    float3 NormView : NORMAL;
};

vsOut VS(vsInput input)
{
    //Do your processing here
}

And a pixel shader like this:

struct psInput
{
    float4 PosWVP: SV_POSITION;
    float4 TexCd: TEXCOORD0;
};

Since the vertex shader output provides all the inputs that the pixel shader needs, this is perfectly valid. The normals will simply be ignored and not provided to your pixel shader.

But then you can swap in a new pixel shader that does need normals, without needing another vertex shader implementation. You can also swap only the pixel shader, saving some API calls (the Separate Shader Objects extension is the OpenGL equivalent).
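For example, a second pixel shader along these lines (names invented, with a hypothetical lighting calculation) can consume the normal while still being fed by the same vsOut above:

struct psInputLit
{
    float4 PosWVP   : SV_POSITION;
    float4 TexCd    : TEXCOORD0;
    float3 NormView : NORMAL;      // now consumed instead of ignored
};

float4 PS_Lit(psInputLit input) : SV_Target
{
    // hypothetical lighting using the interpolated view-space normal
    float diffuse = saturate(dot(normalize(input.NormView), float3(0, 0, 1)));
    return input.TexCd * diffuse;
}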

So in some ways semantics give you a way to encapsulate the in/out interface between your stages; since you mention other languages, it would be roughly equivalent to using properties/setters/pointer addresses...

Speed-wise there's no difference based on naming (you can name semantics pretty much any way you want, except for the system ones, of course). A different layout position does imply a hit, though (the pipeline will reorganize the data for you, but it will also issue a warning, at least in DirectX).
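As a rough sketch of what "different layout position" means (struct names invented; per the note above, the runtime is said to reorganize this for you and warn rather than fail):

// Vertex shader output: TEXCOORD0 declared before NORMAL.
struct vsOutA
{
    float4 Pos    : SV_POSITION;
    float4 TexCd  : TEXCOORD0;
    float3 Normal : NORMAL;
};

// Pixel shader input: same semantics, but NORMAL listed before TEXCOORD0.
// Matching is still by semantic, not by position in the struct.
struct psInA
{
    float4 Pos    : SV_POSITION;
    float3 Normal : NORMAL;
    float4 TexCd  : TEXCOORD0;
};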

OpenGL also provides layout qualifiers, which are roughly equivalent (technically a bit different, but following more or less the same concept).

Hope that helps.

answered Oct 11 '22 by mrvux