For the past month or so, I have been busting my behind trying to learn DirectX, so I've been going back and forth between DirectX 9 and 10. One of the major changes I've seen between the two is how vectors are processed on the graphics card.
One of the drastic changes I've noticed is how you get the GPU to recognize your structs. In DirectX 9, you define a Flexible Vertex Format (FVF).
A typical setup would look like this:
#define CUSTOMFVF (D3DFVF_XYZRHW | D3DFVF_DIFFUSE)
In DirectX 10, I believe the equivalent is the input vertex description:
D3D10_INPUT_ELEMENT_DESC layout[] = {
    {"POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT,    0,  0,
     D3D10_INPUT_PER_VERTEX_DATA, 0},
    {"COLOR",    0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 12,
     D3D10_INPUT_PER_VERTEX_DATA, 0}
};
I notice that the DirectX 10 version is more descriptive. Besides this, what are some of the other drastic changes, and is the HLSL syntax the same for both?
I would say there are no radical changes in the HLSL syntax itself between DX9 and DX10 (and, by extension, DX11).
As codeka said, the changes are more a matter of cleaning up the API and a road toward generalization (for the sake of GPGPU). But there are indeed noticeable differences:
To pass constants to the shaders, you now have to go through Constant Buffers.
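For example, where a DX9 shader would declare loose globals (set from C++ with SetVertexShaderConstantF), a DX10 shader groups them into a cbuffer. A minimal HLSL sketch, with illustrative names:

```hlsl
// DX9 style: loose global, set by register from the API side.
//   float4x4 g_WorldViewProj;

// DX10: the same data must live in a constant buffer.
cbuffer PerObject : register(b0)
{
    float4x4 g_WorldViewProj;  // updated from C++ via UpdateSubresource
    float4   g_Tint;           // illustrative name
};
```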
A Common-Shader Core: all shader types have access to the same set of intrinsic functions (with some exceptions, e.g. for the GS stage). Integer and bitwise operations are now fully IEEE-compliant (and not emulated via floating point). You now have access to binary casts to interpret an int as a float, a float as a uint, etc.
Textures and Samplers have been dissociated. You now use the syntax g_myTexture.Sample( g_mySampler, texCoord ) instead of tex2D( g_mySampledTexture, texCoord ).
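Side by side, the two styles look like this (an HLSL sketch; the variable names are illustrative):

```hlsl
// DX9 HLSL: the texture and its sampler state are fused into one object.
//   sampler2D g_mySampledTexture;
//   float4 c = tex2D(g_mySampledTexture, texCoord);

// DX10 HLSL: texture and sampler are separate objects, so one texture
// can be sampled with several different samplers.
Texture2D    g_myTexture : register(t0);
SamplerState g_mySampler : register(s0);

float4 SampleIt(float2 texCoord)
{
    return g_myTexture.Sample(g_mySampler, texCoord);
}
```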
Buffers: a new kind of resource for accessing data that needs no filtering, in a random-access way, using the new Object.Load function.
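A minimal sketch of reading such a buffer with Load (the resource type and names are my own):

```hlsl
// A buffer of float4 elements, fetched by integer index:
// no sampler, no filtering, just a random-access load.
Buffer<float4> g_data : register(t1);

float4 FetchElement(uint index)
{
    return g_data.Load(index);
}
```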
System-Value Semantics: a generalization and extension of the POSITION, DEPTH, and COLOR semantics, which are now SV_Position, SV_Depth, and SV_Target, plus new per-stage semantics like SV_InstanceID, SV_VertexID, etc.
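For instance, a DX10 vertex shader declares its outputs and hardware-generated inputs with SV_ semantics (an HLSL sketch; struct and function names are my own):

```hlsl
struct VSOutput
{
    float4 pos   : SV_Position; // was POSITION in DX9
    float4 color : COLOR0;
};

VSOutput VS(float3 inPos : POSITION, uint vertexId : SV_VertexID)
{
    VSOutput o;
    o.pos = float4(inPos, 1.0f);
    // SV_VertexID has no DX9 equivalent: the hardware generates it.
    o.color = float4(vertexId & 1, (vertexId & 2) >> 1, 0.0f, 1.0f);
    return o;
}
```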
That's all I can see for now. If something new pops into my mind, I will update my answer.
The biggest change I've noticed between DX9 and DX10 is the fact that under DX10 you need to set an entire render state block, whereas in DX9 you could change individual states. This broke my architecture somewhat, because I was rather relying on being able to make a small change and leave all the rest of the states the same (this only really becomes a problem when you set states from a shader).
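The contrast looks roughly like this. This is a pseudocode sketch of the two patterns (it will not compile without the DirectX SDK headers, and the "..." stands for fields I've omitted):

```cpp
// DX9: flip one state; everything else stays as it was.
device->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);

// DX10: fill out a whole description, create an immutable state object,
// and bind it. Changing one field means creating (or caching) another
// complete state object.
D3D10_BLEND_DESC desc = {};            // every field must be decided
desc.BlendEnable[0] = TRUE;
desc.SrcBlend  = D3D10_BLEND_SRC_ALPHA;
desc.DestBlend = D3D10_BLEND_INV_SRC_ALPHA;
desc.BlendOp   = D3D10_BLEND_OP_ADD;
// ... remaining fields ...
ID3D10BlendState* blendState = NULL;
device->CreateBlendState(&desc, &blendState);
FLOAT blendFactor[4] = { 0, 0, 0, 0 };
device->OMSetBlendState(blendState, blendFactor, 0xffffffff);
```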
The other big change is the fact that under DX10 vertex declarations are tied to a compiled shader (in CreateInputLayout). Under DX9 this wasn't the case: you just set a declaration and set a shader. Under DX10 you need to create a shader first, then create an input layout attached to that shader.
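In call-order terms, the DX10 version looks roughly like this. This is a sketch of the sequence, not a compilable listing (it assumes the `layout` array from the question and omits error handling):

```cpp
// 1. Compile the vertex shader first: the input layout will be
//    validated against this shader's input signature.
ID3D10Blob* vsBlob = NULL;
D3DX10CompileFromFile(L"shader.fx", NULL, NULL, "VS", "vs_4_0",
                      0, 0, NULL, &vsBlob, NULL, NULL);

// 2. Only then create the input layout, passing the shader bytecode.
ID3D10InputLayout* inputLayout = NULL;
device->CreateInputLayout(layout, 2,
                          vsBlob->GetBufferPointer(),
                          vsBlob->GetBufferSize(),
                          &inputLayout);

// 3. Bind it. (In DX9, SetVertexDeclaration needed no shader at all.)
device->IASetInputLayout(inputLayout);
```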
As codeka points out, D3DVERTEXELEMENT9 has been the recommended way to create shader signatures since DX9 was introduced. FVF was already deprecated, and through FVF you are unable to do things like set up a tangent basis. Vertex layouts are far, far more powerful and don't tie you to a fixed layout: you can put the vertex elements wherever you like.
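For example, a D3DVERTEXELEMENT9 declaration can carry a full tangent basis, which no FVF code can express. A sketch (it needs d3d9.h to compile; the offsets assume tightly packed float3/float2 members):

```cpp
// Stream 0: position, normal, tangent, binormal, then a 2D texcoord.
D3DVERTEXELEMENT9 decl[] = {
    {0,  0, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0},
    {0, 12, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_NORMAL,   0},
    {0, 24, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TANGENT,  0},
    {0, 36, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_BINORMAL, 0},
    {0, 48, D3DDECLTYPE_FLOAT2, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 0},
    D3DDECL_END()
};

IDirect3DVertexDeclaration9* vertexDecl = NULL;
device->CreateVertexDeclaration(decl, &vertexDecl);
```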
If you want to know more about DX9 vertex declarations, I suggest you start with MSDN.