 

Using Multiple Vertex Buffers In DX10/DX11

I have a C++ DirectX 11 renderer that I have been writing.

I have written a COLLADA 1.4.1 loader to import COLLADA data for use in supporting skeletal animations.

I'm validating the loader at this point (and I've supported COLLADA before in another renderer I've written previously using different technology) and I'm running into a problem matching up COLLADA with DX10/11.

I have 3 separate vertex buffers of data:

A vertex buffer of Unique vertex positions. A vertex buffer of Unique normals. A vertex buffer of Unique texture coordinates.

These vertex buffers have different lengths (positions has 2910 elements, normals has more than 9,000, and texture coordinates has roughly 3,200).

COLLADA provides a triangle list which gives me the indices into each of these arrays for a given triangle (verbose and oddly done at first, but ultimately it becomes simple once you've worked with it.)
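Roughly speaking, my loader turns that interleaved index stream from the COLLADA &lt;p&gt; element into per-point index triplets, something like the simplified sketch below (illustrative names only, assuming the usual input offsets 0/1/2 for POSITION, NORMAL, and TEXCOORD):

#include <cstddef>
#include <vector>

// One entry per triangle point; each member indexes a different source array.
struct PointIndices
{
    unsigned int Position;   // index into the positions array
    unsigned int Normal;     // index into the normals array
    unsigned int TexCoord;   // index into the texture coordinates array
};

// The <p> stream interleaves one index per input per point:
//   p0 n0 t0  p1 n1 t1  p2 n2 t2 ...   (three points per triangle)
std::vector<PointIndices> ParseTriangleIndices( const std::vector<unsigned int>& p,
                                                std::size_t inputCount /* 3 here */ )
{
    std::vector<PointIndices> points;
    points.reserve( p.size() / inputCount );
    for( std::size_t i = 0; i + inputCount <= p.size(); i += inputCount )
    {
        PointIndices pt;
        pt.Position = p[ i + 0 ];
        pt.Normal   = p[ i + 1 ];
        pt.TexCoord = p[ i + 2 ];
        points.push_back( pt );
    }
    return points;
}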

Knowing that DX10/11 supports multiple vertex buffers, I figured I would fill the DX10/11 index buffer with indices into each of these buffers, and (this is the important part) those indices could be different for a given point of a triangle.

In other words, I could set the three vertex buffers, set the correct input layout, and then in the index buffer I would put the equivalent of:

l_aIndexBufferData[ NumberOfTriangles * 3 * 3 ]   // 3 points per triangle, 3 indices per point

for( i = 0; i < NumberOfTriangles; i++ )
{
    l_aIndexBufferData.add( triangle[i].Point1.PositionIndex )
    l_aIndexBufferData.add( triangle[i].Point1.NormalIndex )
    l_aIndexBufferData.add( triangle[i].Point1.TextureCoordinateIndex )
    // ...and likewise for Point2 and Point3
}
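For completeness, the three-stream setup itself (one vertex buffer per attribute in its own input slot, plus a matching input layout) looks roughly like this in my renderer - a sketch with illustrative names, not my exact code:

#include <d3d11.h>

// Each attribute streams from its own vertex buffer through a separate input slot.
static const D3D11_INPUT_ELEMENT_DESC l_aLayout[] =
{
    // semantic     idx  format                       slot  offset  class                        step
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "NORMAL",   0, DXGI_FORMAT_R32G32B32_FLOAT, 1, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,    2, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};

void BindStreams( ID3D11DeviceContext* pContext,
                  ID3D11Buffer* pPositionVB,
                  ID3D11Buffer* pNormalVB,
                  ID3D11Buffer* pTexCoordVB,
                  ID3D11Buffer* pIndexBuffer )
{
    ID3D11Buffer* buffers[3] = { pPositionVB, pNormalVB, pTexCoordVB };
    UINT strides[3] = { sizeof(float) * 3, sizeof(float) * 3, sizeof(float) * 2 };
    UINT offsets[3] = { 0, 0, 0 };

    pContext->IASetVertexBuffers( 0, 3, buffers, strides, offsets );
    pContext->IASetIndexBuffer( pIndexBuffer, DXGI_FORMAT_R32_UINT, 0 );
    pContext->IASetPrimitiveTopology( D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST );
}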

The documentation regarding using multiple vertex buffers in DirectX doesn't seem to give any information about how this affects the index buffer (more on this later.)

Running the code that way yielded strange rendering results: I could see the mesh being drawn intermittently correctly (strange polygons, but about a third of the points were in the correct place - hint, hint).

At this point (yesterday) I figured I'd screwed up my data or my indices, so I painstakingly validated it all; then I figured I was screwing up my input layout or something else. I eliminated that possibility by using the values from the normal and texture buffers in turn to set the color value used by the pixel shader. The colors were correct, so I wasn't suffering a padding issue.

Ultimately I came to the conclusion that DX10/11 must expect the data to be ordered differently, so I tried storing the indices like this:

indices.add( Point1.PositionIndex )
indices.add( Point2.PositionIndex )
indices.add( Point3.PositionIndex )
indices.add( Point1.NormalIndex )
indices.add( Point2.NormalIndex )
indices.add( Point3.NormalIndex )
indices.add( Point1.TextureCoordinateIndex )
indices.add( Point2.TextureCoordinateIndex )
indices.add( Point3.TextureCoordinateIndex )

Oddly enough, this yielded a rendered mesh that looked 1/3 correct - hint, hint.

I then surmised that maybe DX10/DX11 wanted the indices stored 'by vertex buffer' meaning that I would add all the position indices for all the triangles first, then all the normal indices for all the triangles, then all the texture coordinate indices for all the triangles.

This yielded another 1/3 correct (looking) mesh.

This made me think - well, surely DX10/11 wouldn't provide you with the ability to stream from multiple vertex buffers and then actually expect only one index per triangle point?

Only including indices into the vertex buffer of positions yields a properly rendered mesh that unfortunately uses the wrong normals and texture coordinates.

It appears that putting the normal and texture coordinate indices into the index buffer caused erroneous drawing over the properly rendered mesh.

Is this the expected behavior?

Multiple vertex buffers, one index buffer, and that index buffer can only hold a single index for each point of a triangle?

That really just doesn't make sense to me.

Help!

asked Apr 23 '13 by WTH


1 Answer

The very first thing that comes to mind:

All hardware that supports compute shaders (which is almost all DirectX 10 and higher hardware) also supports ByteAddressBuffer, and most of it supports StructuredBuffer. So you can bind your arrays as SRVs and have random access to any of their elements in shaders.

Something like this (not tested, just pseudocode):

// Indices passed as the vertex buffer to the shader.
// Think of them as "references" to the real data.
struct VS_INPUT
{
    // semantic names are just an example; match whatever your input layout uses
    uint posidx : TEXCOORD0;
    uint noridx : TEXCOORD1;
    uint texidx : TEXCOORD2;
};

// The real vertex data.
// You pass it as structured buffers (bound similarly to textures).
StructuredBuffer<float3> posbuf : register (t0);
StructuredBuffer<float3> norbuf : register (t1);
StructuredBuffer<float2> texbuf : register (t2);


VS_OUTPUT main(VS_INPUT indices)
{
    // in the shader you read the data for the current vertex
    float3 pos = posbuf[indices.posidx];
    float3 nor = norbuf[indices.noridx];
    float2 tex = texbuf[indices.texidx];

    // here you do something
}

Let's call that the "compute shader approach". You must use the DirectX 11 API.
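
On the C++ side, creating and binding one of those structured buffers as an SRV would look roughly like this (again untested, names are mine):

#include <d3d11.h>

// Wrap an array in a structured buffer and create an SRV for it,
// so the vertex shader can index it freely.
HRESULT CreateStructuredSRV( ID3D11Device* pDevice,
                             const void* pData,
                             UINT elementCount,
                             UINT elementStride,             // e.g. sizeof(float) * 3
                             ID3D11ShaderResourceView** ppSRV )
{
    D3D11_BUFFER_DESC bd = {};
    bd.ByteWidth           = elementStride * elementCount;
    bd.Usage               = D3D11_USAGE_IMMUTABLE;
    bd.BindFlags           = D3D11_BIND_SHADER_RESOURCE;
    bd.MiscFlags           = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED;
    bd.StructureByteStride = elementStride;

    D3D11_SUBRESOURCE_DATA init = {};
    init.pSysMem = pData;

    ID3D11Buffer* pBuffer = nullptr;
    HRESULT hr = pDevice->CreateBuffer( &bd, &init, &pBuffer );
    if( FAILED(hr) ) return hr;

    D3D11_SHADER_RESOURCE_VIEW_DESC srvd = {};
    srvd.Format              = DXGI_FORMAT_UNKNOWN;   // required for structured buffers
    srvd.ViewDimension       = D3D11_SRV_DIMENSION_BUFFER;
    srvd.Buffer.FirstElement = 0;
    srvd.Buffer.NumElements  = elementCount;

    hr = pDevice->CreateShaderResourceView( pBuffer, &srvd, ppSRV );
    pBuffer->Release();    // the view keeps its own reference
    return hr;
}

// Usage: one SRV per attribute array, bound to the slots the shader expects.
//   CreateStructuredSRV( pDevice, positions.data(), posCount, sizeof(float) * 3, &pPosSRV );
//   ID3D11ShaderResourceView* srvs[3] = { pPosSRV, pNorSRV, pTexSRV };
//   pContext->VSSetShaderResources( 0, 3, srvs );   // t0, t1, t2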

You can also bind your indices in the same fashion and do some magic in shaders. In this case you need to find out the current index id; you can probably take it from SV_VertexID.

And you can probably work around these buffers and bind the data some other way (DirectX 9 compatible texture sampling! O_o).

Hope it helps!

answered Oct 23 '22 by Ivan Aksamentov - Drop