Passing an arbitrarily sized object to a fragment shader using a UniformBuffer in Glium

Tags: rust, glsl, glium

My question came up while experimenting with a bunch of different techniques, none of which I have much experience with. Sadly, I don't even know whether I'm making a silly logic mistake, whether I'm using the glium crate wrong, whether I'm messing up in GLSL, etc. Regardless, I managed to start a new Rust project from scratch, working towards a minimal example showing my issue, and the problem reproduces on my computer at least.

The minimal example ends up being difficult to explain, though, so I first present an even more minimal example that does do what I want, albeit by bit-hacking and being limited to 128 elements (four times 32 bits, in a GLSL uvec4). From there, the step up to the version in which my problem arises is rather small.

A working version, with simple uniform and bit-shifting

The program creates a single rectangle on the screen, with texture coordinates running from 0.0 to 128.0 horizontally. It contains one vertex shader for the rectangle, and a fragment shader that uses the texture coordinates to draw vertical stripes on the rectangle: if the texture coordinate (truncated to a uint) is odd, it draws one color; if it is even, it draws another color.

// GLIUM, the crate I'll use to do "everything OpenGL"
#[macro_use]
extern crate glium;

// A simple struct to hold the vertices with their texture-coordinates.
// Nothing deviating much from the tutorials/crate-documentation.
#[derive(Copy, Clone)]
struct Vertex {
    position: [f32; 2],
    tex_coords: [f32; 2],
}

implement_vertex!(Vertex, position, tex_coords);


// The vertex shader's source. Does nothing special, except passing the
// texture coordinates along to the fragment shader.
const VERTEX_SHADER_SOURCE: &'static str = r#"
    #version 140

    in vec2 position;
    in vec2 tex_coords;
    out vec2 preserved_tex_coords;

    void main() {
        preserved_tex_coords = tex_coords;
        gl_Position = vec4(position, 0.0, 1.0);
    }
"#;

// The fragment shader. uses the texture coordinates to figure out which color to draw.
const FRAGMENT_SHADER_SOURCE: &'static str =  r#"
    #version 140

    in vec2 preserved_tex_coords;
    // FIXME: Hard-coded max number of elements. Replace by uniform buffer object
    uniform uvec4 uniform_data;
    out vec4 color;

    void main() {
        uint tex_x = uint(preserved_tex_coords.x);
        uint offset_in_vec = tex_x / 32u;
        uint uint_to_sample_from = uniform_data[offset_in_vec];
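        // For tex_x >= 32 the shift amount below is 32 or more, which GLSL leaves
        // undefined; it presumably works here because the hardware masks the
        // shift count to its low five bits (i.e. effectively tex_x % 32u).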
        bool the_bit = bool((uint_to_sample_from >> tex_x) & 1u);
        color = vec4(the_bit ? 1.0 : 0.5, 0.0, 0.0, 1.0);
    }
"#;

// Logic deciding whether a certain index corresponds with a 'set' bit or an 'unset' one.
// In this case, for the alternating stripes, a trivial odd/even test.
fn bit_should_be_set_at(idx: usize) -> bool {
    idx % 2 == 0
}

fn main() {
    use glium::DisplayBuild;
    let display = glium::glutin::WindowBuilder::new().build_glium().unwrap();

    // Sets up the vertices for a rectangle from -0.9 till 0.9 in both dimensions.
    // Texture coordinates go from 0.0 till 128.0 horizontally, and from 0.0 till
    // 1.0 vertically.
    let vertices_buffer = glium::VertexBuffer::new(
        &display,
        &vec![Vertex { position: [ 0.9, -0.9], tex_coords: [  0.0, 0.0] },
              Vertex { position: [ 0.9,  0.9], tex_coords: [  0.0, 1.0] },
              Vertex { position: [-0.9, -0.9], tex_coords: [128.0, 0.0] },
              Vertex { position: [-0.9,  0.9], tex_coords: [128.0, 1.0] }]).unwrap();
    // The rectangle will be drawn as a simple triangle strip using the vertices above.
    let indices_buffer = glium::IndexBuffer::new(&display,
                                                 glium::index::PrimitiveType::TriangleStrip,
                                                 &vec![0u8, 1u8, 2u8, 3u8]).unwrap();
    // Compiling the shaders defined statically above.
    let shader_program = glium::Program::from_source(&display,
                                                     VERTEX_SHADER_SOURCE,
                                                     FRAGMENT_SHADER_SOURCE,
                                                     None).unwrap();

    // Some hacky bit-shifting to get the 128 alternating bits set up, in four u32s,
    // which glium manages to send across as an uvec4.
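    // (Each u32 is filled MSB-first: a bit set at iteration idx gets shifted
    // right by the remaining iterations for that word, so it ends up at bit
    // position idx % 32 of uniform_data[idx / 32].)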
    let mut uniform_data = [0u32; 4];
    for idx in 0..128 {
        let single_u32 = &mut uniform_data[idx / 32];
        *single_u32 = *single_u32 >> 1;
        if bit_should_be_set_at(idx) {
            *single_u32 = *single_u32 | (1 << 31);
        }
    }

    // Trivial main loop repeatedly clearing, drawing rectangle, listening for close event.
    loop {
        use glium::Surface;
        let mut frame = display.draw();
        frame.clear_color(0.0, 0.0, 0.0, 1.0);
        frame.draw(&vertices_buffer, &indices_buffer, &shader_program,
                   &uniform! { uniform_data: uniform_data },
                   &Default::default()).unwrap();
        frame.finish().unwrap();

        for e in display.poll_events() { if let glium::glutin::Event::Closed = e { return; } }
    }
}

But this isn't good enough...

This program works, and shows the rectangle with alternating stripes, but it is clearly limited to 128 stripes (or 64 stripes, I guess; the other 64 are "the background of the rectangle"). To allow arbitrarily many stripes (or, in general, to pass arbitrarily much data to a fragment shader), uniform buffer objects can be used, which glium exposes. The most relevant example in the glium repo sadly fails to compile on my machine: the GLSL version is not supported, the buffer keyword is a syntax error in the supported versions, compute shaders in general are not supported (using glium, on my machine), and neither are headless render contexts.

A not-so-working version, with a uniform buffer

So, with no way of starting from that example, I had to start from scratch using the documentation. Building on the first example above, I came up with the following:

// Nothing changed here...
#[macro_use]
extern crate glium;

#[derive(Copy, Clone)]
struct Vertex {
    position: [f32; 2],
    tex_coords: [f32; 2],
}

implement_vertex!(Vertex, position, tex_coords);


const VERTEX_SHADER_SOURCE: &'static str = r#"
    #version 140

    in vec2 position;
    in vec2 tex_coords;
    out vec2 preserved_tex_coords;

    void main() {
        preserved_tex_coords = tex_coords;
        gl_Position = vec4(position, 0.0, 1.0);
    }
"#;
// ... up to here.

// The updated fragment shader. This one uses an entire uint per stripe, even though only one
// boolean value is stored in each.
const FRAGMENT_SHADER_SOURCE: &'static str =  r#"
    #version 140
    // examples/gpgpu.rs uses
    //     #version 430
    //     buffer layout(std140);
    // but that shader version is not supported by my machine, and the second line is
    // a syntax error in `#version 140`

    in vec2 preserved_tex_coords;

    // Judging from the GLSL standard, this is what I have to write:
    layout(std140) uniform;
    uniform uniform_data {
        // TODO: Still hard-coded max number of elements, but now arbitrary at compile-time.
        uint values[128];
    };
    out vec4 color;

    // This one now becomes much simpler: get the coordinate, clamp to uint, index into
    // uniform using tex_x, cast to bool, choose color.
    void main() {
        uint tex_x = uint(preserved_tex_coords.x);
        bool the_bit = bool(values[tex_x]);
        color = vec4(the_bit ? 1.0 : 0.5, 0.0, 0.0, 1.0);
    }
"#;


// Mostly copy-paste from glium documentation: define a Data type, which stores u32s,
// make it implement the right traits
struct Data {
    values: [u32],
}

implement_buffer_content!(Data);
implement_uniform_block!(Data, values);


// Same as before
fn bit_should_be_set_at(idx: usize) -> bool {
    idx % 2 == 0
}

// Mostly the same as before
fn main() {
    use glium::DisplayBuild;
    let display = glium::glutin::WindowBuilder::new().build_glium().unwrap();

    let vertices_buffer = glium::VertexBuffer::new(
        &display,
        &vec![Vertex { position: [ 0.9, -0.9], tex_coords: [  0.0, 0.0] },
              Vertex { position: [ 0.9,  0.9], tex_coords: [  0.0, 1.0] },
              Vertex { position: [-0.9, -0.9], tex_coords: [128.0, 0.0] },
              Vertex { position: [-0.9,  0.9], tex_coords: [128.0, 1.0] }]).unwrap();
    let indices_buffer = glium::IndexBuffer::new(&display,
                                                 glium::index::PrimitiveType::TriangleStrip,
                                                 &vec![0u8, 1u8, 2u8, 3u8]).unwrap();
    let shader_program = glium::Program::from_source(&display,
                                                     VERTEX_SHADER_SOURCE,
                                                     FRAGMENT_SHADER_SOURCE,
                                                     None).unwrap();


    // Making the UniformBuffer, with room for 128 4-byte objects (which u32s are).
    let mut buffer: glium::uniforms::UniformBuffer<Data> =
              glium::uniforms::UniformBuffer::empty_unsized(&display, 4 * 128).unwrap();
    {
        // Loop over all elements in the buffer, setting the 'bit'
        let mut mapping = buffer.map();
        for (idx, val) in mapping.values.iter_mut().enumerate() {
            *val = bit_should_be_set_at(idx) as u32;
            // This _is_ actually executed 128 times, as expected.
        }
    }

    // Iterating again, reading the buffer, reveals the alternating 'bits' are really
    // written to the buffer.

    // This loop is similar to the original one, except that it passes the buffer
    // instead of a [u32; 4].
    loop {
        use glium::Surface;
        let mut frame = display.draw();
        frame.clear_color(0.0, 0.0, 0.0, 1.0);
        frame.draw(&vertices_buffer, &indices_buffer, &shader_program,
                   &uniform! { uniform_data: &buffer },
                   &Default::default()).unwrap();
        frame.finish().unwrap();

        for e in display.poll_events() { if let glium::glutin::Event::Closed = e { return; } }
    }
}

I would expect this to produce the same striped rectangle (or give some error, or crash if something I did was wrong). Instead, it shows the rectangle, with the right-most quarter in solid bright red (i.e., "the bit seemed set when the fragment shader read it") and the remaining three quarters darker red (i.e., "the bit was unset when the fragment shader read it").

Update since original posting

I'm really stabbing in the dark here, so, thinking it might be a low-level bug with memory ordering, endianness, buffer over-/underruns, etc., I tried various ways of filling 'neighboring' memory locations with easily discernible bit patterns (e.g. one bit in every three set, one in every four, two set followed by two unset, etc.). This did not change the output.

One of the obvious ways to get memory 'near' the uint values[128] is to put it into the Data struct, just in front of the values (behind the values is not allowed, as Data's values: [u32] is dynamically sized). As stated above, this did not change the output. However, putting a properly filled uvec4 inside the uniform_data buffer, and using a main function similar to the first example's, does produce the original result. This shows that the glium::uniforms::UniformBuffer<Data> in itself does work.
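In shader terms, that sanity check amounts to something like this sketch (the member name packed_bits is made up for this write-up, and the shift is written as tex_x % 32u to stay within defined behavior; the Rust-side struct and fill were adapted accordingly):

const SANITY_CHECK_FRAGMENT_SHADER_SOURCE: &'static str = r#"
    #version 140

    in vec2 preserved_tex_coords;

    // A single uvec4 inside the same named uniform block as before.
    layout(std140) uniform;
    uniform uniform_data {
        uvec4 packed_bits;
    };
    out vec4 color;

    void main() {
        uint tex_x = uint(preserved_tex_coords.x);
        uint uint_to_sample_from = packed_bits[tex_x / 32u];
        bool the_bit = bool((uint_to_sample_from >> (tex_x % 32u)) & 1u);
        color = vec4(the_bit ? 1.0 : 0.5, 0.0, 0.0, 1.0);
    }
"#;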

I've hence updated the title to reflect that the problem seems to lie somewhere else.

After Eli's answer

@Eli Friedman's answer helped me progress towards a solution, but I'm not quite there yet.

Allocating and filling a buffer four times as large did change the output, from a quarter-filled rectangle to a fully filled rectangle. Oops, that's not what I wanted. My shader is now reading from the right memory words, though, and all those words should have been filled with the right bit pattern. Still, no part of the rectangle became striped. Since bit_should_be_set_at should set every other bit, I developed the hypothesis that what was going on is the following:

Bits: 1010101010101010101010101010101010101
Seen: ^   ^   ^   ^   ^   ^   ^   ^   ^   ^   
What it looks like: all bits set

To test this hypothesis, I changed bit_should_be_set_at to return true on multiples of 3, 4, 5, 6, 7 and 8. The results are consistent with my hypothesis:

Bits: 1001001001001001001001001001001001001
Seen: ^   ^   ^   ^   ^   ^   ^   ^   ^   ^   
What it looks like: first bit set, then repeating two unset, one set.

Bits: 1000100010001000100010001000100010001
Seen: ^   ^   ^   ^   ^   ^   ^   ^   ^   ^   
What it looks like: all bits set

Bits: 1000010000100001000010000100001000010
Seen: ^   ^   ^   ^   ^   ^   ^   ^   ^   ^   
What it looks like: first bit set, then repeating four unset, one set.

Bits: 1000001000001000001000001000001000001
Seen: ^   ^   ^   ^   ^   ^   ^   ^   ^   ^   
What it looks like: first bit set, then repeating two unset, one set.

Bits: 1000000100000010000001000000100000010
Seen: ^   ^   ^   ^   ^   ^   ^   ^   ^   ^   
What it looks like: first bit set, then repeating six unset, one set.

Bits: 1000000010000000100000001000000010000
Seen: ^   ^   ^   ^   ^   ^   ^   ^   ^   ^   
What it looks like: first bit set, then every other bit set.

Does this hypothesis make sense? And regardless: does it look like the issue is with setting the data up (at the Rust side), or with reading it back out (at the GLSL side)?
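If that hypothesis is right, I'd expect the host side to have to reserve four u32 slots per logical value and write only the first slot of each group of four, something along these lines (an unverified sketch, reusing the Data type and buffer setup from above):

    // Hypothetical fill for a "one logical uint per 16 bytes" layout:
    // four u32 slots per element, values only in every fourth slot.
    let mut buffer: glium::uniforms::UniformBuffer<Data> =
        glium::uniforms::UniformBuffer::empty_unsized(&display, 4 * 4 * 128).unwrap();
    {
        let mut mapping = buffer.map();
        for (idx, val) in mapping.values.iter_mut().enumerate() {
            *val = if idx % 4 == 0 {
                bit_should_be_set_at(idx / 4) as u32
            } else {
                0 // padding slots that std140 would skip over
            };
        }
    }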

asked by Thierry
1 Answer

The issue you're running into has to do with how uniforms are allocated. uint values[128]; doesn't have the memory layout you think it does: it actually has the same memory layout as uvec4 values[128], because under std140 each element of a scalar or vector array is padded out to 16 bytes. See https://www.opengl.org/registry/specs/ARB/uniform_buffer_object.txt sub-section 2.15.3.1.2.

answered by Eli Friedman
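For illustration, here is a sketch of one way the fragment shader could be adapted to those layout rules: pack four logical values into each std140 array element and index the component explicitly. (This uvec4 packing is a workaround not spelled out in the answer itself, and the Rust-side Data struct and fill loop would need to change to match.)

const PACKED_FRAGMENT_SHADER_SOURCE: &'static str = r#"
    #version 140

    in vec2 preserved_tex_coords;

    layout(std140) uniform;
    uniform uniform_data {
        // Under std140 every array element is padded to 16 bytes anyway,
        // so pack four logical uints into each uvec4 and waste nothing.
        uvec4 values[32];
    };
    out vec4 color;

    void main() {
        uint tex_x = uint(preserved_tex_coords.x);
        bool the_bit = bool(values[tex_x / 4u][tex_x % 4u]);
        color = vec4(the_bit ? 1.0 : 0.5, 0.0, 0.0, 1.0);
    }
"#;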