Why does OpenGL's glDrawArrays() fail with GL_INVALID_OPERATION under Core Profile 3.2, but not 3.3 or 4.2?

I have OpenGL rendering code calling glDrawArrays() that works flawlessly when the OpenGL context is an implicitly obtained 4.2 one, but fails consistently (GL_INVALID_OPERATION) under an explicitly requested OpenGL 3.2 core context. (The shaders are declared #version 150 in both cases, but I suspect that's beside the point here.)
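For reference, this is roughly how the explicit 3.2 core context gets requested. A sketch using the current go-gl/glfw bindings as a stand-in for my actual setup, so the import path and the function name makeWindow here are assumptions:

import "github.com/go-gl/glfw/v3.3/glfw"

// makeWindow (made-up name) asks GLFW -- after glfw.Init() -- for
// exactly a 3.2 core-profile context instead of whatever version
// the driver hands out implicitly.
func makeWindow() (*glfw.Window, error) {
    glfw.WindowHint(glfw.ContextVersionMajor, 3)
    glfw.WindowHint(glfw.ContextVersionMinor, 2)
    glfw.WindowHint(glfw.OpenGLProfile, glfw.OpenGLCoreProfile)
    // OS X only hands out core profiles with forward-compat set.
    glfw.WindowHint(glfw.OpenGLForwardCompatible, glfw.True)
    return glfw.CreateWindow(800, 600, "demo", nil, nil)
}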

According to the spec, there are only two cases in which glDrawArrays() fails with GL_INVALID_OPERATION:

  • "if a non-zero buffer object name is bound to an enabled array and the buffer object's data store is currently mapped" -- I'm not doing any buffer mapping at this point

  • "if a geometry shader is active and mode​ is incompatible with [...]" -- nope, no geometry shaders as of now.

Furthermore:

  1. I have verified and double-checked that it is only the glDrawArrays() calls that fail (see the error-bracketing sketch after this list). I have also double-checked that all arguments passed to glDrawArrays(), and all buffer bindings, are identical under both GL versions.

  2. This happens across 3 different Nvidia GPUs and 2 different OSes (Win7 and OS X, both 64-bit; of course, on OS X only the 3.2 context is available, no 4.2 anyway).

  3. It does not happen with an integrated "Intel HD" GPU, but for that one I only get an implicit 3.3 context (trying to explicitly force a 3.2 core profile on this GPU via GLFW fails at window creation, but that's an entirely different issue...).
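The error-bracketing from point 1 looks roughly like this (same go-gl imports as above; checkGL is a made-up helper):

// checkGL pins an error down to a single call: it drains any stale
// errors first, runs the call, then queries again, so a reported
// error can only have come from fn itself.
func checkGL(name string, fn func()) {
    for gl.GetError() != gl.NO_ERROR {
        // discard errors left over from earlier calls
    }
    fn()
    if e := gl.GetError(); e != gl.NO_ERROR {
        log.Printf("%s failed: 0x%04X", name, e)
    }
}

Wrapping the draw call as checkGL("glDrawArrays", func() { gl.DrawArrays(me.glMode, 0, me.glNumVerts) }) is what pins the error to exactly that call.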

For what it's worth, here's the relevant routine excerpted from the render loop, in Golang:

func (me *TMesh) render() {
    curMesh = me
    curTechnique.OnRenderMesh()
    // Bind this mesh's vertex buffer and point the "aPos" attribute at it.
    gl.BindBuffer(gl.ARRAY_BUFFER, me.glVertBuf)
    if me.glElemBuf > 0 {
        // Indexed mesh: draw via the element buffer.
        gl.BindBuffer(gl.ELEMENT_ARRAY_BUFFER, me.glElemBuf)
        gl.VertexAttribPointer(curProg.AttrLocs["aPos"], 3, gl.FLOAT, gl.FALSE, 0, gl.Pointer(nil))
        gl.DrawElements(me.glMode, me.glNumIndices, gl.UNSIGNED_INT, gl.Pointer(nil))
        gl.BindBuffer(gl.ELEMENT_ARRAY_BUFFER, 0)
    } else {
        // Non-indexed mesh: draw the vertex buffer directly.
        gl.VertexAttribPointer(curProg.AttrLocs["aPos"], 3, gl.FLOAT, gl.FALSE, 0, gl.Pointer(nil))
        /* BOOM! */
        gl.DrawArrays(me.glMode, 0, me.glNumVerts)
    }
    gl.BindBuffer(gl.ARRAY_BUFFER, 0)
}

This is of course part of a bigger render loop, though the whole *TMesh construction currently amounts to just two instances: a simple cube and a simple pyramid. What matters is that the entire drawing loop works flawlessly, with no errors reported when GL is queried, under both 3.3 and 4.2 -- yet on 3 Nvidia GPUs with an explicit 3.2 core profile it fails with an error code that, according to the spec, can only arise in the two specific situations above, neither of which applies here as far as I can tell.

What could be wrong here? Have you ever run into this? Any idea what I might be missing?

asked Nov 03 '22 by metaleap

1 Answer

I have a wild guess.

As I understand it, all OpenGL calls must happen on the same thread. This restriction does not mix well with goroutines, since the same goroutine can run on different threads at different points in its execution.

To get around this problem, you need to lock your main goroutine (or whichever goroutine is making the OpenGL calls) to its current thread as soon as it starts, before initializing OpenGL.

import "runtime"

func main() {
    runtime.LockOSThread()

    ...
}
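If the rendering can't happen on the main goroutine, an alternative (a rough sketch with made-up names) is to dedicate one locked goroutine to OpenGL and funnel all GL work to it over a channel:

// glJobs carries closures that must execute on the GL thread.
var glJobs = make(chan func())

// glLoop runs as its own goroutine: it pins itself to an OS thread
// and then executes every submitted job on that one thread.
func glLoop() {
    runtime.LockOSThread()
    for job := range glJobs {
        job()
    }
}

Start it once with go glLoop(), do all GL initialization through glJobs, and have other goroutines submit render work the same way: glJobs <- func() { mesh.render() }.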

The inconsistent results you're seeing could then come down to differences between driver implementations: some may happen to tolerate GL calls migrating across threads, while others don't.

answered Nov 09 '22 by Evan Shaw