Basically that's it: why does glBufferData take what looks like a pointer type instead of an int for its size? This argument is supposed to be the size of the buffer object's data store, so why not GLsizei?
OpenGL doc on glBufferData https://www.opengl.org/sdk/docs/man/html/glBufferData.xhtml
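For reference, the declaration from the page above (the second parameter is the one in question):

```c
void glBufferData(GLenum target, GLsizeiptr size, const void *data, GLenum usage);
```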
When vertex buffer objects were introduced via the OpenGL extension mechanism (ARB_vertex_buffer_object), a new type GLsizeiptrARB was created, and the following rationale was provided:
What type should <offset> and <size> arguments use?
RESOLVED: We define new types that will work well on 64-bit systems, analogous to C's "intptr_t". The new type "GLintptrARB" should be used in place of GLint whenever it is expected that values might exceed 2 billion. The new type "GLsizeiptrARB" should be used in place of GLsizei whenever it is expected that counts might exceed 2 billion. Both types are defined as signed integers large enough to contain any pointer value. As a result, they naturally scale to larger numbers of bits on systems with 64-bit or even larger pointers.
The offsets introduced in this extension are typed GLintptrARB, consistent with other GL parameters that must be non-negative, but are arithmetic in nature (not uint), and are not sizes; for example, the xoffset argument to TexSubImage*D is of type GLint. Buffer sizes are typed GLsizeiptrARB.
The idea of making these types unsigned was considered, but was ultimately rejected on the grounds that supporting buffers larger than 2 GB was not deemed important on 32-bit systems.
When this extension was promoted to core OpenGL, the ARB-suffixed type GLsizeiptrARB received the standardized name GLsizeiptr, which is the type you see in the function signature today.
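To make the distinction concrete, here is a minimal sketch, assuming a typical desktop setup where GLsizei is a 32-bit int and GLsizeiptr is a pointer-sized signed integer (e.g. khronos_ssize_t or ptrdiff_t, depending on your GL headers). The helper name and loader header are illustrative, not part of any spec:

```c
#include <stddef.h>
#include <glad/gl.h>  /* assumption: any loader that declares glBufferData works here */

/* GLsizei is 32 bits, so it caps counts at roughly 2 billion.
 * GLsizeiptr scales with pointer width, so on a 64-bit system it can
 * describe a buffer store larger than 2 GB. */
static void upload_large_buffer(GLuint vbo, const void *data, size_t byte_count)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);

    /* size_t -> GLsizeiptr: both are pointer-sized on 64-bit platforms,
     * so a 3 GB upload is representable, whereas a GLsizei would overflow. */
    glBufferData(GL_ARRAY_BUFFER, (GLsizeiptr)byte_count, data, GL_STATIC_DRAW);
}
```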