I have seen the following macro being used in OpenGL VBO implementations:
#define BUFFER_OFFSET(i) ((char *)NULL + (i))
// usage: glNormalPointer(GL_FLOAT, 32, BUFFER_OFFSET(x));
Could you provide a little detail on how this macro works? Can it be replaced with a function? More precisely, what is the result of incrementing a NULL pointer?
There is no "NULL" for integers. The NULL is a special value because it is not a valid pointer value. Hence, we can use it as a semaphore to indicate that "this pointer does not point to anything (valid)". All values in an integer are valid, unless your code assumes otherwise.
So NULL means nothing at all when used with integers, because 0 is a valid integer value: it makes no sense to assign NULL to an integer. The C and C++ languages have no way to say that an integer is uninitialized or has no value at all.
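A minimal sketch of the distinction, in plain C/C++ (the variable names are illustrative):

int *p = NULL;  // fine: NULL is a sentinel meaning "points at nothing valid"
int n = 0;      // 0 is an ordinary, valid int; no int value can mean "no value"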
Let's take a trip back through the sordid history of OpenGL. Once upon a time, there was OpenGL 1.0. You used glBegin and glEnd to do drawing, and that was all. If you wanted fast drawing, you stuck things in a display list.
Then somebody had the bright idea of just taking arrays of objects to render. And thus was born OpenGL 1.1, which brought us such functions as glVertexPointer. You might notice that this function ends in the word "Pointer". That's because it takes a pointer to actual memory, which will be accessed when one of the glDraw* suite of functions is called. An example of that style follows below.
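A minimal sketch of those OpenGL 1.1 client-side arrays; the calls are standard GL 1.1, but the triangle data is just an illustrative example:

static const GLfloat triangle[] = {
     0.0f,  1.0f, 0.0f,
    -1.0f, -1.0f, 0.0f,
     1.0f, -1.0f, 0.0f,
};

glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, triangle);  // a real pointer into client memory
glDrawArrays(GL_TRIANGLES, 0, 3);           // the pointer is dereferenced here
glDisableClientState(GL_VERTEX_ARRAY);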
Fast-forward a few more years. Now graphics cards can perform vertex T&L on their own (up until this point, fixed-function T&L was done by the CPU). The most efficient way to exploit that would be to put vertex data in GPU memory, but display lists are not ideal for it: they are too opaque, and there's no way to know whether you'll get good performance from them. Enter buffer objects.
However, because the ARB had an absolute policy of making everything as backwards compatible as possible (no matter how silly it made the API look), they decided that the best way to implement this was to just reuse the same functions. Only now, there's a global switch that changes glVertexPointer's behavior from "takes a pointer" to "takes a byte offset into a buffer object". That switch is whether or not a buffer object is bound to GL_ARRAY_BUFFER.
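A sketch of that switch in action (the vertex array data and its size are assumed to exist; the GL calls are the standard buffer object API):

GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);  // this bind flips the switch
glBufferData(GL_ARRAY_BUFFER, sizeof(data), data, GL_STATIC_DRAW);

// Same function as before, but the last argument is now interpreted
// as a byte offset into vbo, smuggled in as a pointer.
glVertexPointer(3, GL_FLOAT, 0, (const GLvoid *)0);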
Of course, as far as C/C++ is concerned, the function still takes a pointer. And the rules of C/C++ do not allow you to pass an integer as a pointer. Not without a cast. Which is why macros like BUFFER_OFFSET exist: they are one way to convert your integer byte offset into a pointer.
The (char *)NULL part simply takes the null pointer (which is usually a void * in C and the literal 0 in C++) and turns it into a char *. The + (i) then does pointer arithmetic on that char *. Because the null pointer usually has a zero address, adding i to it increases the byte address by i, thus generating a pointer whose value is the byte offset you passed in.
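To answer the other part of the question: yes, the macro can be replaced with a function. A sketch, with a hypothetical name buffer_offset and carrying exactly the same caveats discussed next:

static inline const GLvoid *buffer_offset(size_t i)
{
    // Same trick as the macro: pointer arithmetic on a null char pointer.
    // This relies on the null pointer having a zero address; see below.
    return (const GLvoid *)((const char *)NULL + i);
}

// usage: glNormalPointer(GL_FLOAT, 32, buffer_offset(x));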
Of course, the C++ specification makes the result of BUFFER_OFFSET undefined behavior. By using it, you're relying on the compiler to do something reasonable. After all, NULL does not have to be zero; all the specification says is that it is an implementation-defined null pointer constant. It doesn't have to have the value of zero at all. On most real systems it will. But it doesn't have to.
That's why I just use a cast.
glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, 0, (void*)48);
It's not guaranteed behavior either way (int->ptr->int conversions are conditionally supported, not required). But it's also shorter than typing "BUFFER_OFFSET". GCC and Visual Studio seem to find it reasonable. And it doesn't rely on the value of the NULL macro.
Personally, if I were more of a C++ pedant, I'd use a reinterpret_cast<void*> on it. But I'm not.
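For completeness, that pedantic spelling would look something like this (same attribute setup as the cast above):

glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, 0, reinterpret_cast<void*>(48));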
Or you can ditch the old API and use glVertexAttribFormat et al., which is better in every way.
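A sketch of that newer API (GL 4.3 / ARB_vertex_attrib_binding; the buffer name vbo and the offset/stride values are illustrative). Note that the byte offset is now a plain integer parameter, so no pointer cast appears anywhere:

glBindVertexBuffer(0, vbo, 48, 32);                 // binding 0: buffer, byte offset, stride
glVertexAttribFormat(1, 4, GL_FLOAT, GL_FALSE, 0);  // layout of attribute 1 within a vertex
glVertexAttribBinding(1, 0);                        // attribute 1 sources from binding 0
glEnableVertexAttribArray(1);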