 

Why are there GLint and GLfloat? [duplicate]

I get that OpenGL needs to use numbers, but why not just use regular ints and floats, or the wrapper classes that already exist (whichever is necessary for the whole world of OpenGL to fit together nicely)? Is there a difference besides the name and one being used exclusively in OpenGL, or are they pretty much the same thing under a different name?

asked Sep 07 '12 by Slayer0248


1 Answer

Because an int is (waaaay oversimplifying here) a different size on different platforms — traditionally the native word size of the processor — so even just "an int" is not a universal concept. Keep in mind that the hardware running the graphics code is a different piece of hardware from your CPU, so the need for well-defined types emerges. By using its own typedefs, OpenGL can ensure that the right number of bits is packed in the right way when sending data to your graphics card.
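
To make that concrete, here is a minimal sketch of how an OpenGL header might pin its types down. The exact typedefs vary by platform and header (these mirror the common khrplatform.h-style definitions), but the guarantee is the point: each GL type has one fixed size everywhere.

```c
#include <stdint.h>

typedef int32_t  GLint;   /* always exactly 32 bits, signed   */
typedef uint32_t GLuint;  /* always exactly 32 bits, unsigned */
typedef float    GLfloat; /* always a 32-bit IEEE 754 float   */

/* Compile-time check (C11): the driver can rely on these sizes
 * no matter what the platform's plain "int" happens to be. */
_Static_assert(sizeof(GLint) == 4, "GLint must be 32 bits");
_Static_assert(sizeof(GLfloat) == 4, "GLfloat must be 32 bits");
```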

It would be possible to do this with conversion functions that abstract away the messiness of "different ints", but that would incur a performance penalty that is generally not acceptable when every single number going to and from the graphics card would have to pass through them.
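
This is why fixed-size types pay off in practice: an array of GLfloat can be handed to the driver as raw bytes, with no per-element conversion pass. A sketch (assuming a current OpenGL context; on some platforms these GL 1.5 buffer functions must be fetched through a loader such as GLEW or glad rather than plain GL/gl.h):

```c
#include <GL/gl.h>

/* Three 2D vertex positions, laid out exactly as the GPU expects. */
GLfloat vertices[] = {
    -0.5f, -0.5f,
     0.5f, -0.5f,
     0.0f,  0.5f,
};

GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
/* sizeof(vertices) is meaningful only because GLfloat has a fixed size:
 * the raw bytes go to the card unmodified, no conversion needed. */
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
```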

tl;dr: when using an "int", you're writing with your processor's hardware in mind. When using a "GLint", you're writing with your graphics card's hardware in mind.

EDIT: as pointed out in the comments, on a 64-bit processor, int can (and almost certainly will) stay 32 bits for compatibility reasons. Historically, through 8-, 16-, and 32-bit hardware, it was the native size of the processor, but technically it's whatever the compiler chooses when it generates the machine code. Props to @Nicol Bolas and @Mark Dickinson.
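
You can check this for yourself with a trivial one-liner (nothing OpenGL-specific here):

```c
#include <stdio.h>

int main(void) {
    /* On most 64-bit desktop platforms this prints 4 (32 bits), not 8. */
    printf("sizeof(int) = %zu bytes\n", sizeof(int));
    return 0;
}
```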

answered Nov 16 '22 by Matt