I keep seeing this constant pop up in various graphics header files
0.0039215689
It seems to have something to do with color, maybe?
Here is the first hit on Google:
void RDP_G_SETFOGCOLOR(void)
{
    Gfx.FogColor.R = _SHIFTR(w1, 24, 8) * 0.0039215689f;
    Gfx.FogColor.G = _SHIFTR(w1, 16, 8) * 0.0039215689f;
    Gfx.FogColor.B = _SHIFTR(w1,  8, 8) * 0.0039215689f;
    Gfx.FogColor.A = _SHIFTR(w1,  0, 8) * 0.0039215689f;
}

void RDP_G_SETBLENDCOLOR(void)
{
    Gfx.BlendColor.R = _SHIFTR(w1, 24, 8) * 0.0039215689f;
    Gfx.BlendColor.G = _SHIFTR(w1, 16, 8) * 0.0039215689f;
    Gfx.BlendColor.B = _SHIFTR(w1,  8, 8) * 0.0039215689f;
    Gfx.BlendColor.A = _SHIFTR(w1,  0, 8) * 0.0039215689f;

    if (OpenGL.Ext_FragmentProgram && (System.Options & BRDP_COMBINER))
    {
        glProgramEnvParameter4fARB(GL_FRAGMENT_PROGRAM_ARB, 2,
            Gfx.BlendColor.R, Gfx.BlendColor.G,
            Gfx.BlendColor.B, Gfx.BlendColor.A);
    }
}

// ...more like this
What does this number represent? Why does no one seem to declare it as a const?
I couldn't find anything on Google that explained it.
0.0039215689 is approximately equal to 1/255.
Seeing that this is OpenGL, performance is probably important. So it's safe to guess that this was done for performance reasons: multiplying by the precomputed reciprocal is faster than dividing by 255 each time.
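As for why no one declares it as a named constant: there is no technical reason not to. Here is a minimal sketch of what that could look like (the names INV_255 and byte_to_float are my own, not from the quoted source):

#include <stdio.h>

/* 1/255 precomputed once; this is the same single-precision value
   as the literal 0.0039215689f. */
static const float INV_255 = 1.0f / 255.0f;

/* Convert an 8-bit color channel (0-255) to a normalized float (0.0-1.0)
   using one multiplication instead of a division. */
static float byte_to_float(unsigned char channel)
{
    return channel * INV_255;
}

int main(void)
{
    printf("%f\n", byte_to_float(255)); /* 1.000000 */
    printf("%f\n", byte_to_float(51));  /* 0.200000 */
    return 0;
}

A named constant compiles to exactly the same machine code, so the bare literal is most likely just a stylistic (or copy-paste) artifact.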
Side Note:
If you're wondering why such a micro-optimization isn't left to the compiler, it's because it is an unsafe floating-point optimization. In other words:
x / 255 != x * (1. / 255)
due to floating-point round-off errors.
So while modern compilers may be smart enough to perform this optimization, they are not allowed to unless you explicitly enable it via a compiler flag (e.g. GCC's -freciprocal-math, which -ffast-math implies).
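If you want to see the round-off difference for yourself, here is a quick brute-force check (my own sketch, not from any referenced source) that compares both forms for every possible 8-bit channel value:

#include <stdio.h>

int main(void)
{
    const float inv = 1.0f / 255.0f;
    int mismatches = 0;

    /* Compare true division against reciprocal multiplication
       for all 256 possible byte values. */
    for (int i = 0; i <= 255; i++)
    {
        float divided    = (float)i / 255.0f;
        float multiplied = (float)i * inv;
        if (divided != multiplied)
        {
            printf("mismatch at %3d: %.9g vs %.9g\n", i, divided, multiplied);
            mismatches++;
        }
    }
    printf("%d of 256 values differ\n", mismatches);
    return 0;
}

Build it without fast-math flags so the compiler keeps strict IEEE semantics; any nonzero count demonstrates why the compiler cannot make this substitution on its own.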
Related: Why doesn't GCC optimize a*a*a*a*a*a to (a*a*a)*(a*a*a)?