
Making sense of some bit magic in the Go standard library

I've been sifting through some code in the Go standard library, trying to make sense of the image and color packages, but I found some code I just can't figure out. It's from http://golang.org/src/pkg/image/color/color.go?s=794:834#L14:

From my understanding, it should convert 8-bit alpha-premultiplied RGB values to 16-bit ones, stored in 32-bit variables so they don't overflow when multiplied in image arithmetic.

What I can't make sense of are lines like r |= r << 8. As I understand it, this is equivalent to r = r*2^8 + r, because r << 8 shifts in zeros on the right, which then get OR'ed with the old r.

For an input of r=255 this evaluates to 65535 = 2^16 - 1, which is what I'd expect, but it doesn't make sense for values in the middle, which don't seem to get mapped to anything proportional in the bigger range. For example, 127 gets mapped to 32639, while I'd expect 32767 to represent 127. What am I missing? I think it has something to do with the alpha-premultiplication...

    func (c RGBA) RGBA() (r, g, b, a uint32) {
        r = uint32(c.R)
        r |= r << 8
        g = uint32(c.G)
        g |= g << 8
        b = uint32(c.B)
        b |= b << 8
        a = uint32(c.A)
        a |= a << 8
        return
    }
asked Dec 28 '22 by Niklas Schnelle

1 Answer

No, what you're seeing actually makes sense.

Think of a single channel value (red, for example). It dictates the amount of redness in the pixel and, as an 8-bit quantity, it's somewhere between 0 and 255. Thus you can represent every level of redness in that range.

If you just bit-shifted that left by eight bits (or multiplied by 256) to get a 16-bit colour value, you'd end up with a multiple of 256 somewhere between 0 and 255*256 (65280) inclusive.

While that scales the redness up relatively well, it doesn't distribute it properly across the full 16-bit range.

For example, 255 in the 8-bit range means maximum redness, but simply multiplying it by 256 gives 65280, not 65535, the maximum amount of redness on the 16-bit scale.

By multiplying by 256 and then adding the original value (effectively multiplying by 257), the values are distributed correctly across the full 0..65535 range: 0 still maps to 0 and 255 maps to exactly 65535. In particular, 127 maps to 127*257 = 32639, which is exactly 127/255 of 65535, so the value you saw is the proportional one after all.
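A quick way to convince yourself (a throwaway check, not part of the package itself) is that, for every 8-bit value, v | v<<8 equals both v*257 and the exact proportional scaling v*65535/255:

    package main

    import "fmt"

    func main() {
        for v := uint32(0); v <= 255; v++ {
            replicated := v | v<<8   // the shift-and-or trick from the standard library
            exact := v * 65535 / 255 // proportional mapping onto 0..65535 (exact, since 65535 = 255*257)
            if replicated != exact || replicated != v*257 {
                fmt.Println("mismatch at", v)
                return
            }
        }
        fmt.Println("v | v<<8 == v*257 == v*65535/255 for every v in 0..255")
    }

So replicating the low byte into the high byte isn't an approximation; it is exactly the proportional mapping, because 65535/255 = 257.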

It's the same as scaling the single-digit integers 0..9 up into the range 0..99. Multiplying by ten is one way, but a better way is to multiply by ten and add the original value (in other words, multiply by eleven):

n     n*10     n*10+n
-     ----     ------
0        0          0
1       10         11
2       20         22
3       30         33
4       40         44
5       50         55
6       60         66
7       70         77
8       80         88
9       90         99
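The decimal analogy can be checked the same way; a tiny sketch mirroring the table above (again, just an illustration):

    package main

    import "fmt"

    func main() {
        // "Digit replication" n*10+n is the exact proportional scaling n*99/9,
        // just as v<<8 | v is the exact proportional scaling v*65535/255.
        for n := 0; n <= 9; n++ {
            fmt.Printf("%d -> %2d (n*10 alone gives only %2d)\n", n, n*10+n, n*10)
        }
    }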
answered Feb 24 '23 by paxdiablo