This may be a simple question, but I wonder why pixel values can be floats. I'm tracking a target and get its center of mass with the contours() and moments() methods, and if I want I can get float values from them.
But why is this even possible? An image can't have a 0.1234 pixel.
We can represent pixels as unsigned eight-bit integers, which occupy a single byte of computer memory each and have a numeric range of 0 to 255. Or we can represent them as double-precision floating-point numbers, which occupy eight bytes of computer memory each.
In general, a pixel does not need to be an 8-bit unsigned integer. A pixel can have any numerical type. Usually a pixel intensity represents an amount of light, or a density of some sort, but this is not always the case.
These pixel values denote the intensity of the pixels. For a grayscale or b&w image, we have pixel values ranging from 0 to 255. The smaller numbers closer to zero represent the darker shades, while the larger numbers closer to 255 represent the lighter or white shades.
Each pixel corresponds to one value. In an 8-bit grayscale image, the value of a pixel lies between 0 and 255. The value of a pixel at any point corresponds to the intensity of the light photons striking that point: each pixel stores a value proportional to the light intensity at that particular location.
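A minimal NumPy sketch of the two representations described above (the pixel values here are made up for illustration): the same image can be stored as unsigned 8-bit integers in [0, 255], one byte per pixel, or as double-precision floats, eight bytes per pixel, conventionally rescaled to [0, 1].

```python
import numpy as np

# An 8-bit grayscale "image": values in 0..255, one byte per pixel.
img_u8 = np.array([[0, 64],
                   [128, 255]], dtype=np.uint8)

# The same image as double-precision floats, rescaled to the [0, 1]
# convention; each value now occupies eight bytes.
img_f64 = img_u8.astype(np.float64) / 255.0

print(img_u8.itemsize)   # bytes per pixel for uint8: 1
print(img_f64.itemsize)  # bytes per pixel for float64: 8
print(img_f64.max())     # 1.0
```

The pixel intensities themselves are unchanged; only the storage type and the value convention differ.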
A Mat can hold float values, as well as many other types. Examples of images having float values are: images whose values are in the range [0, 255] for uchar, but in the range [0, 1] for float (it's just a convention); or float results of some operation. The latter is not necessarily a ready-to-display image; in that case it is just a regular matrix holding some values. Don't forget that an image is just a matrix whose values represent the pixel values.
You can also have float coordinates. This is the case of sub-pixel accuracy. The centroid of a blob may have float coordinates, e.g. (5.1, 6.8). You can draw this point, losing a bit of precision, with integer coordinates, e.g. (5, 7), but you may need the float value for further computation.
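To see why a centroid is naturally a float, here is a small NumPy sketch that computes a blob's center of mass from its spatial moments, cx = M10/M00 and cy = M01/M00, which is the same formula one would apply to the moments returned by OpenCV. The mask shape is made up for illustration:

```python
import numpy as np

# Binary mask of a small blob: a 3x5 block plus one extra pixel,
# standing in for a segmented target.
mask = np.zeros((10, 10), dtype=np.float64)
mask[2:5, 3:8] = 1.0   # rows 2..4, columns 3..7
mask[5, 3] = 1.0       # one extra pixel to break the symmetry

# Spatial moments: M00 is the area, M10 and M01 the first-order sums.
ys, xs = np.nonzero(mask)
m00 = mask.sum()
cx = xs.sum() / m00  # M10 / M00
cy = ys.sum() / m00  # M01 / M00

print(cx, cy)  # 4.875 3.125 -- a sub-pixel, floating-point centroid
```

The center of mass averages over many pixels, so it almost never lands exactly on an integer grid position.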
I believe (if I understood you properly) that you have discovered the world of interpolation. An image itself is a continuous plane, with x coordinates in the range [0, width] and y coordinates in the range [0, height]. To show the image on a monitor, or store it on disk, the image is discretized into pixels.
If you are applying discrete operations, such as convolutions, additions or thresholds, to an image, it is normal to think of it as a grid of values. However, especially in your case, where you are tracking objects in an image, you should think of it as a continuous space. The center of mass of an object won't usually lie on a discrete value; it will be some floating-point coordinate in the above range (e.g. p = (50.5, 10.1)), but that shouldn't be a problem.
If you want, you can also access the color (or grayscale value) of the pixel at p = (50.5, 10.1) by using bilinear (or more complex) interpolation.
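A minimal sketch of bilinear interpolation in plain NumPy (the helper name and the tiny test image are made up for illustration; OpenCV offers the same kind of sub-pixel sampling, e.g. via getRectSubPix): the value at a float coordinate is a weighted average of the four surrounding pixels.

```python
import numpy as np

def bilinear(img, x, y):
    """Sample img at float coordinate (x, y) by bilinear interpolation."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - x0, y - y0  # fractional offsets within the cell
    # Weighted average of the four neighboring pixels.
    return ((1 - fx) * (1 - fy) * img[y0, x0]
            + fx * (1 - fy) * img[y0, x0 + 1]
            + (1 - fx) * fy * img[y0 + 1, x0]
            + fx * fy * img[y0 + 1, x0 + 1])

img = np.array([[0.0, 10.0],
                [20.0, 30.0]])
print(bilinear(img, 0.5, 0.5))  # 15.0, the average of the four corners
```

At exact integer coordinates this reduces to the pixel value itself, so it extends the discrete grid to a continuous function.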