I'm trying to understand how simple algos for resizing images work.
I've implemented the Bilinear and Bicubic interpolation methods, and they both work fine when used to upscale images. However, when I use the same methods for downscaling (say to 0.25x of the original size), the image starts to lose fine details.
For example (the wires aren't straight lines anymore):
I understand what causes this effect. So my question is: how can it be fixed? When I resize the same image with other software, the result doesn't show this kind of distortion in such details (in my case, the wires).
The only solution I've found so far is to downscale the image step by step, shrinking it by no more than 10-20% per step, but that is of course much slower.
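For reference, here is a minimal sketch of that stepwise workaround, using Pillow's resize (Pillow >= 9.1) as a stand-in for my own bilinear routine; the function name downscale_stepwise and the 0.8 step factor are only illustrative:

```python
from PIL import Image

def downscale_stepwise(img, target_w, target_h, step=0.8):
    """Shrink by at most ~20% per pass until the target size is reached."""
    w, h = img.size
    # Keep reducing while another full step still stays above the target size.
    while int(w * step) > target_w and int(h * step) > target_h:
        w, h = int(w * step), int(h * step)
        img = img.resize((w, h), Image.Resampling.BILINEAR)
    # Final pass to hit the exact target size.
    return img.resize((target_w, target_h), Image.Resampling.BILINEAR)

# e.g. small = downscale_stepwise(Image.open("photo.png"), 480, 270)
```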
Thanks for your attention.
Interestingly, few textbooks on image processing warn about this: for downsampling, interpolation is both overkill and inappropriate. Inappropriate because, by the Shannon sampling theorem, the signal must not contain frequencies above half the new sampling rate, or you get nasty aliasing effects. Overkill because subpixel accuracy is useless when the source pixels are tiny compared to the destination pixels.
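To make that concrete (my own notation, not part of the original argument): downscaling by a factor $s < 1$ lowers the sampling rate to $s$ samples per original pixel, so the new Nyquist frequency is

$$ f_{\text{Nyquist}} = \frac{s}{2} \ \text{cycles per original pixel}. $$

Any detail finer than that must be attenuated before resampling; at $s = 0.25$, for example, structures with a period shorter than 8 original pixels (such as thin wires) will alias.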
There are two ways to obtain correct results:
lowpass filtering (blur) followed by decimation,
averaging of blocks of pixels.
The latter is more efficient from a computational point of view, but a little less flexible.
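Here is a minimal NumPy/SciPy sketch of both approaches for a single-channel image; the function names and the sigma = factor / 2 choice are my own illustrative assumptions, and the block-averaging version assumes an integer reduction factor:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def downscale_blur_decimate(img, factor, sigma=None):
    """Low-pass filter (Gaussian blur), then keep every `factor`-th pixel."""
    if sigma is None:
        sigma = factor / 2.0  # rough cutoff near the new Nyquist frequency
    blurred = gaussian_filter(img.astype(np.float64), sigma=sigma)
    return blurred[::factor, ::factor]

def downscale_block_average(img, factor):
    """Average non-overlapping factor x factor blocks (area averaging)."""
    h, w = img.shape
    h, w = h - h % factor, w - w % factor  # crop so the blocks divide evenly
    blocks = img[:h, :w].astype(np.float64)
    blocks = blocks.reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

# e.g. quarter = downscale_block_average(gray_image, 4)
```

For color images, apply either function per channel; for non-integer factors, the blur-then-decimate variant can be followed by an ordinary bilinear resample of the blurred image instead of plain decimation.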