What factors are best for image resizing? [closed]

Let's say I have an image that is 3000 px wide. I know (at least I think I do) that if I downsize it to be 1500 px wide (that is, 50%), the result will be better than if I resize it to be 1499 or 1501 px wide.

I suppose that will be so regardless of the algorithm used. But I have no solid proof, and the reason I'd like to have proof is that it could help me decide less obvious cases.

For instance, reducing it to 1000 px (one third) will also presumably work ok. But what about 3/4? Is it better than 1/2? It certainly can hold more detail, but will part of it not become irretrievably fuzzy? Is there a metric for the 'incurred fuzziness' which can be offset against the actual resolution?

For instance, I suppose such a metric would clearly show 3000 -> 1501 to be worse than 3000 -> 1500, by more than is gained from the one extra pixel of resolution.

Intuitively, 1/n resizes, where n is a factor of the original size, would yield the best results, followed by n/m ratios with the smallest possible numerator and denominator. Where the original size (in both X and Y) is not a multiple of the denominator, the results are expected to be poorer, though I have no proof of that.
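The intuition above can be made concrete for one method, box (pixel) averaging. In a minimal sketch (my own illustration; `bin_sizes` is a hypothetical helper, not an established function), an integer ratio gives every output pixel exactly the same number of source pixels, while a non-integer ratio forces uneven bins:

```python
def bin_sizes(src_len, dst_len):
    """Number of whole source pixels that fall into each output bin
    when downscaling a row of src_len pixels to dst_len pixels."""
    ratio = src_len / dst_len
    # Edge positions of each output bin, snapped to whole source pixels.
    edges = [round(i * ratio) for i in range(dst_len + 1)]
    return [edges[i + 1] - edges[i] for i in range(dst_len)]

print(bin_sizes(3000, 1500))  # every bin is exactly 2 source pixels wide
print(bin_sizes(3000, 1499))  # mostly 2-pixel bins, with occasional 3-pixel bins
```

The uneven bins in the second case mean some output pixels average more detail away than their neighbours, which is one way to quantify the "incurred fuzziness" of a non-integer ratio.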

These issues must have been studied by someone. People have devised all sorts of complex algorithms, and they must somehow take this into consideration. But I don't even know where to ask these questions. I ask them here because I've seen related ones with good answers. Thanks for your attention, and please excuse the contrived presentation.

asked Oct 23 '25 by entonio
1 Answer

The algorithm is key. Here's a list of common ones, from lowest quality to highest. As you get higher in quality, the exact ratio of input size to output size makes less of a difference. By the end of the list you shouldn't be able to tell the difference between resizing to 1499 or 1500.

  1. Nearest Neighbor, i.e. keeping some pixels and dropping others.
  2. Bilinear interpolation. This takes the 2x2 area of pixels around the point where your ideal sample would be, and calculates a new value weighted by how close that position is to each of the 4 pixels. It doesn't work well if you're reducing by more than 2:1, because it starts to skip input pixels and resemble nearest neighbor.
  3. Bicubic interpolation. Similar to bilinear, but using a 4x4 area with a more complex cubic formula to get sharper results. Again, not good beyond 2:1 reduction.
  4. Pixel averaging. If this isn't done with an integer ratio of input to output size, you'll be averaging a different number of pixels each time and the results will be uneven.
  5. Lanczos filtering. This takes a number of pixels from the input and runs them through a windowed version of the sinc function that attempts to retain as much detail as possible while keeping the calculations tractable. The size and speed of the filter vary with the resizing ratio. It's slow, but not as slow as pure sinc.
  6. Sinc filtering. This is theoretically perfect, but it requires processing a large chunk of input for every pixel output so it's very slow. You may also notice the difference between theory and practice when you see ringing artifacts in the output.
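To make item 2 concrete, here is a minimal sketch of bilinear resampling on a 1-D row of pixels (my own illustration, not from the answer; `bilinear_resize` and the sample values are hypothetical). Because each output value is built from only its 2 nearest neighbours, strong reductions leave many source pixels with little or no influence, which is the failure mode described above:

```python
def bilinear_resize(src, new_len):
    """Resample a 1-D list of pixel values to new_len values
    using linear interpolation between the two nearest neighbours."""
    if new_len < 2:
        return [src[0]] * new_len
    scale = (len(src) - 1) / (new_len - 1)
    out = []
    for i in range(new_len):
        pos = i * scale                      # ideal sample position in the source
        left = int(pos)
        right = min(left + 1, len(src) - 1)
        frac = pos - left                    # distance from the left neighbour
        out.append(src[left] * (1 - frac) + src[right] * frac)
    return out

row = [0, 10, 20, 30, 40, 50, 60, 70]
print(bilinear_resize(row, 4))  # a reduction beyond 2:1: some pixels are skipped entirely
```

In 2-D the same interpolation is applied along both axes over the 2x2 neighbourhood, but the 1-D case is enough to show why the 2:1 limit exists.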
answered Oct 25 '25 by Mark Ransom