I'm writing a project where I need to transfer a set of similar images over the network. To speed things up, I thought about doing what most video codecs do: send keyframes and then just the changes.
What I have is a set of BufferedImages, so by analogy to text files I basically just want to diff them and send the patch.
However, I've never really worked with images before, so if I do this myself it will be rather crappy.
So, what's the best way of implementing something like this? Or is there already a good implementation for it?
I guess storing the images in a byte array and binary-diffing them won't be very effective.
Edit: I need to stream the images.
Edit 2: It's not so much about the specifics of the implementation; it's more about the most efficient idea for an algorithm. E.g. only work in 5px chunks, and ignore a pixel if it has changed so little that the eye won't notice (I can live with some quality loss).
A simplistic approach would be to do the equivalent of an XOR operation on the two images. This will reveal the pixels that are identical (will be zero) and the pixels that have changed (non-zero).
Alternatively, if you don't care about nearly imperceptible differences, use a 'subtract' blend and then right-shift to discard one or two bits of difference.
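That thresholding idea could be sketched like this — a hypothetical helper (class and method names are mine, and the shift-by-2 threshold is a tunable assumption):

```java
public class PixelTolerance {
    // Treat two packed ARGB pixels as "the same" when every channel differs
    // by less than 4, i.e. the difference vanishes after a right-shift by 2.
    static boolean visiblyDiffers(int p1, int p2) {
        for (int shift = 0; shift <= 24; shift += 8) {
            int d = Math.abs(((p1 >>> shift) & 0xFF) - ((p2 >>> shift) & 0xFF));
            if ((d >> 2) != 0) {
                return true; // this channel changed enough to matter
            }
        }
        return false;
    }
}
```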
You could then calculate the bounds (maybe a simple rectangle) and only transmit the delta. The delta will likely contain a lot of zeroes or at most bytes with few rightmost bits of difference—i.e., it will have low 'entropy' which means it should theoretically be highly compressible using contemporary compression algorithms.
On the receiving end, the reverse process is just as simple. Given the delta and the bounding box, decompress the delta, then apply it (XOR, or left-shift then add) to the affected region of the previous/existing image.
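The lossless XOR round trip described above can be sketched in a few lines (class and method names are hypothetical; compression of the delta is omitted):

```java
import java.awt.image.BufferedImage;

public class XorDelta {
    // XOR every pixel of two same-sized images. Unchanged pixels become 0,
    // so the resulting delta image has low entropy and compresses well.
    static BufferedImage diff(BufferedImage a, BufferedImage b) {
        int w = a.getWidth(), h = a.getHeight();
        BufferedImage delta = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
        for (int y = 0; y < h; ++y) {
            for (int x = 0; x < w; ++x) {
                delta.setRGB(x, y, a.getRGB(x, y) ^ b.getRGB(x, y));
            }
        }
        return delta;
    }

    // XOR is its own inverse: applying the delta to the old image
    // reconstructs the new image exactly.
    static BufferedImage apply(BufferedImage oldImg, BufferedImage delta) {
        return diff(oldImg, delta);
    }
}
```

Note that `diff` and `apply` are the same operation, which keeps both ends of the connection trivially simple.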
For a more sophisticated, lossless approach, look into how animated GIFs/PNGs encode their frames and what algorithms are used to compute/encode the delta information between frames. See, for example, What's the best way to make an animated GIF using an algorithm?
For an even more sophisticated approach, when dealing with real-world imagery and if you're willing to go the lossy route—then you already hinted at it. Look at how video codecs encode/transmit frames, for example MPEG Video Encoding.
It goes without saying that there's a tradeoff between the complexity of the encoding/decoding process and the reduction in size of the transmitted data; at some point you'll have to decide whether the added computation on either end is worth the savings in transmission.
You can iterate over all the pixels of a BufferedImage using getRGB(int x, int y).
for (int x = 0; x < img.getWidth(); ++x)
{
for (int y = 0; y < img.getHeight(); ++y)
{
int oldARGB = oldImg.getRGB(x, y);
int newARGB = img.getRGB(x, y);
if (oldARGB != newARGB)
{
// handle the difference
}
}
}
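Building on that loop, one way to "handle the difference" is to track a bounding rectangle of the changed pixels, so only that region needs to be sent. A minimal sketch (class and method names are mine):

```java
import java.awt.Rectangle;
import java.awt.image.BufferedImage;

public class DirtyRegion {
    // Return the smallest rectangle covering all changed pixels,
    // or null if the two same-sized images are identical.
    static Rectangle changedBounds(BufferedImage oldImg, BufferedImage newImg) {
        int minX = Integer.MAX_VALUE, minY = Integer.MAX_VALUE;
        int maxX = -1, maxY = -1;
        for (int x = 0; x < newImg.getWidth(); ++x) {
            for (int y = 0; y < newImg.getHeight(); ++y) {
                if (oldImg.getRGB(x, y) != newImg.getRGB(x, y)) {
                    minX = Math.min(minX, x);
                    minY = Math.min(minY, y);
                    maxX = Math.max(maxX, x);
                    maxY = Math.max(maxY, y);
                }
            }
        }
        return maxX < 0 ? null
                        : new Rectangle(minX, minY, maxX - minX + 1, maxY - minY + 1);
    }
}
```

For very large images, per-pixel getRGB calls are slow; grabbing the backing int[] via Raster.getDataBuffer would be faster, but the idea is the same.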