 

How to detect largest difference between images in OpenCV Python?

I'm working on a shooting simulator project where I have to detect bullet holes in images. I'm trying to diff two images so I can detect the new hole between them, but it's not working as expected. Between the two images there are minor changes in the previous bullet holes because of slight camera movement between frames.

My first image is here

before.png

and the second one is here

after.png

I tried this code for checking differences

import cv2 
import numpy as np

before = cv2.imread("before.png")
after = cv2.imread("after.png")
result = after - before
cv2.imwrite("result.png", result)

The result I'm getting in result.png is the image below

result.png

But this is not what I expected. I only want to detect the new hole, yet the diff also picks up stray pixels from the previous holes. The result I'm expecting is

expected.png

Please help me figure out how to detect only the big differences.

Thanks in advance.

Any new ideas will be appreciated.

asked Sep 09 '25 by WatchMyApps Lab

2 Answers

In order to find the differences between two images, you can utilize the Structural Similarity Index (SSIM) which was introduced in Image Quality Assessment: From Error Visibility to Structural Similarity. This method is already implemented in the scikit-image library for image processing. You can install scikit-image with pip install scikit-image.

Using the skimage.metrics.structural_similarity() function from scikit-image, you get back a score and a difference image, diff. The score represents the structural similarity index between the two input images and falls in the range [-1, 1], with values closer to one indicating higher similarity. But since you're only interested in where the two images differ, the diff image is what you're looking for: it contains the actual per-pixel differences between the two images.

Next, we threshold the diff image with Otsu's method, find all contours using cv2.findContours(), and filter for the largest contour. The largest contour should represent the newly detected difference, since slight frame-to-frame shifts should produce smaller contours than the added bullet hole.


Here is the largest detected difference between the two images


Here are the actual differences between the two images. Notice how all of the differences were captured, but since a new bullet hole is most likely the largest contour, we can filter out all the other slight movements between camera frames.


Note: this method works pretty well if we assume that the new bullet hole will have the largest contour in the diff image. If the newest hole were smaller, you may have to mask out the existing regions, and whatever new contours appear in the new image would be the new hole (assuming the image is a uniform black background with white holes); a sketch of that masking idea appears after the code below.

from skimage.metrics import structural_similarity
import cv2

# Load images
image1 = cv2.imread('1.png')
image2 = cv2.imread('2.png')

# Convert to grayscale
image1_gray = cv2.cvtColor(image1, cv2.COLOR_BGR2GRAY)
image2_gray = cv2.cvtColor(image2, cv2.COLOR_BGR2GRAY)

# Compute SSIM between the two images
(score, diff) = structural_similarity(image1_gray, image2_gray, full=True)

# The diff image contains the actual image differences between the two images
# and is represented as a floating point data type in the range [0,1] 
# so we must convert the array to 8-bit unsigned integers in the range
# [0,255] before we can use it with OpenCV
diff = (diff * 255).astype("uint8")
print("Image Similarity: {:.4f}%".format(score * 100))

# Threshold the difference image, followed by finding contours to
# obtain the regions of the two input images that differ
thresh = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]
contours = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]

contour_sizes = [(cv2.contourArea(contour), contour) for contour in contours]
result = image2.copy()
# The largest contour should be the new detected difference
if len(contour_sizes) > 0:
    largest_contour = max(contour_sizes, key=lambda x: x[0])[1]
    x,y,w,h = cv2.boundingRect(largest_contour)
    cv2.rectangle(result, (x, y), (x + w, y + h), (36,255,12), 2)

cv2.imshow('result', result)
cv2.imshow('diff', diff)
cv2.waitKey()
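
If you can't assume the new hole yields the largest contour, here is a minimal sketch of the masking idea from the note above. It assumes white holes on a uniform black background; the file names, threshold value, and dilation kernel size are illustrative assumptions, not part of the code above.

import cv2
import numpy as np

# Load both frames as grayscale (assumed file names)
before = cv2.imread('before.png', cv2.IMREAD_GRAYSCALE)
after = cv2.imread('after.png', cv2.IMREAD_GRAYSCALE)

# Binarize so holes are white (255) on a black (0) background
before_mask = cv2.threshold(before, 127, 255, cv2.THRESH_BINARY)[1]
after_mask = cv2.threshold(after, 127, 255, cv2.THRESH_BINARY)[1]

# Dilate the old holes so slight frame-to-frame movement doesn't
# leave thin crescents behind after masking
kernel = np.ones((9, 9), np.uint8)
before_mask = cv2.dilate(before_mask, kernel)

# Keep only pixels that are new in the after frame
new_pixels = cv2.bitwise_and(after_mask, cv2.bitwise_not(before_mask))

# Any remaining contour is a candidate new hole, regardless of its size
contours = cv2.findContours(new_pixels, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    cv2.rectangle(after, (x, y), (x + w, y + h), 255, 2)

cv2.imshow('new holes', after)
cv2.waitKey()

Because the old holes are dilated before masking, a small drift between frames won't survive the subtraction, while a genuinely new hole of any size will.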

Here's another example with different input images. SSIM is pretty good at detecting differences between images.


answered Sep 12 '25 by nathancy


This is my approach: after we subtract one image from the other, some noise still remains, so I try to remove it. I divide the image into small sections, each a fixed percentage of its width and height, and compare each section between before and after, so that only significant chunks of white pixels remain. This algorithm lacks precision when there is occlusion, that is, whenever the new shot overlaps an existing one.

import cv2 
import numpy as np

# This is the fraction of the image's width/height used for each section
# (sensible values are roughly 0.01 <= percent <= 0.1)
percent = 0.01 

before = cv2.imread("before.png")
after = cv2.imread("after.png")

result = after - before # Subtracting removes what both frames share; what's left is the new shot plus noise (uint8 arithmetic wraps around)

h, w, _ = result.shape

hPercent = percent * h
wPercent = percent * w

def isBlack(crop): # Returns True if every pixel in the crop is zero
    return not crop.any()

for wFrom in range(0, w, int(wPercent)): # Here we remove that noise, section by section
    for hFrom in range(0, h, int(hPercent)):
        wTo = int(wFrom + wPercent)
        hTo = int(hFrom + hPercent)
        crop = result[hFrom:hTo, wFrom:wTo] # Crop the section (numpy indexes rows first, so height comes first)

        if isBlack(crop): # If it is black, there is no shot in it
            continue      # so we don't need to continue with the algorithm

        beforeCrop = before[hFrom:hTo, wFrom:wTo] # Crop the same section from the before image

        if not isBlack(beforeCrop): # If the before section is not black, there was a shot there already
            result[hFrom:hTo, wFrom:wTo] = [0, 0, 0] # So, we erase it from the result

cv2.imshow("result",result )
cv2.imshow("before", before)
cv2.imshow("after", after)
cv2.waitKey(0)

Before / After / Result (images)

As you can see, it worked for the use case you provided. A good next step is to keep an array of the positions of the shots, so that you can ignore holes that were already detected in earlier frames; a rough sketch of that idea follows.
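
For that last suggestion, here is a minimal sketch of tracking shot positions across frames; the known_shots list, the register_new_shots helper, and the pixel tolerance are illustrative assumptions, not part of the code above.

import math

known_shots = []  # running list of (x, y) hole centers seen so far
TOLERANCE = 15    # assumed: detections closer than this count as the same hole

def register_new_shots(detected_centers):
    # Return only centers that don't match an already-known hole,
    # and remember them for the next frame
    new_shots = []
    for (x, y) in detected_centers:
        already_known = any(math.hypot(x - kx, y - ky) <= TOLERANCE
                            for (kx, ky) in known_shots)
        if not already_known:
            known_shots.append((x, y))
            new_shots.append((x, y))
    return new_shots

Feeding each frame's contour centers through a filter like this lets you ignore re-detections of old holes even when the camera drifts slightly between frames.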

answered Sep 12 '25 by Pastre


