My network takes images of size 100 x 100 pixels, so I have to resize the images of my dataset, which come in various sizes. I want to extract the largest central square region from a given image and then resize it to 100 x 100.
To be more precise, let's say an image has a width of 200 pixels and a height of 50 pixels. Then I want to extract the largest central square region, which in this example is 50 x 50, and then resize it to 100 x 100 pixels.
What is the right way to do that using TensorFlow? Right now I am using tf.image.resize_images(), which distorts the image, and I want to get rid of that.
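Simplified, my current preprocessing looks roughly like this (a minimal sketch; the placeholder setup is just for illustration). Resizing a 200 x 50 image straight to 100 x 100 stretches it vertically:

import tensorflow as tf

# Illustrative sketch of the current pipeline: resizing non-square
# images directly to 100 x 100 distorts them.
image = tf.placeholder(tf.float32, shape=[None, None, 3])
resized = tf.image.resize_images(image, [100, 100])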
Sounds like tf.image.crop_to_bounding_box does what you need:
import tensorflow as tf

def crop_center(image):
    # Requires a statically known shape: [..., height, width, channels].
    h, w = image.shape[-3], image.shape[-2]
    if h > w:
        # Taller than wide: crop vertically to a centered w x w square.
        cropped_image = tf.image.crop_to_bounding_box(image, (h - w) // 2, 0, w, w)
    else:
        # Wider than tall (or square): crop horizontally to a centered h x h square.
        cropped_image = tf.image.crop_to_bounding_box(image, 0, (w - h) // 2, h, h)
    return tf.image.resize_images(cropped_image, (100, 100))
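For example, with the 200 x 50 image from the question (note that the h > w comparison happens in Python, so this variant needs the static shape to be known):

print(crop_center(tf.zeros((50, 200, 3))))
# A float32 Tensor of shape (100, 100, 3)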
I think this does what you want:
import tensorflow as tf

def crop_center_and_resize(img, size):
    s = tf.shape(img)
    h, w = s[0], s[1]
    # Side length of the largest central square.
    c = tf.minimum(h, w)
    h_start = (h - c) // 2
    w_start = (w - c) // 2
    center = img[h_start:h_start + c, w_start:w_start + c]
    return tf.image.resize_images(center, [size, size])

print(crop_center_and_resize(tf.zeros((80, 50, 3)), 100))
# Tensor("resize_images/Squeeze:0", shape=(100, 100, 3), dtype=float32)
There is also tf.image.crop_and_resize, which can do both things in one go, but you have to use normalized image coordinates with that:
import tensorflow as tf

def crop_center_and_resize(img, size):
    s = tf.shape(img)
    h, w = s[0], s[1]
    c = tf.minimum(h, w)
    # Side length of the square crop as a fraction of each dimension.
    hn = tf.cast(c, tf.float32) / tf.cast(h, tf.float32)
    wn = tf.cast(c, tf.float32) / tf.cast(w, tf.float32)
    # Boxes are [y1, x1, y2, x2] in normalized coordinates,
    # here a square centered in the image.
    result = tf.image.crop_and_resize(tf.expand_dims(img, 0),
                                      [[(1 - hn) / 2, (1 - wn) / 2,
                                        (1 + hn) / 2, (1 + wn) / 2]],
                                      [0], [size, size])
    return tf.squeeze(result, 0)
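As a quick check, the same 80 x 50 example again comes out as a 100 x 100 tensor:

print(crop_center_and_resize(tf.zeros((80, 50, 3)), 100))
# A float32 Tensor of shape (100, 100, 3)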