I am creating a pipeline for text recognition and I want to use the TensorFlow Dataset API to load the data and run some preprocessing with OpenCV.
I was following this tutorial https://www.tensorflow.org/guide/datasets#applying_arbitrary_python_logic_with_tfpy_func and I have this preprocessing function:
import os
import cv2
import numpy as np
import tensorflow as tf

def preprocess(path, imgSize=(1024, 64), dataAugmentation=False):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    kernel = np.ones((3, 3), np.uint8)
    th, img = cv2.threshold(img, 127, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    img = cv2.dilate(img, kernel, iterations=1)
    # create target image and copy sample image into it
    (wt, ht) = imgSize
    (h, w) = img.shape
    fx = w / wt
    fy = h / ht
    f = max(fx, fy)
    # scale according to f (result at least 1 and at most wt or ht)
    newSize = (max(min(wt, int(w / f)), 1),
               max(min(ht, int(h / f)), 1))
    img = cv2.resize(img, newSize)
    # add random padding to fit the target size if data augmentation is true,
    # otherwise add padding to the right
    if newSize[1] == ht:
        if dataAugmentation:
            padding_width_left = np.random.randint(0, wt - newSize[0] + 1)
            img = cv2.copyMakeBorder(img, 0, 0, padding_width_left,
                                     wt - newSize[0] - padding_width_left,
                                     cv2.BORDER_CONSTANT, None, (0, 0))
        else:
            img = cv2.copyMakeBorder(img, 0, 0, 0, wt - newSize[0],
                                     cv2.BORDER_CONSTANT, None, (0, 0))
    else:
        img = cv2.copyMakeBorder(img, int(np.floor((ht - newSize[1]) / 2)),
                                 int(np.ceil((ht - newSize[1]) / 2)), 0, 0,
                                 cv2.BORDER_CONSTANT, None, (0, 0))
    # transpose for TF
    img = cv2.transpose(img)
    return img
But if I use this:

list_images = os.listdir(images_path)
image_paths = []
for i in range(len(list_images)):
    image_paths.append("iam-database/images/" + list_images[i])

dataset = tf.data.Dataset.from_tensor_slices(image_paths)
dataset = dataset.map(lambda filename: tuple(tf.py_function(preprocess, [filename], [tf.uint8])))
print(dataset)
I get an unknown shape when I print the dataset, and it seems that the preprocessing function is never actually applied. What should I do?
In order to run this preprocess function inside a tf.data pipeline, you need to wrap it with tf.py_function. It is the successor to the deprecated tf.py_func; the main differences are that it can be placed on a GPU and that it works with eager tensors. You can read more in the docs.
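As a quick illustration (a toy sketch with names of my own, assuming TF 2.x where eager execution is on by default): the argument your Python function receives from tf.py_function is an eager tensor, so .numpy() is available inside it:

import tensorflow as tf

def show(path):
    # inside tf.py_function the argument arrives as an eager tensor,
    # so .numpy() gives you the underlying bytestring
    print(type(path), path.numpy().decode("utf-8"))
    return path

out = tf.py_function(show, [tf.constant("iam-database/images/a.png")], tf.string)

Applied to your case, preprocess has to unwrap the filename tensor first: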
def preprocess(path, imgSize=(1024, 64), dataAugmentation=False):
    path = path.numpy().decode("utf-8")  # .numpy() retrieves data from the eager tensor
    img = cv2.imread(path)
    ...
    return img
At this point img is a NumPy array (that is what cv2.imread returns). The rest of the function is up to you.
This parse function is a wrapper for the dataset pipeline; it receives the filename as a tensor with a bytestring inside.
def parse_func(filename):
    out = tf.py_function(preprocess, [filename], tf.uint8)
    return out

dataset = tf.data.Dataset.from_tensor_slices(image_paths)
dataset = dataset.map(parse_func).batch(1)
iterator = dataset.make_one_shot_iterator()

sess = tf.Session()
print(sess.run(iterator.get_next()))
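As for the unknown shape you saw: tf.py_function cannot infer static shapes from arbitrary Python code, so its output always prints as <unknown>. If downstream ops need the shape, declare it yourself in the wrapper. A minimal sketch, assuming the default imgSize=(1024, 64) (after the transpose the array is 1024 x 64):

def parse_func(filename):
    out = tf.py_function(preprocess, [filename], tf.uint8)
    # py_function outputs have unknown static shape; set it explicitly
    # (1024 x 64 assumes the default imgSize and the final transpose)
    out.set_shape([1024, 64])
    return out

With that in place, the dataset reports concrete shapes instead of <unknown>.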