I use the data augmentation according to the official TensorFlow tutorial. First, I create a sequential model with augmenting layers:
from tensorflow.keras import layers, Sequential

def _getAugmentationFunction(self):
    if not self.augmentation:
        return None
    pipeline = []
    pipeline.append(layers.RandomFlip('horizontal_and_vertical'))
    # Note: RandomRotation's factor is a fraction of 2*pi, not degrees
    pipeline.append(layers.RandomRotation(30))
    pipeline.append(layers.RandomTranslation(0.1, 0.1, fill_mode='nearest'))
    pipeline.append(layers.RandomBrightness(0.1, value_range=(0.0, 1.0)))
    model = Sequential(pipeline)
    return lambda x, y: (model(x, training=True), y)
Then, I use the map function on the dataset:
data_augmentation = self._getAugmentationFunction()
if data_augmentation is not None:  # map(None) would fail when augmentation is off
    self.train_data = self.train_data.map(data_augmentation,
                                          num_parallel_calls=AUTOTUNE)
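The same official TensorFlow tutorial also shows a second option: make the augmentation layers part of the model itself, so they run on the accelerator during the training forward pass and become no-ops at inference. A minimal sketch (the layer choices mirror the pipeline above; the Conv2D/Dense backbone is a placeholder, not part of the original code):

```python
import tensorflow as tf
from tensorflow.keras import layers, Sequential

# Augmentation embedded in the model: these layers only transform
# inputs when the model is called with training=True.
augmentation = Sequential([
    layers.RandomFlip('horizontal_and_vertical'),
    layers.RandomRotation(30 / 360),  # factor is a fraction of 2*pi (full turn)
    layers.RandomTranslation(0.1, 0.1, fill_mode='nearest'),
])

model = Sequential([
    augmentation,                            # active only in training mode
    layers.Conv2D(8, 3, activation='relu'),  # placeholder backbone
    layers.GlobalAveragePooling2D(),
    layers.Dense(10),
])

x = tf.random.uniform((2, 32, 32, 3))
out = model(x, training=True)   # augmented batch flows through the backbone
```

With this layout the dataset pipeline stays augmentation-free, which sidesteps the tracing of the random layers inside dataset.map.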
The code works as expected but I get the following warning:
WARNING:tensorflow:Using a while_loop for converting RngReadAndSkip
WARNING:tensorflow:Using a while_loop for converting Bitcast
WARNING:tensorflow:Using a while_loop for converting Bitcast
WARNING:tensorflow:Using a while_loop for converting StatelessRandomUniformV2
WARNING:tensorflow:Using a while_loop for converting ImageProjectiveTransformV3
(the same block of warnings repeats for every random layer in the pipeline)
What is the reason for these warnings, and how can I fix them? I'm using TF v2.9.1.
It's not just the warnings: these layers are extremely slow! In my case, the time for one epoch went up from 30 seconds to several minutes.
This appears to be a bug in Keras 2.9 and 2.10 (the versions bundled with TensorFlow): https://github.com/keras-team/keras-cv/issues/581
With TF v2.8.3 it works correctly: no warnings, and training is fast.
On my Arch system (TF was installed with pacman from the python-tensorflow-opt-cuda package), the following command solved the issue:
python -m pip install tensorflow-gpu==2.8.3
Update, six months later: the bug is still present in Keras 2.11.
I spent more than three days reworking my input pipeline (applying the layers directly, switching to tf.image, and so on), so let me save other people a lot of headaches: just downgrade to TensorFlow 2.8.3.
It will save you a lot of time. And the problem is not limited to the Keras layers: in TF 2.11, image_dataset_from_directory takes 3.30 minutes to load a dataset that version 2.8.3 loads in 33 seconds.
Here is the root cause of these inefficiencies; I suggest waiting for 2.12 (or a later release) if it isn't resolved by then:
https://github.com/keras-team/keras-cv/issues/291
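Pulling together the version reports in this thread (slow from 2.9 up to at least 2.11, fast on 2.8.3), a small pure-Python guard can flag an affected install at startup. Both function names here are hypothetical helpers, and the parsing assumes plain MAJOR.MINOR.PATCH version strings:

```python
def _version_tuple(version):
    """Parse a plain 'MAJOR.MINOR.PATCH' string into a tuple of ints."""
    return tuple(int(part) for part in version.split('.')[:3])

def augmentation_layers_affected(tf_version):
    """True for TF releases whose bundled Keras (2.9 through 2.11) is
    reported above to fall back to a while_loop in the random layers."""
    return (2, 9, 0) <= _version_tuple(tf_version) < (2, 12, 0)

# Usage sketch: warn once at import time
# import tensorflow as tf
# if augmentation_layers_affected(tf.__version__):
#     print('Keras preprocessing layers will be slow; consider TF 2.8.3')
```

The upper bound of 2.12.0 reflects the hope expressed above that the fix lands in 2.12; adjust it once the linked issue is actually resolved.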