 

Too Much Memory Issue with Semantic Image Segmentation NN (DeepLabV3+)

First, let me explain my task: I have nearly 3000 images of two different ropes. They contain rope 1, rope 2 and the background. My labels/masks are images where, for example, the pixel value 0 represents the background, 1 represents the first rope and 2 represents the second rope. You can see both the input picture and the ground truth/labels in pictures 1 and 2 below. Notice that my ground truth/label has only 3 values: 0, 1 and 2. My input picture is grayscale, but for DeepLab I converted it to an RGB picture, because DeepLab was trained on RGB pictures. The converted picture still doesn't contain any color information, though.

[Picture 1: an input picture for my network. Picture 2: the ground truth. Picture 3: the raw color image.]

The idea of this task is that the neural network should learn the structure of the ropes, so it can label them correctly even if there are knots. The color information is therefore not important for the network itself; it only matters for creating the labels, since the ropes have different colors and it is easy to use KMeans to generate the ground truth/labels.
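For illustration, this is roughly how such labels can be generated with KMeans (the file names and the cluster-to-class mapping are simplified placeholders, not my exact script):

import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

# Load the raw color image and flatten it into a list of RGB pixels
img = np.array(Image.open("raw_color_0001.png").convert("RGB"))  # placeholder file name
pixels = img.reshape(-1, 3).astype(np.float32)

# Cluster the pixels into 3 groups: background, rope 1, rope 2
kmeans = KMeans(n_clusters=3, random_state=0).fit(pixels)
label_img = kmeans.labels_.reshape(img.shape[:2]).astype(np.uint8)

# The cluster indices come out in arbitrary order, so they still need to be
# mapped to the fixed class ids 0 (background), 1 (rope 1), 2 (rope 2),
# e.g. by inspecting the cluster centers.
Image.fromarray(label_img).save("label_0001.png")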

For this task I chose a semantic segmentation network called DeepLab V3+ in Keras with TensorFlow as the backend. I want to train the NN with my nearly 3000 images. The total size of all the images is under 100MB and they are 300x200 pixels. Maybe DeepLab is not the best choice for my task, because my pictures don't contain color information and are very small (300x200), but I haven't found a better semantic segmentation NN for my task so far.

From the Keras website I know how to load the data with flow_from_directory and how to use the fit_generator method. I don't know if my code is logically correct...

Here are the links:

https://keras.io/preprocessing/image/

https://keras.io/models/model/

https://github.com/bonlime/keras-deeplab-v3-plus

My first question is:

With my implementation my graphics card uses nearly all of its memory (11GB). I don't know why. Is it possible that the weights of DeepLab are that big? My batch size is the default 32 and all of my nearly 3000 images together are under 100MB. I already use config.gpu_options.allow_growth = True, see my code below.

A general question:

Does somebody know a good semantic segmentation NN for my task? I don't need a NN that was trained on color images, but I also don't need one that was trained on binary ground truth pictures... I tested my raw color image (picture 3) with DeepLab, but the resulting labels were not good...

Here is my code so far:

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "3"

import numpy as np
from model import Deeplabv3
import tensorflow as tf
import time
import tensorboard
import keras
from keras.preprocessing.image import img_to_array
from keras.applications import imagenet_utils
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import TensorBoard


config = tf.ConfigProto()
config.gpu_options.allow_growth = True
session = tf.Session(config=config)

from keras import backend as K
K.set_session(session)

NAME = "DeepLab-{}".format(int(time.time()))

deeplab_model = Deeplabv3(input_shape=(300,200,3), classes=3)

tensorboard = TensorBoard(log_dir="logpath/{}".format(NAME))

deeplab_model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=['accuracy'])

# we create two instances with the same arguments
data_gen_args = dict(featurewise_center=True,
                     featurewise_std_normalization=True,
                     rotation_range=90,
                     width_shift_range=0.1,
                     height_shift_range=0.1,
                     zoom_range=0.2)
image_datagen = ImageDataGenerator(**data_gen_args)
mask_datagen = ImageDataGenerator(**data_gen_args)

# Provide the same seed and keyword arguments to the fit and flow methods
seed = 1
#image_datagen.fit(images, augment=True, seed=seed)
#mask_datagen.fit(masks, augment=True, seed=seed)

image_generator = image_datagen.flow_from_directory(
    '/path/Input/',
    target_size=(300,200),
    class_mode=None,
    seed=seed)

mask_generator = mask_datagen.flow_from_directory(
    '/path/Label/',
    target_size=(300,200),
    class_mode=None,
    seed=seed)

# combine generators into one which yields image and masks
train_generator = zip(image_generator, mask_generator)

print("compiled")

#deeplab_model.fit(X, y, batch_size=32, epochs=10, validation_split=0.3, callbacks=[tensorboard])
deeplab_model.fit_generator(train_generator, steps_per_epoch= np.uint32(2935 / 32), epochs=10, callbacks=[tensorboard])

print("finish fit")
deeplab_model.save_weights('deeplab_1.h5')
deeplab_model.save('deeplab-1')

session.close()

Here is my code to test DeepLab (from GitHub):

from matplotlib import pyplot as plt
import cv2  # used for resize; if you don't have it, use anything else
import numpy as np
from model import Deeplabv3
import tensorflow as tf
from PIL import Image, ImageEnhance

deeplab_model = Deeplabv3(input_shape=(512,512,3), classes=3)
#deeplab_model = Deeplabv3()
img = Image.open("Path/Input/0/0001.png")
imResize = img.resize((512,512), Image.ANTIALIAS)
imResize = np.array(imResize)
img2 = cv2.cvtColor(imResize, cv2.COLOR_GRAY2RGB)

w, h, _ = img2.shape
ratio = 512. / np.max([w,h])
resized = cv2.resize(img2,(int(ratio*h),int(ratio*w)))
resized = resized / 127.5 - 1.
pad_x = int(512 - resized.shape[0])
resized2 = np.pad(resized,((0,pad_x),(0,0),(0,0)),mode='constant')
res = deeplab_model.predict(np.expand_dims(resized2,0))
labels = np.argmax(res.squeeze(),-1)
plt.imshow(labels[:-pad_x])
plt.show()
Asked Feb 21 '19 by Mob


2 Answers

First question: DeepLabV3+ is a very large model (I assume you are using the Xception backbone?!) and 11 GB of needed GPU memory is totally normal for a batch size of 32 with 200x300 pixels :) (training DeepLabV3+ myself, I needed approx. 11 GB with a batch size of 5 and 500x500 pixels). One note on the second sentence of your question: the required GPU resources are influenced by many factors (model, optimizer, batch size, image crop, preprocessing etc.), but the actual size of your dataset shouldn't influence them. So it doesn't matter whether your dataset is 300MB or 300GB large.
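If memory is the limiting factor, the quickest lever is a smaller batch size. A minimal sketch against the generators from your code (the value 8 is just an example, tune it to your GPU):

batch_size = 8  # try something in the range 4-16 instead of the default 32

image_generator = image_datagen.flow_from_directory(
    '/path/Input/',
    target_size=(300, 200),
    class_mode=None,
    batch_size=batch_size,
    seed=seed)

mask_generator = mask_datagen.flow_from_directory(
    '/path/Label/',
    target_size=(300, 200),
    class_mode=None,
    batch_size=batch_size,
    seed=seed)

train_generator = zip(image_generator, mask_generator)

# steps_per_epoch has to match the new batch size
deeplab_model.fit_generator(train_generator,
                            steps_per_epoch=(2935 + batch_size - 1) // batch_size,
                            epochs=10,
                            callbacks=[tensorboard])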

General question: You are using a small dataset. Choosing DeepLabV3+ & Xception might not be a good fit, since the model might be too large for it, which can lead to overfitting. If you haven't obtained satisfying results yet, you might try a smaller network. If you want to stick to the DeepLab framework, you could switch the backbone from the Xception network to MobileNetV2 (in the official TensorFlow version it is already implemented), as sketched below. Alternatively, you could try a standalone network like the Inception network with an FCN head...
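With the Keras implementation you linked (bonlime/keras-deeplab-v3-plus), switching the backbone should just be a constructor argument; a sketch under that assumption (check the repository's README for the exact parameter name and the available pre-trained weights):

# Assumes the bonlime keras-deeplab-v3-plus implementation, which exposes a
# `backbone` argument as far as I remember; verify against the repo's README.
from model import Deeplabv3

deeplab_model = Deeplabv3(input_shape=(300, 200, 3),
                          classes=3,
                          backbone='mobilenetv2')  # instead of the default Xception backbone

deeplab_model.compile(loss="categorical_crossentropy",
                      optimizer="adam",
                      metrics=['accuracy'])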

In either case it would be essential to use a pre-trained encoder with a well-trained feature representation. If you don't find a good initialization of your desired model based on grayscale input images, just use a model pre-trained on RGB images, extend the pre-training with a grayscale dataset (basically you can convert any big RGB dataset to grayscale) and finetune the weights on the grayscale input before using your data.
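Converting an RGB dataset to 3-channel grayscale images (so the RGB-pretrained input layer can stay unchanged) is straightforward; a minimal sketch with hypothetical directory names:

import os
from PIL import Image

src_dir = "rgb_dataset/"   # hypothetical paths
dst_dir = "gray_dataset/"
os.makedirs(dst_dir, exist_ok=True)

for name in os.listdir(src_dir):
    img = Image.open(os.path.join(src_dir, name))
    # Convert to grayscale, then back to 3 channels so the RGB-pretrained
    # encoder input still fits.
    gray3 = img.convert("L").convert("RGB")
    gray3.save(os.path.join(dst_dir, name))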

I hope this helps! Cheers, Frank

Answered Oct 10 '22 by FranklynJey


IBM's Large Model Support (LMS) library enables training of large deep neural networks that would normally exhaust GPU memory while training. LMS manages this over-subscription of GPU memory by temporarily swapping tensors to host memory when they are not needed.

Description - https://developer.ibm.com/components/ibm-power/articles/deeplabv3-image-segmentation-with-pytorch-lms/

Pytorch - https://github.com/IBM/pytorch-large-model-support

TensorFlow - https://github.com/IBM/tensorflow-large-model-support
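For a Keras setup like the one in the question, TFLMS ships a callback that enables the tensor swapping during training; a rough sketch, assuming the LMSKerasCallback class described in the repository (the import path and class name may differ between TFLMS versions, so check the README of the release you install):

# Assumption: module and class names below follow IBM's tensorflow-large-model-support
# documentation; they are not part of stock TensorFlow and may vary by version.
from tensorflow_large_model_support import LMSKerasCallback

lms_callback = LMSKerasCallback()

# Pass the callback alongside the existing TensorBoard callback
deeplab_model.fit_generator(train_generator,
                            steps_per_epoch=np.uint32(2935 / 32),
                            epochs=10,
                            callbacks=[tensorboard, lms_callback])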

Answered Oct 10 '22 by Abhi25t