 

CuDNNLSTM: Failed to call ThenRnnForward

I am facing an issue when trying to use CuDNNLSTM instead of keras.layers.LSTM.

This is the error I am getting:

Failed to call ThenRnnForward with model config: [rnn_mode, rnn_input_mode, rnn_direction_mode]: 2, 0, 0 , [num_layers, input_size, num_units, dir_count, seq_length, batch_size]: [1, 300, 512, 1, 5521, 128]
    [[{{node bidirectional_1/CudnnRNN_1}} = CudnnRNN[T=DT_FLOAT, _class=["loc:@train...NNBackprop"], direction="unidirectional", dropout=0, input_mode="linear_input", is_training=true, rnn_mode="lstm", seed=87654321, seed2=0, _device="/job:localhost/replica:0/task:0/device:GPU:0"](bidirectional_1/transpose_1, bidirectional_1/ExpandDims_1, bidirectional_1/ExpandDims_1, bidirectional_1/concat_1)]]
    [[{{node loss/mul/_75}} = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_1209_loss/mul", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]

Also, I got this error in one of the runs:

InternalError: GPU sync failed

And the kernel kept dying after each run.

I only started getting this error when I tried to run the model with CuDNNLSTM on a VM instance on Google Cloud.

My code is:

# Imports assumed from the rest of the script (Keras models/layers used below)
from keras.models import Model
from keras.layers import Input, Embedding, Bidirectional, LSTM, Dense

MAX_LEN = max(len(article) for article in X_train_tokens)
EMBEDDING_DIM = 300
vocab_size = len(word_to_id)
classes = 2

# Text input
text_input = Input(shape=(MAX_LEN,))
embedding = Embedding(vocab_size, EMBEDDING_DIM, input_length=MAX_LEN)(text_input)
x = Bidirectional(LSTM(512, return_sequences=False))(embedding)  # CuDNNLSTM is swapped in here in the failing runs
pred = Dense(2, activation='softmax')(x)
model = Model(inputs=[text_input], outputs=pred)
model.compile(loss='categorical_crossentropy', optimizer='RMSprop', metrics=['accuracy'])

batch_size = 128
generator = text_training_generator(batch_size)
steps = len(X_train) // batch_size  # steps_per_epoch should be an integer

model.fit_generator(generator, steps_per_epoch=steps, verbose=True, epochs=10)

The model summary:

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         (None, 5521)              0         
_________________________________________________________________
embedding_1 (Embedding)      (None, 5521, 300)         8099100   
_________________________________________________________________
bidirectional_1 (Bidirection (None, 1024)              3330048   
_________________________________________________________________
dense_1 (Dense)              (None, 2)                 2050      
=================================================================
Total params: 11,431,198
Trainable params: 11,431,198
Non-trainable params: 0
_________________________________________________________________
asked Dec 29 '18 by user3084192


2 Answers

You are probably running out of memory on the GPU. Your network is very large, with 11 million trainable parameters. Do you really need a 512*2 output from your recurrent layer?
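For a rough sense of scale (a back-of-the-envelope sketch, not an exact accounting of cuDNN's workspace), just the per-timestep hidden states that have to be kept for backpropagation are already large, given the dimensions in the error message:

# Rough estimate of the hidden-state activations kept for backprop (float32),
# using the dimensions reported in the error: seq_length=5521, batch=128, units=512, bidirectional
seq_length, batch_size, num_units, directions = 5521, 128, 512, 2
hidden_state_bytes = seq_length * batch_size * num_units * directions * 4
print(hidden_state_bytes / 1024**3)  # ~2.7 GiB, before gate activations, weights and cuDNN workspace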

Furthermore, your embedding dimension of 300 is also quite large, and your input sequences are very long (5,521 timesteps), so the recurrent layer has a lot of state to keep around. Your network is probably too complex for this problem. I would suggest trying an embedding size of 32 and an LSTM size of 32 as a start; if your accuracy is still bad, you can increase the complexity.

EMBEDDING_DIM = 32
Bidirectional(LSTM(32, return_sequences=False))(embedding)
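Put together with the rest of the code from the question, those two changes would look roughly like this (a sketch reusing the question's variable names and assuming the same imports and data generator):

EMBEDDING_DIM = 32
text_input = Input(shape=(MAX_LEN,))
embedding = Embedding(vocab_size, EMBEDDING_DIM, input_length=MAX_LEN)(text_input)
x = Bidirectional(LSTM(32, return_sequences=False))(embedding)
pred = Dense(2, activation='softmax')(x)
model = Model(inputs=[text_input], outputs=pred)
model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])

This cuts the trainable parameters from roughly 11.4 million to well under one million, which reduces both GPU memory pressure and training time.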
answered by ixeption


I was facing this issue too with my model and Tensorflow 2.4.1 recently; I also found it is reproducible with e.g. the model from the tutorial Text generation with an RNN. The model trains on the CPU (consuming ~3 GB RAM), but on a GPU with 8 GB of memory training fails with the error

2021-02-12 18:45:48.482327: E tensorflow/stream_executor/dnn.cc:616] CUDNN_STATUS_EXECUTION_FAILED
in tensorflow/stream_executor/cuda/cuda_dnn.cc(1859): 'cudnnRNNForwardTraining( cudnn.handle(), rnn_desc.handle(), model_dims.max_seq_length, input_desc.handles(), input_data.opaque(), input_h_desc.handle(), input_h_data.opaque(), input_c_desc.handle(), input_c_data.opaque(), rnn_desc.params_handle(), params.opaque(), output_desc.handles(), output_data->opaque(), output_h_desc.handle(), output_h_data->opaque(), output_c_desc.handle(), output_c_data->opaque(), workspace.opaque(), workspace.size(), reserve_space.opaque(), reserve_space.size())'
2021-02-12 18:45:48.482405: W tensorflow/core/framework/op_kernel.cc:1763] OP_REQUIRES failed at cudnn_rnn_ops.cc:1521 : Internal: Failed to call ThenRnnForward with model config: [rnn_mode, rnn_input_mode, rnn_direction_mode]: 3, 0, 0 , [num_layers, input_size, num_units, dir_count, max_seq_length, batch_size, cell_num_units]: [1, 256, 1024, 1, 100, 32, 0] 

I also observed the GPU memory filling up to its limit during the model.compile() call, before the error appeared.

I solved this by preventing TensorFlow from allocating the full GPU memory up front, by adding

# Ask TensorFlow to grow GPU memory allocation on demand instead of grabbing it all at startup
gpu_devices = tf.config.experimental.list_physical_devices("GPU")
for device in gpu_devices:
    tf.config.experimental.set_memory_growth(device, True)

early enough in the script (e.g. right after import tensorflow as tf). This instructs TensorFlow to allocate GPU memory on demand. With that change, training runs on the GPU and consumes only ~2.2 GB of memory.
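For reference, a minimal sketch of where the snippet fits relative to model construction (the model below is just a stand-in, not the network from the question):

import tensorflow as tf

# Must run before anything allocates GPU memory, i.e. before any model is built or compiled
for device in tf.config.experimental.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(device, True)

# Anything built after this point allocates GPU memory incrementally
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=27000, output_dim=32),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
    tf.keras.layers.Dense(2, activation='softmax'),
])
model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])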

answered by hoefling