doubts regarding batch size and time steps in RNN

In TensorFlow's RNN tutorial (https://www.tensorflow.org/tutorials/recurrent), two parameters are mentioned: batch size and time steps. I am confused by these concepts. In my opinion, RNNs introduce batching because the to-train sequence can be so long that backpropagation cannot be computed over its full length (exploding/vanishing gradients). So we divide the long to-train sequence into shorter sequences, each of which is a mini-batch and whose size is called "batch size". Am I right here?

Regarding time steps, an RNN consists of only a cell (an LSTM or GRU cell, or some other cell), and this cell is sequential. We can understand the sequential concept by unrolling it. But unrolling a sequential cell is only a concept, not something real, which means we do not actually implement it in an unrolled way. Suppose the to-train sequence is a text corpus. Then we feed one word at a time to the RNN cell and update the weights. So why do we have time steps here? Combined with my understanding of "batch size" above, I am even more confused. Do we feed the cell one word or multiple words (batch size)?

asked Jun 06 '17 by derek


1 Answer

Batch size pertains to the number of training samples to consider at a time for updating your network weights. So, in a feedforward network, if you want to update your network weights based on gradients computed from one word at a time, your batch_size = 1. As the gradients are computed from a single sample, this is computationally very cheap. On the other hand, it also makes the training very erratic.

To understand what happens during the training of such a feedforward network, I'll refer you to this very nice visual example of single-batch versus mini-batch versus single-sample training.
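To make the effect of batch_size concrete, here is a minimal toy sketch of my own (a linear model with squared loss, not from the tutorial), showing how the batch size controls how many samples feed into each weight update:

```python
import numpy as np

# Toy example (not from the tutorial): fit y = w * x with squared loss,
# comparing batch_size = 1 (one cheap, noisy update per sample) to a larger
# batch_size (fewer, smoother updates averaged over several samples).
rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 3.0 * x + rng.normal(scale=0.1, size=100)   # true weight is 3.0

def train(batch_size, lr=0.05, epochs=5):
    w = 0.0
    for _ in range(epochs):
        for start in range(0, len(x), batch_size):
            xb, yb = x[start:start + batch_size], y[start:start + batch_size]
            grad = np.mean(2 * (w * xb - yb) * xb)  # d(loss)/dw averaged over the batch
            w -= lr * grad                          # one weight update per batch
    return w

print(train(batch_size=1))    # many cheap but erratic updates
print(train(batch_size=20))   # fewer, smoother updates
```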

However, you want to understand what happens with your num_steps variable. This is not the same as your batch_size. As you might have noticed, so far I have referred to feedforward networks. In a feedforward network, the output is determined from the network inputs, and the input-output relation is mapped by the learned network parameters:

hidden_activations(t) = f(input(t))

output(t) = g(hidden_activations(t)) = g(f(input(t)))
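As a tiny illustration (my own sketch, with made-up names), such a feedforward mapping could look like this in numpy; note that each output depends only on the current input:

```python
import numpy as np

# Minimal feedforward sketch (names are my own): each output depends only on
# the current input, with no memory of earlier inputs.
rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(8, 4))   # input -> hidden weights
W2 = rng.normal(scale=0.1, size=(2, 8))   # hidden -> output weights

def f(x):                                 # hidden_activations(t) = f(input(t))
    return np.tanh(W1 @ x)

def g(h):                                 # output(t) = g(hidden_activations(t))
    return W2 @ h

output = g(f(rng.normal(size=4)))         # depends only on this one input
```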

After a training pass of size batch_size, the gradient of your loss function with respect to each of the network parameters is computed and your weights are updated.

In a recurrent neural network (RNN), however, your network functions a tad differently:

hidden_activations(t) = f(input(t), hidden_activations(t-1))

output(t) = g(hidden_activations(t)) = g(f(input(t), hidden_activations(t-1)))

= g(f(input(t), f(input(t-1), hidden_activations(t-2)))) = g(f(input(t), f(input(t-1), ..., f(input(0), hidden_initial_state))))
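In code, that recurrence could look like the following minimal numpy sketch (an illustrative vanilla RNN cell of my own, not the tutorial's LSTM), where each hidden state is computed from the current input and the previous hidden state:

```python
import numpy as np

# Illustrative vanilla RNN cell (my own sketch, not the tutorial's LSTM):
# hidden_activations(t) = tanh(W_x @ input(t) + W_h @ hidden_activations(t-1) + b)
rng = np.random.default_rng(0)
input_dim, hidden_dim = 4, 8
W_x = rng.normal(scale=0.1, size=(hidden_dim, input_dim))
W_h = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
b = np.zeros(hidden_dim)

def rnn_forward(inputs, h0):
    h = h0
    hidden_states = []
    for x_t in inputs:                        # one iteration per time step
        h = np.tanh(W_x @ x_t + W_h @ h + b)  # f(input(t), hidden_activations(t-1))
        hidden_states.append(h)
    return hidden_states                      # output(t) = g(h(t)) would be applied on top

sequence = rng.normal(size=(10, input_dim))   # a sequence of 10 time steps
states = rnn_forward(sequence, h0=np.zeros(hidden_dim))
```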

As you might have surmised from the naming, the network retains a memory of its previous state, and the neuron activations are now also dependent on the previous network state and, by extension, on every state the network has ever been in. Most RNNs employ a forgetfulness factor in order to attach more importance to more recent network states, but that is beside the point of your question.

Then, as you might surmise, it is computationally very, very expensive to calculate the gradients of the loss function with respect to the network parameters if you have to backpropagate through all states since the creation of your network. So there is a neat little trick to speed up your computation: approximate your gradients with only a subset of the historical network states, num_steps of them. This is what is known as truncated backpropagation through time.
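For example, this is roughly how a long corpus of word ids gets cut into (batch_size, num_steps) windows; the helper and names below are my own illustration, in the spirit of the tutorial's input pipeline rather than its actual code:

```python
import numpy as np

# Minimal sketch (my own helper): cut a long corpus of word ids into windows of
# shape (batch_size, num_steps), so backpropagation is truncated at num_steps
# steps into the past instead of reaching back to the start of the corpus.
def make_batches(token_ids, batch_size, num_steps):
    n = (len(token_ids) - 1) // (batch_size * num_steps) * batch_size * num_steps
    x = np.array(token_ids[:n]).reshape(batch_size, -1)
    y = np.array(token_ids[1:n + 1]).reshape(batch_size, -1)  # targets = inputs shifted by one word
    for start in range(0, x.shape[1], num_steps):
        # each yielded pair has shape (batch_size, num_steps)
        yield x[:, start:start + num_steps], y[:, start:start + num_steps]

corpus = list(range(1000))                    # stand-in for a corpus of word ids
for inputs, targets in make_batches(corpus, batch_size=4, num_steps=25):
    pass                                      # feed inputs/targets to the RNN here
```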

If this conceptual discussion was not clear enough, you can also take a look at a more mathematical description of the above.

answered Oct 11 '22 by Uvar