How to get summary information on tensorflow RNN

I implemented a simple RNN using tensorflow, shown below:

cell = tf.contrib.rnn.BasicRNNCell(state_size)
cell = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)

# x is the input tensor, shape [batch_size, n_steps, n_inputs]
rnn_outputs, final_state = tf.nn.dynamic_rnn(cell, x, dtype=tf.float32)

This works fine, but I'd like to log the weight variables to a summary writer. Is there any way to do this?

By the way, should we use tf.nn.rnn_cell.BasicRNNCell or tf.contrib.rnn.BasicRNNCell? Or are they identical?

asked Feb 22 '18 by MoneyBall



1 Answer

But I'd like to log the weight variables to summary writer. Is there any way to do this?

You can get a variable via the tf.get_variable() function. tf.summary.histogram accepts a tensor instance, so it's easier to use Graph.get_tensor_by_name():

import tensorflow as tf

n_steps = 2
n_inputs = 3
n_neurons = 5

X = tf.placeholder(dtype=tf.float32, shape=[None, n_steps, n_inputs])
basic_cell = tf.nn.rnn_cell.BasicRNNCell(num_units=n_neurons)
outputs, states = tf.nn.dynamic_rnn(basic_cell, X, dtype=tf.float32)

# dynamic_rnn creates the cell's variables under the 'rnn' scope.
with tf.variable_scope('rnn', reuse=True):
    print(tf.get_variable('basic_rnn_cell/kernel'))

# Fetch the kernel tensor by name and attach a histogram summary to it.
kernel = tf.get_default_graph().get_tensor_by_name('rnn/basic_rnn_cell/kernel:0')
tf.summary.histogram('kernel', kernel)
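
To actually get this histogram into TensorBoard, you then evaluate the merged summaries in a session and hand the result to a tf.summary.FileWriter. Here is a minimal sketch assuming TF 1.x and the graph above; the log directory and the random input batch are just illustrative:

import numpy as np

# Merge every summary op defined so far into a single fetchable tensor.
merged = tf.summary.merge_all()
writer = tf.summary.FileWriter('/tmp/rnn_logs', tf.get_default_graph())

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Illustrative batch matching X's shape: [batch, n_steps, n_inputs].
    X_batch = np.random.rand(4, n_steps, n_inputs).astype(np.float32)
    summary = sess.run(merged, feed_dict={X: X_batch})
    writer.add_summary(summary, global_step=0)
    writer.close()

Run tensorboard --logdir /tmp/rnn_logs and the kernel histogram shows up under the Histograms tab.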

By the way, do we use tf.nn.rnn_cell.BasicRNNCell or tf.contrib.rnn.BasicRNNCell? Or are they identical?

Yes, they are synonyms, but I prefer the tf.nn.rnn_cell package, because everything in tf.contrib is somewhat experimental and can change across 1.x releases.
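
If you want to convince yourself, a one-line sanity check (this held in the TF 1.x versions I've used, where tf.contrib.rnn re-exports the core class):

# Both names resolve to the same class object in TF 1.x.
print(tf.contrib.rnn.BasicRNNCell is tf.nn.rnn_cell.BasicRNNCell)  # True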

answered Sep 28 '22 by Maxim