
How to add TensorBoard to a TensorFlow Estimator process

I have taken the supplied Abalone example and made sure I understand it... well, I think I do. But since another Estimator project I am working on is producing total garbage, I have tried to add TensorBoard so I can understand what is going on.

The base code is https://www.tensorflow.org/extend/estimators

I added a Session and a writer:

    # Set model params
    model_params = {"learning_rate": 0.01}
    with tf.Session() as sess:
        # Instantiate Estimator
        nn = tf.contrib.learn.Estimator(model_fn=model_fn, params=model_params)
        writer = tf.summary.FileWriter('/tmp/ab_tf', sess.graph)
        nn.fit(x=training_set.data, y=training_set.target, steps=5000)
        # Score accuracy
        ev = nn.evaluate(x=test_set.data, y=test_set.target, steps=1)


And added one line in the model_fn function so it looks like this:


def model_fn(features, targets, mode, params):
  """Model function for Estimator."""

  # Connect the first hidden layer to input layer
  # (features) with relu activation
  first_hidden_layer = tf.contrib.layers.relu(features, 49)

  # Connect the second hidden layer to first hidden layer with relu
  second_hidden_layer = tf.contrib.layers.relu(first_hidden_layer, 49)

  # Connect the output layer to second hidden layer (no activation fn)
  output_layer = tf.contrib.layers.linear(second_hidden_layer, 1)

  # Reshape output layer to 1-dim Tensor to return predictions
  predictions = tf.reshape(output_layer, [-1])
  predictions_dict = {"ages": predictions}

  # Calculate loss using mean squared error
  loss = tf.losses.mean_squared_error(targets, predictions)

  # Calculate root mean squared error as additional eval metric
  eval_metric_ops = {
      "rmse": tf.metrics.root_mean_squared_error(
          tf.cast(targets, tf.float64), predictions)
  }

  train_op = tf.contrib.layers.optimize_loss(
      loss=loss,
      global_step=tf.contrib.framework.get_global_step(),
      learning_rate=params["learning_rate"],
      optimizer="SGD")


  tf.summary.scalar('Loss', loss)

  return model_fn_lib.ModelFnOps(
      mode=mode,
      predictions=predictions_dict,
      loss=loss,
      train_op=train_op,
      eval_metric_ops=eval_metric_ops)

Finally, I added a

    writer.close()

When I run the code, I get a data file in /tmp/ab_tf. The file is not empty, but it is only 139 bytes in size, which suggests nothing of substance is being written.

When I open this directory with TensorBoard, there is no data.

What am I doing wrong?

Appreciate any input ...

Asked May 11 '17 by Tim Seed

2 Answers

Actually, you don't need to set up a summary writer for the estimator. The summary log will be written to the estimator's model_dir.

Let's say your model_dir for the estimator is './tmp/model'; you can then view the summaries by running tensorboard --logdir=./tmp/model
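
For example, here is a minimal sketch of that approach, reusing the model_fn and data sets from the question (the './tmp/model' path is just an example directory); note that no explicit Session or FileWriter is needed:

    # Minimal sketch, assuming the model_fn, training_set and test_set from the question.
    model_params = {"learning_rate": 0.01}

    # Giving the Estimator a model_dir makes it write its own checkpoints and
    # summary event files into that directory.
    nn = tf.contrib.learn.Estimator(model_fn=model_fn,
                                    params=model_params,
                                    model_dir='./tmp/model')

    nn.fit(x=training_set.data, y=training_set.target, steps=5000)
    ev = nn.evaluate(x=test_set.data, y=test_set.target, steps=1)

While (or after) training runs, tensorboard --logdir=./tmp/model should show the Loss scalar declared in model_fn alongside the metrics the Estimator logs on its own.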

Answered by jamescheuk


I was trying to do exactly the same thing as you. I finally figured out that you need to pass model_dir as a parameter to the class constructor like this:

    # Instantiate Estimator
    nn = tf.contrib.learn.Estimator(model_fn=model_fn,
                                    params=model_params,
                                    model_dir=FLAGS.log_dir)

You can see this documented in the TensorFlow API here: https://www.tensorflow.org/api_docs/python/tf/contrib/learn/Estimator
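
For completeness, a minimal sketch of how FLAGS.log_dir might be defined; this is an assumption and not part of the original answer, and the flag name and default path are placeholders:

    # Hypothetical flag definition (TF 1.x style); any writable directory works.
    tf.app.flags.DEFINE_string('log_dir', '/tmp/ab_tf',
                               'Directory for Estimator checkpoints and summaries')
    FLAGS = tf.app.flags.FLAGS

With model_dir set this way, the tf.summary.scalar('Loss', loss) call inside model_fn should be picked up automatically, and tensorboard --logdir=/tmp/ab_tf will display it.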

Answered by eibarra