Can someone give an example of how to use TensorBoard to visualize NumPy array values?
There is a related question here, but I don't really get it: Tensorboard logging non-tensor (numpy) information (AUC).
For example, if I have:
import numpy as np

for i in range(100):
    foo = np.random.rand(3, 2)
How can I keep track of the distribution of foo with TensorBoard over the 100 iterations? Can someone give a code example? Thanks.
(Side notes: if you are working in PyTorch rather than TensorFlow, the SummaryWriter class in torch.utils.tensorboard is the main entry point for logging data that TensorBoard can visualize; scalars, images, histograms, graphs, and embeddings are all supported. And if you launch TensorBoard from a Jupyter notebook with per-environment kernels, make sure the tensorboard binary is on the PATH inside that notebook's environment.)
For simple (scalar) values, you can use this recipe:
summary_writer = tf.train.SummaryWriter(FLAGS.logdir)  # in newer TF 1.x this class is tf.summary.FileWriter
summary = tf.Summary()
summary.value.add(tag=tagname, simple_value=value)  # logs one scalar under `tagname`
summary_writer.add_summary(summary, global_step)
summary_writer.flush()
As far as using the array goes, perhaps you can add its 6 values one at a time, flattening first so that you iterate over scalars rather than rows, i.e.
for value in foo.flatten():
    summary.value.add(tag=tagname, simple_value=value)
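Putting those pieces together with the loop from the question, a minimal end-to-end sketch could look like this (assuming TensorFlow 1.x, where the writer class is now called tf.summary.FileWriter; the tag names and the ./logs directory are just illustrative):

import numpy as np
import tensorflow as tf

writer = tf.summary.FileWriter("./logs")
for i in range(100):
    foo = np.random.rand(3, 2)
    summary = tf.Summary()
    # log each of the 6 entries of foo as its own scalar series
    for j, value in enumerate(foo.flatten()):
        summary.value.add(tag="foo/%d" % j, simple_value=float(value))
    writer.add_summary(summary, global_step=i)
writer.flush()

Each entry then shows up as its own curve in TensorBoard's Scalars tab, with the iteration index on the x-axis.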
Another way (the simplest) is just to use placeholders. First, make a placeholder with the shape of your NumPy value.
# Some placeholders for summary
summary_reward = tf.placeholder(tf.float32, shape=(), name="reward")
tf.summary.scalar("reward", summary_reward)
Then, just run the merged summary with session.run, passing the value in through feed_dict.
# Summary
summ = tf.summary.merge_all()
...
s = sess.run(summ, feed_dict={summary_reward: reward})
writer.add_summary(s, i)
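To track the distribution of foo as asked in the question, the same placeholder pattern works with a histogram summary instead of a scalar one. A minimal sketch, again assuming TensorFlow 1.x (the names foo_ph and ./logs are just illustrative):

import numpy as np
import tensorflow as tf

# placeholder matching the shape of the numpy array
foo_ph = tf.placeholder(tf.float32, shape=(3, 2), name="foo")
tf.summary.histogram("foo", foo_ph)  # distribution of all 6 entries per step
summ = tf.summary.merge_all()

writer = tf.summary.FileWriter("./logs")
with tf.Session() as sess:
    for i in range(100):
        foo = np.random.rand(3, 2)
        s = sess.run(summ, feed_dict={foo_ph: foo})
        writer.add_summary(s, i)
writer.flush()

The result appears under TensorBoard's Histograms and Distributions tabs, one slice per iteration.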