
What is the difference between np.mean and tf.reduce_mean?

In the MNIST beginner tutorial, there is the statement

accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float")) 

tf.cast basically changes the type of tensor the object is, but what is the difference between tf.reduce_mean and np.mean?

Here is the doc on tf.reduce_mean:

reduce_mean(input_tensor, reduction_indices=None, keep_dims=False, name=None)

input_tensor: The tensor to reduce. Should have numeric type.

reduction_indices: The dimensions to reduce. If None (the default), reduces all dimensions.

# 'x' is [[1., 1.],
#         [2., 2.]]
tf.reduce_mean(x) ==> 1.5
tf.reduce_mean(x, 0) ==> [1.5, 1.5]
tf.reduce_mean(x, 1) ==> [1., 2.]

For a 1D vector, it looks like np.mean == tf.reduce_mean, but I don't understand what's happening in tf.reduce_mean(x, 1) ==> [1., 2.]. tf.reduce_mean(x, 0) ==> [1.5, 1.5] kind of makes sense, since mean of [1, 2] and [1, 2] is [1.5, 1.5], but what's going on with tf.reduce_mean(x, 1)?
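For comparison, here are the same reductions done with np.mean (this snippet is my own addition, not from the tutorial); they produce exactly the values the tf.reduce_mean docs show:

```python
import numpy as np

# The same 2x2 matrix from the tf.reduce_mean docs
x = np.array([[1., 1.],
              [2., 2.]])

print(np.mean(x))          # mean of all elements -> 1.5
print(np.mean(x, axis=0))  # mean down each column -> [1.5 1.5]
print(np.mean(x, axis=1))  # mean along each row   -> [1. 2.]
```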

asked Dec 12 '15 by O.rka



1 Answer

The functionality of numpy.mean and tensorflow.reduce_mean is the same: they do the same thing, as you can see from the numpy and tensorflow documentation. Let's look at an example:

import numpy as np
import tensorflow as tf

c = np.array([[3., 4.], [5., 6.], [6., 7.]])
print(np.mean(c, 1))

Mean = tf.reduce_mean(c, 1)
with tf.Session() as sess:
    result = sess.run(Mean)
    print(result)

Output

[ 3.5  5.5  6.5]
[ 3.5  5.5  6.5]

Here you can see that when axis (numpy) or reduction_indices (tensorflow) is 1, it computes the mean across (3,4), (5,6) and (6,7), so 1 defines the axis across which the mean is computed. When it is 0, the mean is computed across (3,5,6) and (4,6,7). I hope you get the idea.
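To make both axes concrete in one place (using np.mean, since the semantics are identical):

```python
import numpy as np

c = np.array([[3., 4.], [5., 6.], [6., 7.]])

# axis=0: average down each column -> (3+5+6)/3 and (4+6+7)/3
print(np.mean(c, axis=0))  # ~[4.667 5.667]

# axis=1: average along each row -> (3+4)/2, (5+6)/2, (6+7)/2
print(np.mean(c, axis=1))  # [3.5 5.5 6.5]
```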

Now what are the differences between them?

You can compute the numpy operation anywhere in Python. But a tensorflow operation must be run inside a tensorflow Session. You can read more about it here. So whenever you need to perform any computation on your tensorflow graph (or structure, if you will), it must be done inside a Session.

Let's look at another example.

npMean = np.mean(c)
print(npMean + 1)

tfMean = tf.reduce_mean(c)
Add = tfMean + 1
with tf.Session() as sess:
    result = sess.run(Add)
    print(result)

We can increase the mean by 1 in numpy just as you naturally would, but to do it in tensorflow you have to perform it inside a Session; without a Session you can't. In other words, when you write tfMean = tf.reduce_mean(c), tensorflow doesn't compute it then. It only computes it in a Session. Numpy, on the other hand, computes it instantly, the moment you write np.mean().
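This lazy-vs-eager distinction can be loosely mimicked in plain Python with a deferred callable; this is a rough analogy of TF 1.x graph mode, not TensorFlow itself:

```python
import numpy as np

c = np.array([[3., 4.], [5., 6.], [6., 7.]])

# Eager: numpy computes the value immediately
np_mean = np.mean(c)              # already a plain float

# Deferred (analogy to a TF1 graph node): build a callable now,
# compute only when it is explicitly "run" -- like sess.run(Add)
deferred_add = lambda: np.mean(c) + 1

result = deferred_add()           # computation happens here, not above
print(np_mean + 1, result)
```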

I hope it makes sense.

answered Oct 13 '22 by Shubhashis