
Tensorflow vs Numpy math functions

Is there any real difference between the math functions provided by NumPy and TensorFlow, for example the exponential function or the max function?

The only difference I have noticed is that TensorFlow takes tensors as input rather than NumPy arrays. Is that the only difference, or do the functions also differ in the values they return?

Blue asked Jul 31 '17 08:07

People also ask

Is NumPy better than math?

The short answer: use math if you are doing simple computations with only scalars (and no lists or arrays). Use numpy if you are doing scientific computations with matrices, arrays, or large datasets.
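For instance (a minimal sketch, with illustrative values only): math works on a single scalar, while NumPy applies the same operation elementwise across a whole array.

import math
import numpy as np

# math: one scalar in, one scalar out
print(math.sqrt(2.0))                        # 1.4142135623730951

# NumPy: the same operation applied elementwise to an array
print(np.sqrt(np.array([1.0, 2.0, 3.0])))    # [1.  1.41421356  1.73205081]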

What is difference between NumPy and TensorFlow?

Tensorflow is a library for artificial intelligence, especially machine learning. Numpy is a library for doing numerical calculations.

Does NumPy have math?

Quite understandably, NumPy contains a large number of mathematical operations. It provides standard trigonometric functions, functions for arithmetic operations, handling of complex numbers, etc.
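A few of those routines in action (a small illustrative sketch, not an exhaustive list):

import numpy as np

angles = np.array([0.0, np.pi / 2, np.pi])
print(np.sin(angles))                  # trigonometric functions, applied elementwise
print(np.exp(np.array([0.0, 1.0])))    # exponential / arithmetic operations
print(np.abs(3 + 4j))                  # complex-number handling: 5.0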

Is NumPy faster than TensorFlow?

In the second approach I calculate the variance via other TensorFlow functions. I tried CPU-only and GPU; NumPy is always faster. I used time.time() rather than time.
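A rough sketch of that kind of comparison (assumed setup, TF 1.x style to match the answers below; the array size and the use of tf.nn.moments are my own choices, and the TensorFlow timing includes session overhead, which skews a one-shot measurement):

import time
import numpy as np
import tensorflow as tf

data = np.random.rand(10000000).astype(np.float32)

# NumPy variance on the CPU
start = time.time()
var_np = np.var(data)
print("numpy     :", time.time() - start, var_np)

# TensorFlow variance via tf.nn.moments, run in a session
x = tf.constant(data)
mean_tf, var_tf = tf.nn.moments(x, axes=[0])
with tf.Session() as sess:
    start = time.time()
    var_value = sess.run(var_tf)
    print("tensorflow:", time.time() - start, var_value)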


2 Answers

As has been mentioned, there is the performance difference. TensorFlow has the advantage that it has been designed to work on both CPUs and GPUs, so if you have a CUDA-enabled GPU, chances are TensorFlow is going to be much faster. You can find several benchmarks on the web comparing the two, and also comparing them with other packages such as Numba or Theano.

However, I think what you are really asking is whether NumPy and TensorFlow operations are exactly equivalent. The answer is basically yes, that is, the meaning of the operations is the same. However, since they are completely separate libraries with different implementations of everything, you will find small differences in the results. Take this code, for example (TensorFlow 1.2.0, NumPy 1.13.1):

# Force TensorFlow to run on CPU only
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

import numpy as np
import tensorflow as tf

# float32 NumPy array
a = np.arange(100, dtype=np.float32)
# The same array with the same dtype in TensorFlow
a_tf = tf.constant(a, dtype=tf.float32)
# Square root with NumPy
sqrt = np.sqrt(a)
# Square root with TensorFlow
with tf.Session() as sess:
    sqrt_tf = sess.run(tf.sqrt(a_tf))

You would expect pretty much the same output from both; after all, a square root is not an extremely complex operation. However, printing these arrays on my computer I get:

print(sqrt)
>>> array([ 0.        ,  1.        ,  1.41421354,  1.73205078,  2.        ,
            2.23606801,  2.44948983,  2.64575124,  2.82842708,  3.        ,
            3.1622777 ,  3.31662488,  3.46410155,  3.60555124,  3.7416575 ,
            3.87298346,  4.        ,  4.12310553,  4.2426405 ,  4.35889912,
            4.47213602,  4.5825758 ,  4.69041586,  4.79583168,  4.89897966,
            5.        ,  5.09901953,  5.19615221,  5.29150248,  5.38516474,
            5.47722578,  5.56776428,  5.65685415,  5.74456263,  5.83095169,
            5.91608   ,  6.        ,  6.08276272,  6.16441393,  6.24499798,
            6.3245554 ,  6.40312433,  6.48074055,  6.55743837,  6.63324976,
            6.70820379,  6.78233004,  6.85565472,  6.92820311,  7.        ,
            7.07106781,  7.14142847,  7.21110249,  7.28010988,  7.34846926,
            7.41619825,  7.48331499,  7.54983425,  7.6157732 ,  7.68114567,
            7.74596691,  7.81024981,  7.8740077 ,  7.93725395,  8.        ,
            8.06225777,  8.1240387 ,  8.18535233,  8.24621105,  8.30662346,
            8.36660004,  8.42614937,  8.48528099,  8.54400349,  8.60232544,
            8.66025448,  8.71779823,  8.77496433,  8.83176041,  8.88819408,
            8.94427204,  9.        ,  9.05538559,  9.11043358,  9.1651516 ,
            9.21954441,  9.2736187 ,  9.32737923,  9.38083172,  9.43398094,
            9.48683262,  9.53939247,  9.59166336,  9.64365101,  9.69536018,
            9.7467947 ,  9.79795933,  9.84885788,  9.89949512,  9.94987392], dtype=float32)

print(sqrt_tf)
>>> array([ 0.        ,  0.99999994,  1.41421342,  1.73205078,  1.99999988,
            2.23606801,  2.44948959,  2.64575124,  2.82842684,  2.99999976,
            3.1622777 ,  3.31662488,  3.46410155,  3.60555077,  3.74165726,
            3.87298322,  3.99999976,  4.12310553,  4.2426405 ,  4.35889864,
            4.47213602,  4.58257532,  4.69041538,  4.79583073,  4.89897919,
            5.        ,  5.09901857,  5.19615221,  5.29150248,  5.38516474,
            5.47722483,  5.56776428,  5.65685368,  5.74456215,  5.83095121,
            5.91607952,  5.99999952,  6.08276224,  6.16441393,  6.24499846,
            6.3245554 ,  6.40312433,  6.48074055,  6.5574379 ,  6.63324976,
            6.70820427,  6.78233004,  6.85565472,  6.92820311,  6.99999952,
            7.07106733,  7.14142799,  7.21110153,  7.28010893,  7.34846973,
            7.41619825,  7.48331451,  7.54983425,  7.61577368,  7.68114567,
            7.74596643,  7.81025028,  7.8740077 ,  7.93725395,  7.99999952,
            8.06225681,  8.12403774,  8.18535233,  8.24621105,  8.30662346,
            8.36660004,  8.42614937,  8.48528099,  8.54400253,  8.60232449,
            8.66025352,  8.71779728,  8.77496433,  8.83176041,  8.88819408,
            8.94427204,  8.99999905,  9.05538464,  9.11043262,  9.16515064,
            9.21954441,  9.27361774,  9.32737923,  9.38083076,  9.43398094,
            9.48683357,  9.53939152,  9.59166145,  9.64365005,  9.69535923,
            9.7467947 ,  9.79795837,  9.84885788,  9.89949417,  9.94987392], dtype=float32)

So, okay, it's similar, but there are obvious differences. TensorFlow couldn't even get the square roots of 1, 4 or 9 exactly right, for example. And you would probably get yet another result if you ran it on a GPU (because the GPU kernels are different from the CPU kernels and depend on CUDA routines implemented by NVIDIA, another player in the field).
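To quantify how far apart the two results actually are (reusing sqrt and sqrt_tf from the snippet above; the exact numbers will vary by machine):

diff = np.abs(sqrt - sqrt_tf)
print(diff.max())                                         # largest absolute gap, a couple of float32 ulps for these values
print(np.allclose(sqrt, sqrt_tf, rtol=1e-6, atol=1e-6))   # True: equal within float32 tolerance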

My impression (although I may be wrong) is that TensorFlow is more willing to sacrifice a bit of precision in exchange for performance (which would make sense considering its typical use case). I have even seen some more complicated operations produce (very slightly) different results across two runs on the same hardware, probably due to unspecified ordering in aggregation and averaging operations causing rounding errors (I generally use float32, so that is a factor too, I guess).
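That run-to-run wobble is plausible because floating-point addition is not associative, so the order in which a parallel reduction accumulates its terms changes the last bits of the result. A minimal float32 illustration (independent of TensorFlow):

import numpy as np

a, b, c = np.float32(1e8), np.float32(-1e8), np.float32(1.0)
print((a + b) + c)   # 1.0
print(a + (b + c))   # 0.0, because c is swallowed when added to the large term first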

jdehesa answered Nov 15 '22 00:11

Of course there is a real difference. NumPy works on arrays and can use highly optimized vectorized computations, and it does pretty well on the CPU, whereas TensorFlow's math functions are optimized for the GPU, where large matrix multiplications matter much more. So the question is where you want to use what: for the CPU I would just go with NumPy, whereas for the GPU it makes sense to use TF operations.
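As a sketch of that split (TF 1.x style to match the other answer; the matrix size and the soft-placement config are my own assumptions), you can pin the same computation to either device explicitly:

import numpy as np
import tensorflow as tf

data = np.random.rand(2048, 2048).astype(np.float32)

with tf.device('/cpu:0'):
    cpu_product = tf.matmul(data, data)

with tf.device('/gpu:0'):            # needs a CUDA-enabled GPU build of TensorFlow
    gpu_product = tf.matmul(data, data)

# allow_soft_placement falls back to the CPU if no GPU is actually available
with tf.Session(config=tf.ConfigProto(allow_soft_placement=True)) as sess:
    sess.run([cpu_product, gpu_product])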

Dat Tran answered Nov 15 '22 00:11