I am trying to use: train = optimizer.minimize(loss), but the standard optimizers do not work with tf.float64. Therefore I want to truncate my loss from tf.float64 to tf.float32.
Traceback (most recent call last):
File "q4.py", line 85, in <module>
train = optimizer.minimize(loss)
File "/Library/Python/2.7/site-packages/tensorflow/python/training/optimizer.py", line 190, in minimize
colocate_gradients_with_ops=colocate_gradients_with_ops)
File "/Library/Python/2.7/site-packages/tensorflow/python/training/optimizer.py", line 229, in compute_gradients
self._assert_valid_dtypes([loss])
File "/Library/Python/2.7/site-packages/tensorflow/python/training/optimizer.py", line 354, in _assert_valid_dtypes
dtype, t.name, [v for v in valid_dtypes]))
ValueError: Invalid type tf.float64 for Add_1:0, expected: [tf.float32].
In TensorFlow 2, you can cast a tensor to a new datatype using the tf.cast function.
The "tf. cast" function casts a tensor to new type. The operation "cast" support the data types of int32, int64, float16, float32, float64, complex64, complex128, bfloat16, uint8, uint16, uint32, uint64, int8, int16. Only the real part of "x" is returned in case of casting from complex types to real types.
The short answer is that you can convert a tensor from tf.float64 to tf.float32 using the tf.cast() op:
loss = tf.cast(loss, tf.float32)
The longer answer is that this will not solve all of your problems with the optimizers. (The lack of support for tf.float64 is a known issue.) The optimizers also require that all of the tf.Variable objects you are trying to optimize have type tf.float32.
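Putting the two points together, one way to make optimizer.minimize() happy is to create the trainable variables as tf.float32 and cast the float64 inputs down before building the loss, so every tensor the optimizer sees is float32. Below is a minimal sketch assuming the TF 1.x-style graph API used in the traceback (accessed here via tensorflow.compat.v1); the placeholder names and the toy linear model are illustrative, not from the original code.

import numpy as np
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# Toy float64 data standing in for the original inputs.
x_data = np.random.rand(100).astype(np.float64)
y_data = 3.0 * x_data + 2.0

x = tf.placeholder(tf.float64, shape=[None])
y = tf.placeholder(tf.float64, shape=[None])

# Keep the trainable variables in float32 so the optimizer accepts them.
w = tf.Variable(0.0, dtype=tf.float32)
b = tf.Variable(0.0, dtype=tf.float32)

# Cast the float64 inputs to float32 before building the loss,
# so the loss itself is float32 and _assert_valid_dtypes passes.
pred = w * tf.cast(x, tf.float32) + b
loss = tf.reduce_mean(tf.square(pred - tf.cast(y, tf.float32)))

optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1)
train = optimizer.minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(200):
        sess.run(train, feed_dict={x: x_data, y: y_data})
    print(sess.run([w, b]))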