Sorry if I messed up the title, I didn't know how to phrase this. Anyways, I have a tensor of values, and I want to make sure every element in the tensor falls in the range 0 - 255 (or 0 - 1 works too). However, I don't want all the values to add up to 1 or 255 like softmax does; I just want to scale the values down.
Is there any way to do this?
Thanks!
You are trying to normalize the data. A classic normalization formula is this one:
normalized_value = (value − min_value) / (max_value − min_value)
The implementation in TensorFlow looks like this:
# (tensor - min) / (max - min)
tensor = tf.div(
    tf.subtract(
        tensor,
        tf.reduce_min(tensor)
    ),
    tf.subtract(
        tf.reduce_max(tensor),
        tf.reduce_min(tensor)
    )
)
All the values of the tensor will be between 0 and 1. If you want the 0 - 255 range instead, just multiply the result by 255.
IMPORTANT: make sure the tensor has float/double values, or the output tensor will have just zeros and ones. If you have an integer tensor, call this first:
tensor = tf.to_float(tensor)
Update: as of TensorFlow 2, tf.to_float() is deprecated; tf.cast() should be used instead:
tensor = tf.cast(tensor, dtype=tf.float32)  # or any other tf.dtype that is precise enough
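For reference, here is a minimal end-to-end sketch in TF2 style (tf.div was likewise replaced by tf.divide in TF2), combining the cast, the min-max normalization, and an optional rescale to 0 - 255. The input values are just made up for illustration:

import tensorflow as tf

tensor = tf.constant([10, 64, 128, 255, 300])

# Cast first so the division below doesn't truncate everything to 0s and 1s
tensor = tf.cast(tensor, dtype=tf.float32)

# Min-max normalization: the smallest value maps to 0, the largest to 1
normalized = tf.divide(
    tf.subtract(tensor, tf.reduce_min(tensor)),
    tf.subtract(tf.reduce_max(tensor), tf.reduce_min(tensor))
)

# Optional: rescale to the 0 - 255 range instead
scaled = normalized * 255.0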
According to the feature scaling article on Wikipedia, you can also try scaling to unit length:
It can be implemented using this segment of code:
In [3]: a = tf.constant([2.0, 4.0, 6.0, 1.0, 0])
In [4]: b = a / tf.norm(a)
In [5]: b.eval()
Out[5]: array([ 0.26490647, 0.52981293, 0.79471946, 0.13245323, 0. ], dtype=float32)
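Note that b.eval() only works inside a TF1 session; in TensorFlow 2 with eager execution you can evaluate the tensor directly. A minimal sketch of the same computation:

import tensorflow as tf

a = tf.constant([2.0, 4.0, 6.0, 1.0, 0.0])

# Divide by the Euclidean (L2) norm so the result has unit length
b = a / tf.norm(a)

print(b.numpy())  # same values as Out[5] above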