In CUDA ConvNet, we can specify the neuron activation function to be linear by writing neuron=linear[a,b], such that f(x) = ax + b. How can I achieve the same result in TensorFlow?
The most basic way to write a linear activation in TensorFlow is using tf.matmul() and tf.add() (or the + operator). Assuming you have a matrix of outputs from the previous layer (let's call it prev_layer) of shape batch_size x prev_units, and the size of the linear layer is linear_units:
prev_layer = …  # output of the previous layer, shape [batch_size, prev_units]
linear_W = tf.Variable(tf.truncated_normal([prev_units, linear_units], …))  # weight matrix
linear_b = tf.Variable(tf.zeros([linear_units]))            # bias vector
linear_layer = tf.matmul(prev_layer, linear_W) + linear_b   # affine transform: xW + b