I'm trying to extract the weights from a model after training it. Here's a toy example:
import tensorflow as tf
import numpy as np
from tensorflow.contrib.layers import fully_connected

X_ = tf.placeholder(tf.float64, [None, 5], name="Input")
Y_ = tf.placeholder(tf.float64, [None, 1], name="Output")
X = ...
Y = ...

with tf.name_scope("LogReg"):
    pred = fully_connected(X_, 1, activation_fn=tf.nn.sigmoid)
    loss = tf.losses.mean_squared_error(labels=Y_, predictions=pred)
    training_ops = tf.train.GradientDescentOptimizer(0.01).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(200):
        sess.run(training_ops, feed_dict={
            X_: X,
            Y_: Y
        })
        if (i + 1) % 100 == 0:
            # 'accuracy' is assumed to be defined elsewhere in the real code
            print("Accuracy: ", sess.run(accuracy, feed_dict={
                X_: X,
                Y_: Y
            }))
    # Get weights of *pred* here
I've looked at Get weights from tensorflow model and at the docs, but I can't find a way to retrieve the value of the weights. So in the toy example case, suppose that X_ has shape (1000, 5): how can I get the 5 values of the single layer's weights after training?
To visualize the weights, you can use a tf.image_summary() op to transform a convolutional filter (or a slice of a filter) into a summary proto, write them to a log using a tf.train.SummaryWriter, and visualize the log using TensorBoard.
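A minimal sketch of that pipeline, written against TF 1.x where those ops are now called tf.summary.image and tf.summary.FileWriter (the 5x5x1x32 filter bank here is a hypothetical example, not something from the question):

import tensorflow as tf

# Hypothetical conv filter bank: 32 filters of size 5x5 over a 1-channel input.
weights = tf.get_variable("conv1_w", shape=[5, 5, 1, 32])

# tf.summary.image expects [batch, height, width, channels], so move the
# filter dimension to the front: each filter becomes one grayscale image.
filters = tf.transpose(weights, [3, 0, 1, 2])
tf.summary.image("conv1_filters", filters, max_outputs=32)

merged = tf.summary.merge_all()
with tf.Session() as sess:
    writer = tf.summary.FileWriter("/tmp/logs", sess.graph)
    sess.run(tf.global_variables_initializer())
    writer.add_summary(sess.run(merged), global_step=0)
    writer.close()

After that, point TensorBoard at the log directory (tensorboard --logdir /tmp/logs) and the filters show up under the Images tab.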
Another option is to create the layer as an object, e.g. obj = tf.layers.Dense(...). Once you have a handle to this layer object, you can use all of its functionality. For obtaining the weights, just use obj.trainable_weights; this returns a list of all the trainable variables found in that layer's scope.
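A minimal sketch of that approach (using tf.keras.layers.Dense; the placeholder shape matches the question's toy example):

import tensorflow as tf

X_ = tf.placeholder(tf.float64, [None, 5], name="Input")
dense = tf.keras.layers.Dense(1, activation=tf.nn.sigmoid)
pred = dense(X_)                  # calling the layer creates its kernel and bias

print(dense.trainable_weights)    # [kernel (5, 1), bias (1,)]
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    kernel, bias = sess.run(dense.trainable_weights)
    print(kernel.shape)           # (5, 1): one weight per input feature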
There are some issues in your code that need to be fixed:
1- You need to use variable_scope instead of name_scope at the following line (please refer to the TensorFlow documentation for the difference between them, and see the short sketch after this list):

with tf.name_scope("LogReg"):
2- To be able to retrieve a variable later in the code, you need to know its name. So, you need to assign a name to the variable of interest (if you don't supply one, a default one will be assigned, but then you have to figure out what it is!):

pred = tf.contrib.layers.fully_connected(X_, 1, activation_fn=tf.nn.sigmoid, scope='fc1')
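To illustrate point 1 with a minimal sketch (the names in the comments are what TF 1.x produces): tf.get_variable, which tf.contrib.layers.fully_connected uses under the hood, ignores name_scope but respects variable_scope, so only the latter gives the weights the 'LogReg/fc1/...' prefix we can look them up by:

import tensorflow as tf

with tf.name_scope("ns"):
    a = tf.get_variable("a", [1])    # name_scope ignored: a.name == "a:0"
with tf.variable_scope("vs"):
    b = tf.get_variable("b", [1])    # variable_scope applied: b.name == "vs/b:0"
print(a.name, b.name)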
Now let's see how the above fixes help us get a variable's value. Each layer has two types of variables: weights and biases. In the following code snippet (a modified version of yours), I will only show how to retrieve the weights of the fully connected layer:
import tensorflow as tf
import numpy as np

X_ = tf.placeholder(tf.float64, [None, 5], name="Input")
Y_ = tf.placeholder(tf.float64, [None, 1], name="Output")
X = np.random.randint(1, 10, [10, 5])
Y = np.random.randint(0, 2, [10, 1])

with tf.variable_scope("LogReg"):
    pred = tf.contrib.layers.fully_connected(X_, 1, activation_fn=tf.nn.sigmoid, scope='fc1')
    loss = tf.losses.mean_squared_error(labels=Y_, predictions=pred)
    training_ops = tf.train.GradientDescentOptimizer(0.01).minimize(loss)

with tf.Session() as sess:
    all_vars = tf.global_variables()

    # Look up a variable by the prefix of its name.
    def get_var(name):
        for i in range(len(all_vars)):
            if all_vars[i].name.startswith(name):
                return all_vars[i]
        return None

    fc1_var = get_var('LogReg/fc1/weights')
    sess.run(tf.global_variables_initializer())
    for i in range(200):
        _, fc1_var_np = sess.run([training_ops, fc1_var], feed_dict={
            X_: X,
            Y_: Y
        })
    print(fc1_var_np)
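As a side note, you don't have to scan tf.global_variables() yourself; an equivalent lookup (a sketch assuming the graph built above) asks the graph for the trainable variables under the scope prefix:

# Equivalent lookup: collect the trainable variables under the scope prefix.
fc1_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='LogReg/fc1')
# fc1_vars is [weights, biases] in creation order.
fc1_weights = [v for v in fc1_vars if 'weights' in v.name][0]
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(fc1_weights).shape)    # (5, 1): one weight per input feature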