I've trained an autoencoder using lasagne/nolearn. Suppose the network layers are [500, 100, 100, 500]. I've trained the neural net like so:
net.fit(X, X)
I want to do something like the following:
net.predict(X, layer=2)
so that I get the compressed representation of my data. That is, if my initial data have shape [10000, 500], the resulting data will have shape [10000, 100].
I searched but could not find how to do that. Is it possible with lasagne/nolearn?
Looks like the answer is here in the documentation: http://lasagne.readthedocs.org/en/latest/user/layers.html#propagating-data-through-layers
Here are the relevant parts:
To compute the output of a network, you should instead call
lasagne.layers.get_output()
on it. This will traverse the network graph. You can call this function with the layer you want to compute the output expression for:

>>> y = lasagne.layers.get_output(l_out)

In that case, a Theano expression will be returned that represents the output as a function of the input variables associated with the
lasagne.layers.InputLayer
instance (or instances) in the network...

You can also specify a Theano expression to use as input as a second argument to
lasagne.layers.get_output()
:

>>> x = T.matrix('x')
>>> y = lasagne.layers.get_output(l_out, x)
>>> f = theano.function([x], y)
Assuming net
is of type nolearn.lasagne.NeuralNet
, it looks like you can get access to the underlying layer objects with net.get_all_layers()
. I don't see it in the documentation, but it's here on line 592.