 

Implementing sparse connections in neural network (Theano)

Some use cases for neural networks require that not all neurons between two consecutive layers be connected. For my neural network architecture, I need a layer where each neuron has connections only to some prespecified neurons in the previous layer (at somewhat arbitrary places, not following a pattern such as a convolutional layer). This is needed to model data on a specific graph. I need to implement this "sparse" layer in Theano, but I'm not used to the Theano way of programming.

It seems that the most efficient way to implement sparse connections in Theano would be to use theano.tensor.nnet.blocksparse.SparseBlockGemv. An alternative would be a matrix multiplication in which many weights are set to 0 (= no connection), but that would be very inefficient compared to SparseBlockGemv, since each neuron is connected to only 2-6 of the ~100000 neurons in the previous layer. Moreover, a 100000x100000 weight matrix would not fit in my RAM or on my GPU. Could someone therefore provide an example of how to implement sparse connections using the SparseBlockGemv method, or another computationally efficient method?
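For illustration, here is a minimal sketch of a dense-free alternative using Theano's sparse module (theano.sparse.structured_dot), which stores only the existing connections. This is just an illustration of the scale of the problem, not the SparseBlockGemv solution being asked for; all sizes and the edges list below are placeholders:

import numpy as np
import scipy.sparse as sp
import theano
import theano.sparse
import theano.tensor as T

n_in, n_out = 100000, 100000  # placeholder layer sizes
# placeholder graph: (output_neuron, input_neuron) pairs, a few per neuron
edges = [(0, 10), (0, 57), (1, 3), (1, 10), (1, 999)]

rows, cols = zip(*edges)
vals = np.random.randn(len(edges)).astype(theano.config.floatX)
# only the listed connections are ever stored; no dense 100000x100000 matrix
W = theano.shared(sp.csr_matrix((vals, (rows, cols)), shape=(n_out, n_in)))

x = T.matrix('x')  # (batch, n_in)
# structured_dot propagates gradients only to the nonzero entries of W
y = theano.sparse.structured_dot(W, x.T).T  # (batch, n_out)
f = theano.function([x], y)

Since the gradient of structured_dot is itself structured, training updates would touch only the 2-6 weights per neuron and the connectivity would stay fixed.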

A perfect example would be to extend the MLP Theano Tutorial with an extra layer after the hidden layer (and before softmax), where each neuron only has connections to a subset of neurons in the previous layer. However, other examples are also very welcome!

Edit: Note that the layer must be implemented in Theano as it is just a small part of a larger architecture.

asked Apr 05 '16 by pir

2 Answers

The output of a fully-connected layer is given by the dot product of the input and the weights of that layer. In Theano or NumPy you can use the dot method:

y = x.dot(w)

If you only have connections to some neurons in the previous layer, and those connections are predefined, you could do something like this:

y = [x[edges[i]].dot(w[i]) for i in neurons]

Here, edges[i] contains the indices of the neurons connected to neuron i, and w[i] the weights of those connections.

Please note that Theano doesn't know about layers or other high-level details.
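As a sketch of how that per-neuron loop might be vectorized into a single Theano graph (assuming each neuron's connection list can be padded to a fixed length K, with padded slots given weight 0), the gather can be done with numpy-style fancy indexing. All names and sizes below are illustrative:

import numpy as np
import theano
import theano.tensor as T

n_in, n_out, K = 8, 4, 3  # toy sizes; pad every neuron to K connections
edges = theano.shared(np.random.randint(0, n_in, size=(n_out, K)))
w = theano.shared(np.random.randn(n_out, K).astype(theano.config.floatX))

x = T.matrix('x')               # (batch, n_in)
gathered = x[:, edges]          # (batch, n_out, K) via fancy indexing
y = (gathered * w).sum(axis=2)  # each neuron's weighted sum of its K inputs
f = theano.function([x], y)

Gradients then flow only through the gathered entries, so nonexistent connections are never represented.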

answered Oct 10 '22 by basaundi


Apologies for resurrecting an old thread, but this was the simplest guidance I found for extending the basic network at https://iamtrask.github.io/2015/07/12/basic-python-network/ to partially-connected inputs. However, it took me a while to make sense of basaundi's answer, and I think I can improve upon it.

There were a couple of things I needed to change to make it work. In my case, I am trying to map from N inputs to M neurons in my first hidden layer. My inputs are in an NxF array, where F is the number of features per input, and my synapse values (weights) between the inputs and the first layer are in an FxM array. Therefore, the output of Inputs <dot> Weights is an NxM array. My edge matrix is an MxF array that specifies, for each neuron in layer 1 (rows), which features of the input data are relevant (columns).

In this setup, at least, I had to slice my arrays differently than shown above. Also, the list comprehension returns a list of matrices that must be summed to get the correct NxM output (otherwise you get an MxNxM array).

So I have used the following (util.sigmoid is a helper function of my own):

import numpy

# one NxM partial product per neuron's feature subset
y = [numpy.dot(x[:, edges[i]], w[edges[i]])
     for i in range(M)]
y = util.sigmoid(numpy.sum(y, 0))  # sum the partial products, then squash

This seems to work for me.

answered Oct 10 '22 by bab