
How does TensorFlow deal with large variables that cannot be stored on one box?

Tags:

tensorflow

I want to train a DNN model on training data with more than one billion feature dimensions, so the shape of the first-layer weight matrix will be (1,000,000,000, 512). This weight matrix is too large to be stored on one box.

Is there currently any solution for dealing with such large variables, for example partitioning the large weight matrix across multiple boxes?

Update:

Thanks Olivier and keveman. Let me add more detail about my problem. The examples are very sparse and all features are binary values: 0 or 1. The weight parameter looks like tf.Variable(tf.truncated_normal([1000000000, 512], stddev=0.1)).

The solutions keveman gave seem reasonable, and I will post results after trying them.

Hanbin Zheng asked Jul 13 '16


1 Answer

The answer to this question depends greatly on what operations you want to perform on the weight matrix.

The typical way to handle such a large number of features is to treat the 512-dimensional vector per feature as an embedding. If each example in your data set has only one of the 1 billion features, then you can use the tf.nn.embedding_lookup function to look up the embeddings for the features present in a mini-batch of examples. If each example has more than one feature, but presumably only a handful of them, then you can use tf.nn.embedding_lookup_sparse to look up the embeddings.
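For concreteness, here is a minimal TF 1.x-style sketch of the embedding_lookup_sparse approach (matching the era of the question). The shard count, the deliberately small vocabulary size, and the example SparseTensor of feature ids are illustrative only; in practice each shard would hold a slice of the ~1 billion rows and live on a different machine.

    import tensorflow as tf

    # Illustrative sizes; the real vocabulary is ~1 billion rows.
    vocab_size = 100000
    embedding_dim = 512
    num_shards = 4  # illustrative shard count

    # Shard the big weight matrix into a list of smaller variables.
    # With "mod" partitioning, shard i holds rows i, i+num_shards, i+2*num_shards, ...
    embedding_shards = []
    for i in range(num_shards):
        shard_rows = vocab_size // num_shards + (1 if i < vocab_size % num_shards else 0)
        embedding_shards.append(
            tf.Variable(
                tf.truncated_normal([shard_rows, embedding_dim], stddev=0.1),
                name="embedding_shard_%d" % i))

    # sp_ids: the feature ids present in a mini-batch of 2 examples.
    # The features are binary, so no per-feature weights are needed (sp_weights=None).
    sp_ids = tf.SparseTensor(
        indices=[[0, 0], [0, 1], [1, 0]],
        values=tf.constant([3, 98765, 42], dtype=tf.int64),
        dense_shape=[2, 2])

    # Sums the embeddings of the features present in each example,
    # producing the [batch_size, 512] input to the next layer.
    first_layer = tf.nn.embedding_lookup_sparse(
        embedding_shards, sp_ids, sp_weights=None,
        partition_strategy="mod", combiner="sum")

Because the inputs are sparse, only the rows for the features actually present in the mini-batch are ever fetched, which is what makes the billion-row matrix tractable.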

In both these cases, your weight matrix can be distributed across many machines. That is, the params argument to both of these functions is a list of tensors. You would shard your large weight matrix and place the shards on different machines. Please look at tf.device and the primer on distributed execution to understand how data and computation can be distributed across many machines.
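A rough sketch of how the shards could be pinned to parameter servers with tf.device. It assumes a TF 1.x cluster with "ps" and "worker" jobs has already been defined via tf.train.ClusterSpec; the task count and shard sizes here are illustrative, not prescriptive.

    import tensorflow as tf

    embedding_dim = 512
    num_ps_tasks = 4          # illustrative number of parameter servers
    rows_per_shard = 250000   # illustrative; in practice ~vocab_size / num_ps_tasks

    # Pin each shard of the weight matrix to a different parameter-server task.
    embedding_shards = []
    for i in range(num_ps_tasks):
        with tf.device("/job:ps/task:%d" % i):
            embedding_shards.append(
                tf.Variable(
                    tf.truncated_normal([rows_per_shard, embedding_dim], stddev=0.1),
                    name="embedding_shard_%d" % i))

    # The lookup itself runs on a worker; only the rows that are actually
    # needed for the mini-batch travel over the network.
    with tf.device("/job:worker/task:0"):
        ids = tf.constant([0, 123456, 999999], dtype=tf.int64)
        embedded = tf.nn.embedding_lookup(
            embedding_shards, ids, partition_strategy="mod")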

If you really want to do some dense operation on the weight matrix, say, multiply the matrix by another matrix, that is still conceivable, although there are no ready-made recipes in TensorFlow to handle it. You would still shard your weight matrix across machines. But then you would have to manually construct a sequence of matrix multiplies on the distributed blocks of your weight matrix and combine the results.
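One possible way to combine the blocks, sketched here with small illustrative sizes: slice the input column-wise, multiply each column block by the matching row shard of the weight matrix, and sum the partial products. Each partial matmul could be placed, via tf.device, on the machine holding that shard.

    import tensorflow as tf

    # Illustrative sizes; in practice each shard lives on a different machine.
    rows_per_shard = 1000
    embedding_dim = 512
    num_shards = 4
    batch_size = 8

    # Row-sharded weight matrix: shard i holds rows
    # [i * rows_per_shard, (i + 1) * rows_per_shard).
    weight_shards = [
        tf.Variable(tf.truncated_normal([rows_per_shard, embedding_dim], stddev=0.1))
        for _ in range(num_shards)
    ]

    # A dense input batch with num_shards * rows_per_shard columns.
    x = tf.random_uniform([batch_size, num_shards * rows_per_shard])

    # Block matmul: multiply each column block of x by the matching weight shard,
    # then sum the partial products.
    partial_products = []
    for i, w in enumerate(weight_shards):
        x_block = x[:, i * rows_per_shard:(i + 1) * rows_per_shard]
        partial_products.append(tf.matmul(x_block, w))
    result = tf.add_n(partial_products)  # shape [batch_size, embedding_dim]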

keveman answered Nov 15 '22