Multiplication of sparse tensors with themselves or with dense tensors does not seem to work in TensorFlow. The following example
from __future__ import print_function
import tensorflow as tf
x = tf.constant([[1.0, 2.0],
                 [3.0, 4.0]])
y = tf.SparseTensor(indices=[[0,0],[1,1]], values=[1.0,1.0], shape=[2,2])
z = tf.matmul(x,y)
sess = tf.Session()
sess.run(tf.initialize_all_variables())
print(sess.run([x, y, z]))
fails with the error message
TypeError: Input 'b' of 'MatMul' Op has type string that does not match type
float32 of argument 'a'
Both tensors have values of type float32, as can be seen by evaluating them without the multiplication op. Multiplying y by itself returns a similar error message. Multiplying x by itself works fine.
To multiply two sparse matrices directly, you first compute the transpose of the second matrix, which simplifies the comparisons and maintains the sorted order. The resulting matrix is then obtained by traversing the entire length of both matrices and summing the appropriately multiplied values, as in the sketch below.
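A plain-Python sketch of this idea (not a TensorFlow API), assuming both matrices are stored as row-major sorted lists of (row, col, value) triples; the function name sparse_matmul and the triple representation are illustrative choices:

from collections import defaultdict

def sparse_matmul(a, b):
    # a, b: lists of (row, col, value) triples in row-major sorted order.
    # Transpose b by swapping (row, col) and re-sorting, so its columns can
    # be traversed in the same row-major order as the rows of a.
    b_t = sorted((c, r, v) for (r, c, v) in b)
    # Accumulate result[i][j] += a[i][k] * b[k][j]; b[k][j] is stored
    # in b_t as the triple (j, k, v).
    out = defaultdict(float)
    for i, k_a, va in a:
        for j, k_b, vb in b_t:
            if k_a == k_b:          # inner indices match
                out[(i, j)] += va * vb
    return sorted((i, j, v) for (i, j), v in out.items() if v != 0.0)

# [[1, 0], [0, 2]] @ [[3, 0], [0, 4]] == [[3, 0], [0, 8]]
print(sparse_matmul([(0, 0, 1.0), (1, 1, 2.0)],
                    [(0, 0, 3.0), (1, 1, 4.0)]))   # [(0, 0, 3.0), (1, 1, 8.0)]

For clarity this sketch compares every pair of entries; the sorted order is what would let a real implementation replace the inner loop with a merge-style traversal.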
In order to use a sparse matrix in Keras, you may look at this: only two parts need to be modified, and Keras will then support both sparse and dense matrices.
The tf.multiply() function in TensorFlow multiplies values element-wise. To perform element-wise multiplication, use tf.multiply(); to perform matrix multiplication, use tf.matmul().
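A minimal dense example of the difference, using the same TF 1.x-style session API as the question:

import tensorflow as tf

a = tf.constant([[1.0, 2.0],
                 [3.0, 4.0]])
b = tf.constant([[5.0, 6.0],
                 [7.0, 8.0]])

elementwise = tf.multiply(a, b)   # [[ 5. 12.] [21. 32.]]
matrix_prod = tf.matmul(a, b)     # [[19. 22.] [43. 50.]]

with tf.Session() as sess:
    print(sess.run([elementwise, matrix_prod]))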
General-purpose multiplication for tf.SparseTensor is not currently implemented in TensorFlow. However, there are several partial solutions, and the right one to choose will depend on the characteristics of your data:

If you have a tf.SparseTensor and a tf.Tensor, you can use tf.sparse_tensor_dense_matmul() to multiply them. This is more efficient than the next approach if one of the tensors is too large to fit in memory when densified; the documentation has more guidance about how to decide between these two methods. Note that it accepts a tf.SparseTensor as the first argument, so to solve your exact problem you will need to use the adjoint_a and adjoint_b arguments and transpose the result, as in the sketch below.
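For instance, here is a sketch for the exact case in the question (dense x times sparse y), assuming a TF 1.x-era API in which tf.SparseTensor takes dense_shape and tf.sparse_tensor_dense_matmul is available at the top level:

import tensorflow as tf

x = tf.constant([[1.0, 2.0],
                 [3.0, 4.0]])
y = tf.SparseTensor(indices=[[0, 0], [1, 1]], values=[1.0, 1.0],
                    dense_shape=[2, 2])

# sparse_tensor_dense_matmul() wants the sparse operand first, so compute
# (y^T @ x^T)^T, which equals x @ y, via the adjoint flags plus a transpose.
z_t = tf.sparse_tensor_dense_matmul(y, x, adjoint_a=True, adjoint_b=True)
z = tf.transpose(z_t)

with tf.Session() as sess:
    print(sess.run(z))   # [[1. 2.] [3. 4.]] because y is the 2x2 identity here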
If you have two sparse tensors and need to multiply them, the simplest (if not the most performant) way is to convert them to dense and use tf.matmul:

a = tf.SparseTensor(...)
b = tf.SparseTensor(...)
c = tf.matmul(tf.sparse_tensor_to_dense(a, 0.0),
              tf.sparse_tensor_to_dense(b, 0.0),
              a_is_sparse=True, b_is_sparse=True)

Note that the optional a_is_sparse and b_is_sparse arguments mean "a (or b) has a dense representation but a large number of its entries are zero", which triggers the use of a different multiplication algorithm.
For the special case of multiplying a sparse vector by a (potentially large and sharded) dense matrix, where the values in the vector are 0 or 1, the tf.nn.embedding_lookup operator may be more appropriate. This tutorial discusses when you might use embeddings and how to invoke the operator in more detail. A sketch of the pattern follows.
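A minimal sketch of this pattern (TF 1.x-style API; the names params and nonzero_ids are illustrative): instead of materialising the 0/1 vector, gather the rows of the dense matrix that the vector selects and sum them.

import tensorflow as tf

params = tf.constant([[1.0, 2.0],
                      [3.0, 4.0],
                      [5.0, 6.0]])     # dense (possibly sharded) matrix
nonzero_ids = tf.constant([0, 2])      # positions where the 0/1 row vector is 1

selected = tf.nn.embedding_lookup(params, nonzero_ids)   # gathers rows 0 and 2
vec_times_mat = tf.reduce_sum(selected, axis=0)          # equals [1, 0, 1] @ params

with tf.Session() as sess:
    print(sess.run(vec_times_mat))   # [6. 8.]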
For the special case of multiplying a sparse matrix by a (potentially large and sharded) dense matrix, tf.nn.embedding_lookup_sparse() may be appropriate. This function accepts one or two tf.SparseTensor objects, with sp_ids identifying the positions of the non-zero entries and the optional sp_weights giving their values (which otherwise default to one).
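A sketch of how the sparse-matrix-times-dense-matrix case might look with this function (TF 1.x-era API; note that the sp_ids values are column indices into the dense matrix and, as far as I know, must be int64):

import tensorflow as tf

params = tf.constant([[1.0, 2.0],
                      [3.0, 4.0],
                      [5.0, 6.0]])     # dense matrix

# Sparse 2x3 matrix [[2, 0, 0], [0, 1, 3]]: per row, sp_ids holds the column
# index of each non-zero entry and sp_weights holds its value.
sp_ids = tf.SparseTensor(indices=[[0, 0], [1, 0], [1, 1]],
                         values=tf.constant([0, 1, 2], dtype=tf.int64),
                         dense_shape=[2, 2])
sp_weights = tf.SparseTensor(indices=[[0, 0], [1, 0], [1, 1]],
                             values=[2.0, 1.0, 3.0],
                             dense_shape=[2, 2])

product = tf.nn.embedding_lookup_sparse(params, sp_ids, sp_weights,
                                        combiner="sum")

with tf.Session() as sess:
    print(sess.run(product))   # [[ 2.  4.] [18. 22.]]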
Recently, tf.sparse_tensor_dense_matmul(...) was added, which allows multiplying a sparse matrix by a dense matrix:
https://www.tensorflow.org/versions/r0.9/api_docs/python/sparse_ops.html#sparse_tensor_dense_matmul
https://github.com/tensorflow/tensorflow/issues/1241