
Evaluating TF model inside a TF op throws error

I am using TensorFlow 2 and am trying to optimize a function that uses the loss of a trained TensorFlow model (poison).

@tf.function
def totalloss(x):
    # Blend the input x with the variables m and d
    xt = tf.multiply(x, (1.0 - m)) + tf.multiply(m, d)
    label = targetlabel*np.ones(xt.shape[0])
    # Model loss on the blended input
    loss1 = poison.evaluate(xt, label, steps=1)
    # L1 penalty on the mask
    loss2 = tf.linalg.norm(m, 1)
    return loss1 + loss2

I am not able to execute this function; however, when I comment out the @tf.function line, the function works!

I need to use this function as a TensorFlow op in order to optimize 'm' and 'd'.

ValueError: Unknown graph. Aborting.

This is how I am defining the model and variables:

# mask
m = tf.Variable(tf.zeros(shape=(1, 784)), name="m")
d = tf.Variable(tf.zeros(shape=(1, 784)), name="d")
# target
targetlabel = 6
poison = fcn()
poison.load_weights("MNISTP.h5")
adam = tf.keras.optimizers.Adam(lr=.002, decay=1e-6)
poison.compile(optimizer=adam, loss=tf.losses.sparse_categorical_crossentropy)

This is how I am calling the function later. (Executing this line results in the error listed below; however, if I comment out the @tf.function line, this command works!)

loss = totalloss(ptestdata)

This is the entire traceback:

ValueError: in converted code:

    <ipython-input-52-4841ad87022f>:5 totalloss  *
        loss1 = poison.evaluate(xt, label, steps=1)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:746 evaluate
        use_multiprocessing=use_multiprocessing)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training_arrays.py:693 evaluate
        callbacks=callbacks)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training_arrays.py:187 model_iteration
        f = _make_execution_function(model, mode)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training_arrays.py:555 _make_execution_function
        return model._make_execution_function(mode)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:2034 _make_execution_function
        self._make_test_function()
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:2010 _make_test_function
        **self._function_kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/backend.py:3544 function
        return EagerExecutionFunction(inputs, outputs, updates=updates, name=name)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/backend.py:3429 __init__
        raise ValueError('Unknown graph. Aborting.')

    ValueError: Unknown graph. Aborting. 
asked Jun 16 '19 by Shantnav

1 Answer

The purpose of the @tf.function decorator is to convert TensorFlow operations written in Python into a TensorFlow graph to achieve better performance. The error likely comes from using a pre-trained model, which carries its own serialized graph, inside such a function; the decorator cannot make that graph-to-graph conversion.
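
For instance (a minimal sketch, not from the original post; the function name is made up), a function containing only plain TensorFlow ops traces into a graph without trouble. It is the call into the compiled Keras model that breaks:

import tensorflow as tf

@tf.function
def l1_penalty(v):
    # Only plain TF ops here, so tracing into a graph succeeds
    return tf.linalg.norm(v, 1)

v = tf.Variable(tf.zeros((1, 784)))
print(l1_penalty(v))  # tf.Tensor(0.0, shape=(), dtype=float32)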

I've reported this error here: https://github.com/tensorflow/tensorflow/issues/33997

A (temporary) workaround is to split your loss function into two smaller functions. The decorator should only be applied to the function that does not involve the pre-trained model. This way, you still get better performance for the other operations, just not for the part that uses the pre-trained model.

For example:

@tf.function
def _other_ops(x):
    # Pure TF ops: safe to trace into a graph
    xt = tf.multiply(x, (1.0 - m)) + tf.multiply(m, d)
    label = targetlabel * np.ones(xt.shape[0])
    loss2 = tf.linalg.norm(m, 1)

    return xt, label, loss2

def total_loss(x):
    # Not decorated: the pre-trained model runs eagerly here
    xt, label, loss2 = _other_ops(x)
    loss1 = poison.evaluate(xt, label, steps=1)

    return loss1 + loss2
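
With this split, the call from the question should go through, because evaluate now runs eagerly outside the traced graph:

loss = total_loss(ptestdata)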

Update:

According to the discussion in the TF issue linked above, a more elegant solution is to manually pass the input through each layer of the model. You can get the list of layers in your model by calling your_model.layers.

In your case, the loss is computed from the model's prediction and the label at the last step. So I think you should skip the last layer and calculate the loss yourself outside the loop:

@tf.function
def totalloss(x):
    xt = tf.multiply(x, (1.0 - m)) + tf.multiply(m, d)
    label = targetlabel*np.ones(xt.shape[0])

    # Pass the input through every layer except the last one,
    # which would compute loss1 for us
    feat = xt
    for layer in poison.layers[:-1]:
        feat = layer(feat)

    # Now, calculate the loss yourself (note: y_true comes first)
    loss1 = tf.keras.losses.sparse_categorical_crossentropy(label, feat)
    loss2 = tf.linalg.norm(m, 1)
    return loss1 + loss2
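
Since this version is built entirely from differentiable TF ops, you can now optimize m and d directly. A minimal sketch (assuming, as in the question, that ptestdata is a float32 batch of shape (n, 784)):

opt = tf.keras.optimizers.Adam(learning_rate=0.002)

for step in range(1000):
    with tf.GradientTape() as tape:
        # Reduce the per-example losses to a scalar before differentiating
        loss = tf.reduce_mean(totalloss(ptestdata))
    grads = tape.gradient(loss, [m, d])
    opt.apply_gradients(zip(grads, [m, d]))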

The explanation from the TF engineers for this issue is that a model may wrap high-level processing whose behavior is not guaranteed under @tf.function, so putting a model inside a function decorated with @tf.function is not recommended. Thus, we need to break the model into smaller pieces to bypass it.

answered Oct 18 '22 by biendltb