I'm working on this colab notebook:
https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/tf2_text_classification.ipynb
I'd like to replace the gnews swivel embeddings with the ELMo embeddings.
So, replace
model = "https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim/1"
with:
model = "https://tfhub.dev/google/elmo/2"
There is a cascade of things that change here, such as needing
tf.compat.v1.disable_eager_execution()
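For context, a minimal sketch of the setup this implies (assuming TF 2.x with the tensorflow_hub package installed):
import tensorflow as tf
import tensorflow_hub as hub

# hub.Module is a TF1-style API, so the graph has to be built
# with eager execution disabled.
tf.compat.v1.disable_eager_execution()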
But I'm not understanding the graph shape I need to do this replacement successfully. Specifically, I'm seeing:
#model = "https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim/1"
model = "https://tfhub.dev/google/elmo/2"
elmo = hub.Module(model, trainable=True, name="{}_module".format("mymod"))
hub_layer = hub.KerasLayer(elmo,
# output_shape=[3,20],
# input_shape=(1,),
dtype=tf.string,
trainable=True)
hub_layer(train_examples[:3])
Produces
<tf.Tensor 'keras_layer_14/mymod_module_14_apply_default/truediv:0' shape=(3, 1024) dtype=float32>
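(Since eager execution is disabled, this is only a symbolic tensor; inspecting the actual embedding values would need a TF1-style session, roughly like this sketch:)
with tf.compat.v1.Session() as sess:
    # hub.Module creates variables and lookup tables that must be initialized
    sess.run([tf.compat.v1.global_variables_initializer(),
              tf.compat.v1.tables_initializer()])
    embeddings = sess.run(hub_layer(train_examples[:3]))
    print(embeddings.shape)  # (3, 1024)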
This seems fine. But:
model = tf.keras.Sequential()
model.add(hub_layer)
model.add(tf.keras.layers.Dense(16, activation='relu'))
model.add(tf.keras.layers.Dense(1, activation='sigmoid'))
# First, I have to build, because I no longer have eager execution.
model.build(input_shape=(None,1024))
model.summary()
Then this gives:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-54-8786753617e4> in <module>()
4 model.add(tf.keras.layers.Dense(1, activation='sigmoid'))
5
----> 6 model.build(input_shape=(None,1024))
7
8 model.summary()
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py in internal_convert_to_tensor_or_indexed_slices(value, dtype, name, as_ref)
1381 raise ValueError(
1382 "Tensor conversion requested dtype %s for Tensor with dtype %s: %r" %
-> 1383 (dtypes.as_dtype(dtype).name, value.dtype.name, str(value)))
1384 return value
1385 else:
ValueError: Tensor conversion requested dtype string for Tensor with dtype float32: 'Tensor("Placeholder_12:0", shape=(None, 1024), dtype=float32)'
What else is changing about the graph dimensions and how do I fix it?
The problem is that Keras assumes the input to be float32. That is what the error is telling you: conversion requested dtype string for Tensor with dtype float32. You can tell that the offending tensor is the model input because of its name, "Placeholder_12:0"; placeholder tensors are used for feeding data into the model. The hub_layer expects a string input, so all you need to do is add an Input layer that specifies that:
model = tf.keras.Sequential()
# add an Input layer so the model knows its input is a string tensor
model.add(tf.keras.layers.Input(shape=tuple(), dtype=tf.string))
model.add(hub_layer)
model.add(tf.keras.layers.Dense(16, activation='relu'))
model.add(tf.keras.layers.Dense(1, activation='sigmoid'))
# build() is now effectively a no-op: the Input layer already defines the input shape
model.build(input_shape=(None,1024))
model.summary()
Results in:
Model: "sequential_4"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
keras_layer (KerasLayer) (None, 1024) 93600852
_________________________________________________________________
dense (Dense) (None, 16) 16400
_________________________________________________________________
dense_1 (Dense) (None, 1) 17
=================================================================
Total params: 93,617,269
Trainable params: 16,417
Non-trainable params: 93,600,852
_________________________________________________________________
With your modifications and the one above, I was able to train using the colab notebook.
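For completeness, the training step can then look roughly like this (a sketch; it assumes train_examples and train_labels from the notebook, and the epoch/batch values here are just illustrative):
model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])

# With eager execution disabled, Keras runs this through the underlying graph/session.
history = model.fit(train_examples, train_labels,
                    epochs=5,
                    batch_size=32,
                    validation_split=0.2,
                    verbose=1)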