TypeError: 'NoneType' object is not callable Tensorflow

I am currently working on a regression problem with TensorFlow 2.0. To prepare my dataset, I used the following code:

train = tf.data.Dataset.from_tensor_slices(([train_X], [train_y])).batch(BATCH_SIZE).repeat()
val = tf.data.Dataset.from_tensor_slices(([val_X], [val_y])).batch(BATCH_SIZE).repeat() 

Now if we look at their shapes:

<RepeatDataset shapes: ((None, 42315, 20), (None, 42315)), types: (tf.float64, tf.float64)>
<RepeatDataset shapes: ((None, 2228, 20), (None, 2228)), types: (tf.float64, tf.float64)>

These look correct to me. If I run them through the model shown below, it trains and works just fine:

simple_lstm_model = tf.keras.models.Sequential([
    tf.keras.layers.LSTM(8),
    tf.keras.layers.Dense(1)
])

simple_lstm_model.compile(optimizer='adam', loss='mae')

history = simple_lstm_model.fit(train, epochs=EPOCHS,
                      steps_per_epoch=EVALUATION_INTERVAL,
                      validation_data=val, validation_steps=50)

However, when I make my model slightly more complicated and try to fit it, I get the error in the title of this question. The full traceback is at the very bottom of this question. The more complicated model is shown below:

comp_lstm = tf.keras.models.Sequential([
    tf.keras.layers.LSTM(64),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1)
])

comp_lstm.compile(optimizer='adam', loss='mae')

history = comp_lstm.fit(train, 
                      epochs=EPOCHS,
                      steps_per_epoch=EVALUATION_INTERVAL,
                      validation_data=val, validation_steps=50)

I actually wanted to try a bidirectional LSTM, but it seems that even a plain stack of LSTMs is giving me the issue shown below.


The Error

TypeError                                 Traceback (most recent call last)
<ipython-input-21-8a86aab8a730> in <module>
      2 EPOCHS = 20
      3 
----> 4 history = comp_lstm.fit(train, 
      5                       epochs=EPOCHS,
      6                       steps_per_epoch=EVALUATION_INTERVAL,

~/python_envs/p2/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py in _method_wrapper(self, *args, **kwargs)
     64   def _method_wrapper(self, *args, **kwargs):
     65     if not self._in_multi_worker_mode():  # pylint: disable=protected-access
---> 66       return method(self, *args, **kwargs)
     67 
     68     # Running inside `run_distribute_coordinator` already.

~/python_envs/p2/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)
    846                 batch_size=batch_size):
    847               callbacks.on_train_batch_begin(step)
--> 848               tmp_logs = train_function(iterator)
    849               # Catch OutOfRangeError for Datasets of unknown size.
    850               # This blocks until the batch has finished executing.

~/python_envs/p2/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds)
    578         xla_context.Exit()
    579     else:
--> 580       result = self._call(*args, **kwds)
    581 
    582     if tracing_count == self._get_tracing_count():

~/python_envs/p2/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in _call(self, *args, **kwds)
    609       # In this case we have created variables on the first call, so we run the
    610       # defunned version which is guaranteed to never create variables.
--> 611       return self._stateless_fn(*args, **kwds)  # pylint: disable=not-callable
    612     elif self._stateful_fn is not None:
    613       # Release the lock early so that multiple threads can perform the call

TypeError: 'NoneType' object is not callable
asked Jun 09 '20 by SDG



1 Answer

The problem is that when you stack multiple LSTM layers, every LSTM except the last one needs the argument return_sequences=True.

This is because with return_sequences=False (the default), an LSTM layer returns only the output of its last time step. When LSTMs are stacked, the next LSTM layer needs the output of the complete sequence rather than just the last time step.
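As a quick illustration (a minimal standalone sketch I put together, not code from your question), you can see how return_sequences changes the output shape:

import tensorflow as tf

x = tf.random.normal((4, 10, 20))  # (batch, timesteps, features)

# Default behaviour: only the last time step is returned -> shape (4, 64)
print(tf.keras.layers.LSTM(64)(x).shape)

# return_sequences=True: the full sequence is returned -> shape (4, 10, 64)
print(tf.keras.layers.LSTM(64, return_sequences=True)(x).shape)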

Changing your model to

comp_lstm = tf.keras.models.Sequential([
    tf.keras.layers.LSTM(64, return_sequences=True),
    tf.keras.layers.LSTM(64, return_sequences=True),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1)
])

should resolve the error.

The same pattern applies if you want to use bidirectional LSTMs, as sketched below.
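For example, a stacked bidirectional version could look like this (a sketch that mirrors your original layer sizes; I have not tested it against your data):

bi_lstm = tf.keras.models.Sequential([
    # Intermediate bidirectional layers also need return_sequences=True
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences=True)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences=True)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dense(1)
])

bi_lstm.compile(optimizer='adam', loss='mae')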

Please let me know if you face any other errors, and I will be happy to help.

Hope this helps. Happy learning!

answered Oct 12 '22 by Tensorflow Warrior