I'm trying to modify code from GitHub:
I'm getting this error:
tf.enable_eager_execution must be called at program startup.
and I think it's coming from these lines of code:
from __future__ import print_function
import make_dataset
import tensorflow as tf
import tensorflow.contrib.eager as tfe
tf.enable_eager_execution()
Additionally, the reason I think I need eager execution is that in my tf.Session block:
with tf.Session() as sess:
    # Run the initializer
    sess.run(iterator.initializer)
    for step in range(1, num_steps+1):
        batch_x, batch_y = myDataset.batch(4)  # line where error occurs
        # Run optimization op (backprop)
        sess.run(train_op, feed_dict={X: batch_x, Y: batch_y})
I get the error:
RuntimeError: dataset.__iter__() is only supported when eager execution is enabled
So guidance on whether to change iterators or enable eager execution would be great.
Much Appreciated, Josh
The Eager Execution setting can only be changed by resetting the runtime. To reset the runtime, look in your menu; in my case I had to navigate to Runtime -> Reset all runtimes and click Yes.
It is a bit counter-intuitive because the iPython runtime remembers which setting you chose the first time you executed the code. This means that if you instantiated Tensorflow with Eager Execution enabled, removing that code from the cell and running it again does not disable Eager Execution. Similarly, if you instantiated Tensorflow without Eager Execution enabled, adding code to enable Eager Execution to the cell block that imports Tensorflow and rerunning that cell would not enable it.
The solution is to reset the runtime and update the code before running the cell. When you do this, Tensorflow will run with the appropriate setting of Eager Execution being enabled or disabled.
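After the reset, the key point is that the enable call must run immediately after the import, before any other Tensorflow ops are created. A minimal sketch, written against the TF 1.x-style compat API so it also runs on TF 2.x (on actual TF 1.x, `import tensorflow as tf` is the equivalent):

```python
import tensorflow.compat.v1 as tf  # on TF 1.x, use `import tensorflow as tf`

# Enabling eager execution must happen right after import,
# before any graph ops or sessions are created.
tf.enable_eager_execution()

print(tf.executing_eagerly())  # True
```

If any op or Session has already been created in the runtime, the call raises the very "must be called at program startup" error from the question, which is why a runtime reset is needed first.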
You might want to debug your myDataset instead of using eager execution, since the example you followed runs in graph mode. If your myDataset is a tf.data.Dataset object, its batch method will return another tf.data.Dataset, which cannot be unpacked into batch_x, batch_y; i.e. dataset.__iter__() is not supported in graph mode.
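You can reproduce this directly. A small sketch using the TF 1.x-style compat API (on actual TF 1.x, `import tensorflow as tf` behaves the same):

```python
import tensorflow.compat.v1 as tf  # on TF 1.x, use `import tensorflow as tf`
tf.disable_eager_execution()  # graph mode, as in the original example

dataset = tf.data.Dataset.from_tensor_slices(list(range(8)))
batched = dataset.batch(4)

# batch() returns another Dataset, not a (batch_x, batch_y) tuple:
print(type(batched))

# In graph mode, iterating the dataset directly raises RuntimeError:
try:
    next(iter(batched))
except RuntimeError as e:
    print(e)
```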
One option is to follow the tutorial in the guide. You can make a one-shot iterator (make_one_shot_iterator) or an initializable iterator (make_initializable_iterator, which needs to be initialized through sess.run(iterator.initializer, ...)) from the "batched" dataset. Then you can get each batch in the loop with batch_x, batch_y = iterator.get_next().