After a recent upgrade, when running my PyTorch training loop, I now get the warning:
"Using a non-full backward hook when the forward contains multiple autograd Nodes".
The training still runs and completes, but I am unsure where I am supposed to place the register_full_backward_hook function.
I have tried adding it to each of the layers in my neural network, but this raises further errors about using different hooks.
Can anyone please advise?
PyTorch version 1.8.0 deprecated register_backward_hook (source code) in favor of register_full_backward_hook (source code). You can find it in the patch notes here: "Deprecated old style nn.Module backward hooks" (PR #46163).
The warning you're getting:

Using a non-full backward hook when the forward contains multiple autograd Nodes is deprecated and will be removed in future versions. This hook will be missing some grad_input. Please use register_full_backward_hook to get the documented behavior.
It simply indicates that you should replace all register_backward_hook calls with register_full_backward_hook in your code to get the behavior described in the documentation page.
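As an illustration, here is a minimal sketch of the swap. The log_grad_shapes hook and the toy model are made up for the example, not taken from your code; the only part that matters is the method name on the module:

```python
import torch
import torch.nn as nn

# Illustrative hook: prints the shapes of the gradients flowing
# into and out of the module during the backward pass.
def log_grad_shapes(module, grad_input, grad_output):
    print(module.__class__.__name__,
          [g.shape if g is not None else None for g in grad_input],
          [g.shape if g is not None else None for g in grad_output])

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))

# Old, deprecated style -- this is what triggers the warning:
# handle = model[0].register_backward_hook(log_grad_shapes)

# New style -- same call site and hook signature, new method name:
handle = model[0].register_full_backward_hook(log_grad_shapes)

x = torch.randn(2, 4, requires_grad=True)
model(x).sum().backward()   # the hook fires during this backward pass

handle.remove()  # detach the hook once it is no longer needed
```

If some layers still have the old register_backward_hook attached, remove those handles first; PyTorch does not allow mixing the two hook styles on a single module, which is likely the source of the additional errors you saw when adding hooks to every layer.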