In JavaScript, iterators are allowed to have throw(error) and return(value) methods. return(value) gives the iterator a chance to see value, and is expected to return {value: value, done: true}. throw(error) gives the iterator a chance to see and potentially catch the error. If the error is caught, throw should return the next value; if the exception is not caught, the call should be equivalent to return(undefined). Here's an example of these mechanics in action:
function* test() {
  try {
    yield 1;
  } catch (e) {}
  const fromConsumer = yield 2;
  yield fromConsumer;
  // I know of no way to access the value passed to return() from inside the generator
}
const iter = test()[Symbol.iterator]();
console.log(iter.next());                       // { value: 1, done: false }
console.log(iter.throw(new Error('catch me'))); // caught inside the generator, resumes to the next yield: { value: 2, done: false }
console.log(iter.next(9));                      // fromConsumer is 9: { value: 9, done: false }
console.log(iter.next());                       // generator body ends: { value: undefined, done: true }
console.log(iter.return(0));                    // already done: { value: 0, done: true }
(Same thing as a jsfiddle)
My question is: Why? Does anyone have a defensible use case for this reversal of control API for iterators? Under what conditions does it actually make sense for an iterator to handle an error incurred while consuming it? When would you want to pass a value to return given that it is certain to end the iterator and can't further influence the behavior of the iterator API?
I'll say that the one use case I'm aware of is redux-saga, where they have leveraged the inversion of control API of iterators seemingly as a poor man's async/await. If someone is familiar with design or usage of that tool, are there other benefits to their choice?
First, not all iterators have return and throw. The only thing an iterator needs to have is next. But it's true that the iterators created by generator functions have return and throw.
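For example (a small illustrative sketch; the names here are made up for this answer):

// A hand-rolled iterator only needs next() to satisfy the iterator protocol.
const countToThree = {
  current: 1,
  next() {
    return this.current <= 3
      ? { value: this.current++, done: false }
      : { value: undefined, done: true };
  },
};
console.log(typeof countToThree.return); // "undefined"
console.log(typeof countToThree.throw);  // "undefined"

// A generator object gets next, return and throw automatically.
function* countToThreeGen() { yield 1; yield 2; yield 3; }
const gen = countToThreeGen();
console.log(typeof gen.return); // "function"
console.log(typeof gen.throw);  // "function"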
I'll say that the one use case I'm aware of is redux-saga, where they have leveraged the inversion of control API of iterators seemingly as a poor man's async/await.
I'd say that's backwards. Generators aren't a poor version of async/await; async/await is a narrow use of generators. In fact, the old equivalents of async/await were literally implemented using generators, as found in libraries like co. Now that async/await is part of the core language, I don't know if the underlying C++ code literally uses generators, but historically and conceptually it's an outgrowth of generators that's tied to the narrow case of resolving promises.
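To make that concrete, here's a rough, simplified sketch of the kind of runner such libraries used (this is not the actual co source, just an illustration of the idea): the runner drives the generator, feeding resolved values back in through next and rejections back in through throw.

function run(genFn) {
  const gen = genFn();
  return new Promise((resolve, reject) => {
    function step(result) {
      if (result.done) return resolve(result.value);
      Promise.resolve(result.value).then(
        value => {
          try { step(gen.next(value)); }   // resume with the resolved value
          catch (e) { reject(e); }
        },
        error => {
          try { step(gen.throw(error)); }  // give the generator a chance to catch it
          catch (e) { reject(e); }         // the generator didn't catch it
        }
      );
    }
    try { step(gen.next()); }
    catch (e) { reject(e); }
  });
}

// Usage: reads almost like async/await.
run(function* () {
  const a = yield Promise.resolve(1);
  const b = yield Promise.resolve(2);
  return a + b;                            // run(...) resolves with 3
});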
With a library like redux-saga, not only can you get await-like syntax, but you also have support for more complicated things such as task cancellation, which is something async/await cannot do. Task cancellation depends on the return method to work. Error handling is also supported, which is made possible by the throw method.
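As a hedged sketch of how that looks in redux-saga (call, put, fork, cancel and take are real effect creators from 'redux-saga/effects'; fetchUser and the action types are hypothetical names for this illustration):

import { call, put, fork, cancel, take } from 'redux-saga/effects';

// Hypothetical API function, just so the snippet is self-contained.
const fetchUser = () => fetch('/api/user').then(r => r.json());

function* fetchUserSaga() {
  try {
    // redux-saga resumes this generator with .next(result) when the call succeeds...
    const user = yield call(fetchUser);
    yield put({ type: 'USER_LOADED', user });
  } catch (err) {
    // ...and with .throw(err) when it fails, so an ordinary try/catch works here
    yield put({ type: 'USER_FAILED', error: err.message });
  }
}

function* watcherSaga() {
  const task = yield fork(fetchUserSaga); // start the task
  yield take('CANCEL_FETCH');             // wait for a (hypothetical) cancel action
  yield cancel(task);                     // cancellation is driven through the task generator's return()
}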
So generators are a very broad tool which, among other things, can be used to model asynchronous behaviors. Because they're broad, they come with a set of functions able to model more than just iteration. I think of generators not just as a way to iterate over things, but as a way to have a two-way conversation between two pieces of code. Among the things you might want to communicate are "there's a problem" or "we're done here", which is what throw and return are for.
When would you want to pass a value to return given that it is certain to end the iterator and can't further influence the behavior of the iterator API?
Actually, it can still influence it. If the generator uses try/finally, .return() will send the generator into the finally block, where it can run as much code as it wants. The generator can even yield while in this finally block, with all the same behaviors it had before the finally block. Continuing with redux-saga as an example, this feature of generators is used to support rolling back in the case of cancellation:
import { cancelled } from 'redux-saga/effects';

function* exampleSaga() {
  try {
    // start working on something, with at least one yield
  } finally {
    // redux-saga cancels a task by calling .return() on its generator,
    // which routes execution into this finally block
    if (yield cancelled()) {
      // roll back
    }
  }
}
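The same mechanics can be observed with a plain generator. Here's a small illustrative sketch showing that .return() routes execution through the finally block, that the generator can still yield during cleanup, and that the value passed to return() comes back out once cleanup finishes:

function* worker() {
  try {
    yield 'step 1';
    yield 'step 2';
  } finally {
    // .return() lands here; we can still yield while cleaning up
    yield 'rolling back';
    console.log('cleanup finished');
  }
}

const it = worker();
console.log(it.next());        // { value: 'step 1', done: false }
console.log(it.return('bye')); // finally runs and yields: { value: 'rolling back', done: false }
console.log(it.next());        // logs 'cleanup finished', then { value: 'bye', done: true }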