
When to use coroutines over iterators?

Let's say I have data that I want to provide 'one at a time'. There's a lot of data, which is why I fetch one piece only when it's needed (to save memory). So I can't store all of the data in a std::vector.

Today I can use iterators for this, as they naturally fit the requirement. But I could also use coroutines (currently via the Coroutines TS). I don't need to use algorithms that require iterators.

Is there any advantage of using coroutines over iterators in this case?

asked Oct 28 '22 by Rakete1111

1 Answer

Iterators act as a kind of glue, allowing users to write algorithms that operate over sequences of values without knowing about how that sequence got there or is held. But the specific "glue" between the algorithm and the sequence is irrelevant. It only matters in that a specific implementation of an algorithm has to be implemented in terms of a specific kind of "glue".

The standard library iterator model is useful because the standard library comes with tools that use this model (algorithms, the iterator constructors of containers, range-based for, etc). If you're not actually using those mechanisms, then there is nothing objective to be gained by using the iterator model over any other model. You could just have an object with a get_next function and a has_next function, or some similar interface. They're all approximately equally efficient, and it's not hard to convert from one to the other.

Coroutines would only be useful in this regard to the degree that they simplify the implementation of the operation. The code using the generating coroutine would have basically the same interface as before; internally it just uses co_yield and a stack frame that pauses and resumes.

Because the stack frame of a coroutine is an object that persists, you don't need to explicitly write a generation object. The function which generates values can keep its state in ordinary stack variables, then co_yield values from that stack data as needed. That lets you build a generalized generator framework that many distinct functions can use, separating the common interface shared by all generators from the specific code doing the generation.

answered Nov 15 '22 by Nicol Bolas