I have a question about the "readiness" approach to asynchronous scheduling and execution in Rust, as it relates to (and contrasts with) the "completion" approach of runtime-based languages (Python, Node.js). I am using "readiness" and "completion" following the terminology of the blog post which inspired this question.
If I am getting it correctly, Rust futures (as of std-futures) are implemented under the "readiness" approach as follows:
The main difference -- and the one that supposedly explains why Rust futures are more efficient than their runtime-based counterparts -- is that Futures don't automatically pass their computed value one level up the chain of Futures by invoking the next Task. Instead, it is up to the last successor in a chain of Tasks to trigger its predecessors to poll one another in cascade, computing the final Future's value along the way. So it would seem that the main advantage of the "readiness" approach in Rust resembles the main advantage of a queue: as long as you consume your Futures quickly enough, intermediate results don't pile up and memory consumption remains low.
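To make the cascade I have in mind concrete, here is a minimal sketch of an executor that drives only the outermost Future; the `NoopWaker` and `block_on` names are mine (only the `Future`/`Wake`/`Waker` machinery is from std):

```rust
use std::future::Future;
use std::pin::pin;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};

// Illustrative waker that does nothing; a real executor would use it to
// re-schedule the task when an event source fires.
struct NoopWaker;

impl Wake for NoopWaker {
    fn wake(self: Arc<Self>) {}
}

// Drives only the *outermost* future: each call to `poll` walks down the
// whole chain of composed Futures, and the final value bubbles back up.
fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = pin!(fut);
    let waker = Waker::from(Arc::new(NoopWaker));
    let mut cx = Context::from_waker(&waker);
    loop {
        if let Poll::Ready(value) = fut.as_mut().poll(&mut cx) {
            return value;
        }
        // A real executor would park the thread here until woken.
    }
}

fn main() {
    assert_eq!(block_on(async { 21 * 2 }), 42);
}
```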
And now, finally, the question: if this model is more efficient than one where Futures push their results up the chain without waiting for a poll from their last successor, why isn't the approach followed in Python or Node.js? Intuitively, the advantage of the approach would seem to translate well to runtime-based languages. Do these languages turn to the "completion" alternative because they cannot consume their Futures' data fast enough?
I don't think you have correctly characterized the differences between the two models, or the primary benefit that led Rust to choose the "readiness" model.
The fundamental difference is (as described on the Tokio website):
The Rust asynchronous model is pull based. Instead of a Future being responsible for pushing the data into a callback, it relies on something else asking if it is complete or not.
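A minimal sketch makes the pull model concrete. The `CountDown` type below is hypothetical, but the `Future` trait and `Poll` enum are the real std items; note that nothing pushes the result into a callback - the executor keeps asking for it via `poll`:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

// A hypothetical future that becomes ready after being polled three times.
struct CountDown(u32);

impl Future for CountDown {
    type Output = &'static str;

    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        if self.0 == 0 {
            // The value is returned to whoever asked; it is not pushed anywhere.
            Poll::Ready("done")
        } else {
            self.0 -= 1;
            // Ask to be polled again. A real future would instead hand the
            // waker to an event source (timer, socket, ...) and stay quiet.
            cx.waker().wake_by_ref();
            Poll::Pending
        }
    }
}
```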
One of the key aims of the design is that it should present a zero-cost abstraction - which requires that writing asynchronous code using futures should have performance characteristics as good as could be achieved by hand-coding your own event loop and state machine.
The choice of the readiness-based model flows from this requirement: if a completion-based model were used, composing futures would require the corresponding callbacks to be heap allocated (the blog article you linked shows why this is so). Heap allocations are expensive, and the goal of a zero-cost abstraction would not be met.
Using the readiness-based model, heap allocations can be minimized or even avoided altogether (especially beneficial in some embedded environments, where there is no heap).
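To illustrate, here is a sketch of a map-style combinator in the spirit of those in the futures crate; the `Map` type itself is hypothetical and simplified (it requires `Unpin` to avoid unsafe pin projection). Both the inner future and the closure live inline in the struct, so a whole chain of combinators compiles down to one nested value:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

// Hypothetical combinator: stores the inner future and the closure by
// value, so composing futures builds a nested struct, not boxed callbacks.
struct Map<Fut, F> {
    future: Fut,
    f: Option<F>,
}

impl<Fut, F, T> Future for Map<Fut, F>
where
    Fut: Future + Unpin,
    F: FnOnce(Fut::Output) -> T + Unpin,
{
    type Output = T;

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<T> {
        let this = self.get_mut(); // fine: every field is Unpin
        // Polling the outer future polls the inner one; readiness
        // propagates without any heap-allocated continuation.
        match Pin::new(&mut this.future).poll(cx) {
            Poll::Ready(value) => {
                let f = this.f.take().expect("polled after completion");
                Poll::Ready(f(value))
            }
            Poll::Pending => Poll::Pending,
        }
    }
}
```

A completion-based design would instead have to store the continuation as something like a `Box<dyn FnOnce(T)>` at every composition point - exactly the per-step heap allocation the readiness model avoids.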
There are other benefits to the model - one of which is the ease of implementing backpressure, which you cited as the key benefit - but I don't think they were the deciding factor when choosing this model.
Having gotten that out of the way:
And now, finally, the question: if this model is more efficient than one where Futures push their results up the chain without waiting for a poll from their last successor, why isn't the approach followed in Python or Node.js?
High-level languages do not tend to provide the low-level capabilities needed to build zero-cost abstractions. For example, the JavaScript language specification does not even talk in terms of a stack or a heap, and although some JS implementations do optimize local variables onto the stack, programmers have no control over that. When choosing a model for asynchronous programming, the ability to compose futures without heap allocations does not come into the equation.
That is not to say some of the other benefits might not turn out to be useful in other programming environments - they might - but since most of these languages have already settled on a completion-based model, and since they do not share the driving goal of a zero-cost abstraction, there may be no appetite to revisit the design.