The book Structure and Interpretation of Computer Programs introduces a memoizing procedure as follows:
(define (memo-proc proc)
  (let ((already-run? false) (result false))
    (lambda ()
      (if (not already-run?)
          (begin (set! result (proc))
                 (set! already-run? true)
                 result)
          result))))
while suggesting that delayed execution, (delay <exp>), be defined as (memo-proc (lambda () <exp>)). A delayed object can be forced using the following procedure:
(define (force delayed-object)
(delayed-object))
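Although the book's code is Scheme, the same closure mechanism can be sketched in Python for illustration (memo_proc, delay, and force below are my own direct translations, not the book's code):

```python
def memo_proc(proc):
    # Per-promise state captured by the closure; these play the
    # role of already-run? and result in the Scheme version.
    already_run = False
    result = None

    def run():
        nonlocal already_run, result
        if not already_run:
            result = proc()   # compute once...
            already_run = True
        return result         # ...and return the cached value thereafter
    return run

def delay(thunk):
    # (delay <exp>) corresponds to memo_proc(lambda: <exp>)
    return memo_proc(thunk)

def force(delayed_object):
    return delayed_object()

calls = []
p = delay(lambda: (calls.append("ran"), 42)[1])
print(force(p))    # 42: the thunk runs
print(force(p))    # 42: cached, the thunk does not run again
print(len(calls))  # 1
```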
In the chapter which gives these definitions, computations are done using streams. Each stream is a pair with a head and a delayed tail.
However, I don't see how memoization is achieved without using a lookup table, or some other kind of data structure which accumulates the previous calls.
The chapter which includes these definitions is here.
A lookup table would be useful when we are memoizing a single procedure applied to various arguments: the arguments would be the keys and the results of the procedure the values. In this case we have nullary procedures, so we don't need a lookup table; all we would need is the set of delayed objects we have created so far. However, closures are used here.
According to the definition of memo-proc above, each delayed object is actually a closure over the variables already-run? and result, both initialised to false. However, since each delayed object created inside the call to proc has its own closure, with its own local already-run? and result, modifying them wouldn't change the other ones in the call tree. Hence, I feel that a memoized value will never be read by any other procedure again.
So my question is, what am I thinking wrong or what is the correct explanation of how everything works?
Each delayed object stores its own value after the first time it's forced; the value is stored in the result variable. It works because memo-proc creates a closure over both proc (the promise's body: a no-argument procedure waiting to be evaluated) and the result variable. After the first time the object is forced, the stored result is returned and never recalculated, so the promise effectively becomes the value itself.
I don't see how memoization is achieved without using a lookup table, or some other kind of data structure which accumulates the previous calls.
The data structure is the closure created around each promise: it accumulates the value of the first call in the result variable and returns it for all subsequent calls.
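To make this concrete, here is a small Python sketch (the names are illustrative translations of the Scheme definitions, not the book's code): the cache belongs to the promise object, so forcing the same promise repeatedly hits the cache, while two separately created promises each keep their own.

```python
def memo_proc(proc):
    already_run, result = False, None
    def run():
        nonlocal already_run, result
        if not already_run:
            result = proc()
            already_run = True
        return result
    return run

def delay(thunk):
    return memo_proc(thunk)

def force(promise):
    return promise()

runs = 0
def expensive():
    global runs
    runs += 1
    return "value"

p = delay(expensive)
force(p); force(p); force(p)
print(runs)  # 1: one promise object, computed once despite three forces

q1, q2 = delay(expensive), delay(expensive)
force(q1); force(q2)
print(runs)  # 3: two distinct promises, each with its own closure state
```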
Hence, I'm feeling like a memoized value won't ever be read by any other procedures again
On the contrary: it'll be read each and every time force is called on that same promise object.
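In the streams chapter this matters because the delayed tail is stored inside the stream pair itself, so every traversal of the stream forces the same promise object. A Python sketch under that assumption (cons_stream and friends are illustrative stand-ins for the Scheme forms):

```python
def memo_proc(proc):
    already_run, result = False, None
    def run():
        nonlocal already_run, result
        if not already_run:
            result = proc()
            already_run = True
        return result
    return run

def delay(thunk):
    return memo_proc(thunk)

def force(promise):
    return promise()

def cons_stream(head, delayed_tail):
    # The promise is stored in the pair, so everyone who walks
    # the stream forces this same closure.
    return (head, delayed_tail)

def stream_car(s):
    return s[0]

def stream_cdr(s):
    return force(s[1])

the_empty_stream = ()
tail_computations = 0

def make_tail():
    global tail_computations
    tail_computations += 1
    return cons_stream(2, delay(lambda: the_empty_stream))

s = cons_stream(1, delay(make_tail))

# Two independent traversals of the same stream:
assert stream_car(stream_cdr(s)) == 2
assert stream_car(stream_cdr(s)) == 2
print(tail_computations)  # 1: the memoized tail was read again, not recomputed
```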