My space leak happens in one of my personal projects, but I don't want someone to solve it in my project; I want to understand it.
I reproduced my space leak by making up this algorithm:
u is a sequence defined by its first 20 terms (the list in the code below); after this, u(n) = u(n-5) + u(n-10) - u(n-15).
Easy to implement in Haskell, right?
import System.Environment (getArgs)

u = [1, 2, 1, 3, 1, 4, 1, 5, 1, 6, 1, 7, 1, 8, 1, 9, 1, 10, 1, 11]
    ++ zipWith3 go u' u'' u'''
  where u'   = drop 15 u
        u''  = drop 10 u
        u''' = drop 5 u
        go a b c = a + b - c

main = do
    args <- getArgs
    let n = read $ args !! 0
    putStrLn $ show $ u !! n
Unfortunately, this space leaks:
$ time ./algo 9999999
Stack space overflow: current size 8388608 bytes.
Use `+RTS -Ksize -RTS' to increase it.
1.17user 0.19system 0:01.37elapsed 100%CPU (0avgtext+0avgdata 865124maxresident)k
0inputs+0outputs (0major+215695minor)pagefaults 0swaps
It looks like Haskell is caching the whole list, whereas I want it to cache only the last 20 elements.
For example, here's my implementation in C:
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>   /* atoi */
#include <inttypes.h> /* PRId64 */

int main(int argc, char **argv)
{
    size_t cursor;
    int64_t buffer[20] = {1, 2, 1, 3, 1, 4, 1, 5, 1, 6, 1, 7, 1, 8, 1, 9, 1, 10, 1, 11};
    size_t n = (size_t) atoi(argv[1]);
    for (cursor = 20; cursor <= n; cursor++) {
        buffer[cursor % 20] = buffer[(cursor + 20 - 5) % 20]
                            + buffer[(cursor + 20 - 10) % 20]
                            - buffer[(cursor + 20 - 15) % 20];
    }
    printf("%" PRId64 "\n", buffer[n % 20]);
    return 0;
}
$ ./a.out 9999999
5000001
My C implementation uses O(n) time and O(1) space, but it looks like my Haskell implementation is using O(n) space.
Why is Haskell able to figure it out for Fibonacci, but not for my made-up sequence? What did I do wrong? How would you implement this algorithm in Haskell?
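For reference, by Fibonacci I mean the usual lazy stream definition, something like:

-- The classic self-referential Fibonacci list; each element depends
-- only on the two elements before it.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)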
Well that is a stack overflow, but you also have a space leak, which is easier to explain in a few words.
When you perform the indexing u !! n, u looks like

1 : 2 : 1 : ... : 11 : <go thunk> : ... : <go thunk> : <zipWith3 thunk>

and you extract the last <go thunk>, the one at index n in the list u. At this point every <go thunk> has references to earlier elements of u, so (almost) the entirety of u has to be retained in memory (the first five or so elements are not actually needed).
The stack overflow arises because, in order to evaluate the 9999999th element of u, you first have to evaluate the 9999994th element, and in order to evaluate that you first have to evaluate the 9999989th element, and so on. What remains to do after, say, evaluating the 9999994th element in order to finish evaluating the 9999999th goes on the stack, and there's your stack overflow (which is also a sort of space leak, I suppose).
Both these problems can be solved by forcing the elements of the list u either as you construct it or as you traverse it. Since you said you don't want someone to solve the space leak, I'll leave that part as an exercise, though there is a particularly slick and probably non-obvious way to do it.
Edited to add: The slick but possibly too-clever fix I had in mind was just to change the last line to

putStrLn $ show $ foldr ((:) $!) [] u !! n

Probably understanding how this works is a sufficient exercise in itself.
The more straightforward approach would be the one in max taldykin's answer below, or to write a custom indexing function that forces the elements it skips over before discarding them; a sketch of the latter follows.
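As a minimal sketch of that custom indexing function (my illustration, not from the original answer; the name index and the Int specialization are my own choices):

-- Force each element with seq before skipping past it, so the skipped
-- thunks are evaluated in order instead of piling up references to
-- earlier list cells (and frames on the stack).
index :: [Int] -> Int -> Int
index (x:_)  0 = x
index (x:xs) k = x `seq` index xs (k - 1)
index []     _ = error "index: out of range"

Replacing u !! n with u `index` n in main forces every <go thunk> as the traversal passes it.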
Here is code that follows Reid Barton's answer:
{-# LANGUAGE BangPatterns #-}
import System.Environment (getArgs)

u :: [Int]
u = [1, 2, 1, 3, 1, 4, 1, 5, 1, 6, 1, 7, 1, 8, 1, 9, 1, 10, 1, 11]
    ++ go u' u'' u'''
  where u'   = drop 15 u
        u''  = drop 10 u
        u''' = drop 5 u
        go ((!a):as) ((!b):bs) ((!c):cs) = a + b - c : go as bs cs

-- the driver is the same as in the question
main = do
    args <- getArgs
    let n = read $ args !! 0
    putStrLn $ show $ u !! n
It uses the BangPatterns extension to force evaluation of thunks. (I also added a type annotation to use Ints instead of Integers, which is a bit faster.)
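For intuition (my addition, not part of the original answer), a bang pattern in a constructor pattern forces the bound variable as soon as the match succeeds; go is roughly equivalent to this version with explicit seqs:

-- Roughly what the bang patterns in go do, spelled out with seq.
-- (Partial like go itself: the input lists here are infinite.)
go' :: [Int] -> [Int] -> [Int] -> [Int]
go' (a:as) (b:bs) (c:cs) = a `seq` b `seq` c `seq` (a + b - c : go' as bs cs)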
You can see that it runs in constant space (1M in use is the relevant part of the output):
$ ./xx 99999999 +RTS -t
50000001
<<ghc: 8000065016 bytes, 15319 GCs, 36596/44312 avg/max bytes residency (2 samples), 1M in use, 0.00 INIT (0.00 elapsed), 2.82 MUT (2.78 elapsed), 0.01 GC (0.06 elapsed) :ghc>>