module Has (r,p,s) where
import Prelude ((==),Bool(..),otherwise,(||),Eq)
import qualified Data.List as L
filter :: (a -> Bool) -> [a] -> [a]
filter _pred [] = []
filter pred (x:xs)
  | pred x    = x : filter pred xs
  | otherwise = filter pred xs
Problem 1: This filter is copied from GHC's library, so why does it consume a growing amount of memory, in contrast with the directly imported filter, which consumes a constant amount of memory?
elem :: (Eq a) => a -> [a] -> Bool
elem _ [] = False
elem x (y:ys) = x==y || elem x ys
Problem 2: This filter is copied from GHC's library, so why does it consume a growing amount of memory, like the directly used elem, which also consumes a growing amount of memory, in contrast with the directly imported filter?
r = L.filter (==1000000000000) [0..]
p = filter (==1000000000000) [0..]
s = 1000000000000 `elem` [0..]
GHC version: 7.4.2, OS: Ubuntu 12.10, compiled with -O2.
As the definitions of filter and elem above imply, the [0..] in both p = filter (==1000000000000) [0..] and s = 1000000000000 `elem` [0..] should be garbage collected gradually. But both p and s consume a growing amount of memory, while r, which is defined with the directly imported filter, consumes a constant amount of memory.
My question is: why do functions imported directly from GHC's libraries differ so much from functions I write by copying their source code from those libraries? I wonder if there is something wrong with GHC.
I have a further question: the above code is abstracted from a project I wrote, and that project also consumes a growing amount of memory that should, in theory, be garbage collected. So I want to know whether there is a way to find out which variable is taking up so much memory in GHC.
Thanks for reading.
The cause of the memory consumption in ghci is not the code of filter or elem. (Although the rewrite rule for filter in GHC.List usually makes it a little better.)
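For reference, the library version of filter has the same equations as the copy above, but it comes with fusion machinery, roughly like this (paraphrased from memory; see the GHC.List source shipped with GHC 7.4.2 for the exact text, and note that build lives in GHC.Exts):

-- A fusion-friendly worker for filter, consuming one element at a time.
filterFB :: (a -> b -> b) -> (a -> Bool) -> a -> b -> b
filterFB c p x r | p x       = x `c` r
                 | otherwise = r

{-# RULES
"filter"     [~1] forall p xs. filter p xs = build (\c n -> foldr (filterFB c p) n xs)
"filterList" [1]  forall p.    foldr (filterFB id p) [] = filter p
 #-}

The copy in Has carries no such rules, so filter (==1000000000000) [0..] in Has can never fuse with the enumeration, while GHC.List.filter can.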
Let's look at (part of) the core ghc-7.4.2 produced with -O2 (-ddump-simpl). First for r, using GHC.List.filter:
Has.r1
:: GHC.Integer.Type.Integer
-> [GHC.Integer.Type.Integer] -> [GHC.Integer.Type.Integer]
[GblId,
Arity=2,
Unf=Unf{Src=<vanilla>, TopLvl=True, Arity=2, Value=True,
ConLike=True, Cheap=True, Expandable=True,
Guidance=IF_ARGS [0 0] 60 30}]
Has.r1 =
\ (x_awu :: GHC.Integer.Type.Integer)
(r2_awv :: [GHC.Integer.Type.Integer]) ->
case GHC.Integer.Type.eqInteger x_awu Has.p5 of _ {
GHC.Types.False -> r2_awv;
GHC.Types.True ->
GHC.Types.: @ GHC.Integer.Type.Integer x_awu r2_awv
}
Has.r :: [GHC.Integer.Type.Integer]
[GblId,
Str=DmdType,
Unf=Unf{Src=<vanilla>, TopLvl=True, Arity=0, Value=False,
ConLike=False, Cheap=False, Expandable=False,
Guidance=IF_ARGS [] 40 0}]
Has.r =
GHC.Enum.enumDeltaIntegerFB
@ [GHC.Integer.Type.Integer] Has.r1 Has.p3 Has.p2
Has.p3 is 0 :: Integer, and Has.p2 is 1 :: Integer. The rewrite rules (for filter and enumDeltaInteger) turned it into (with shorter names)
r = go fun 0 1
  where
    go foo x d = x `seq` (x `foo` (go foo (x+d) d))
    fun n list
      | n == 1000000000000 = n : list
      | otherwise          = list
which could probably be a bit more efficient if fun was inlined, but the point is that the list to be filtered doesn't exist as such, it was fused away.
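For illustration, a hedged sketch of what that loop amounts to with fun inlined (the name r' is mine, not from the dump):

r' :: [Integer]
r' = go 0 1
  where
    -- A single loop: test each number and cons the match onto the rest.
    -- No intermediate list [0 ..] is ever allocated.
    go x d = x `seq` (if x == 1000000000000 then x : go (x + d) d
                                            else     go (x + d) d)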
For p, on the other hand, without the rewrite rule(s), we get
Has.p1 :: [GHC.Integer.Type.Integer]
[GblId,
Unf=Unf{Src=<vanilla>, TopLvl=True, Arity=0, Value=False,
ConLike=False, Cheap=False, Expandable=False,
Guidance=IF_ARGS [] 30 0}]
Has.p1 = GHC.Enum.enumDeltaInteger Has.p3 Has.p2
Has.p :: [GHC.Integer.Type.Integer]
[GblId,
Str=DmdType,
Unf=Unf{Src=<vanilla>, TopLvl=True, Arity=0, Value=False,
ConLike=False, Cheap=False, Expandable=False,
Guidance=IF_ARGS [] 30 0}]
Has.p = Has.filter @ GHC.Integer.Type.Integer Has.p4 Has.p1
a top-level CAF for the list [0 .. ] (Has.p1), and Has.filter applied to (== 1000000000000) and the list.
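In ordinary Haskell, that core corresponds to roughly the following (a hedged paraphrase; the names mirror the dump):

p1 :: [Integer]
p1 = [0 ..]                        -- materialised once, as a top-level CAF
                                   -- (the core calls GHC.Enum.enumDeltaInteger 0 1)

p :: [Integer]
p = filter (== 1000000000000) p1   -- the hand-written Has.filter walking that CAF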
So this one does create the actual list to be filtered - thus it's somewhat less efficient.
But normally (running a compiled binary), that's no problem in terms of memory consumption, since the list is garbage collected as it is consumed. However, for reasons that are beyond me, ghci does keep the list [0 .. ] around when evaluating p or s (but each has its own copy of [0 .. ], so it's not unwanted sharing here), as can be gleaned from the -hT heap profile. (I evaluated s, so there's only one possible source for the list cells; ghci was invoked with +RTS -M300M -hT -RTS, so shortly after the memory usage reached 300M, ghci terminated.)
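For reference, a sketch of how such a profile can be produced (the name of the generated .hp file may vary between setups):

$ ghci Has.hs +RTS -M300M -hT -RTS
ghci> s
-- ghci is killed once the 300M limit is reached; then render the profile:
$ hp2ps -c <generated>.hp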
So the cause of the memory consumption in ghci is the hardcoding of the list to be filtered. If you use Has.filter with a list supplied at the prompt, the memory usage is constant, as expected.
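For example (a hypothetical session, not verbatim output):

ghci> Has.filter (== 1000000000000) [0 ..]   -- list built at the prompt: constant residency
ghci> p                                      -- forces the top-level CAF Has.p1: residency grows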
I'm not sure whether ghci retaining the list [0 .. ] is a bug or intended behaviour.