I have the following code:
{-# NOINLINE i2i #-}
i2i :: Int -> Integer
i2i x = toInteger x
main = print $ i2i 2
Running GHC with the -ddump-simpl flag gives:
[Arity 1
NoCafRefs
Str: DmdType U(L)]
Main.i2i = GHC.Real.toInteger1
It seems that the conversion from Int to Integer is lazy. Why is that? Is there a case where
toInteger (_|_ :: Int) /= _|_
?
Edit: the question has more to do with the GHC strictness analyzer than with laziness per se. This code was derived from exploring the standard mean function:
--mean :: Integer -> Integer -> [Integer] -> Double
mean :: Integer -> Int -> [Integer] -> Double
mean acc n [] = fromIntegral acc / fromIntegral n
mean acc n (x:xs) = mean (acc + x) (n + 1) xs
main = print $ mean 0 0 [1..1000000]
This code runs in O(N) space. When I uncomment the first line (making the counter an Integer instead of an Int), space consumption drops to O(1). It seems to come down to the fromIntegral call, which in turn comes down to toInteger. The strictness analyzer somehow cannot infer that the conversion is strict, which seems strange to me.
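For reference, fromIntegral is essentially just the composition of the two conversion class methods (this is, as far as I know, its definition in GHC.Real):

fromIntegral :: (Integral a, Num b) => a -> b
fromIntegral = fromInteger . toInteger

So whether forcing fromIntegral n also forces n hinges on whether GHC treats toInteger at the source type as strict.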
What's the difference between Integer and Int? Integer can represent arbitrarily large integers, up to using all of the storage on your machine. Int can only represent integers in a finite range. The language standard only guarantees a range of -2^29 to (2^29 - 1).
The workhorse for converting from integral types is fromIntegral, which will convert from any Integral type into any numeric type (any instance of Num, which includes Int, Integer, Rational, and Double): fromIntegral :: (Num b, Integral a) => a -> b.
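For example, the same value can be converted to several target types (a quick GHCi sketch):

ghci> fromIntegral (3 :: Int) :: Double
3.0
ghci> fromIntegral (3 :: Int) :: Rational
3 % 1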
Haskell has two integral types, namely Int and Integer. Int is the type of limited-precision integers; this means that there is a smallest integer of type Int, namely minBound, and a greatest integer of type Int, namely maxBound.
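On a 64-bit GHC, for instance, the bounds come out as:

ghci> minBound :: Int
-9223372036854775808
ghci> maxBound :: Int
9223372036854775807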
Response to your edit: the dangers of O(N) space leaks for accumulating parameters are well known, at least to Haskell programmers. What ought to be well known but isn't is that no matter what the language, you should never trust to the optimizer to provide asymptotic guarantees for the space and time behavior of your programs. I don't understand the implications of simple optimizers I've written myself, let alone something hairy like GHC's front end, what with a strictness analyzer, inliner, and all the rest.
As to your two questions,
Why doesn't GHC's strictness analyzer optimize this particular code, when it does optimize very similar code?
Who knows?? (Maybe Simon PJ knows, maybe not.) If you care about performance, you shouldn't be relying on the strictness analyzer. Which brings us to the second, implied question:
How can I avoid O(N) space costs on this function and on every other function that uses accumulating parameters?
By putting strictness annotations on the accumulating parameters that force them to be evaluated at each tail-recursive call.
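For example, here is a sketch of the mean function above using bang patterns (BangPatterns is a GHC extension; the same effect can be had in Haskell 98 with seq):

{-# LANGUAGE BangPatterns #-}

mean :: Integer -> Int -> [Integer] -> Double
mean acc n [] = fromIntegral acc / fromIntegral n
-- !acc and !n force both accumulators at every tail-recursive call,
-- so no chain of thunks builds up
mean !acc !n (x:xs) = mean (acc + x) (n + 1) xs

main :: IO ()
main = print $ mean 0 0 [1..1000000]

This runs in O(1) space whether or not the strictness analyzer cooperates.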
I think you're looking at this the wrong way. Consider the following, silly fragment of code:
let x = [undefined]
let y = map toInteger x
If we evaluate y == [] we get False, whereas if we evaluate head y we get an undefined exception. There's no reason that applying map or comparing y with [] should diverge just because the only element of x is undefined. That's the essence of non-strictness.
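A self-contained version of the same experiment (the second print is expected to crash):

main :: IO ()
main = do
  let x = [undefined] :: [Int]
      y = map toInteger x
  print (y == [])  -- False: only the outermost list constructor is inspected
  print (head y)   -- throws Prelude.undefined: forcing the element diverges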