 

Why can't I add Integer to Double in Haskell?

Tags:

haskell

Why is it that I can do:

1 + 2.0

but when I try:

let a = 1
let b = 2.0
a + b

<interactive>:1:5:
    Couldn't match expected type `Integer' with actual type `Double'
    In the second argument of `(+)', namely `b'
    In the expression: a + b
    In an equation for `it': it = a + b

This seems just plain weird! Does it ever trip you up?

P.S.: I know that "1" and "2.0" are polymorphic constants. That is not what worries me. What worries me is why Haskell does one thing in the first case, but another in the second!

Asked Nov 24 '11 by Andriy Drozdyuk

People also ask

How do you convert Int to double in Haskell?

The usual way to convert an Int to a Double is to use fromIntegral, which has the type (Integral a, Num b) => a -> b. This means that it converts an Integral type (Int and Integer) to any numeric type b, of which Double is an instance.
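For example, a minimal sketch (the helper name is made up for illustration):

intToDouble :: Int -> Double
intToDouble = fromIntegral   -- specializes fromIntegral to Int -> Double

main :: IO ()
main = print (intToDouble 3 / 2)   -- prints 1.5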

How do you multiply float and Int in Haskell?

In Haskell you can't multiply an Int by a Float because the * operator has type Num a => a -> a -> a: it takes two values of the same numeric type and gives you a result of that type. You can multiply an Int by an Int to get an Int, or a Float by a Float to get a Float.
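To mix the two, convert the Int first; a small sketch (the function name is invented for illustration):

scale :: Int -> Float -> Float
scale n x = fromIntegral n * x   -- convert n so both operands are Float

main :: IO ()
main = print (scale 3 2.5)       -- prints 7.5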

What does fromIntegral do in Haskell?

The workhorse for converting from integral types is fromIntegral, which will convert from any Integral type into any Num type (which includes Int, Integer, Rational, and Double): fromIntegral :: (Num b, Integral a) => a -> b.
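A short sketch showing the same conversion targeting different Num instances; the type annotations select the result type:

n :: Int
n = 7

main :: IO ()
main = do
  print (fromIntegral n :: Integer)    -- 7
  print (fromIntegral n :: Rational)   -- 7 % 1
  print (fromIntegral n :: Double)     -- 7.0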

What is the difference between Int and Integer in Haskell?

What's the difference between Integer and Int? Integer can represent arbitrarily large integers, up to using all of the storage on your machine. Int can only represent integers in a finite range.
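A quick illustration (the Int bound shown is for a typical 64-bit platform):

main :: IO ()
main = do
  print (maxBound :: Int)      -- 9223372036854775807 on 64-bit machines
  print (2 ^ 100 :: Integer)   -- 1267650600228229401496703205376, no overflow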


1 Answer

The operator (+) has the type Num a => a -> a -> a, which means that it works on any member of the Num typeclass, but both arguments must be of the same type.
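The standard fix is to convert one operand explicitly with fromIntegral; a minimal sketch, assuming a is an Integer and b a Double as in the question:

a :: Integer
a = 1

b :: Double
b = 2.0

main :: IO ()
main = print (fromIntegral a + b)  -- both arguments to (+) are now Double; prints 3.0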

The problem here is with GHCi and the order in which it establishes types, not with Haskell itself. If you were to put either of your examples in a file (using do for the let expressions), it would compile and run fine, because GHC would use the whole function as the context for determining the types of the literals 1 and 2.0.
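For example, this complete file compiles and runs, because the use of a + b unifies both literals to one type, which then defaults to Double:

main :: IO ()
main = do
  let a = 1      -- typed together with b, via their use in a + b
      b = 2.0
  print (a + b)  -- prints 3.0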

All that's happening in the first case is that GHCi is guessing the types of the numbers you're entering. The most precise type is Double, so it just assumes the other one was supposed to be a Double and executes the computation. However, when you use the let expressions, it only has one number to work from each time, so it decides 1 is an Integer and 2.0 is a Double.
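You can reproduce that per-binding behavior with explicit annotations, which pin the types the way the defaulting did here (as an aside: newer GHCi versions disable the monomorphism restriction by default, so a bare let a = 1 may stay polymorphic and the error may only appear with the annotations):

let a = 1 :: Integer
let b = 2.0 :: Double
a + b   -- fails with the same Integer/Double mismatch shown above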

EDIT: GHCi isn't really "guessing"; it's using the very specific type-defaulting rules defined in the Haskell Report. You can read more about them in the Report's section on ambiguous types and defaults.
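Those defaults can also be changed per module with a default declaration; a minimal sketch (the module name is arbitrary):

module Main where

default (Double)  -- ambiguous numeric constraints now default to Double instead of Integer

main :: IO ()
main = print 1    -- prints 1.0, since the literal defaults to Double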

Answered Nov 12 '22 by Jeff Burka