What's the meaning of "you do computations in Haskell by declaring what something is instead of declaring how you get it"?

Tags:

haskell

Recently I have been trying to learn a functional programming language, and I chose Haskell.

Now I am reading Learn You a Haskell, and here is a description that seems to capture Haskell's philosophy, though I am not sure I understand it exactly: you do computations in Haskell by declaring what something is instead of declaring how you get it.

Suppose I want to get the sum of a list.

In the declaring how you get it way: get the total sum by adding all the elements, so the code will look like this (not Haskell, but Python):

l = [1, 2, 3]    # some list of numbers to sum
sum = 0          # running total (shadows Python's built-in sum, but fine for the example)
for i in l:
    sum += i
print(sum)

In the what something is way: the total sum is the first element plus the sum of the rest of the elements, so the code will look like this:

sum' :: (Num a) => [a] -> a
sum' [] = 0
sum' (x:xs) = x + sum' xs

But I am not sure whether I really get it. Can someone help? Thanks.

asked May 23 '14 by WKPlus


3 Answers

Imperative and functional are two different ways to approach problem solving.

Imperative (Python) gives you actions which you need to use to get what you want. For example, you may tell the computer "knead the dough. Then put it in the oven. Turn the oven on. Bake for 10 minutes.".

Functional (Haskell, Clojure) gives you solutions. You'd be more likely to tell the computer "I have flour, eggs, and water. I need bread". The computer happens to know dough, but it doesn't know bread, so you tell it "bread is dough that has been baked". The computer, knowing what baking is, knows now how to make bread. You sit at the table for 10 minutes while the computer does the work for you. Then you enjoy delicious bread fresh from the oven.

You can see a similar difference in how engineers and mathematicians work. The engineer is imperative, looking at the problem and giving workers a blueprint to solve it. The mathematician defines the problem (solve for x) and the solution (x = -----) and may use any number of tried and true solutions to smaller problems (2x - 1 = ----- => 2x = ----- + 1) until he finally finds the desired solution.

It is no coincidence that functional languages are largely used by people in universities; not because they are difficult to learn, but because there are not many mathematical thinkers outside of universities. In your quotation, they tried to define this difference in thought process by cleverly using how and what. I personally believe that everybody understands words by turning them into things they already understand, so I'd imagine my bread metaphor should clarify the difference for you.

EDIT: It is worth noting that when you imperatively command the computer, you don't know if you'll have bread at the end (maybe you cooked it too long and it's burnt, or you didn't add enough flour). This is not a problem in functional languages where you know exactly what each solution gives you. There is no need for trial and error in a functional language because everything you do will be correct (though not always useful, like accidentally solving for t instead of x).

answered Oct 16 '22 by Aarowaim


The missing part of the explanations is the following.

The imperative example shows you step by step how to compute the sum. At no stage can you convince yourself that it is indeed the sum of the elements of a list. For example, there is no way of knowing why sum=0 at first, whether it should be 0 at all, whether you loop through the right indices, or what sum += i gives you.

sum=0 -- why? it may become clear if you consider what happens in the loop,
      -- but not on its own
for i in l:
  sum += i -- what do we get? it will become clear only after the loop ends
      -- at no step of the iteration do you have *the sum of the list*,
      -- so the step on its own is not meaningful

The declarative example is very different in this respect. In this particular case you start with declaring that the sum of an empty list is 0. This is already part of the answer of what the sum is. Then you add a statement about non-empty lists - a sum for a non-empty list is the sum of the tail with the head element added to it. This is the declaration of what the sum is. You can demonstrate inductively that it finds the solution for any list.
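
For instance, the two declarations license a purely equational, inductive unfolding. Here is one done by hand on the example list [1,2,3] (the concrete numbers are only an illustration):

-- unfolding the question's sum' step by step
--
--   sum' [1,2,3]
-- = 1 + sum' [2,3]            -- second equation, x = 1, xs = [2,3]
-- = 1 + (2 + sum' [3])        -- second equation again
-- = 1 + (2 + (3 + sum' []))   -- and once more
-- = 1 + (2 + (3 + 0))         -- first equation: sum' [] = 0
-- = 6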

Note this proof part. In this case it is obvious. In more complex algorithms it is not obvious, so the proof of correctness is a substantial part - and remember that the imperative case only makes sense as a whole.

Another way to compute the sum, where, hopefully, declarativeness and provability become clearer:

-- note: this shadows Prelude's sum, so in a module you would need
-- "import Prelude hiding (sum)" or a different name
sum :: Num a => [a] -> a
sum [] = 0  -- the sum of the empty list is 0
sum [x] = x -- the sum of a list with 1 element is that element
sum xs = sum $ p xs where -- the sum of any other list is
                          -- the sum of the list reduced with p
  p (x:y:xs) = x+y : p xs -- p reduces the list by replacing a pair of elements
                          -- with their sum
  p xs = xs -- if there was no pair of elements, leave the list as is

Here we can convince ourselves that:

  1. p makes the list ever shorter, so the computation of the sum will terminate;

  2. p produces a list of sums, so by summing ever shorter lists we get a list of just one element;

  3. because (+) is associative, the value produced by repeatedly applying p is the same as the sum of all elements in the original list;

  4. we can demonstrate that the number of applications of (+) is smaller than in the straightforward implementation.

In other words, the order of adding the elements doesn't matter, so we can sum the elements ([a,b,c,d,e]) in pairs first (a+b, c+d), which gives us a shorter list [a+b,c+d,e], whose sum is the same as the sum of the original list, and which now can be reduced in the same way: [(a+b)+(c+d),e], then [((a+b)+(c+d))+e].
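
To make that concrete, here is a hand trace of the definition above on the example list [1,2,3,4,5] (the numbers are only an illustration):

-- tracing the pairwise sum
--
--   sum [1,2,3,4,5]
-- = sum (p [1,2,3,4,5])
-- = sum [1+2, 3+4, 5]   -- p pairs elements up, leaving the odd one alone
-- = sum [3, 7, 5]
-- = sum (p [3,7,5])
-- = sum [3+7, 5]
-- = sum [10, 5]
-- = sum (p [10,5])
-- = sum [15]
-- = 15                  -- the one-element case: sum [x] = x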

answered Oct 16 '22 by Sassa NF


Robert Harper claims in his blog that "declarative" has no meaning. I suppose he is talking about a clear definition there, which I usually think of as narrower than meaning, but the post is still worth checking out and hints that you might not find as clear an answer as you would wish.

Still, everybody talks about "declarative", and it feels like when we do, we are usually talking about the same thing. That is: give a number of people two different APIs/languages/programs and ask them which is the most declarative one, and they will usually pick the same.

The confusing part to me at first was that your declarative sum

sum' [] = 0
sum' (x:xs) = x + sum' xs

can also be seen as an instruction on how to get the result. It's just a different one. It's also worth noting that the function sum in the Prelude isn't actually defined like that, since that particular way of calculating the sum is inefficient. So clearly something is fishy.
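
For comparison, the usual efficient shape is a strict left fold over an accumulator. This is only a sketch, not the actual Prelude source (which differs between GHC versions), and the name sumAcc is made up for this example:

import Data.List (foldl')

-- a sketch of the accumulator style; not the real Prelude definition
sumAcc :: Num a => [a] -> a
sumAcc = foldl' (+) 0   -- fold the list strictly, threading a running total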

So, the "what, not how" explanation seems unsatisfactory to me. I think of it instead as declarative being a "how" which in addition has some nice properties. My current intuition about what those properties are is something similar to:

  • A thing is more declarative if it doesn't mutate any state.

  • A thing is more declarative if you can do mathy transformations on it and the meaning of the thing sort of remains intact. So given your declarative sum again, if we know that + is commutative, there is some justification for thinking that writing it as sum' xs + x should yield the same result (there is a small sketch of this after the list).

  • A declarative thing can be decomposed into smaller things and still have some meaning. Like x and sum' xs still have the same meaning when taken separately, but trying to do the same with the sum += i of the Python version doesn't work as well.

  • A thing is more declarative if it's independent of the flow of time. For example, CSS doesn't describe the styling of a web page at page load; it describes the styling of the web page at any time, even if the page changes.

  • A thing is more declarative if you don't have to think about program flow.
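
As a small sketch of the "mathy transformations" point above (the names sumL and sumR are made up for illustration, and the equality only holds when the type's (+) really is commutative):

-- the question's declaration, and the flipped variant
sumL, sumR :: Num a => [a] -> a
sumL []     = 0
sumL (x:xs) = x + sumL xs

sumR []     = 0
sumR (x:xs) = sumR xs + x   -- operands of (+) flipped
-- for a commutative (+) (e.g. on Integer), sumL and sumR agree on every finite list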

Other people might have different intuitions, or even a definition that I'm not aware of, but hopefully these are somewhat helpful regardless.

answered Oct 16 '22 by monocell