I am a bit confused on the concept of denotational semantics. As I understand, denotational semantics are supposed to describe how functions and expressions work in a particular programming language. What exactly is the proper form used to describe these functions and how they work? What exactly is the "domain", and how does one build a mapping function?
As an example, what would be the mapping function for "do X while Y"?
I have been reading lots of material online but it is a tough to understand. Would these descriptions be similar to a context free grammar?
Please let me know, thank you!
Think of denotation as a mapping from syntax to "meaning". You'll probably see it written in double brackets, so that you would read [[3]] = 3 as "the denotation of the syntax [the number 3] is the number 3".
A simple example is arithmetic. Usually you have a denotation like

[[x + y]] = [[x]] + [[y]]

where the + on the left is a syntactic plus and the + on the right is the arithmetic plus. To make this clearer, we might switch to a lispy syntax:

[[(+ x y)]] = [[x]] + [[y]]
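To make the mapping concrete, here is a minimal sketch in Haskell (the `Expr` and `denote` names are mine, purely for illustration): a denotation function for a toy arithmetic syntax, with Haskell's own `Int` playing the role of the "mathy" domain.

```haskell
-- A toy syntax for arithmetic expressions (illustrative names).
data Expr = Lit Int
          | Add Expr Expr

-- The denotation function: maps syntax into the semantic domain,
-- here played by Haskell's Int.
denote :: Expr -> Int
denote (Lit n)   = n                     -- [[n]]       = n
denote (Add x y) = denote x + denote y   -- [[(+ x y)]] = [[x]] + [[y]]
```

So `denote (Add (Lit 1) (Lit 2))` evaluates to 3: the syntactic `Add` on the left-hand side becomes the arithmetic `+` on the right.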
Now a hugely important question to ask is: what is the range (codomain) of this mapping? So far I've been assuming it's sufficient to see it as "some mathy domain where numbers and addition live", but that turns out to be insufficient, and your example breaks it quickly:

[[do X while True]] = ???

since we don't necessarily have a mathy domain which includes a concept of non-termination.
In Haskell, this is solved by calling the mathematical domain a "lifted" domain or a CPO (complete partial order) domain, which essentially adds non-termination directly. For instance, if your unlifted domain is the integers I, then the lifted domain is ⊥ + I, where ⊥ is called "bottom" and means non-termination.
That means we might write (in Haskell syntax)
[[let omega = 1 + omega in omega]] = ⊥
Boom. We have meaning—the meaning of an infinite loop is... nothing at all!
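As a sketch, you can model the lifted domain ⊥ + I as an explicit Haskell sum type (the `Lifted` and `addL` names are made up here for illustration):

```haskell
-- The lifted domain (⊥ + I) as an explicit datatype:
-- either non-termination (Bottom) or an integer (Value).
data Lifted = Bottom | Value Integer
  deriving (Eq, Show)

-- Addition on the lifted domain, strict in both arguments:
-- if either argument is bottom, the whole result is bottom.
addL :: Lifted -> Lifted -> Lifted
addL (Value x) (Value y) = Value (x + y)
addL _         _         = Bottom
```

In Haskell itself you never write this type, because every Haskell type already contains ⊥ implicitly; the explicit version just makes the structure visible.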
The trick with lifted domains in Haskell is that because Haskell is lazy (non-strict), it's possible to have interesting interactions between datatypes and ⊥. For instance, if we have

data IntList = Cons Int IntList | Nil

then the lifted domain over IntList includes ⊥ directly (a complete infinite loop) as well as things like Cons ⊥ ⊥, which are still not fully resolved but provide more information than plain old ⊥.
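You can observe these partially-defined values in ordinary Haskell, with `undefined` playing the role of ⊥ (a sketch; `3 : undefined` is just Cons 3 ⊥ in built-in list syntax):

```haskell
-- Cons 3 ⊥ written with Haskell's built-in lists:
-- the head is defined, the tail is bottom.
partial :: [Int]
partial = 3 : undefined

-- Laziness lets us consume the defined prefix without
-- ever forcing the bottom tail.
headOfPartial :: Int
headOfPartial = head partial
```

`headOfPartial` evaluates to 3, while something like `length partial` would force the bottom tail and diverge (here: throw).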
And I write "more information" deliberately. CPOs form a partial order (that's the PO) of "definedness". ⊥ is maximally undefined, and so it's <= anything else in the CPO. Then you get stuff like Cons ⊥ ⊥ <= Cons 3 ⊥, which forms a chain in your partial order. You often then say that if x <= y, then "y contains more information than x" or "y is more defined than x".
One of the biggest points of this, to me, is that by defining this CPO structure in our mathy domain of denotation we can talk really precisely about the differences between strict and non-strict evaluation. In a strict language (or really, in strict domains, which your language may or may not have some of), your CPOs are all "flat": you either have fully-defined results or ⊥ and nothing else. Laziness occurs exactly when your CPO isn't flat.
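A quick sketch of that difference (the names here are mine): a non-strict function can map ⊥ to a fully defined result by never inspecting its argument, while `seq` collapses it to the flat, strict behavior.

```haskell
-- Non-strict: never inspects its argument, so [[ignore ⊥]] = 42.
ignore :: a -> Int
ignore _ = 42

-- Strict via seq: forces the argument to weak head normal form
-- first, so ⊥ maps to ⊥ (strictIgnore undefined diverges/throws).
strictIgnore :: a -> Int
strictIgnore x = x `seq` 42
```

`ignore undefined` happily returns 42; `strictIgnore undefined` does not.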
Another important point is the notion that "you can't pattern match on bottom"... which, if we think about bottom as an infinite loop (though with this new abstract model it doesn't have to mean that: it could be a segfault, for instance), is nothing more than a different way of expressing the halting problem. Its consequence is that all sensible functions must be "monotonic": if x <= y, then f x <= f y. If you spend some time with that notion, you'll see that it outlaws functions whose non-bottom behavior differs depending on whether their arguments are bottom or not. For instance, the halting oracle is non-monotonic:
halting (⊥) = False -- we can't pattern match on bottom!
halting _ = True
But the "broken halting oracle" is
hahahalting (⊥) = ⊥ -- we're still pattern matching,
-- but at least the behavior is right
hahahalting _ = True
which we write using seq
hahahalting x = x `seq` True -- valid Haskell
This also brings into sharp relief the dangers of non-monotonic functions, like Haskell's spoon. We can write them by taking advantage of denotationally unsound exception catching... but they can cause very wonky behavior if we're not careful.
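A spoon-like check can be sketched with base's exception machinery (`seemsDefined` is my name, not spoon's actual API, and this only detects bottoms that surface as exceptions, like `error`/`undefined`; a genuine infinite loop would still hang):

```haskell
import Control.Exception (ErrorCall, evaluate, try)
import Data.Either (isRight)
import Data.Functor (void)

-- A non-monotonic, spoon-flavoured test (sketch): True if forcing
-- the value to weak head normal form succeeds, False if it throws.
-- No monotonic function could return a defined False on ⊥ here,
-- which is exactly why this has to live behind IO.
seemsDefined :: a -> IO Bool
seemsDefined x = do
  r <- try (void (evaluate x))
  pure (isRight (r :: Either ErrorCall ()))
```

`seemsDefined undefined` gives False while `seemsDefined 1` gives True, which is precisely the behavior the broken halting oracle above is forbidden from having in pure code.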
There are plenty more things you can learn from denotational semantics, so I'll offer Edward Z. Yang's notes on denotational semantics.