
Can a pure function have free variables?

For example, a referentially transparent function with no free variables:

g op x y = x `op` y

And now a function with the free (from the point of view of f) variables op and x:

x = 1
op = (+)
f y = x `op` y

f is also referentially transparent. But is it a pure function?

If it's not a pure function, what is the name for a function that is referentially transparent, but makes use of one or more variables bound in an enclosing scope?


Motivation for this question:

It's not clear to me from Wikipedia's article:

The result value need not depend on all (or any) of the argument values. However, it must depend on nothing other than the argument values.

(emphasis mine)

nor from Google searches whether a pure function can depend on free (in the sense of being bound in an enclosing scope, and not being bound in the scope of the function) variables.

Also, this book says:

If functions without free variables are pure, are closures impure?

The function function (y) { return x } is interesting. It contains a free variable, x. A free variable is one that is not bound within the function. Up to now, we’ve only seen one way to “bind” a variable, namely by passing in an argument with the same name. Since the function function (y) { return x } doesn’t have an argument named x, the variable x isn’t bound in this function, which makes it “free.”

Now that we know that variables used in a function are either bound or free, we can bifurcate functions into those with free variables and those without:

  • Functions containing no free variables are called pure functions.
  • Functions containing one or more free variables are called closures.

So what is the definition of a "pure function"?

asked Apr 07 '14 by Matt Fenwick


2 Answers

To the best of my understanding, "purity" is defined at the level of semantics, while "referentially transparent" can take meaning both syntactically and embedded in lambda calculus substitution rules. Defining either one requires a robust notion of equality of programs, which is itself challenging. Finally, it's important to note that the idea of a free variable is entirely syntactic—once you've gone to a value domain you can no longer have expressions with free variables—they must be bound, else that's a syntax error.

But let's dive in and see if this becomes more clear.


Quinian Referential Transparency

We can define referential transparency very broadly as a property of a syntactic context. Per the original definition, this would be built from a sentence like

New York is an American city.

of which we've poked a hole

_ is an American city.

Such a holey-sentence, a "context", is said to be referentially transparent if, given two sentence fragments which both "refer" to the same thing, filling the context with either of those two does not change its meaning.

To be clear, two fragments with the same reference that we could pick are "New York" and "The Big Apple". Injecting those fragments we write

New York is an American city.
The Big Apple is an American city.

suggesting that

_ is an American city.

is referentially transparent. To demonstrate the quintessential counterexample, we might write

"The Big Apple" is an apple-themed epithet referring to New York.

and consider the context

"_" is an apple-themed epithet referring to New York.

and now when we inject the two referentially identical phrases we get one true and one false sentence

"The Big Apple" is an apple-themed epithet referring to New York.
"New York" is an apple-themed epithet referring to New York.

In other words, quotations break referential transparency. We can see how this occurs by causing the sentence to refer to a syntactic construct instead of purely the meaning of that construct. This notion will return later.
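This distinction can be sketched in Haskell (all names here are made up for illustration, not drawn from any library): a context that consults only the referent stays transparent, while one that inspects the name itself, like a quotation, does not.

```haskell
data City = NYC deriving (Eq, Show)

-- hypothetical "reference" function: both names denote the same city
refer :: String -> City
refer "New York"      = NYC
refer "The Big Apple" = NYC
refer name            = error ("unknown name: " ++ name)

-- "_ is an American city": depends only on the referent, so this
-- context is referentially transparent
isAmericanCity :: String -> Bool
isAmericanCity name = refer name == NYC

-- '"_" is an apple-themed epithet ...': depends on the name itself
-- (the quotation), so this context is opaque
isAppleEpithet :: String -> Bool
isAppleEpithet name = name == "The Big Apple"
```

Both names pass the transparent context, but only one passes the quoting context.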


Syntax v Semantics

There's something confusing going on here: the definition of referential transparency above applies directly to English sentences, from which we build contexts by literally stripping words out. While we can do the same in a programming language and ask whether such a context is referentially transparent, we might also recognize that this idea of "substitution" is critical to the very notion of a computer language.

So, let's be clear: there are two kinds of referential transparency we can consider over lambda calculus—the syntactic one and the semantic one. The syntactic one requires we define "contexts" as holes in the literal words written in a programming language. That lets us consider holes like

let x = 3 in _

and fill it in with things like "x". We'll leave the analysis of that replacement for later. At the semantic level we use lambda terms to denote contexts

\x -> x + 3         --    similar to the context "_ + 3"

and are restricted to filling in the hole not with syntax fragments but instead only valid semantic values, the action of that being performed by application

(\x -> x + 3) 5 
==> 
5 + 3
==> 
8

So, when someone refers to referential transparency in Haskell it's important to figure out what kind of referential transparency they're referring to.


Which kind is being referred to in this question? Since it's about the notion of an expression containing a free variable, I'm going to suggest that it's syntactic. There are two major thrusts to my reasoning here. Firstly, converting a syntax to a semantics requires that the syntax be valid. In the case of Haskell this means both syntactic validity and a successful type check. However, we'll note that a program fragment like

x + 3

is actually a syntax error, since x is simply unknown and unbound, leaving us unable to consider the semantics of it as a Haskell program. Secondly, the very notion of a variable that can be let-bound (contrast "variable" as it refers to a "slot", such as an IORef) is entirely a syntactic construct—there's no way to even talk about them from inside the semantics of a Haskell program.

So let's refine the question to be:

Can an expression containing free variables be (syntactically) referentially transparent?

and the answer is, uninterestingly, no. Referential transparency is a property of "contexts", not expressions. So let's explore the notion of free variables in contexts instead.


Free variable contexts

How can a context meaningfully have a free variable? It could be beside the hole

E1 ... x ... _ ... E2

and so long as we cannot insert something into that syntactic hole which "reaches over" and affects x syntactically then we're fine. So, for instance, if we fill that hole with something like

E1 ... x ... let x = 3 in E ... E2

then we haven't "captured" the x and thus can perhaps consider that syntactic hole to be referentially transparent. However, we're being nice to our syntax. Let's consider a more dangerous example

do x <- foo
   let x = 3
   _
   return x

Now we see that the hole we've provided in some sense has dominion over the later phrase "return x". In fact, if we inject a fragment like "let x = 4" then it indeed changes the meaning of the whole. In that sense, the syntax here is not referentially transparent.
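A runnable sketch of that capture (the names holeEmpty and holeFilled are invented for illustration): injecting a shadowing binding into the hole changes what the final x refers to.

```haskell
-- the context with the hole left empty: the final x refers to the
-- binding x = 3
holeEmpty :: Int
holeEmpty =
  let x = 3
  in  x

-- the same context with "let x = 4" injected into the hole: the
-- injected binding shadows the outer x and changes the meaning
holeFilled :: Int
holeFilled =
  let x = 3
  in  let x = 4
      in  x
```

GHC accepts both (with a name-shadowing warning for the second), and they evaluate to different results even though we only "filled the hole".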

Another interesting interaction between referential transparency and free variables is the notion of an assigning context like

let x = 3 in _

where, from an outside perspective, both phrases "x" and "y" refer to the same thing, some named variable, but

let x = 3 in x   ==/==  let x = 3 in y

Progression from thorniness around equality and context

Now, hopefully the previous section explained a few ways for referential transparency to break under various kinds of syntactic contexts. It's worth asking harder questions about what kinds of contexts are valid and what kinds of expressions are equivalent. For instance, we might desugar our do notation in the previous example and notice that we weren't working with a genuine context, but instead with a sort of higher-order context

foo >>= \x -> (let x = 3 in _ >> return x)

Is this a valid notion of context? It depends a lot on what kind of meaning we're giving the program. The notion of desugaring the syntax already implies that the syntax must be well-defined enough to allow for such desugaring.

As a general rule, we must be very careful with defining both contexts and notions of equality. Further, the more meaning we demand the fragments of our language to take on the greater the ways they can be equal and the fewer the valid contexts we can build.

Ultimately, this leads us all the way to what I called "semantic referential transparency" earlier where we can only substitute proper values into a proper, closed lambda expression and we take the resulting equality to be "equality as programs".

What this ends up meaning is that as we impute more and more meaning on our language, as we begin to accept fewer and fewer things as valid, we get stronger and stronger guarantees about referential transparency.


Purity

And so this finally leads to the notion of a pure function. My understanding here is (even) less complete, but it's worth noting that purity, as a concept, does not much exist until we've moved to a very rich semantic space—that of Haskell semantics as a category over lifted Complete Partial Orders.

If that doesn't make much sense, then just imagine purity is a concept that only exists when talking about Haskell values as functions and equality of programs. In particular, we examine the collection of Haskell functions

trivial :: a -> ()
trivial x = x `seq` ()

where we have a trivial function for every choice of a. We'll notate the specific choice using an underscore

trivial_Int :: Int -> ()
trivial_Int x = x `seq` ()

Now we can define a (very strictly) pure function to be a function f :: a -> b such that

trivial_b . f = trivial_a

In other words, if we throw out the result of computing our function, the b, then we may as well have never computed it in the first place.
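As an executable sketch of this equation (using monomorphic copies of trivial, named here for illustration): for an honest total function like even, discarding the output is indistinguishable from discarding the input.

```haskell
trivialInt :: Int -> ()
trivialInt x = x `seq` ()

trivialBool :: Bool -> ()
trivialBool b = b `seq` ()

-- even :: Int -> Bool is pure in this strict sense: throwing away
-- its Bool result demands no more than demanding the Int argument
lhs :: Int -> ()
lhs = trivialBool . even

rhs :: Int -> ()
rhs = trivialInt
```

We can't compare the two functions wholesale, but we can spot-check that they agree pointwise.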

Again, there's no notion of purity without having Haskell values and no notion of Haskell values when your expressions contain free variables (since it's a syntax error).


So what's the answer?

Ultimately, the answer is that you can't talk about purity around free variables, and you can break referential transparency in lots of ways whenever you are talking about syntax. At some point, as you convert your syntactic representation to its semantic denotation, you must forget the notion and names of free variables in order to have them represent the reduction semantics of lambda terms, and by this point we've begun to have referential transparency.

Finally, purity is something even more stringent than referential transparency having to do with even the reduction characteristics of your (referentially transparent) lambda terms.

By the definition of purity given above, most of Haskell isn't pure itself as Haskell may represent non-termination. Many feel that this is a better definition of purity, however, as non-termination can be considered a side effect of computation instead of a meaningful resultant value.
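We can observe this failure with bottom. True non-termination is unobservable, so this sketch (names invented) uses undefined as a stand-in, whose bottom surfaces as a catchable exception:

```haskell
import Control.Exception (SomeException, evaluate, try)

-- a monomorphic copy of the trivial_Int above
trivialInt :: Int -> ()
trivialInt x = x `seq` ()

-- every call is bottom, standing in for a non-terminating function
looping :: Int -> Int
looping _ = undefined

-- True iff forcing (trivialInt . looping) 1 hits bottom, i.e. the
-- left-hand side of the purity equation disagrees with trivialInt 1
lhsIsBottom :: IO Bool
lhsIsBottom = do
  r <- try (evaluate ((trivialInt . looping) 1))
         :: IO (Either SomeException ())
  return (either (const True) (const False) r)
```

Since trivialInt 1 is plain () while (trivialInt . looping) 1 is bottom, looping fails this strict notion of purity.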

answered Sep 30 '22 by J. Abrahamson


The Wikipedia definition is incomplete, insofar as a pure function may use constants to compute its answer.

When we look at

increment n = 1+n

this is obvious. Perhaps it was not mentioned because it is that obvious.

Now the trick in Haskell is that not only are top-level values and functions constants, but, inside a closure, so are the variables(!) closed over:

add x = (\y -> x+y)

Here x stands for the value we applied add to: we call it a variable not because it could change within the right-hand side of add, but because it can be different each time we apply add. And yet, from the point of view of the lambda, x is a constant.

It follows that free variables always name constant values at the point where they are used and hence do not impact purity.
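A small sketch of that point, building on the add above (the names addFive and addTen are invented): each application of add fixes a different constant x, and every resulting closure is itself pure.

```haskell
add :: Int -> Int -> Int
add x = \y -> x + y

-- x is free in the inner lambda, but within each closure it is a
-- constant: addFive always sees x = 5, addTen always sees x = 10
addFive, addTen :: Int -> Int
addFive = add 5
addTen  = add 10
```

Calling addFive with the same argument always yields the same result; the free variable x never changes underneath it.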

answered Sep 30 '22 by Ingo