I'd like to know what kind of real life problems can be tackled by "duality methods" in functional programming. More precisely, I'd like to know whether someone did actually use a duality method like the ones I present below, or whether there are other interesting examples. I'd be particularly interested in existing implementations, probably in Haskell.
[Since the majority of people who will be interested in this question are likely to know Haskell, let me please add the Haskell tag, even if the question is quite language independent]
Let me explain what I mean by duality (for lack of a better name) through a few examples. The first one is the real numbers. Assume the existence of an Integer and a Rational type, and define a real number as a function (pardon my Haskell, I'm no hardcore Haskeller)

type Real = Integer -> Rational

such that whenever x :: Real denotes the real number x, x n yields a rational number which is within 2^(-n) of x.
Now one can write

(+) :: Real -> Real -> Real
(+) x y n = x (n + 1) + y (n + 1)

or likewise for other arithmetic operations. Given a continuous real function f, one can also compute f x as soon as one can compute a modulus of continuity for f.
This has the advantage that one can write natural-looking code and, at the end, get a result at the desired level of precision automatically. However, it is no longer possible to compare real numbers for equality. The only kind of comparison possible between x and y is x < y + eps.
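As a sketch of how this could look in practice (addR and lessApprox are names I made up, and I rename the type to Real' to avoid clashing with Prelude's Real class):

```haskell
-- A real number is an approximation procedure: x n is within 2^(-n) of x.
type Real' = Integer -> Rational

-- Addition: ask each argument for one extra bit of precision, so the
-- errors add up to 2^(-(n+1)) + 2^(-(n+1)) = 2^(-n), as required.
addR :: Real' -> Real' -> Real'
addR x y n = x (n + 1) + y (n + 1)

-- The only comparison we can make: is x below y up to tolerance 2^(-n)?
lessApprox :: Real' -> Real' -> Integer -> Bool
lessApprox x y n = x n < y n + 2 ^^ negate n
```

For a number like 1/3 that happens to be rational, the approximation procedure can simply be a constant function, and addR then produces exact answers at every precision.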
Another example of duality is this question on probability measures, which triggered the present question in my head. Let us write
type Measure a = (a -> Double) -> Double
and define measures as integration procedures against functions. In the linked question, I show how natural it is in this framework to express concepts like convolution or pushforward which are much more difficult (computationally, but also theoretically) to define at the level of probability densities.
It allows one to compose building blocks from probability theory, and in principle allows one to build complex Monte Carlo procedures, and even allows one to work with explicit probability densities (at the expense of numerical integration). I'd be especially interested in any attempt at a real world library on this topic.
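A minimal sketch of a few such building blocks under this Measure type (dirac, pushforward and weighted are illustrative names, not an established API):

```haskell
-- A measure is an integration procedure against test functions.
type Measure a = (a -> Double) -> Double

-- Point mass at x: integrating f against it just evaluates f at x.
dirac :: a -> Measure a
dirac x f = f x

-- Pushforward along g: integrate f . g against the original measure.
pushforward :: (a -> b) -> Measure a -> Measure b
pushforward g mu f = mu (f . g)

-- A finite weighted mixture of points (a crude empirical measure).
weighted :: [(Double, a)] -> Measure a
weighted pts f = sum [ w * f x | (w, x) <- pts ]
```

Integrating the identity function against a measure then computes its mean, and pushforward composes for free, exactly as the linked question suggests.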
Another example that I have in mind, but did not quite formalize yet is the notion of vector fields (from differential geometry), that one can express as differentiation operators. For this, one needs a suitable type of "smooth real valued functions", and then a vector field is like this:
type VectorField = SmoothFunction -> SmoothFunction

such that v (f * g) = f * (v g) + g * (v f).
Of course, describing a sheaf of regular functions in, say, Haskell would not be easy. But by doing so, we could express all the stuff from differential geometry in a totally coordinate-independent way, and plug in coordinates at the very end.
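On the real line, one can at least sketch the coordinate vector field d/dx using dual numbers for forward-mode differentiation, so that the Leibniz rule holds by construction. This is only a one-dimensional toy, not a sheaf of regular functions; the names are mine:

```haskell
-- Dual numbers: a value together with a tangent (derivative) component.
data D = D { value :: Double, tangent :: Double }

instance Num D where
  D a a' + D b b' = D (a + b) (a' + b')
  D a a' - D b b' = D (a - b) (a' - b')
  -- Product rule baked into multiplication: (fg)' = f'g + fg'.
  D a a' * D b b' = D (a * b) (a' * b + a * b')
  fromInteger n   = D (fromInteger n) 0
  abs    (D a a') = D (abs a) (a' * signum a)  -- away from 0
  signum (D a _)  = D (signum a) 0

-- The vector field d/dx as an operator on "smooth functions" D -> D:
-- feed in a unit tangent and read off the derivative at x.
ddx :: (D -> D) -> Double -> Double
ddx f x = tangent (f (D x 1))
```

Because multiplication of dual numbers encodes the product rule, ddx satisfies v (f * g) = f * (v g) + g * (v f) exactly, not just up to numerical error.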
There are other examples, e.g. Taylor series, which have been discussed on Sigfpe's blog (I can't find this particular post though), where an analytic function has the following type:

type AnalyticFunction = Double -> Integer -> [Double]

and where f x n returns the first n partial sums of the Taylor expansion of f around x. This allows us to seamlessly write all kinds of arithmetic on analytic functions, including stuff like f / g where f and g can both vanish at a point (along with some of their derivatives), or even f^(-1) (provided f' does not vanish). At the end, only the necessary terms of the intermediate series are computed to yield the value of a given expression.
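A related, more concrete encoding (in the spirit of McIlroy's power-series pearl) represents a series by its lazy list of coefficients. A hedged sketch of addition and multiplication, with names of my own choosing:

```haskell
-- A power series as its (possibly infinite) list of coefficients a0, a1, ...
type Series = [Double]

-- Coefficient-wise addition; a missing tail stands for zeros.
addS :: Series -> Series -> Series
addS (a:as) (b:bs) = (a + b) : addS as bs
addS as     []     = as
addS []     bs     = bs

-- Cauchy product: (a + x*as) * bs = a*bs + x*(as*bs), computed lazily.
mulS :: Series -> Series -> Series
mulS []     _           = []
mulS _      []          = []
mulS (a:as) bbs@(b:bs)  = a * b : addS (map (a *) bs) (mulS as bbs)
```

Laziness is what makes "only the necessary terms are computed" literal here: taking n coefficients of a product forces only the prefixes needed to produce them.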
The common feature of your examples is the representation of a (mathematical) object by functions. This is common in functional languages, but not as practical as in mathematics, because functions in programs are used extensionally (you cannot inspect their definitions, only observe their action on arguments), and only with computable operations (you can only observe a finite number of arguments).
In mathematics, you don't bother with such stuff; for example, you very often say "if f is analytic, then let (a_n) be its sequence of coefficients, and ...". In a computer language, if you start with a function of type Double -> Integer -> [Double], it will be painful to convert it to a representation where you can easily recover the coefficients. In programming languages, functions really are black boxes.
For this reason, programmers often try to use explicit data representations instead of functional black boxes. You can easily obtain a function from a data representation (it's a kind of evaluation, or interpretation), while the other way around can be more difficult, less efficient, etc. See Conal Elliott's “Everything is a function” in Haskell?.
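To illustrate the easy direction, here is evaluation of a polynomial stored as an explicit coefficient list; going the other way, from an opaque function back to coefficients, is the painful direction (evalPoly is an illustrative name):

```haskell
-- From data to function: evaluate a polynomial given by its coefficients
-- [a0, a1, a2, ...] at a point x, using Horner's scheme.
evalPoly :: [Double] -> Double -> Double
evalPoly cs x = foldr (\c acc -> c + x * acc) 0 cs
```

The coefficient list supports inspection (degree, derivatives, exact comparison) that the resulting Double -> Double function no longer does.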
Functions are, however, still used in cases where we really want extensional objects, which can only be observed rather than inspected. For each possible observation on the object you want to define, you give a function that realizes this observation. In your examples, you only have one function because you only have one observation. This is the core idea of object-oriented programming as defined by William Cook in his paper On Understanding Data Abstraction, Revisited.
I think the reason why you relate this to the term "duality" (a term that is, in the Haskell intelligentsia, rather related to category-theoretic concepts) is that the shift from an object to some particular form of observation of it is sometimes called duality in maths, and has the effect of adding functions everywhere. For example, there is the classical example of the dual of a vector space, and in particular the bidual construction, which is really a conversion from a vector to its observations by linear functions: you switch from V to (V -> K) -> K, for K the field underlying your vector space.
(Would one think of continuations reading my last example? Of course those are related, as this representation of continuations is really an "observation" of concrete evaluation contexts, represented by their action on values.)
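In code, the bidual construction is just reversed application (the name toBidual is mine; this is also the unit of the continuation monad):

```haskell
-- Embed a vector into its bidual: a vector becomes the thing that
-- consumes linear functionals. Operationally this is flip ($).
toBidual :: v -> ((v -> k) -> k)
toBidual x f = f x
```

Note that nothing here enforces linearity of the functionals; the type only captures the shape of the construction, not its mathematical side conditions.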
Your representation of probability measures is actually used to define probability measure monads in functional languages. There are different ways to define probability monads; see for example http://www.cs.tufts.edu/~nr/pubs/pmonad-abstract.html by Norman Ramsey and Avi Pfeffer. Most real-world implementations of probability DSLs, however, use a more concrete representation, such as a [(prob, event)] list of pairs (the Haskell probability library and OCaml's HANSEI).
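For concreteness, here is a minimal sketch of such a [(prob, event)]-based monad (Dist and coin are illustrative names; real libraries also normalize, deduplicate equal outcomes, and so on):

```haskell
-- A discrete distribution as a list of weighted outcomes.
newtype Dist a = Dist { runDist :: [(Double, a)] }

instance Functor Dist where
  fmap f (Dist xs) = Dist [ (p, f x) | (p, x) <- xs ]

instance Applicative Dist where
  pure x = Dist [(1, x)]
  Dist fs <*> Dist xs = Dist [ (p * q, f x) | (p, f) <- fs, (q, x) <- xs ]

instance Monad Dist where
  -- Sequencing multiplies probabilities along each branch.
  Dist xs >>= k = Dist [ (p * q, y) | (p, x) <- xs, (q, y) <- runDist (k x) ]

coin :: Dist Bool
coin = Dist [(0.5, True), (0.5, False)]
```

With this in hand, do-notation reads like a generative model: flipping two coins and conjoining the results assigns probability 0.25 to True.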
Finally, for an example of a representation of real numbers as functions, see Russell O'Connor's A Monadic, Functional Implementation of Real Numbers. Numerous representations of "computable" numbers exist and have different merits; most of them are based on sequences and may therefore be represented as Integer -> ... functions.