Does runtime generally use an imperative-like interpretation of functional language code

I have a general question about interpreters of functional languages:

Are there actually any advantages to using a functional language versus an imperative language at runtime (or that make their way to the interpreter)?

None of the questions I saw (such as this) really get at this question, and searches are flooded with arguments over definitions of different languages.

EDIT: Stripped down to the only question that I ended up needing answered.

asked Jun 20 '12 by St-Ste-Ste-Stephen


1 Answer

The short answer is: everything gets compiled to some low-level language in the end (assembly or virtual machine language). Functional and imperative languages are on equal footing: they have to compile away their abstraction mechanisms to fit what's available on the machine.
Where languages make a difference is in how closely they match the underlying low-level code (by providing low-level features for hand-optimizing code where desirable) and in which optimizations they enable or make easier by having clean semantics with strong reasoning guarantees.

For example, when compiling to native code, recursion is not (in general) compiled into loops, but into jumps to labels, just as loops are: most assembly languages have neither loops nor recursion. Of course, if you compile to a virtual machine that has loops but no arbitrary jumps, you have to produce loops; this is a problem on the Java Virtual Machine, for example, because general tail calls are more expressive than loops alone, and working around that limitation costs some efficiency.
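To make the tail-call/loop correspondence concrete, here is a minimal sketch in Python (Python is only standing in for compiler output here; CPython itself does not perform this transformation, which is exactly the kind of by-hand workaround the paragraph describes):

```python
# Tail-recursive factorial. A compiler that optimizes tail calls would
# emit a jump back to the function's entry label -- i.e. a loop.
def fact_rec(n, acc=1):
    if n == 0:
        return acc
    return fact_rec(n - 1, acc * n)  # tail call: nothing left to do after it

# The same function after the "tail call -> loop" rewrite that a compiler
# targeting a loop-only VM (or a programmer, by hand) must perform:
# the recursive call becomes an update of the parameters plus a jump.
def fact_loop(n, acc=1):
    while n != 0:
        n, acc = n - 1, acc * n
    return acc
```

The rewrite is mechanical for self tail calls like this one; it is *general* tail calls (to other functions) that loops alone cannot express.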

The "advantages" of functional programs may be that, because the semantics is better-behaved, you can reason about the program more easily and, for example, express optimizations in a simpler way. A lot of compilers nowadays use the Static Single Assignment (SSA) intermediate form, which is essentially a low-level functional language¹ -- though it was discovered independently by the compiler community. Most optimizations are easier to do once variable mutation has been eliminated and each variable keeps the same value throughout its scope. There are also techniques for doing register allocation more efficiently on such functional intermediate forms.

¹: See Andrew Appel's short 1998 article: SSA is Functional Programming; if you are interested in details about the SSA form, here are some reading notes about the relation between SSA and other functional intermediate forms, such as CPS.
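A tiny illustration of the SSA idea, again sketched in Python (the function names are made up; real SSA lives in a compiler's intermediate representation, not in source code):

```python
# Mutable style: x is reassigned, so "the value of x" depends on the
# program point. An optimizer must track which assignment reaches where.
def f_mut(a):
    x = a + 1
    x = x * 2
    return x

# SSA-like style: each name is assigned exactly once, so every name
# denotes a single value everywhere in its scope -- just like a
# let-binding in a functional language.
def f_ssa(a):
    x0 = a + 1
    x1 = x0 * 2
    return x1
```

The two compute the same thing, but in the second version facts like "x0 = a + 1" hold unconditionally, which is what makes rewriting-based optimizations straightforward.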

You can also get optimization advantages from purity (the absence of side effects, or at least good control over which computations are side-effect-free) and from static typing. From purity you can derive powerful optimizations such as deforestation or fusion (eliminating intermediate data structures), and from typing you can get strong guarantees on the shape of your values (that's why some dynamic languages try to allow a restricted form of type annotations for optimization purposes) that allow the compiler to generate better code.
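Here is the deforestation idea sketched in Python (the rewrite shown is one a pure-language compiler such as GHC can perform automatically; in Python you would have to write the fused version yourself):

```python
# Unfused: materializes an intermediate list of squares before summing.
# In a pure language the compiler can prove the list is used only once
# and eliminate it ("deforestation" / "fusion").
def sum_squares_unfused(xs):
    squares = [x * x for x in xs]  # intermediate data structure
    return sum(squares)

# Fused: the intermediate list is gone; producer and consumer are
# merged into a single loop over the input.
def sum_squares_fused(xs):
    acc = 0
    for x in xs:
        acc += x * x
    return acc
```

Purity is what licenses the rewrite: if computing `x * x` could have side effects, merging the two traversals might change the program's observable behavior.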

Regarding access to low-level features: Fortran, C and C++ are probably the best, and most widely available, "can go very low-level" languages. Some higher-level languages try to provide some of those capabilities: for example, ATS (originally a functional programming language, though it's so bare-metal that this is rather hard to see) gives good control over stack allocation versus heap allocation; both Haskell and the CLR (C#, etc.) provide unboxed composite types as a special case of such low-level reasoning; and similarly Rust aims to give you ways to make low-level decisions about memory use.
However, while this is certainly useful when you have isolated a small, performance-critical section of code and want to optimize the hell out of it (giving up some flexibility, simplicity and maintainability), it will not change your life in everyday situations unless you are an embedded or kernel programmer. You will probably gain more performance overall from a productive language that lets you work at the right abstraction level, leaving more time to choose the right design and algorithms for your problem. Of course, one could want both, and that is possible but difficult.

answered Sep 28 '22 by gasche