
Why Is Dynamic Typing So Often Associated with Interpreted Languages?

Simple question folks: I do a lot of programming (professionally and personally) in compiled languages like C++/Java and in interpreted languages like Python/Javascript. I personally find that my code is almost always more robust when I program in statically typed languages. However, almost every interpreted language I encounter uses dynamic typing (PHP, Perl, Python, etc.). I know why compiled languages use static typing (most of the time), but I can't figure out the aversion to static typing in interpreted language design.

Why the steep disconnect? Is it part of the nature of interpreted languages? OOP?

asked Sep 08 '09 by daveslab


2 Answers

Interesting question. BTW, I'm the author/maintainer of phc (compiler for PHP), and am doing my PhD on compilers for dynamic languages, so I hope I can offer some insights.

I think there is a mistaken assumption here. The authors of PHP, Perl, Python, Ruby, Lua, etc didn't design "interpreted languages", they designed dynamic languages, and implemented them using interpreters. They did this because interpreters are much much easier to write than compilers.

Java's first implementation was interpreted, and it is a statically typed language. Interpreters do exist for static languages: Haskell and OCaml both have interpreters, and there used to be a popular interpreter for C, but that was a long time ago. They are popular because they allow a REPL, which can make development easier.

That said, there is an aversion to static typing in the dynamic language community, as you'd expect. They believe that the static type systems provided by C, C++ and Java are verbose, and not worth the effort. I think I agree with this to a certain extent. Programming in Python is far more fun than C++.

To address the points of others:

  • dlamblin says: "I never strongly felt that there was anything special about compilation vs interpretation that suggested dynamic over static typing." Well, you're very wrong there. Compiling dynamic languages is very difficult. The biggest problem is the eval statement, which is used extensively in Javascript and Ruby. phc compiles PHP ahead-of-time, but we still need a run-time interpreter to handle evals. eval also can't be analysed statically by an optimizing compiler, though there is a cool technique if you don't need soundness.

  • To dlamblin's response to Andrew Hare: you could of course perform static analysis in an interpreter and find errors before run-time, which is exactly what Haskell's ghci does. I expect that the style of interpreter used in functional languages requires this. dlamblin is of course right to say that the analysis is not part of interpretation.

  • Andrew Hare's answer is predicated on the questioner's wrong assumption, and similarly has things the wrong way around. However, he raises an interesting question: "how hard is static analysis of dynamic languages?". Very, very hard. Basically, you'll get a PhD for describing how it works, which is exactly what I'm doing. Also see the previous point.

  • The most correct answer so far is that of Ivo Wetzel. However, the points he describes can be handled at run-time by a compiler, and many compilers exist for Lisp and Scheme that have this type of dynamic binding. But, yes, it's tricky.
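To make the eval point concrete, here is a minimal Python sketch (the variable names are hypothetical, not from phc). The code string handed to exec/eval isn't known until run-time, so no ahead-of-time compiler can know which names will exist afterwards:

```python
# A compiler cannot know, before run-time, what this exec will do:
# in a real program the code string could come from user input, a
# file, or the network.
code = 'x = 40 + 2'   # imagine this string only arrives at run-time

namespace = {}
exec(code, namespace)  # exec/eval can create or rebind any name

# Static analysis cannot prove that 'x' exists here, yet at run-time it does.
print(namespace['x'])  # prints 42
```

This is why phc still ships a run-time interpreter alongside its compiled output: any eval'd string must be handled by a full interpreter at run-time.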

answered Oct 03 '22 by Paul Biggar


I think it's because of the nature of interpreted languages: they want to be dynamic, so you CAN change things at runtime. Because of this, a compiler never knows exactly what the state of the program is after the next line of code has been executed.

Imagine the following scenario (in Python):

```python
import random

foo = 1

def doSomeStuffWithFoo():
    global foo
    foo = random.randint(0, 1)

def assign():
    global foo
    if foo == 1:
        return 20
    else:
        return "Test"

def toBeStaticallyAnalyzed():
    myValue = assign()

    # A "compiler" may throw an error here because assign() can return a
    # string, but at run-time foo may be 1, in which case myValue is 20
    # and the compiler's assumption would be wrong
    myValue += 20

doSomeStuffWithFoo()  # foo could be 1 or 0 now... or 4 ;)
toBeStaticallyAnalyzed()
```

As you can hopefully see, a compiler wouldn't make much sense in this situation. Actually, it could warn you about the possibility that "myValue" may be something other than a number. But in JavaScript that warning would be pointless, because if "myValue" were a string, 20 would be implicitly converted to a string too, hence no error would occur. So you might get thousands of useless warnings all over the place, and I don't think that is the intent of a compiler.

Flexibility always comes at a price: you need to take a deeper look at your program, or program it more carefully. In other words, YOU are the compiler in situations like the above.

So what's your solution as the compiler? Fix it with a "try: except" :)
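Following that suggestion, here is a minimal sketch of handling the type mismatch at run-time rather than at compile-time (it condenses the scenario above; the fallback value 0 is an arbitrary choice for illustration):

```python
import random

foo = random.randint(0, 1)

def assign():
    # Returns an int or a str depending on run-time state,
    # as in the scenario above
    return 20 if foo == 1 else "Test"

try:
    myValue = assign() + 20   # raises TypeError when assign() returns "Test"
except TypeError:
    myValue = 0               # the run-time "fix" a compiler can't provide

print(myValue)  # 40 when foo == 1, 0 otherwise
```

Either branch is safe: the program recovers at run-time from exactly the case a static analyzer would have flagged.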

answered Oct 03 '22 by Ivo Wetzel