Why were some common standards redefined in F#?

Tags:

f#

I started F# a few days ago, and I do not understand why some things that have been common for a very long time were redefined.

  • for example, /* my comment */ is common in several languages and has been that way for decades; was there really a reason to go with (* my comment *)?
  • having variables declared as type variablename, such as int i, is also very common; would it have been harder to parse with the type before the name rather than after?
  • similarly, almost all languages use != for inequality; was it changed to <> to avoid a clash with something else, or just 'to be different'?
  • using <- with mutables but = for immutables... what is the reasoning behind it?

I'm sure there are a few others.

I'm not looking for 'opinions' about whether the changes are good or not; I am curious whether there is specific reasoning behind them (for example, to ease parsing), because I want to see whether the benefit of these changes outweighs the harm of not following generally adopted conventions.

asked Dec 13 '22 by Thomas

1 Answer

I started F# a few days ago, and I do not understand why some things that have been common for a very long time were redefined.

  • for example, /* my comment */ is common in several languages and has been that way for decades; was there really a reason to go with (* my comment *)?

F♯ is heavily based on OCaml, which is based on Caml, which is based on Standard ML, which is based on ML, which was influenced by ISWIM, which in turn was influenced by ALGOL 60 and LISP.

(* … *) is ML's comment syntax, and ML was designed in the early 1970s. However, bracketed comments are even older: Pascal uses { … } (and accepts (* … *) as well), ALGOL 68 used ¢ … ¢ among other forms, and putting comments in parentheses next to equations or proofs has been done for much longer than that.

Newspeak, which is a fairly new language, also uses (* … *), for example.
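
A quick illustration (the binding names are mine): F♯ keeps ML's (* … *) block comments and additionally has C++-style // line comments. One practical consequence is that, unlike C's /* … */, the block comments nest, so code that already contains comments can be commented out wholesale:

(* an ordinary block comment *)
// F# also supports line comments

(* Block comments nest, so this region,
   which contains (* an inner comment *),
   can still be commented out as a whole.
let unused = 0
*)
let answer = 42  // a line comment after code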

  • having variables declared as 'type variablename', such as 'int i', is also very common; would it have been harder to parse with the type before the name rather than after?

Having the type follow the name is also very common, so "common" by itself is not a good reason. There are also advantages to putting the type after the name: for example, the syntax for an inferred type becomes simply "leave out the type", whereas most languages that put the type before the name need some kind of placeholder "pseudo-type" instead. E.g. in Java and C♯ you have to say var foo, in C++ auto foo, and so on.
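
For instance, here is a minimal F♯ sketch (the names are mine) of how "just leave out the type" works when annotations trail the name:

let inferred = 42                  // no annotation: the compiler infers int
let annotated: int = 42            // the optional annotation follows the name
let twice (x: int) : int = x * 2   // parameter and return annotations also trail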

Note that many modern languages follow this syntax, e.g. Swift, Julia, and also Python's type hints. More importantly, Kotlin, Scala, TypeScript, and Go, all of which sit squarely in the C family of syntax, have the type after the identifier.

Also note that F# allows both type information and deconstruction syntax, with the type coming after the identifier and the deconstruction coming before it. Since deconstruction syntax has to match the corresponding syntax in pattern matching, the language designers did not really have a choice but to put the type information after the identifier: otherwise, since discriminated-union (DU) cases can have the same name as the containing type, ambiguity would arise and parsing would be impossible. Example:

type Age<'T> = Age of 'T
let f (Age x) = x  // deconstruct
let g (Age x: Age<int>) = x  // deconstruct + type info
let h (x: Age<int>) = x  // only type info

  • similarly, almost all languages use != for inequality; was it changed to <> to avoid a clash with something else, or just 'to be different'?

Again, <> is used in multiple languages, some of them very widely used, such as SQL, Pascal and its successors (Modula-2, Oberon, Delphi), and all the BASIC dialects, including Microsoft's Visual Basic. ALGOL 68 used ≠ (modern implementations use /=; equality is eq or =), and Haskell uses /= like those modern ALGOL 68 implementations. Mathematica uses =!=, and Scala also uses =!= for type inequality (but != for values). XPath, XSLT, and XQuery all use <>, but also ne.

In many mainstream languages, equality and assignment are easily confused (e.g. == vs. = in C). Making them visibly different is an advantage. (In C, some coding standards require Yoda conditionals to prevent exactly this kind of error.)
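
To make that concrete in F♯ (the binding names are mine): = already means equality in expression position, so the C-style pitfall of writing = where == was intended cannot arise:

let x = 5              // '=' after let introduces a binding
let isFive = x = 5     // '=' in an expression is comparison: true
let notFive = x <> 5   // '<>' is inequality: false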

  • using <- with mutables but = for immutables... what is the reasoning behind it?

Binding a constant, immutable "variable" (in the mathematical sense) and mutating a mutable reference are two fundamentally different operations, so it makes sense to visually distinguish them.

The arrow ← has been used to denote mutating a binding in mathematics since before programming even existed. It was also used in Smalltalk (when ← was removed from ASCII, Smalltalk replaced it with :=, which is also used in Pascal, for example).
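
A minimal F♯ sketch of the distinction (the names are mine):

let total = 1            // '=' binds an immutable value
// total <- 2            // would not compile: 'total' is not mutable

let mutable counter = 0  // 'mutable' explicitly opts in to mutation
counter <- counter + 1   // '<-' mutates the binding
let check = counter = 1  // '=' here is comparison, not assignment: true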

So, in short, I would challenge the premise of your question: F♯ did not redefine things that had been common for a very long time; it used already existing definitions that had been common for a very long time.

Note that a lot of this comes down to familiarity. I, personally, learned BASIC, Pascal, and Rexx as my first languages, followed by Smalltalk and Eiffel. In university, we learned Python, Haskell, and Java. My current favorite language is Ruby.

When I first encountered Java, it looked incredibly strange to me, and I still have trouble grokking C-style syntax, even though I almost exclusively read and write ECMAScript at the moment.

answered Dec 27 '22 by Jörg W Mittag