As I understand it, GHC (the Glorious Glasgow Haskell Compiler) compiles Haskell to "Core", and then compiles that Core into machine code. Would it be at all practical to distribute Haskell programs as GHC Core, as if it were "bytecode"? Would there be any benefit to such a distribution? Why or why not?
This wouldn't be practical; GHC Core is not portable. For example, on a 32-bit machine, 64-bit arithmetic is compiled down to foreign function calls in the Core, but on a 64-bit machine, it uses native machine-word arithmetic.
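To see this concretely, you can ask GHC to print the Core it produces. A minimal sketch, assuming a hypothetical file `Add64.hs` (`-ddump-simpl` and `-dsuppress-all` are the GHC flags that dump the simplified Core in a readable form):

```haskell
-- Dump the simplified Core for this module with:
--
--   ghc -O -ddump-simpl -dsuppress-all Add64.hs
--
-- On a 64-bit target the addition appears as a native machine-word
-- operation; on a 32-bit target the same source compiles down to
-- out-of-line calls instead, so the two Core dumps are not interchangeable.
module Add64 where

import Data.Word (Word64)

add64 :: Word64 -> Word64 -> Word64
add64 x y = x + y
```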
More importantly, GHC can't actually read Core back in: it can print it out in a few formats, but there is no code to parse any of those formats. I'm not sure there would be any major obstacle to writing such a parser, but this has been the documented situation for many years, so I wouldn't expect support to appear any time soon.
Core is also still pretty close to Haskell, so it's not clear what you'd gain by distributing code in that form. The time it takes to turn Haskell into Core is usually small compared to later stages like linking the final program, so distributing Core wouldn't save much compilation time at all.
Also, less checking is done on Core than on Haskell source code (although I believe `-dcore-lint` would mitigate this), and sandboxing it effectively would be difficult (there's Safe Haskell, but no Safe Core). Of course, these disadvantages don't apply if the source of the bytecode is trusted.
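For reference, Core Lint is just a compile-time flag; a minimal sketch, assuming a hypothetical `Main.hs`:

```haskell
-- Enable Core Lint while compiling:
--
--   ghc -O -dcore-lint Main.hs
--
-- This makes GHC type-check the Core after each Core-to-Core pass. Note
-- that it's a compiler-debugging aid, not a security boundary the way
-- Safe Haskell is.
module Main where

main :: IO ()
main = putStrLn "built with Core Lint enabled"
```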
Basically, GHC Core is very much a compiler's intermediate language, as opposed to a portable bytecode format designed for distribution, like Python bytecode or JVM bytecode.
As a side note, GHC does have a bytecode interpreter, as used by GHCi. The bytecode used there is also non-portable, so there are no advantages I can think of compared to the machine code GHC produces in normal operation.
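For example, the same source can be run either way; a minimal sketch, assuming a hypothetical `Squares.hs`:

```haskell
-- Native code:    ghc -O Squares.hs && ./Squares
-- GHCi bytecode:  ghci Squares.hs    (then type `main` at the prompt)
-- Also bytecode:  runghc Squares.hs  (interprets without producing a binary)
module Main where

main :: IO ()
main = print (map (^ 2) [1 .. 10 :: Int])
```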