I understand the appeal of pure functional languages like Haskell where you can keep track of side effects like disk I/O using monads.
Why aren't all system calls considered side effects? For example, heap memory allocation (which happens automatically) in Haskell isn't tracked, and stack allocation could also be treated as a side effect, although I'm not sure tracking it would be useful. Both of these change the overall state of the system.
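For concreteness, here is a minimal sketch of the contrast I mean (the names readConfig and squares are made up purely for illustration): disk I/O shows up in the type as IO, while a heap-allocating computation looks like any other pure function.

    -- Disk I/O is tracked in the type: this can only be run inside IO.
    readConfig :: FilePath -> IO String
    readConfig = readFile

    -- Heap allocation is not tracked: this builds a list of n cells on the heap,
    -- yet its type is that of an ordinary pure function.
    squares :: Int -> [Int]
    squares n = map (\x -> x * x) [1 .. n]

    main :: IO ()
    main = print (sum (squares 1000))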
So where is the line drawn for what is a side effect and what isn't? Is it simply at what's the most "useful"? Or is there a more theoretical foundation?
When reasoning about these things, you have to do it at the level of theory and of the language specification, never at the level of how things are actually done on the hardware.
A programming language is not the same thing as an implementation of it. Leaving aside C and C++, where memory allocation and system calls are part of the language itself, higher-level languages delegate these things to the implementation's primitives, so they are not part of the language at all. And if something isn't part of the language, it cannot be a side effect.
Now, an actual implementation's machine code is never pure, since arguments are passed and return values received by storing them in registers or on the stack, both of which are mutation. Most of the concepts we use in modern programming are translated down to arithmetic, flags, jumps, and memory accesses. Every CPU instruction except NOP mutates the machine, and a program consisting only of NOPs is not very useful.
Neither stack allocation nor heap allocation is something you can "do" or observe in Haskell, so neither can be counted as a side effect. In a sense, the same goes for heating up the CPU, which is without doubt a recognizable physical effect of running pure Haskell code.
It so happens that certain implementations of Haskell, on contemporary hardware and operating systems, will allocate stack and heap space in the course of running your code, but that allocation is not observable from your code.
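To make "not observable" concrete, here is a minimal GHC-specific sketch (one possible way to poke at the runtime, not part of the language; it needs to be run with +RTS -T so the RTS collects statistics). The allocation performed by a pure function can only be seen through an IO action exposed by the implementation, never from the pure code itself.

    import GHC.Stats (getRTSStats, allocated_bytes)

    -- A pure function: at run time GHC allocates heap cells for the list,
    -- but nothing in its type or its result reveals that allocation.
    sumTo :: Int -> Int
    sumTo n = sum [1 .. n]

    -- Observing the allocation requires IO plus an implementation-specific API.
    main :: IO ()
    main = do
      before <- allocated_bytes <$> getRTSStats
      print (sumTo 1000000)
      after  <- allocated_bytes <$> getRTSStats
      putStrLn ("Approximate bytes allocated in between: " ++ show (after - before))

In other words, the allocation only becomes visible through the same IO door as any other effect, which is exactly why the pure part of the language does not need to model it.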