I noticed there is a Mutex structure in Haskell. I don't understand how it works, as I am not a Haskell developer, but if every variable is immutable (as FP advocates), why do we still need a mutex?
Indeed, all variables are immutable. But a variable can hold, for example, a reference to a mutable cell, and there is a class of functions that let you describe the process of changing the contents of such references. If a similar process is running in another thread, you have a problem.
You could say that Haskell is a language for modelling, and a pure one: it lets you build pure descriptions of impure computations, while the actual impure work is carried out by the runtime (or via the FFI). For multithreaded programming, we therefore need to design something like a mutex into our model.
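Here is a minimal sketch of the problem (module choices, thread counts, and the one-second wait are my own): the binding counter is itself immutable, but it names a mutable cell (an IORef) whose contents several threads update concurrently. The read-then-write steps can interleave, so increments can be lost without some form of mutual exclusion.

```haskell
import Control.Concurrent (forkIO, threadDelay)
import Control.Monad (forM_, replicateM_)
import Data.IORef (newIORef, readIORef, writeIORef)

main :: IO ()
main = do
  counter <- newIORef (0 :: Int)          -- immutable name, mutable contents
  forM_ [1 .. 10 :: Int] $ \_ ->
    forkIO $ replicateM_ 10000 $ do
      n <- readIORef counter              -- read ...
      writeIORef counter (n + 1)          -- ... then write: not atomic
  threadDelay 1000000                     -- crude wait for the worker threads
  total <- readIORef counter
  print total                             -- built with -threaded, this is often less than 100000
```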
ADDITION
I think if you really want to understand why there is something like a mutex in Haskell, you should first understand how Haskell can have a function such as readFile, which takes a file path and returns its contents. The problem is that readFile must be both pure and impure, which is paradoxical. So how does Haskell resolve this paradox? Try to answer that question and I believe it will help you understand much more.
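A sketch of the resolution (the file name "input.txt" is just an example): readFile has type FilePath -> IO String, so it does not return the file's contents. It returns a description of an I/O action that, when executed by the runtime, yields the contents. Building that description is pure; only the runtime performs the effect.

```haskell
main :: IO ()
main = do
  contents <- readFile "input.txt"   -- binds the action's result inside IO
  putStrLn (take 100 contents)       -- the String is only available within IO
```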
I'm not too sure where you're finding a structure called Mutex, but if you're talking about Data.Mutex, that's for the Fay language, which is a JavaScript-targeting language.
If you're talking about Control.Concurrent.Lock, then, as freestyle said, it's modelling a locking mechanism.
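As a rough sketch of that idea, an MVar () from the base library already behaves as a mutex (libraries like this essentially wrap the same pattern); the helper names Lock, newLock, and withLock below are my own, not the library's API.

```haskell
import Control.Concurrent.MVar (MVar, newMVar, putMVar, takeMVar)
import Control.Exception (bracket_)

type Lock = MVar ()

newLock :: IO Lock
newLock = newMVar ()

-- Run an action while holding the lock; release it even if the action throws.
withLock :: Lock -> IO a -> IO a
withLock lock = bracket_ (takeMVar lock) (putMVar lock ())

main :: IO ()
main = do
  lock <- newLock
  withLock lock $ putStrLn "only one thread at a time runs this"
```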
The more usual way of doing concurrency between threads, though, uses what's called Software Transactional Memory, via the Control.Monad.STM module. This is a form of transactional memory with a kind of automatic locking mechanism, so most of the time you don't have to worry about manual locking at all. This is comparatively amazing when you consider the pain of manual locking.
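A sketch of the STM style (the account names and amounts are invented; Control.Concurrent.STM from the stm package re-exports Control.Monad.STM along with the TVar operations): each transfer runs inside atomically, so concurrent transfers compose without any manual locking, and conflicting transactions are rolled back and retried by the runtime.

```haskell
import Control.Concurrent.STM
  (TVar, atomically, newTVarIO, readTVar, readTVarIO, writeTVar)

-- Move an amount between two shared balances as one atomic transaction.
transfer :: TVar Int -> TVar Int -> Int -> IO ()
transfer from to amount = atomically $ do
  a <- readTVar from
  b <- readTVar to
  writeTVar from (a - amount)
  writeTVar to   (b + amount)

main :: IO ()
main = do
  alice <- newTVarIO 100
  bob   <- newTVarIO 0
  transfer alice bob 40
  balances <- (,) <$> readTVarIO alice <*> readTVarIO bob
  print balances   -- (60,40)
```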
Rich Hickey has been responsible for pushing this idea further into the mainstream by implementing it in the Clojure language, but essentially this mechanism lets the application-level programmer avoid the extreme pain of manual locking and synchronisation. Reads are extremely fast because of immutability, and conflicting writes are rolled back and retried automatically. Simon Peyton Jones has a paper on it, and here's a link to a video of him talking about it at an O'Reilly conference, "Transactional Memory for Concurrent Programming": https://www.youtube.com/watch?v=4caDLTfSa2Q