Just starting with Haskell. I want to define some elements so that I can easily create morphisms between them.
a = "foo"
b = "bar"
g a = a -- Problem is here
g b = a -- Problem is here
Edit: The problem is that Haskell treats the a in g a as a variable, but I actually want the value of the a defined above. Conceptually I want this:
g (valueOf a) = a -- Problem is here
g (valueOf b) = a -- Problem is here
where valueOf is a magic function that would give me:
g "foo" = a
g "bar" = a
Use
a = "foo"
b = "bar"
g x | x == a = a
    | x == b = a
or
g "foo" = a
g "bar" = a
When you pattern match using a variable, as in

g a = ...

the variable a is a local variable, bound to the argument of the function. Even if a was already defined globally, the code above will not use the value of the global a to perform a comparison.
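To make the shadowing visible, here is a minimal sketch using the code from the question (the main is added here for demonstration):

a :: String
a = "foo"

g :: String -> String
g a = a -- this 'a' is a fresh local binding that shadows the top-level 'a'

main :: IO ()
main = putStrLn (g "anything") -- prints "anything", not "foo": g matches any argument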
These semantics allow you to reason locally about your code. Consider this code as an example:
f 2 x = 4
f c d = 0
Just by looking at the above definition you can see that f 2 3 is 4. This does not change if, later on, you add a definition for x as follows:
x = 5
f 2 x = 4
f c d = 0
If the match semantics compared the second argument to 5, we would now have f 2 3 equal to 0. That would make reasoning about function definitions harder, so most (if not all) functional languages, including Haskell, use "local" variables for pattern matching, ignoring any global definitions of those variables.
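To check this concretely, here is a minimal runnable version of the example (the type signatures and main are added for illustration):

x :: Int
x = 5

f :: Int -> Int -> Int
f 2 x = 4 -- this 'x' is a local pattern variable shadowing the top-level 'x'
f c d = 0

main :: IO ()
main = print (f 2 3) -- prints 4: the pattern 'x' matches any second argument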
A more adventurous alternative is to use view patterns:
{-# LANGUAGE ViewPatterns #-}
a = "foo"
b = "bar"
g ((==a) -> True) = ...
g ((==b) -> True) = ...
I am not a fan of this approach though, since I find standard patterns with guards to be clearer.
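For reference, a self-contained sketch of the view-pattern version (the catch-all case and main are illustrative assumptions, since the original bodies were elided):

{-# LANGUAGE ViewPatterns #-}

a, b :: String
a = "foo"
b = "bar"

g :: String -> String
g ((== a) -> True) = a
g ((== b) -> True) = a
g _ = error "g: unexpected argument" -- catch-all added so the match is total

main :: IO ()
main = putStrLn (g "bar") -- prints "foo", per the definitions above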