The Prelude shows examples for take and drop with negative arguments:
take (-1) [1,2] == []
drop (-1) [1,2] == [1,2]
Why are these defined the way they are, when e.g. x !! (-1) does the "safer" thing and crashes? It seems like a hackish and very un-Haskell-like way to make these functions total, even when the argument doesn't make sense. Is there some greater design philosophy behind this that I'm not seeing? Is this behavior guaranteed by the standard, or is this just how GHC decided to implement it?
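For reference, the contrast looks roughly like this in GHCi (the exact wording of the error message varies with the GHC version):

ghci> take (-1) [1,2]
[]
ghci> drop (-1) [1,2]
[1,2]
ghci> [1,2] !! (-1)
*** Exception: Prelude.!!: negative index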
There would be mainly one good reason to make take partial: it could guarantee that the result list, if there is one, always has the requested number of elements.
Now, take already violates this in the other direction: when you try to take more elements than there are in the list, it simply takes as many as there are, i.e. fewer than requested. Perhaps not the most elegant thing to do, but in practice this tends to work out quite usefully.
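For comparison, asking for more elements than the list contains behaves like this (standard Prelude behaviour, easy to check in GHCi):

take 5 [1,2,3] == [1,2,3]
drop 5 [1,2,3] == []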
The main invariant for take is stated together with drop:

take n xs ++ drop n xs ≡ xs

and that holds true even if n is negative.
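One way to convince yourself of this is with a small QuickCheck property (just a sketch; it assumes the QuickCheck package is installed):

import Test.QuickCheck

-- The split/append invariant holds for every Int n, including negative ones,
-- and for every finite list xs.
prop_takeDrop :: Int -> [Int] -> Bool
prop_takeDrop n xs = take n xs ++ drop n xs == xs

main :: IO ()
main = quickCheck prop_takeDrop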
A good reason not to check the length of the list is that it makes the functions perform nicely on lazy infinite lists: for instance,

take hugeNum [1..] ++ 0 : drop hugeNum [1..]

will immediately give 1 as the first result element. This would not be possible if take and drop first had to check whether there are enough elements in the input.
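A runnable version of that example (hugeNum is just an arbitrarily large number picked for illustration):

-- head only needs the first element produced by take, so the huge count
-- is never actually traversed and the answer comes back immediately.
main :: IO ()
main = do
    let hugeNum = 10 ^ 12 :: Int
    print (head (take hugeNum [1 ..] ++ 0 : drop hugeNum [1 ..]))  -- prints 1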