The title already says a lot, but quoting Wikipedia:
The purpose of ridge detection is usually to capture the major axis of symmetry of an elongated object,[citation needed] whereas the purpose of edge detection is usually to capture the boundary of the object. However, some literature on edge detection erroneously[citation needed] includes the notion of ridges into the concept of edges, which confuses the situation.
But the difference is still not clear to me. Can you help, perhaps with an example?
Usually, if we run an edge detector over a ridge, we get a double line: one edge response from each side of the ridge, rather than a single response along its axis.
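Here is a minimal 1-D sketch of that effect (my own illustration, not part of the quoted sources): a bright line is modeled as a Gaussian profile, the gradient magnitude stands in for an edge detector, and the negative second derivative stands in for a simple ridge detector. The edge response peaks twice, once on each flank, while the ridge response peaks once, on the axis of symmetry.

import numpy as np

x = np.arange(100)
sigma = 4.0
ridge = np.exp(-((x - 50) ** 2) / (2 * sigma ** 2))  # bright line profile

edge_response = np.abs(np.gradient(ridge))           # |f'|: edge strength
ridge_response = -np.gradient(np.gradient(ridge))    # -f'': ridge strength

# The edge detector fires on both flanks of the line (the "double line")...
flanks = sorted(np.argsort(edge_response)[-2:])
print("edge peaks near x =", flanks)                 # ~[46, 54]

# ...while the ridge detector fires once, on the axis of symmetry.
print("ridge peak at x =", int(np.argmax(ridge_response)))  # ~50

For a Gaussian profile the gradient magnitude peaks at one standard deviation on either side of the center, which is exactly the double-line behavior described above; the same reasoning carries over to 2-D images, where ridge detectors typically look at second-derivative (Hessian) information instead of the gradient.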
This answer is based on another answer, which gives a much more detailed explanation.