
line simplification algorithm: Visvalingam vs Douglas-Peucker

I am trying to implement a line simplification algorithm. The two main algorithms I found are:

  • Ramer-Douglas-Peucker
  • Visvalingam-Whyatt

Currently I am running a few simulations of them in MATLAB in order to determine which better answers my needs.

The main goal of the algorithm is to simplify polygons in a map. My input is a polygon/polyline and an error threshold, epsilon.

I need the simplified polygon to be as close as possible to the original, and I do not have a requirement on the number of points to keep.

I am having difficulty comparing the two algorithms because epsilon for RDP is a distance, while epsilon for VW is an area. I need help understanding how to compare the two: which one can keep fewer points while staying within the threshold?

asked Feb 09 '16 by Nir Agami

1 Answer

"I need the simplified polygon to be as close as possible to the original, and I do not have a requirement on the number of points to keep."

The DP method will give you a better perceptible fit with fewer points, because its control parameter, the distance tolerance, directly captures your requirement of "as close as possible".

Having said that, the scale of the overall polygon or point cloud relative to the pixel dimensions will have a bigger impact for smaller images. The exercise below should give you a feel for how the two algorithms perform.
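As background, RDP's epsilon is the maximum perpendicular distance any discarded vertex may lie from the simplified line, which is exactly why it maps onto "as close as possible". Here is a minimal NumPy sketch of that recursion (an illustration only, not OpenCV's implementation):

```python
import numpy as np

def rdp(points, epsilon):
    """Ramer-Douglas-Peucker on an (N, 2) array of vertices."""
    points = np.asarray(points, dtype=float)
    if len(points) < 3:
        return points
    start, end = points[0], points[-1]
    dx, dy = end - start
    chord_len = np.hypot(dx, dy)
    if chord_len == 0:
        # Degenerate chord (e.g. a closed ring): fall back to distance from the start point.
        dists = np.hypot(points[:, 0] - start[0], points[:, 1] - start[1])
    else:
        # Perpendicular distance of every vertex to the chord start -> end.
        dists = np.abs(dx * (points[:, 1] - start[1]) - dy * (points[:, 0] - start[0])) / chord_len
    idx = int(np.argmax(dists))
    if dists[idx] > epsilon:
        # Keep the farthest vertex and recurse on both halves.
        left = rdp(points[:idx + 1], epsilon)
        right = rdp(points[idx:], epsilon)
        return np.vstack([left[:-1], right])
    return np.vstack([start, end])
```

Every output vertex is an input vertex, and no discarded vertex ends up farther than epsilon from the simplified polyline, so epsilon is a direct bound on the geometric error.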

Here are some comparisons I ran between Visvalingam-Whyatt and Ramer-Douglas-Peucker on a few contours taken from a bitmap that is originally around 100x100 pixels. The images are screenshots at roughly 10x zoom on the contours.

(you might want to download the images to appreciate the differences in performance)

Visvalingam-Whyatt method results, courtesy of Zach's implementation on GitHub, ported to OpenCV data types.

VSV simplification with 0.1 (white), 0.5 (red), 1 (magenta), 2 (cyan) distance tolerances

VSV simplification with 0.55 (white), 0.4 (red), 0.25 (magenta), 0.15 (cyan) percentage tolerances

VSV point reductions. t is the tolerance expressed as the fraction of points to keep, so it directly determines the final number of points: n ≈ t × orig.

orig 88: [n=47 for t=0.55], [n=34 for t=0.4], [n=20 for t=0.25], [n=12 for t=0.15]
orig 133: [n=72 for t=0.55], [n=52 for t=0.4], [n=32 for t=0.25], [n=18 for t=0.15]
orig 118: [n=63 for t=0.55], [n=46 for t=0.4], [n=28 for t=0.25], [n=16 for t=0.15]
orig 107: [n=57 for t=0.55], [n=41 for t=0.4], [n=25 for t=0.25], [n=15 for t=0.15]
orig 107: [n=57 for t=0.55], [n=41 for t=0.4], [n=25 for t=0.25], [n=15 for t=0.15]
orig 268: [n=146 for t=0.55], [n=106 for t=0.4], [n=65 for t=0.25], [n=39 for t=0.15]
orig 158: [n=85 for t=0.55], [n=62 for t=0.4], [n=38 for t=0.25], [n=22 for t=0.15]
orig 158: [n=85 for t=0.55], [n=62 for t=0.4], [n=38 for t=0.25], [n=22 for t=0.15]
orig 109: [n=58 for t=0.55], [n=42 for t=0.4], [n=26 for t=0.25], [n=15 for t=0.15]
orig 192: [n=104 for t=0.55], [n=75 for t=0.4], [n=46 for t=0.25], [n=27 for t=0.15]
orig 132: [n=71 for t=0.55], [n=51 for t=0.4], [n=31 for t=0.25], [n=18 for t=0.15]
orig 89: [n=47 for t=0.55], [n=34 for t=0.4], [n=21 for t=0.25], [n=12 for t=0.15]
orig 110: [n=59 for t=0.55], [n=42 for t=0.4], [n=26 for t=0.25], [n=15 for t=0.15]
orig 40: [n=20 for t=0.55], [n=14 for t=0.4], [n=8 for t=0.25], [n=4 for t=0.15]
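For context on what those t values mean: VW repeatedly removes the vertex with the smallest "effective area" (the area of the triangle it forms with its two neighbours) until the requested fraction of points remains. Below is a simplified O(n^2) sketch of that idea; it is not Zach's implementation, which keeps the areas in a priority queue rather than recomputing them every pass.

```python
import numpy as np

def effective_area(a, b, c):
    """VW's 'effective area' of vertex b: area of the triangle (a, b, c)."""
    return 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1]))

def visvalingam(points, keep_fraction):
    """Drop interior vertices with the smallest effective area until roughly
    keep_fraction * len(points) vertices remain (endpoints are always kept)."""
    pts = [tuple(p) for p in points]
    target = max(2, round(keep_fraction * len(pts)))
    while len(pts) > target:
        areas = [effective_area(pts[i - 1], pts[i], pts[i + 1])
                 for i in range(1, len(pts) - 1)]
        # Remove the vertex whose removal changes the shape the least.
        del pts[1 + int(np.argmin(areas))]
    return np.array(pts)
```

So a call like visvalingam(contour, 0.25) keeps roughly a quarter of the vertices, which is why n tracks t × orig so closely in the table above; the small differences come from how a particular implementation rounds and treats the endpoints.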


DP method results using OpenCV's approxPolyDP

DP simplification with 0.1 (white), 0.5 (red), 1 (magenta), 2 (cyan) distance tolerances

Douglas-Peucker point reductions. t is the pixel distance tolerance, so there is no direct correlation with n, the final number of points.

orig 88: [n=33 for t=0.1], [n=29 for t=0.5], [n=8 for t=1], [n=6 for t=2]
orig 133: [n=57 for t=0.1], [n=45 for t=0.5], [n=12 for t=1], [n=7 for t=2]
orig 118: [n=50 for t=0.1], [n=40 for t=0.5], [n=15 for t=1], [n=8 for t=2]
orig 107: [n=47 for t=0.1], [n=35 for t=0.5], [n=11 for t=1], [n=6 for t=2]
orig 107: [n=30 for t=0.1], [n=24 for t=0.5], [n=8 for t=1], [n=6 for t=2]
orig 268: [n=126 for t=0.1], [n=110 for t=0.5], [n=32 for t=1], [n=23 for t=2]
orig 158: [n=80 for t=0.1], [n=62 for t=0.5], [n=17 for t=1], [n=11 for t=2]
orig 158: [n=66 for t=0.1], [n=52 for t=0.5], [n=16 for t=1], [n=9 for t=2]
orig 109: [n=50 for t=0.1], [n=38 for t=0.5], [n=12 for t=1], [n=9 for t=2]
orig 192: [n=74 for t=0.1], [n=64 for t=0.5], [n=18 for t=1], [n=15 for t=2]
orig 132: [n=58 for t=0.1], [n=45 for t=0.5], [n=14 for t=1], [n=11 for t=2]
orig 89: [n=37 for t=0.1], [n=31 for t=0.5], [n=7 for t=1], [n=6 for t=2]
orig 110: [n=42 for t=0.1], [n=36 for t=0.5], [n=9 for t=1], [n=7 for t=2]
orig 40: [n=18 for t=0.1], [n=15 for t=0.5], [n=9 for t=1], [n=3 for t=2]
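For reference, the approxPolyDP numbers above can be reproduced with a loop like the one below (OpenCV Python bindings). The contour here is a hypothetical one extracted with cv2.findContours, just to keep the snippet self-contained; in practice you would feed in your own contours.

```python
import cv2
import numpy as np

# Hypothetical test contour: the boundary of a filled circle in a 100x100 mask.
mask = np.zeros((100, 100), dtype=np.uint8)
cv2.circle(mask, (50, 50), 30, 255, thickness=-1)
# OpenCV 4.x returns (contours, hierarchy); CHAIN_APPROX_NONE keeps every boundary pixel.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
contour = contours[0]

for tol in (0.1, 0.5, 1.0, 2.0):  # the same pixel distance tolerances as above
    # Arguments: curve, epsilon (max distance in pixels), closed curve flag.
    approx = cv2.approxPolyDP(contour, tol, True)
    print(f"orig {len(contour)}: n={len(approx)} for t={tol}")
```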


Summary:
  • Both methods degrade gracefully.
  • VSV lets you specify the number of approximated points (in this implementation).
  • VSV in this implementation lets you pick multiple approximate polygons in one shot.
  • VSV retains a lot of pixel-level convexity inflections, even on sections of large curvature; this could be undesirable in some cases.
  • DP follows the convexities better and smooths out inflections more, but it sacrifices "closeness" to do so.
  • As a result, DP gives fewer points for the same perceived tolerance, though the tolerances are hard to compare between the two methods anyway.
  • DP gives a better feel for the tolerance, since it is a linear distance specification.

For my application, I prefer the control offered by VSV in this implementation over the possible efficiency of the DP method.

But overall, I feel OpenCV's DP implementation gives a smoother-looking result.

Though my conclusions about VSV performance are based on Zach's implementation alone, I doubt other implementations would give characteristically different polygon subsets.

[UPDATE 1] Here is a comparison for a worst-case scenario. DP is visually more acceptable.


answered Sep 18 '22 by sith