 

Why does this Genetic Algorithm stagnate?

Roger Alsing wrote an Evolutionary Algorithm for recreating the Mona Lisa using C#. His algorithm is simple:

  1. Generate a random population of size two.
  2. Replace the least-fit individual with a clone of the fittest.
  3. Mutate one of the individuals.
  4. Go to step 2.
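
Roger's loop is essentially a (1+1) hill climber. Here is a minimal Java sketch of the shape of that loop (the type and method names are hypothetical stand-ins, not Roger's actual C# code):

    // One parent, one mutated clone per generation; keep the fitter of the two.
    MonaLisaDrawing parent = MonaLisaDrawing.random();
    double parentError = error(parent);          // lower error = fitter
    while (parentError > threshold) {            // 'threshold' is a made-up stopping condition
        MonaLisaDrawing child = parent.copy();   // step 2: clone the fittest
        child.mutate();                          // step 3: mutate the clone
        double childError = error(child);
        if (childError <= parentError) {         // replace the least-fit individual
            parent = child;
            parentError = childError;
        }
    }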

Watchmaker is a Java Evolutionary Algorithm framework. Its author reimplemented the Mona Lisa problem using a genuine Genetic Algorithm: http://watchmaker.uncommons.org/examples/monalisa.php

It starts out well enough, but within 30 minutes the Watchmaker implementation stagnates with a poor approximation whereas Roger's implementation looks close to complete. I tried playing around with the settings but it didn't help much. Why is Watchmaker's implementation so much slower than Roger's and why does it stagnate?

Links:

  • Roger's source-code
  • Watchmaker's source-code
Asked Sep 26 '11 by Gili

1 Answer

I've studied this problem over the past month and made some interesting discoveries:

  1. Using opaque polygons improves rendering performance (and by extension the performance of the fitness function) by an order of magnitude.
  2. In all things, favor many small changes over drastic large ones.
  3. When adding a new polygon, give it a size of 1 pixel instead of assigning it vertices with random coordinates. This improves its chances of survival.
  4. When adding a new vertex, instead of dropping it into a random position, give it the same position as an existing vertex in the polygon. It won't modify the polygon in any noticeable way, but it will open the door for the "move vertex" mutation to move more vertices than it could before.
  5. When moving vertices, favor many small moves (3-10 pixels at a time) over moves that try to span the entire canvas. (These three mutations are sketched in code after this list.)
  6. If you're going to damp mutations over time, damp the amount of change as opposed to the probability of change.
  7. The effects of damping are minimal. It turns out that if you've followed the above steps (favoring small changes), there should be no real need to damp.
  8. Don't use crossover. It amounts to a very high mutation rate, which is great early on, but high mutation quickly becomes a liability: an image that is mostly converged will reject all but small mutations.
  9. Don't scale the image in the fitness evaluator. This one took me a while to figure out. If your input image is 200x200 but the fitness evaluator scales it down to 100x100 before generating a fitness score, candidate solutions will contain streaks/lines of errors that are invisible to the fitness function but clearly wrong to the end-user. The fitness function should process the entire image or not at all. A better solution is to scale the target image across the board, so your fitness function processes 100% of the pixels. If 100x100 is too small to display on the screen, simply up-scale the image for display. Now the user can see the image clearly and the fitness function isn't missing a thing.
  10. Prevent the creation of self-intersecting polygons. They never yield good results, so we can substantially speed up the algorithm by preventing mutations from creating them. Implementing the check for self-intersecting polygons was a pain in the ass, but it was worth the trouble in the end (one way to do it is sketched after this list).
  11. I've modified the fitness score to penalize the number of polygons, which pushes the algorithm to remove hidden polygons (purely for performance reasons):

    fitness += candidate.size(); // one point of error per polygon, hidden or not

    This means that the fitness score will never hit zero. (The fitness sketch after this list incorporates this penalty.)

  12. I've increased the maximum number of polygons from 50 to 65535.
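
To make points 3-5 concrete, here is a minimal Java sketch of the three mutations. ColouredPolygon is a hypothetical stand-in (a polygon with a mutable vertex list), not Watchmaker's actual class:

    import java.awt.Point;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;

    class ColouredPolygon {
        List<Point> vertices = new ArrayList<>(); // colour fields omitted
    }

    // Point 3: add a new polygon as a degenerate 1-pixel triangle so it
    // barely disturbs the image; later mutations can grow it if it helps.
    void addPolygon(List<ColouredPolygon> candidate, Random rnd, int width, int height) {
        Point seed = new Point(rnd.nextInt(width), rnd.nextInt(height));
        ColouredPolygon p = new ColouredPolygon();
        for (int i = 0; i < 3; i++) {
            p.vertices.add(new Point(seed)); // all vertices start at the same pixel
        }
        candidate.add(p);
    }

    // Point 4: add a vertex by duplicating an existing one. The rendered shape
    // is unchanged, but "move vertex" now has an extra point to work with.
    void addVertex(ColouredPolygon p, Random rnd) {
        int i = rnd.nextInt(p.vertices.size());
        p.vertices.add(i, new Point(p.vertices.get(i)));
    }

    // Point 5: move a vertex by a small amount rather than anywhere on the canvas.
    void moveVertex(ColouredPolygon p, Random rnd) {
        Point v = p.vertices.get(rnd.nextInt(p.vertices.size()));
        v.translate(smallDelta(rnd), smallDelta(rnd));
    }

    int smallDelta(Random rnd) {
        int magnitude = 3 + rnd.nextInt(8); // 3..10 pixels
        return rnd.nextBoolean() ? magnitude : -magnitude;
    }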
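
For point 10, one brute-force way to detect a self-intersecting polygon is to test every pair of non-adjacent edges with java.awt.geom.Line2D (a sketch; good enough here because these polygons have few vertices):

    import java.awt.Point;
    import java.awt.geom.Line2D;
    import java.util.List;

    // Adjacent edges share an endpoint and would trivially "intersect",
    // so they are skipped. O(n^2) in the number of vertices.
    boolean isSelfIntersecting(List<Point> vertices) {
        int n = vertices.size();
        for (int i = 0; i < n; i++) {
            for (int j = i + 2; j < n; j++) {
                if (i == 0 && j == n - 1) {
                    continue; // the last edge wraps around to meet the first
                }
                Point a = vertices.get(i), b = vertices.get((i + 1) % n);
                Point c = vertices.get(j), d = vertices.get((j + 1) % n);
                if (Line2D.linesIntersect(a.x, a.y, b.x, b.y, c.x, c.y, d.x, d.y)) {
                    return true;
                }
            }
        }
        return false;
    }

When a mutation would produce a self-intersecting polygon, simply reject it and keep the parent's geometry.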
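
And for points 9 and 11 together, a sketch of the kind of fitness evaluator I'm describing: it compares every pixel of the full-resolution images and adds one point of error per polygon (the method and type names are illustrative, not Watchmaker's API):

    import java.awt.image.BufferedImage;
    import java.util.List;

    // Lower is fitter. Because of the polygon penalty, the score never reaches zero.
    double evaluate(BufferedImage target, BufferedImage rendered, List<ColouredPolygon> candidate) {
        long error = 0;
        for (int y = 0; y < target.getHeight(); y++) {
            for (int x = 0; x < target.getWidth(); x++) {
                int t = target.getRGB(x, y);
                int r = rendered.getRGB(x, y);
                error += Math.abs(((t >>> 16) & 0xFF) - ((r >>> 16) & 0xFF)); // red
                error += Math.abs(((t >>> 8) & 0xFF) - ((r >>> 8) & 0xFF));   // green
                error += Math.abs((t & 0xFF) - (r & 0xFF));                   // blue
            }
        }
        return error + candidate.size(); // the hidden-polygon penalty from point 11
    }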

When I first tried running Watchmaker's Mona Lisa example, it ran for days without looking anything like the target image. Roger's algorithm was better but still stagnated after an hour. Using the new algorithm, I managed to recreate the target image in less than 15 minutes. The fitness score reads 150,000, but to the naked eye the candidate looks almost identical to the original.

I put together a diagnostics display that shows me the entire population evolving over time. It also tells me how many unique candidates are active in the population at any given time. A low number indicates a lack of variance: either the selection pressure is too high or the mutation rate is too low. In my experience, a decent population contains at least 50% unique candidates.
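
The unique-candidate check is cheap to compute. A sketch, assuming your candidate type implements equals() and hashCode() on its polygon data:

    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    // Fraction of distinct candidates in the population; below ~0.5,
    // raise the mutation rate or lower the selection pressure.
    <T> double uniqueFraction(List<T> population) {
        Set<T> unique = new HashSet<>(population);
        return unique.size() / (double) population.size();
    }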

I used this diagnostic display to tune the algorithm. Whenever the number of unique candidates was too low, I'd increase the mutation rate. Whenever the algorithm stagnated too quickly, I'd examine what was going on in the population. Very frequently I'd notice that the mutation amount was too high (colors or vertices moving too quickly).

I'm glad I spent the past month studying this problem. It's taught me a lot about the nature of GAs: success has a lot more to do with design than with code optimization. I've also discovered that it's extremely important to watch the entire population evolve in real time as opposed to only studying the fittest candidate. This lets you discover fairly quickly which mutations are effective and whether your mutation rate is too low or too high.

I learned yet another important lesson about why it's extremely important to provide the fitness evaluator with the exact same image that is shown to the user.

If you recall, the original problem I reported was that the candidate image was being scaled down before evaluation, thereby allowing many pixels to avoid detection/correction. Yesterday I enabled anti-aliasing for the image shown to the user. I figured that so long as the evaluator was seeing 100% of the pixels (no scaling going on) I should be safe, but it turns out this isn't enough.

Take a look at the following images:

[Image: candidate rendered with anti-aliasing enabled]

[Image: candidate rendered with anti-aliasing disabled]

They show the exact same candidate with anti-aliasing enabled and disabled. Notice how the anti-aliased version has "streaks" of errors across the face, similar to the problem I was seeing when the candidate was being scaled. It turns out that candidates sometimes contain polygons that introduce errors into the image in the form of "streaks" (polygons rendered with sub-pixel precision). The interesting thing is that disabling anti-aliasing suppresses these errors, so the evaluator function never sees them. Consequently, the user sees a whole bunch of errors which the fitness function will never fix. Sound familiar?

In conclusion: you should always (always!) pass the fitness function the exact same image you display to the user. Better safe than sorry :)
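
In practice that means rendering through a single code path with fixed rendering hints, so the preview and the evaluator can never diverge. A sketch using standard Java2D (Candidate.paint is a hypothetical method that draws the candidate's polygons):

    import java.awt.Graphics2D;
    import java.awt.RenderingHints;
    import java.awt.image.BufferedImage;

    // Used for BOTH the on-screen preview and the fitness evaluator.
    BufferedImage render(Candidate candidate, int width, int height) {
        BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = image.createGraphics();
        g.setRenderingHint(RenderingHints.KEY_ANTIALIASING,
                           RenderingHints.VALUE_ANTIALIAS_ON); // the same hint everywhere
        candidate.paint(g);
        g.dispose();
        return image;
    }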

Genetic Algorithms are a lot of fun. I encourage you to play with them yourself.

Answered Nov 01 '22 by Gili