I am a little confused by the Hill Climbing algorithm. I want to "run" the algorithm until I find the first solution in this tree ("a" is the initial state, and "h" and "k" are final states); the numbers next to the states are the heuristic values. Here's the tree:
My question: when I run hill climbing on the tree, we start a -> f -> g, and then what? Do we just finish without a result? I have read that hill climbing can't go back and make a new choice (for example j or e). Is this right? If it can go back, then how? That is, at what point do we change our earlier choice, e.g. choose e instead of g, or j instead of f?
Sorry if my question is too simple.
In the Hill Climbing technique, starting at the base of a hill, we walk upwards until we reach the top of the hill. In other words, we start with an initial state and keep improving the solution until it is optimal.
Simple Hill Climbing: it examines the neighboring nodes one by one and selects the first neighboring node that improves the current cost as the next node. Algorithm for Simple Hill Climbing: evaluate the initial state; if it is a goal state, stop and return success. Otherwise, keep moving to the first neighbor that is better than the current state, and stop when a goal state is reached or no neighbor is better (a local optimum).
Steepest-Ascent Hill Climbing: the steepest-ascent algorithm is a variation of the simple hill-climbing algorithm. It examines all the neighbouring nodes of the current state and selects the single neighbour that is closest to the goal state, i.e. the best one, rather than the first improving one.
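For concreteness, here is a rough Python sketch of the two variants. The `neighbours(state)` function and the heuristic `h` (lower = better) are my own assumptions, not something given in the question:

```python
# Sketch of the two hill-climbing variants described above.
# `neighbours(state)` returns the successor states; `h(state)` is the heuristic, lower = better.

def simple_hill_climbing(start, neighbours, h):
    """Move to the FIRST neighbour that improves on the current state."""
    current = start
    while True:
        for n in neighbours(current):
            if h(n) < h(current):      # first improvement found -> move there
                current = n
                break
        else:                          # no neighbour is better -> stop (possibly a local optimum)
            return current


def steepest_ascent_hill_climbing(start, neighbours, h):
    """Move to the BEST neighbour; stop when no neighbour improves."""
    current = start
    while True:
        candidates = list(neighbours(current))
        if not candidates:
            return current
        best = min(candidates, key=h)
        if h(best) >= h(current):      # even the best neighbour is no better -> stop
            return current
        current = best
```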
Hill climbing is neither complete nor optimal. It has a time complexity of O(∞) (it may wander indefinitely) but a space complexity of only O(b). No special data structure is needed, since hill climbing discards old nodes. Because of this "amnesia", hill climbing is a suboptimal search strategy and is not complete.
A common way to avoid getting stuck in local maxima with hill climbing is to use random restarts. In your example, if G is a local maximum, the algorithm would stop there and then pick another random node to restart from. So if J or C were picked (or possibly A, B, or D), you would find the global maximum at H or K. Rinse and repeat enough times and you'll find the global maximum or something close to it, depending on your time/resource limits and the problem space.
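A hedged sketch of the idea in Python; the graph and heuristic values below are made up for illustration and are not the exact tree from the question:

```python
import random

# Illustrative graph and heuristic values (invented, not the poster's exact tree).
# Lower h means closer to a goal; h == 0 marks a goal state.
graph = {'a': ['b', 'f'], 'b': ['c', 'd'], 'f': ['e', 'g', 'j'],
         'c': [], 'd': [], 'e': [], 'g': [], 'j': ['h', 'k'], 'h': [], 'k': []}
h = {'a': 10, 'b': 8, 'f': 7, 'c': 6, 'd': 9, 'e': 5, 'g': 3, 'j': 4, 'h': 0, 'k': 0}

def hill_climb(start):
    current = start
    while True:
        better = [n for n in graph[current] if h[n] < h[current]]
        if not better:
            return current                    # stuck: local (or global) optimum
        current = min(better, key=h.get)      # steepest descent on h

def random_restart(restarts=10):
    best = None
    for _ in range(restarts):
        result = hill_climb(random.choice(list(graph)))   # restart from a random state
        if best is None or h[result] < h[best]:
            best = result
    return best

print(random_restart())   # with enough restarts this almost always reaches h or k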
Note that local search methods like hill climbing aren't complete and can't guarantee finding the global maximum. The benefit, of course, is that they require a fraction of the resources. In practice, applied to the right problems, this is a very effective solution.
You could try a technique called simulated annealing to prevent your search from getting stuck in local minima. Essentially, in simulated annealing there is a parameter T that controls your likelihood of moving to a sub-optimal neighboring state. If T is high, you are more likely to make a sub-optimal move to a neighboring state and can thereby escape a local minimum when stuck there, which you wouldn't with normal hill climbing. T is typically lowered as the search proceeds, so the behaviour gradually approaches ordinary hill climbing.
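For illustration, here is a minimal simulated-annealing sketch in Python; the toy objective function and the cooling schedule are my own assumptions:

```python
import math
import random

def cost(x):
    return x * x + 10 * math.sin(x)     # toy objective with several local minima

def simulated_annealing(x0, T=10.0, cooling=0.95, steps=1000):
    current = x0
    for _ in range(steps):
        candidate = current + random.uniform(-1, 1)       # random neighbouring state
        delta = cost(candidate) - cost(current)
        # Always accept improvements; accept worse moves with probability exp(-delta/T),
        # so a high T makes "bad" moves likely and lets the search escape local minima.
        if delta < 0 or random.random() < math.exp(-delta / T):
            current = candidate
        T *= cooling                                      # gradually lower the temperature
    return current

print(simulated_annealing(x0=5.0))   # often ends up near the global minimum (around x ≈ -1.3)
```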