I have made a puzzle where the player slides blocks around to goals - the rules are fairly simple:
I think that's the rules covered. Here are some screenshots:
Here the player must slide the blocks so that they hit each other in order to solve the puzzle.
The puzzle in its near-solved state. Note how the block has hit another block and stopped.
Here is another puzzle, which includes the push-block mechanic:
If we slide the top right block downward, the following happens:
As you can see, the block was pushed to the left when it hit the arrow block, and it stopped on top of the wood block.
I would like to write an AI that solves these puzzles. I was thinking it would be some kind of depth-first search, but I don't have much idea of where to start. Any pointers on making this happen would be greatly appreciated!
Your problem is a classic instance of a state-space search problem. There are different algorithms that can be used, based on the characteristics of your particular instance.
Regardless of your particular instance, you need to define four components:

- the initial state (the starting configuration of the blocks);
- the actions available in each state (which blocks can slide, and in which directions);
- the goal test (are all the blocks on their goals?);
- the path cost (here, every move costs the same).
Since your problem can be defined in terms of these four components, it can be solved with one of the uninformed search algorithms, such as breadth-first search (BFS), depth-first search (DFS), or iterative deepening depth-first search (IDDFS).
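As a concrete starting point, here is a hypothetical sketch of such a problem definition in Python. The class and method names are my own invention, and the actual sliding logic depends on your grid representation, so it is left unimplemented:

```python
from typing import Hashable, Iterable, Tuple


class SlidePuzzle:
    """Sketch of a state-space problem definition for the block puzzle.

    States should be immutable and hashable (e.g. a frozenset of
    (block_id, row, col) tuples) so visited states can be kept in a set.
    """

    def __init__(self, initial: Hashable, goals: frozenset):
        self.initial = initial  # starting block configuration
        self.goals = goals      # states that count as solved

    def actions(self, state) -> Iterable[Tuple]:
        """Yield every (block, direction) pair that can slide in `state`."""
        raise NotImplementedError  # depends on your grid representation

    def result(self, state, action) -> Hashable:
        """Return the state after the chosen block slides until it hits
        a wall, another block, or an arrow block."""
        raise NotImplementedError

    def is_goal(self, state) -> bool:
        return state in self.goals
```

Making states hashable matters: every search algorithm below needs to remember which states it has already seen, or it will revisit the same configurations endlessly.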
To evaluate which algorithm is the most appropriate, we can consider four factors: completeness (is it guaranteed to find a solution when one exists?), optimality (does it find a best solution?), time complexity, and space complexity.
If we consider the BFS strategy, we can see that it is complete, since it systematically explores the state space level by level. It is optimal only if the cost function does not decrease as the depth of a state increases (which is our case, since all the actions have a constant cost). Now comes the bad news: assume that the expansion of each node can give at most b states (the branching factor), and that the first solution is at depth d; then you will need to store and expand at most O(b^d) states.
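For reference, a minimal BFS over a problem of this kind might look like the following. It assumes a `problem` object exposing `initial`, `actions(s)`, `result(s, a)` and `is_goal(s)` (my own naming, not anything from the question):

```python
from collections import deque


def bfs(problem):
    """Breadth-first search; returns a list of actions to a goal, or None."""
    if problem.is_goal(problem.initial):
        return []
    frontier = deque([(problem.initial, [])])  # (state, path so far)
    explored = {problem.initial}               # states must be hashable
    while frontier:
        state, path = frontier.popleft()       # FIFO => level by level
        for action in problem.actions(state):
            child = problem.result(state, action)
            if child in explored:
                continue                       # skip repeated states
            if problem.is_goal(child):
                return path + [action]
            explored.add(child)
            frontier.append((child, path + [action]))
    return None                                # exhausted: no solution
```

The `explored` set keeps the frontier from blowing up on cycles, but the memory problem described above remains: in the worst case both `frontier` and `explored` grow as O(b^d).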
In the case of DFS, we must consider that it can get stuck searching down one path for a long time (potentially forever) when a different choice could lead to a solution nearby. So when the state space is infinite, this algorithm is neither complete nor optimal. If we call m the maximum depth of the state space, we get a space complexity of at most O(bm), whereas the time complexity remains exponential: O(b^m).
So, what can we do? We can mix the two strategies and use iterative deepening depth-first search (IDDFS). In this search we run a DFS repeatedly, limiting the maximum depth, starting from 0 and increasing the limit by one each iteration. This method has the advantages of both strategies: O(bd) space complexity, where d is the depth of the first (shallowest) optimal solution; O(b^d) time (we can't do better with an uninformed search); it is complete (it will find a solution, because it explores all states level by level); and it is optimal (assuming the cost function is nondecreasing with path length), for the same reasons BFS is optimal.
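A sketch of IDDFS, again against the same hypothetical `problem` interface (`initial`, `actions`, `result`, `is_goal`):

```python
def depth_limited(problem, state, limit, path):
    """DFS that refuses to expand nodes deeper than `limit`."""
    if problem.is_goal(state):
        return path
    if limit == 0:
        return None  # depth cutoff reached
    for action in problem.actions(state):
        found = depth_limited(problem, problem.result(state, action),
                              limit - 1, path + [action])
        if found is not None:
            return found
    return None


def iddfs(problem, max_depth=50):
    """Iterative deepening: run depth-limited DFS with limits 0, 1, 2, ..."""
    for limit in range(max_depth + 1):
        found = depth_limited(problem, problem.initial, limit, [])
        if found is not None:
            return found
    return None  # no solution within max_depth moves
```

Re-expanding the shallow levels on every iteration sounds wasteful, but since the deepest level dominates the exponential cost, the overhead is only a constant factor. In practice you may also want to track the states along the current path and skip any action that revisits one, to avoid trivial back-and-forth cycles within a single depth-limited pass.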
Reference: Russell & Norvig, Artificial Intelligence: A Modern Approach.
Note: there obviously exist other uninformed strategies, but as stated in the book, IDDFS is a good default for uninformed search problems where you have no additional information about the search space. Refer to the book for other types of search strategies, for example informed search, where we have an estimate of how far a goal state is from the current state.