
Solving a puzzle game using AI

I have made a puzzle where the player slides blocks around to goals - the rules are fairly simple:

  1. Only one slider block may move at a time.
  2. The object is to fill every goal node with a slider block - you do not need to get every slider block onto a goal node.
  3. If a slider block slides onto ice, it continues moving in that direction until something stops or redirects it.
  4. If a slider block hits something solid (concrete, another block), it stops and can be moved again (not into the concrete, obviously).
  5. If a slider block slides onto wood, it stops on the wood and can be moved again.
  6. If a slider block slides onto a goal node, it can no longer be moved. (Rules 3-6 are sketched in code after this list.)
  7. A slider block can also be moved by certain other blocks - for example, an arrow block pushes it in the direction the arrow points.
  8. There are switch blocks that toggle "doors": a block can be moved onto a switch to open, or to close, those doors.
  9. There are button blocks, which work like switches except that a block must stay on them to keep their "doors" activated.
  10. When doors are closed, they act like concrete; when doors are open, they act like ice.
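
For reference, rules 3-6 boil down to something like the following sketch (illustrative Python, not the game's actual code; the cell names, and the omission of doors and arrow blocks, are simplifications for this example):

    # Illustrative only: `grid` maps (x, y) -> cell type, `blocks` is the
    # set of positions currently occupied by slider blocks.
    ICE, WOOD, CONCRETE, GOAL = "ice", "wood", "concrete", "goal"

    def slide(grid, blocks, pos, direction):
        """Resolve one slide of the block at `pos` in `direction` (dx, dy).
        Returns the resting position, or None if the block cannot move."""
        dx, dy = direction
        x, y = pos
        while True:
            nx, ny = x + dx, y + dy
            # Rule 4: concrete (or off-grid) and other blocks stop the slider
            # before it enters the cell.
            if grid.get((nx, ny), CONCRETE) == CONCRETE or (nx, ny) in blocks:
                break
            x, y = nx, ny
            # Rules 5 and 6: wood and goal nodes stop the slider on the cell.
            if grid[(x, y)] in (WOOD, GOAL):
                break
            # Rule 3: the cell is ice, so keep sliding.
        return (x, y) if (x, y) != pos else None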

I think that's the rules covered. Here are some screenshots:

The beginning state of a puzzle

Here the player must move the blocks so that they hit each other in order to solve the puzzle.

The puzzle is three moves away from being solved

The puzzle in its near-solved state. Note how the block has hit another block and stopped.

Here is another puzzle which has the push block mechanic included:

The blocks may be moved around here

If we slide the top right block downward, the following happens:

The block has been propelled leftward, to the wood block

As you can see, the block was pushed to the left when it hit the arrow block, and it stopped on top of the wood block.

I would like to write an AI solution that solves these puzzles. I was thinking it would be some kind of depth-first search, but I don't have much idea of where to start. Any pointers on making this happen would be greatly appreciated!

asked Mar 20 '23 by Sam P

1 Answer

Your problem is a classic instance of a state-space search problem. Different algorithms can be used depending on the characteristics of your particular instance.

Regardless of the particular instance, you need to define four components (a rough interface sketch follows this list):

  1. An initial state; in your problem, the initial configuration of the puzzle.
  2. A function that returns all the possible states reachable from any given state; in your problem, these are all the configurations that can be obtained by moving a single slider. A state is said to be expanded when its reachable states have been visited.
  3. A goal test: a function that determines whether a given state is a goal state; in your problem, it would check whether all goal nodes are filled.
  4. A cost function that gives the cost of passing from one state to another, allowing the algorithm to choose the best action to perform. In your case you can use a constant cost of 1 for each action and search for the minimum number of actions needed to reach a goal state.
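
To make that concrete, here is a rough sketch of the four components as a Python interface (all names here are illustrative assumptions, not code from the question):

    from typing import Hashable, Iterator

    class PuzzleProblem:
        """Sketch of the four components. States should be hashable
        (e.g. a frozenset of block positions plus the door states) so
        they can be stored in a visited set."""

        def __init__(self, initial_state: Hashable):
            self.initial_state = initial_state      # component 1

        def successors(self, state) -> Iterator:
            """Component 2: yield each configuration reachable by sliding
            one movable block in one of the four directions."""
            raise NotImplementedError

        def is_goal(self, state) -> bool:
            """Component 3: true when every goal node holds a block."""
            raise NotImplementedError

        def cost(self, state, next_state) -> int:
            """Component 4: a constant cost per action."""
            return 1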

Since your problem can be defined in terms of these four components, it can be solved with one of the following algorithms:

  1. Breadth-first search (BFS): we start by testing whether the initial state is a goal state (obviously not, for non-trivial problems), then we expand the initial state, test each of its reachable states, expand each of those in turn, and so on, level by level. It can be implemented with a FIFO queue (see the sketch after this list).
  2. Depth-first search (DFS): starting from the initial state, test for the goal, expand a neighbouring state, test for the goal, expand that state, and so on, going down to the deepest levels of the state space. It can be implemented with a stack.
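
As a sketch, assuming the hypothetical PuzzleProblem interface above, BFS looks like this; swapping popleft() for pop() turns the FIFO queue into a LIFO stack and gives you DFS instead:

    from collections import deque

    def bfs(problem):
        frontier = deque([(problem.initial_state, [])])  # (state, path so far)
        visited = {problem.initial_state}
        while frontier:
            state, path = frontier.popleft()             # FIFO: level by level
            if problem.is_goal(state):
                return path + [state]                    # sequence of states
            for nxt in problem.successors(state):        # expand `state`
                if nxt not in visited:                   # skip seen configurations
                    visited.add(nxt)
                    frontier.append((nxt, path + [state]))
        return None                                      # state space exhausted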

To evaluate which algorithm is the most appropriate we can consider these factors:

  1. Completeness: is it guaranteed that the algorithm will find a solution when it exists?
  2. Optimality: can the algorithm find an optimal solution?
  3. Time complexity: how long will it take?
  4. Space complexity: how much memory does it need?

If we consider the BFS strategy, we can see that it is complete, since it systematically explores the state space level by level, and it is optimal as long as the cost function does not decrease as the depth of a state increases (which is our case, since all actions have a constant cost). Now comes the bad news: if expanding a node can produce at most b successor states (the branching factor), and the first solution is at depth d, then you will need to store and expand up to O(b^d) states. In your puzzle b can be as large as four directions times the number of movable sliders, so this grows very quickly with depth.

In the case of DFS, we must consider that it can get stuck searching down one path for a long time (potentially forever) when a different choice could have led to a solution nearby. So when the state space is infinite, this algorithm is neither complete nor optimal. If we call m the maximum depth of the state space, we get a space complexity of at most O(bm), whereas the time complexity remains exponential: O(b^m).

So, what can we do? We can mix the two strategies and use iterative deepening depth-first search (IDDFS). In this search we run a depth-limited DFS repeatedly, raising the maximum depth from 0 upward. This method has the advantages of both strategies: O(bd) space complexity, where d is the depth of the first optimal solution; O(b^d) time (we can't do better); it is complete (by iteratively exploring all states level by level, it will find a solution whenever one exists); and it is optimal (assuming the cost function is nondecreasing with path length) for the same reasons BFS is optimal. A sketch follows.
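
Here is a minimal sketch of IDDFS, again assuming the hypothetical PuzzleProblem interface above (note that for an unsolvable puzzle the depth limit needs a cap, hence the max_depth parameter):

    def depth_limited_dfs(problem, state, limit, path):
        """DFS that gives up below `limit`; returns a path of states or None."""
        if problem.is_goal(state):
            return path + [state]
        if limit == 0:
            return None
        for nxt in problem.successors(state):
            if nxt not in path:                  # avoid cycles along this path
                found = depth_limited_dfs(problem, nxt, limit - 1, path + [state])
                if found is not None:
                    return found
        return None

    def iddfs(problem, max_depth=100):
        # Re-run the DFS with limits 0, 1, 2, ...; the shallow levels get
        # re-explored each round, but they are cheap next to the deepest level.
        for limit in range(max_depth + 1):
            solution = depth_limited_dfs(problem, problem.initial_state, limit, [])
            if solution is not None:
                return solution
        return None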

Reference: Russell & Norvig, Artificial Intelligence: A Modern Approach

Note: there obviously exist other uninformed strategies, but as stated in the book, IDDFS is a good choice for uninformed search problems, where you have no additional information about the search space. Refer to the book for other types of search strategies, for example informed search, where you have an estimate of how far a goal state is from the current state.

answered May 01 '23 by Net_Raider