I am implementing minimax in Python 2.7.11 in a basic game of Pacman. Pacman is the maximizing agent, and one or more ghosts (depending on the test layout) is/are the minimizing agent(s).
I must implement minimax so that there can potentially be more than one minimizing agent, and so that it can create a tree of n plies (depth). Ply 1, for example, consists of each ghost taking a turn minimizing the terminal-state utilities of its possible moves, and Pacman taking his turn maximizing what the ghosts have already minimized. Graphically, ply 1 looks like this:
If we had the following arbitrary utilities assigned to the green terminal states (left to right):
-10, 5, 8, 4, -4, 20, -7, 17
Pacman should return -4 and then move in that direction, creating an entirely new minimax tree based on that decision.
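To make the min/max alternation concrete in code, here is a sketch of one ply with a single ghost, on made-up utilities of my own (not the numbers from the figure above): for each Pacman move the ghost minimizes, and Pacman then maximizes over the ghost's minimized values.

```python
# One ply with a single ghost, on made-up terminal utilities:
# each Pacman move leads to a list of terminal values, one per ghost reply.
utilities = {'Left': [4, -2], 'Right': [1, 3]}

# The ghost minimizes within each Pacman branch...
ghost_min = {move: min(vals) for move, vals in utilities.items()}
# ghost_min == {'Left': -2, 'Right': 1}

# ...and Pacman maximizes over the ghost's minimized values.
best_move = max(ghost_min, key=ghost_min.get)
print(best_move)  # 'Right'
```

The same fold generalizes to more ghosts by nesting additional min layers between the leaves and Pacman's max.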
First, a list of variables and functions needed for my implementation to make sense:
# Stores everything about the current state of the game
gameState
# A globally defined depth that varies depending on the test cases.
# It could be as little as 1 or arbitrarily large
self.depth
# A locally defined depth that keeps track of how many plies deep I've gone in the tree
self.myDepth
# A function that assigns a numeric utility value to the current state
# How it is calculated is not relevant here
self.evaluationFunction(gameState)
# Returns a list of legal actions for an agent
# agentIndex = 0 means Pacman, ghosts are >= 1
gameState.getLegalActions(agentIndex)
# Returns the successor game state after an agent takes an action
gameState.generateSuccessor(agentIndex, action)
# Returns the total number of agents in the game
gameState.getNumAgents()
# Returns whether or not the game state is a winning (terminal) state
gameState.isWin()
# Returns whether or not the game state is a losing (terminal) state
gameState.isLose()
This is my implementation:
"""
getAction takes a gameState and returns the optimal move for pacman,
assuming that the ghosts are optimal at minimizing his possibilities
"""
def getAction(self, gameState):
    self.myDepth = 0

    def miniMax(gameState):
        if gameState.isWin() or gameState.isLose() or self.myDepth == self.depth:
            return self.evaluationFunction(gameState)
        numAgents = gameState.getNumAgents()
        for i in range(0, numAgents, 1):
            legalMoves = gameState.getLegalActions(i)
            successors = [gameState.generateSuccessor(j, legalMoves[j]) for j, move
                          in enumerate(legalMoves)]
            for successor in successors:
                if i == 0:
                    return maxValue(successor, i)
                else:
                    return minValue(successor, i)

    def minValue(gameState, agentIndex):
        minUtility = float('inf')
        legalMoves = gameState.getLegalActions(agentIndex)
        succesors = [gameState.generateSuccessor(i, legalMoves[i]) for i, move
                     in enumerate(legalMoves)]
        for successor in successors:
            minUtility = min(minUtility, miniMax(successor))
        return minUtility

    def maxValue(gameState, agentIndex)
        self.myDepth += 1
        maxUtility = float('-inf')
        legalMoves = gameState.getLegalActions(agentIndex)
        successors = [gameState.generateSuccessor(i, legalMoves[i]) for i, move
                      in enumerate(legalMoves)]
        for successor in successors:
            maxUtility = max(maxUtility, miniMax(successor))
        return maxUtility

    return miniMax(gameState)
Does anyone have any ideas why my code behaves this way? I'm hoping there are a few minimax/artificial-intelligence experts out there who can identify my issues. Thanks in advance.
UPDATE: by initializing my self.myDepth value as 0 instead of 1, I have eradicated the exception-throwing issue. However, the overall incorrectness of my implementation still remains.
I have finally figured out a solution to my issue. The main problem was that I was not referencing the depth correctly to keep track of the plies. Instead of incrementing depth within the maxValue method, it should be passed as a parameter to each function and incremented only when being passed into maxValue. There were several other logical errors, such as not correctly referencing numAgents, and the fact that my miniMax method was not returning an action. Here is my solution, which turned out to work:
def getAction(self, gameState):
    self.numAgents = gameState.getNumAgents()
    self.myDepth = 0
    self.action = Directions.STOP # Imported from a class that defines 5 directions

    def miniMax(gameState, index, depth, action):
        maxU = float('-inf')
        legalMoves = gameState.getLegalActions(index)
        for move in legalMoves:
            tempU = maxU
            successor = gameState.generateSuccessor(index, move)
            maxU = minValue(successor, index + 1, depth)
            if maxU > tempU:
                action = move
        return action

    def maxValue(gameState, index, depth):
        if gameState.isWin() or gameState.isLose() or depth == self.depth:
            return self.evaluationFunction(gameState)
        index %= (self.numAgents - 1)
        maxU = float('-inf')
        legalMoves = gameState.getLegalActions(index)
        for move in legalMoves:
            successor = gameState.generateSuccessor(index, move)
            maxU = max(maxU, minValue(successor, index + 1, depth))
        return maxU

    def minValue(gameState, index, depth):
        if gameState.isWin() or gameState.isLose() or depth == self.depth:
            return self.evaluationFunction(gameState)
        minU = float('inf')
        legalMoves = gameState.getLegalActions(index)
        if index + 1 == self.numAgents:
            for move in legalMoves:
                successor = gameState.generateSuccessor(index, move)
                # Where depth is increased
                minU = min(minU, maxValue(successor, index, depth + 1))
        else:
            for move in legalMoves:
                successor = gameState.generateSuccessor(index, move)
                minU = min(minU, minValue(successor, index + 1, depth))
        return minU

    return miniMax(gameState, self.index, self.myDepth, self.action)
And presto! Our final working multi-agent minimax implementation.
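To sanity-check the same max-over-min recursion outside the Pacman framework, here is a toy harness of my own (the TreeState class and the hand-built tree are made up for illustration, not part of the assignment code) with two agents and a depth of 1:

```python
INF = float('inf')

class TreeState(object):
    """Hypothetical stand-in for gameState: a dict-of-dicts game tree
    whose leaves are numeric terminal utilities."""
    def __init__(self, node):
        self.node = node
    def isWin(self):
        return not isinstance(self.node, dict)  # leaves are terminal
    def isLose(self):
        return False
    def getNumAgents(self):
        return 2  # Pacman + one ghost
    def getLegalActions(self, index):
        return sorted(self.node.keys())
    def generateSuccessor(self, index, action):
        return TreeState(self.node[action])
    def evaluationFunction(self):
        return self.node  # utility of a terminal leaf

def minimax_action(state, max_depth):
    def maxValue(state, depth):
        if state.isWin() or state.isLose() or depth == max_depth:
            return state.evaluationFunction()
        return max(minValue(state.generateSuccessor(0, m), depth)
                   for m in state.getLegalActions(0))
    def minValue(state, depth):
        if state.isWin() or state.isLose() or depth == max_depth:
            return state.evaluationFunction()
        # Last ghost: the next turn is Pacman's, so the ply count increases
        return min(maxValue(state.generateSuccessor(1, m), depth + 1)
                   for m in state.getLegalActions(1))
    best, bestU = None, -INF
    for m in state.getLegalActions(0):
        u = minValue(state.generateSuccessor(0, m), 0)
        if u > bestU:
            best, bestU = m, u
    return best

tree = {'A': {'x': 3, 'y': 12},   # ghost minimizes branch A down to 3
        'B': {'x': 8, 'y': 2}}    # ghost minimizes branch B down to 2
print(minimax_action(TreeState(tree), 1))  # 'A', since 3 > 2
```

This mirrors the structure of the solution above: the top-level loop tracks the best action, minValue hands the turn back to maxValue (incrementing depth) when the last ghost has moved, and terminal or depth-limited states are scored by the evaluation function.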