I'm working on a robotics problem. The situation is something like this:
Simply put, I can find the position of each robot after a time step del_t, and I need to display the position of every robot after each del_t. The problem is actually very simple; the algorithm is something like:
del_t = ...            # given
initialPositions = ... # given
num_robots = ...       # given
The following code executes for every del_t
robots = range(num_robots)
for R in robots:
    for r in robots:
        if distanceBetween(r, R) <= radius and r != R:
            acceleration_along_X[R] += xAcceleration( position(r), position(R) )
            acceleration_along_Y[R] += yAcceleration( position(r), position(R) )
    currVelocity_along_X[R] = prevVelocity_along_X[R] + acceleration_along_X[R] * del_t
    currVelocity_along_Y[R] = prevVelocity_along_Y[R] + acceleration_along_Y[R] * del_t
    curr_X_coordinate[R] = prev_X_coordinate[R] + currVelocity_along_X[R] * del_t
    curr_Y_coordinate[R] = prev_Y_coordinate[R] + currVelocity_along_Y[R] * del_t
    print('Position of robot ' + str(R) + ' is (' + str(curr_X_coordinate[R]) + ', ' + str(curr_Y_coordinate[R]) + ')')
    prev_X_coordinate[R] = curr_X_coordinate[R]
    prev_Y_coordinate[R] = curr_Y_coordinate[R]
    prevVelocity_along_X[R] = currVelocity_along_X[R]
    prevVelocity_along_Y[R] = currVelocity_along_Y[R]
Now I need to parallelize the algorithm and set up a Cartesian grid of MPI processes.
Now the problem has turned interesting; it actually isn't as simple as I thought. After reading Unode's answer, I went on to apply his method 2 and parallelize using multiprocessing.
This is the code. printPositionOfRobot is my serial algorithm. Basically, it is supposed to print the position of the robot (with id robot_id) at t = 1, 2, 3, 4, 5, 6, 7, 8, 9, 10. (Here del_t is taken as 1 and num_iterations = 10.) Each robot prints a message like this: Robot8 : Position at t = 9 is (21.1051065245, -53.8757356694 )
There is a bug in this code: the t=0 locations of the bots, as returned by position(), are used for determining xAcceleration and yAcceleration at every step. Instead, we need to use the positions of all the other particles from the previous iteration.
from multiprocessing import Pool
import math

def printPositionOfRobot(robot_id):
    radius = 3
    del_t = 1
    num_iterations = 10
    no_robots = 10
    prevVelocity_along_X = 0
    prevVelocity_along_Y = 0
    acceleration_along_X = 0
    acceleration_along_Y = 0
    (prev_X_coordinate, prev_Y_coordinate) = position(robot_id)  #!! it should call initialPosition()
    for i in range(1, num_iterations + 1):
        for r in range(no_robots):
            if distanceBetween(r, robot_id) <= radius and r != robot_id:
                acceleration_along_X += xAcceleration( position(r), position(robot_id) )  #!! Problem !!
                acceleration_along_Y += yAcceleration( position(r), position(robot_id) )  #!! Problem !!
        currVelocity_along_X = prevVelocity_along_X + acceleration_along_X * del_t
        currVelocity_along_Y = prevVelocity_along_Y + acceleration_along_Y * del_t
        curr_X_coordinate = prev_X_coordinate + currVelocity_along_X * del_t
        curr_Y_coordinate = prev_Y_coordinate + currVelocity_along_Y * del_t
        print('Robot' + str(robot_id) + ' : Position at t = ' + str(i * del_t) + ' is (' + str(curr_X_coordinate) + ', ' + str(curr_Y_coordinate) + ')')
        prev_X_coordinate = curr_X_coordinate
        prev_Y_coordinate = curr_Y_coordinate
        prevVelocity_along_X = currVelocity_along_X
        prevVelocity_along_Y = currVelocity_along_Y
def xAcceleration(p1, p2):
    (x1, y1), (x2, y2) = p1, p2
    s = distance((x1, y1), (x2, y2))
    return 12 * (x2 - x1) * ( pow(s, -15) - pow(s, -7) + 0.00548 * s )

def yAcceleration(p1, p2):
    (x1, y1), (x2, y2) = p1, p2
    s = distance((x1, y1), (x2, y2))
    return 12 * (y2 - y1) * ( pow(s, -15) - pow(s, -7) + 0.00548 * s )

def distanceBetween(r, robot_id):
    return distance(position(r), position(robot_id))

def distance(p1, p2):
    (x1, y1), (x2, y2) = p1, p2
    return math.sqrt( (x2 - x1)**2 + (y2 - y1)**2 )
def position(r):  #!! name of this function should be initialPosition
    k = [(-8.750000,6.495191) , (-7.500000,8.660254) , (-10.000000,0.000000) , (-8.750000,2.165064) , (-7.500000,4.330127) , (-6.250000,6.495191) , (-5.000000,8.660254) , (-10.000000,-4.330127) , (-8.750000,-2.165064) , (-7.500000,0.000000) ]
    return k[r]
if __name__ == "__main__":
    no_robots = 10  # Number of robots you need
    p = Pool(no_robots)  # Spawn a pool of processes (one per robot in this case)
    p.map(printPositionOfRobot, range(no_robots))
The position function in the acceleration_along_X and acceleration_along_Y lines should return the latest position of the robot; by latest I mean the position at the end of the previous iteration. So each process must inform the other processes about its latest position, and until the latest position of a bot is known, the process must wait.
The other way would be for all processes to edit a global set of locations. (I wonder if that is even possible, because each process has its own virtual address space.) If a process has not yet reached a given iteration, all the other processes must wait for it.
Any ideas about how to go about it? I guess this is why MPI was suggested in the problem.
Note: Python's threads all run inside one interpreter process (because of the GIL), so they won't use more than one processor. If you want to use the full range of processors of your machine you should use multiprocessing (Python 2.6+).
Using MPI will only bring you clear benefits if the computation is going to be spread over multiple computers.
There are two approaches to your problem. Since you have completely independent processes, you could simply launch the algorithm (passing a unique identifier for each robot) as many times as needed and let the operating system handle the concurrency.
1 - A short Linux shell script (or something equivalent in Windows BATCH language):
#!/bin/bash
for i in {0..99}; do
    echo "Running $i"
    python launch.py $i &
done
Note: the & after launch.py sends each process to the background, so all of them are launched concurrently rather than waiting for one to finish before starting the next. (The {0..99} brace expansion also requires bash, hence the #!/bin/bash shebang.)
2 - If instead you want to do it all in python, you can use the following simple parallelization approach:
from multiprocessing import Pool

def your_algorithm(robot_id):
    print(robot_id)

if __name__ == "__main__":
    robots = 100  # Number of robots you need
    p = Pool(robots)  # Spawn a pool of processes (one per robot in this case)
    p.map(your_algorithm, range(robots))
The map function takes care of dispatching one independent operation per robot.
If you do require the use of MPI, I suggest mpi4py.
As for information on what Cartesian grid stands for, try this
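If you go the MPI route, the per-step position exchange maps naturally onto allgather, and a Cartesian grid of processes is created with Create_cart. A minimal mpi4py sketch, assuming mpi4py is installed, one rank per robot, and a launch like mpiexec -n 10 python robots.py; the actual update logic is elided:

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()          # one MPI rank per robot
size = comm.Get_size()

# Arrange the ranks in a 2D grid: this is what "Cartesian grid of
# MPI processes" refers to.
dims = MPI.Compute_dims(size, 2)
cart = comm.Create_cart(dims, periods=[False, False])

x, y = float(rank), 0.0         # toy initial position for this rank
for step in range(10):
    # Every rank contributes its current position and receives everyone
    # else's. This collective is the synchronisation point that replaces
    # the "wait until the latest position is known" logic: no rank can
    # proceed with stale data.
    positions = comm.allgather((x, y))
    # ... compute accelerations from `positions`, then update x and y ...
print('Robot%d final position: (%s, %s)' % (rank, x, y))
```

Because allgather is collective, every rank blocks until all ranks of the same step have contributed, which gives you the lockstep behaviour you described without any explicit waiting code.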