I'm looking to use Python's multiprocessing module to create one process that continually polls a webcam via OpenCV's Python interface, sending the resulting images to a queue from which other processes can access them. However, I'm encountering a hang (Python 2.7 on Ubuntu 12.04) whenever I try to do anything with the images that other processes retrieve from the queue. Here's a minimal example:
import multiprocessing
import cv

queue_from_cam = multiprocessing.Queue()

def cam_loop(queue_from_cam):
    print 'initializing cam'
    cam = cv.CaptureFromCAM(-1)
    print 'querying frame'
    img = cv.QueryFrame(cam)
    print 'queueing image'
    queue_from_cam.put(img)
    print 'cam_loop done'

cam_process = multiprocessing.Process(target=cam_loop, args=(queue_from_cam,))
cam_process.start()

while queue_from_cam.empty():
    pass

print 'getting image'
from_queue = queue_from_cam.get()
print 'saving image'
cv.SaveImage('temp.png', from_queue)
print 'image saved'
This code runs up to the 'saving image' printout but then hangs. Any ideas how I can go about fixing this?
multiprocessing is a package that supports spawning processes using an API similar to the threading module. The multiprocessing package offers both local and remote concurrency, effectively side-stepping the Global Interpreter Lock by using subprocesses instead of threads.
If your code is performing a CPU-bound task, such as decompressing gzip files, using the threading module will not speed it up and can even slow it down, because the Global Interpreter Lock lets only one thread execute Python bytecode at a time. For CPU-bound tasks and truly parallel execution, we can use the multiprocessing module.
Python's multiprocessing.Pool can be used for parallel execution of a function across multiple input values, distributing the input data across processes (data parallelism). Below is a simple multiprocessing.Pool example.
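A minimal sketch; the square worker and the pool size of four are illustrative choices, not from the original:

import multiprocessing

def square(x):
    return x * x

if __name__ == '__main__':
    pool = multiprocessing.Pool(processes=4)  # four worker processes
    results = pool.map(square, range(10))     # distribute the inputs across the workers
    pool.close()
    pool.join()
    print results  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]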
Python provides a mutual exclusion lock for use with processes via the multiprocessing.Lock class. An instance of the lock can be created and then acquired by a process before it enters a critical section, and released once it leaves.
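For example, here is a sketch of several processes incrementing a shared counter under a lock; the worker function and the shared Value are illustrative, not from the original. The lock matters because counter.value += 1 is a read-modify-write and is not atomic across processes:

import multiprocessing

def worker(lock, counter):
    for _ in range(1000):
        with lock:  # only one process may touch the counter at a time
            counter.value += 1

if __name__ == '__main__':
    lock = multiprocessing.Lock()
    counter = multiprocessing.Value('i', 0)  # shared integer
    procs = [multiprocessing.Process(target=worker, args=(lock, counter))
             for _ in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print counter.value  # 4000 with the lock; unpredictable without it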
The simplest approach is to use the newer cv2 module, which is based on NumPy arrays, so you don't have to mess with manual pickling. Here's the fix (I just changed four lines of code):
import multiprocessing
import cv2

queue_from_cam = multiprocessing.Queue()

def cam_loop(queue_from_cam):
    print 'initializing cam'
    cap = cv2.VideoCapture(0)
    print 'querying frame'
    ret, img = cap.read()  # img is a NumPy array, which pickles cleanly
    print 'queueing image'
    queue_from_cam.put(img)
    print 'cam_loop done'

cam_process = multiprocessing.Process(target=cam_loop, args=(queue_from_cam,))
cam_process.start()

while queue_from_cam.empty():
    pass

print 'getting image'
from_queue = queue_from_cam.get()
print 'saving image'
cv2.imwrite('temp.png', from_queue)
print 'image saved'
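As an aside, queue_from_cam.get() blocks until an item is available, so the busy-wait on queue_from_cam.empty() isn't strictly needed. And since the original goal was to poll the camera continually, here is a rough sketch of what a looping version might look like; the bounded queue size, the daemon flag, and the ten-frame consumer are my own illustrative choices, not part of the original answer:

import multiprocessing
import cv2

def cam_loop(queue_from_cam):
    cap = cv2.VideoCapture(0)
    try:
        while True:
            ret, img = cap.read()
            if not ret:
                break  # camera read failed; stop polling
            queue_from_cam.put(img)  # blocks while the queue is full
    finally:
        cap.release()

if __name__ == '__main__':
    queue_from_cam = multiprocessing.Queue(maxsize=2)  # small bound so stale frames don't pile up
    cam_process = multiprocessing.Process(target=cam_loop, args=(queue_from_cam,))
    cam_process.daemon = True  # don't let the polling loop block interpreter exit
    cam_process.start()
    for i in range(10):
        img = queue_from_cam.get()  # blocks until a frame arrives; no busy-wait needed
        cv2.imwrite('frame_%02d.png' % i, img)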
It appears that the solution was to convert the OpenCV IplImage object to a string and pickle it before adding it to the queue: multiprocessing.Queue pickles whatever you put on it, and the old-style IplImage presumably doesn't survive that round trip intact, whereas its raw pixel bytes do:
import multiprocessing
import cv
import pickle

queue_from_cam = multiprocessing.Queue()

def cam_loop(queue_from_cam):
    print 'initializing cam'
    cam = cv.CaptureFromCAM(-1)
    print 'querying frame'
    img = cv.QueryFrame(cam)
    print 'converting image'
    pimg = img.tostring()  # raw pixel bytes pickle cleanly, unlike the IplImage itself
    print 'pickling image'
    pimg2 = pickle.dumps(pimg, -1)
    print 'queueing image'
    queue_from_cam.put([pimg2, cv.GetSize(img)])  # ship the size along so the header can be rebuilt
    print 'cam_loop done'

cam_process = multiprocessing.Process(target=cam_loop, args=(queue_from_cam,))
cam_process.start()

while queue_from_cam.empty():
    pass

print 'getting pickled image'
from_queue = queue_from_cam.get()
print 'unpickling image'
pimg = pickle.loads(from_queue[0])
print 'unconverting image'
cv_im = cv.CreateImageHeader(from_queue[1], cv.IPL_DEPTH_8U, 3)
cv.SetData(cv_im, pimg)
print 'saving image'
cv.SaveImage('temp.png', cv_im)
print 'image saved'