v4l2 Python - streaming video - mapping buffers

I'm working on a video-capture script in Python on Raspbian (Raspberry Pi 2) and I'm having trouble with the Python bindings for v4l2, since I've had no success memory-mapping the buffers.

What I need:

  • Capture video from an HD webcam (later, two of them at the same time).
  • Stream that video over WLAN, with a reasonable compromise between network load and processing speed.
  • In the future, be able to apply filters to the image before streaming (optional).

What I've tried:

  • Use OpenCV (cv2). It's very easy to use, but it adds a lot of processing load: it decompresses the webcam's JPEG frames to raw images, and I then have to re-encode them to JPEG before sending them over WLAN.
  • Read directly from '/dev/video0'. That would be ideal, since the webcam already sends the frames compressed and I could just read and send them, but my camera doesn't seem to support it.
  • Use the v4l2 bindings for Python. This is currently the most promising option, but I got stuck when I had to map the video buffers: I've found no way to handle the memory pointers/mappings this seems to require.

What I've read:

  • This guide: http://www.jayrambhia.com/blog/capture-v4l2/
  • v4l2 documentation (some of it).
  • This example in C: https://linuxtv.org/downloads/v4l-dvb-apis/capture-example.html
  • Some other examples in C/C++. I've found no examples that make direct use of the v4l2 bindings in Python.

My questions:

  1. Is there a better way to do this? Or, if not...
  2. Could I tell OpenCV not to decompress the image? It would be nice to keep using OpenCV for future extensions. I found here that it's not allowed.
  3. How could I resolve the mapping step in Python? (Any working example?)

Here is my working (but slow) example with OpenCV:

import cv2
import time

video = cv2.VideoCapture(0)

print 'Starting video-capture test...'

t0 = time.time()
for i in xrange(100):
    success, image = video.read()
    ret, jpeg = cv2.imencode('.jpg',image)

t1 = time.time()
t = ( t1 - t0 ) / 100.0
fps = 1.0 / t

print 'Test finished. ' + str(t) + ' sec. per img.'
print str( fps ) + ' fps reached'

video.release()

And here what I've done with v4l2:

FRAME_COUNT = 5

import v4l2
import fcntl
import mmap
import errno

def xioctl( fd, request, arg):

    # fcntl.ioctl raises IOError/OSError on failure instead of returning -1,
    # so only retry when the call was interrupted by a signal (EINTR).
    while True:
        try:
            return fcntl.ioctl(fd, request, arg)
        except IOError as e:
            if e.errno != errno.EINTR:
                raise

class buffer_struct:
    start  = 0
    length = 0

# Open camera driver
fd = open('/dev/video1','r+b')

BUFTYPE = v4l2.V4L2_BUF_TYPE_VIDEO_CAPTURE
MEMTYPE = v4l2.V4L2_MEMORY_MMAP

# Set format
fmt = v4l2.v4l2_format()
fmt.type = BUFTYPE
fmt.fmt.pix.width       = 640
fmt.fmt.pix.height      = 480
fmt.fmt.pix.pixelformat = v4l2.V4L2_PIX_FMT_MJPEG
fmt.fmt.pix.field       = v4l2.V4L2_FIELD_NONE # progressive

xioctl(fd, v4l2.VIDIOC_S_FMT, fmt)

buffer_size = fmt.fmt.pix.sizeimage
print "buffer_size = " + str(buffer_size)

# Request buffers
req = v4l2.v4l2_requestbuffers()

req.count  = 4
req.type   = BUFTYPE
req.memory = MEMTYPE

xioctl(fd, v4l2.VIDIOC_REQBUFS, req)

if req.count < 2:
    print "req.count < 2"
    quit()

n_buffers = req.count

buffers = list()
for i in range(req.count):
    buffers.append( buffer_struct() )

# Initialize buffers. What should I do here? This doesn't work at all.
# I've tried with USRPTR (pointers) but I know no way for that in Python.
for i in range(n_buffers):

    buf = v4l2.v4l2_buffer()

    buf.type      = BUFTYPE
    buf.memory    = MEMTYPE
    buf.index     = i

    xioctl(fd, v4l2.VIDIOC_QUERYBUF, buf)

    buffers[i].length = buf.length
    buffers[i].start  = mmap.mmap(fd.fileno(), buf.length,
                                  flags  = mmap.MAP_SHARED,   # flags first...
                                  prot   = mmap.PROT_READ | mmap.PROT_WRITE, # ...then prot
                                  offset = buf.m.offset )
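
For reference, a minimal sketch of the `mmap.mmap` call pattern this step needs: `flags` (MAP_SHARED) comes before `prot` (PROT_READ | PROT_WRITE), and the offset must be page-aligned, as the offsets VIDIOC_QUERYBUF returns are. The sketch uses an ordinary temporary file as a stand-in for the driver's buffer region, since no camera is assumed here:

```python
import mmap
import os
import tempfile

PAGE = mmap.PAGESIZE

# Stand-in for the V4L2 device: a plain file holding two pages of data.
fd, path = tempfile.mkstemp()
os.write(fd, b'\x00' * (2 * PAGE))

# Map the second page, the way buf.m.offset from VIDIOC_QUERYBUF
# selects one buffer inside the driver's memory.
buf = mmap.mmap(fd, PAGE,
                flags=mmap.MAP_SHARED,                   # flags first...
                prot=mmap.PROT_READ | mmap.PROT_WRITE,   # ...then prot
                offset=PAGE)                             # must be page-aligned

buf[:4] = b'test'
first4 = bytes(buf[:4])
print(first4)  # b'test'

buf.close()
os.close(fd)
os.remove(path)
```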

I will appreciate any help or advice. Thanks a lot!

David asked Apr 05 '16 12:04
1 Answer

Just to add another option I recently discovered: you can also use the V4L2 backend with OpenCV.

You simply need to specify it when opening the device. For example:

cap = cv2.VideoCapture()

cap.open(0, apiPreference=cv2.CAP_V4L2)

cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc('M', 'J', 'P', 'G'))
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 960)
cap.set(cv2.CAP_PROP_FPS, 30.0)

When this is not explicitly specified, OpenCV will often fall back to another camera API (e.g., GStreamer), which can be slower and more cumbersome. In this example I went from being limited to 4-5 FPS to up to 15 FPS at 720p (on an Intel Atom Z8350).
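
The CAP_PROP_FOURCC value set above is nothing more than four ASCII bytes packed little-endian into a 32-bit integer. A small pure-Python sketch of the same packing (the `fourcc` helper here is illustrative, not part of OpenCV):

```python
def fourcc(a, b, c, d):
    # Pack four ASCII characters into a little-endian 32-bit code,
    # the same layout cv2.VideoWriter_fourcc produces.
    return ord(a) | (ord(b) << 8) | (ord(c) << 16) | (ord(d) << 24)

code = fourcc('M', 'J', 'P', 'G')
print(code)  # 1196444237
```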

And if you wish to use it with a ring buffer (or other memory-mapped buffer), take a look at the following resources:

https://github.com/Battleroid/seccam

https://github.com/bslatkin/ringbuffer
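
The ring-buffer idea in those links can be sketched minimally like this (an illustrative stand-alone class, not code from either repo): when the buffer is full, the oldest frame is dropped, so capture never blocks on a slow consumer.

```python
from collections import deque

class FrameRing(object):
    """Minimal fixed-capacity ring buffer for captured frames."""

    def __init__(self, capacity):
        # deque with maxlen evicts from the opposite end automatically.
        self.frames = deque(maxlen=capacity)

    def push(self, frame):
        self.frames.append(frame)  # silently drops the oldest when full

    def pop(self):
        return self.frames.popleft() if self.frames else None

ring = FrameRing(3)
for i in range(5):
    ring.push('frame-%d' % i)
print(ring.pop())  # frame-2: frames 0 and 1 were evicted
```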

Scott Mudge answered Sep 30 '22 20:09