 

How to pipe live video frames from ffmpeg to PIL?

I need to use ffmpeg/avconv to pipe JPEG frames to a Python PIL (Pillow) Image object, using gst as an intermediary*. I've been searching everywhere for an answer without much luck. I think I'm close, but I'm stuck. I'm using Python 2.7.

My ideal pipeline, launched from python, looks like this:

  1. ffmpeg/avconv (as h264 video)
  2. Piped ->
  3. GStreamer (frames split into jpg)
  4. Piped ->
  5. PIL Image object

I have the first few steps under control as a single command that writes .jpgs to disk as furiously fast as the hardware will allow.

That command looks something like this:

command = [
        "ffmpeg",
        "-f video4linux2",
        "-r 30",
        "-video_size 1280x720",
        "-pixel_format uyvy422",
        "-i /dev/video0",
        "-vf fps=30",
        "-f h264",
        "-vcodec libx264",
        "-preset ultrafast",
        "pipe:1",
        "|", # Pipe to GST (shell syntax, so this needs os.system or shell=True)
        "gst-launch-1.0 fdsrc !",
        "video/x-h264,framerate=30/1,stream-format=byte-stream !",
        "decodebin ! videorate ! video/x-raw,framerate=30/1 !",
        "videoconvert !",
        "jpegenc quality=55 !",
        "multifilesink location=" + Utils.live_sync_path + "live_%04d.jpg"
      ]

This will successfully write frames to disk if run with Popen or os.system.
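For reference, a minimal sketch of how a shell pipeline like this can be launched from Python (the "|" is shell syntax, so the joined command has to go through a shell rather than an argv list):

    import subprocess as sp

    # Join the argument list into one string and let the shell
    # interpret the "|" between ffmpeg and gst-launch-1.0.
    pipe = sp.Popen(" ".join(command), shell=True)
    pipe.wait()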

But instead of writing frames to disk, I want to capture the output in my subprocess pipe and read the frames, as they are written, into a file-like buffer that can then be read by PIL.

Something like this:

    import subprocess as sp
    import shlex
    import StringIO

    from PIL import Image

    clean_cmd = shlex.split(" ".join(command))
    pipe = sp.Popen(clean_cmd, stdout = sp.PIPE, bufsize=10**8)

    while pipe:

        raw = pipe.stdout.read()
        buff = StringIO.StringIO()
        buff.write(raw)
        buff.seek(0)

        # Open or do something clever...
        im = Image.open(buff)
        im.show()

        pipe.flush()

This code doesn't work; I'm not even sure "while pipe" can be used this way. I'm fairly new to working with buffers and pipes.

I'm not sure how I would know that an image has been written to the pipe or when to read the 'next' image.
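My current thinking is that JPEG frames could be delimited by scanning the byte stream for the SOI (FF D8) and EOI (FF D9) markers. A rough, untested sketch of that idea (it assumes the gst stage writes the JPEGs to stdout, e.g. with fdsink in place of multifilesink, and it reuses the pipe object from above):

    import StringIO
    from PIL import Image

    buff = ""
    while True:
        chunk = pipe.stdout.read(4096)
        if not chunk:
            break
        buff += chunk
        start = buff.find("\xff\xd8")  # JPEG start-of-image marker
        end = buff.find("\xff\xd9")    # JPEG end-of-image marker
        if start != -1 and end != -1 and end > start:
            jpg, buff = buff[start:end + 2], buff[end + 2:]
            im = Image.open(StringIO.StringIO(jpg))
            # ... do something with im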

Any help in understanding how to read the images from a pipe rather than from disk would be greatly appreciated.

  • This is ultimately a Raspberry Pi 3 pipeline, and in order to increase my frame rates I can't (A) read/write to/from disk or (B) use a frame-by-frame capture method, as opposed to running H.264 video directly from the camera chip.
asked Jan 10 '17 by Ryan Martin

1 Answer

I assume the ultimate goal is to handle a USB camera at a high frame rate on Linux, and the following addresses this question.

First, while a few USB cameras support H.264, the Linux driver for USB cameras (the UVC driver) currently does not support stream-based payloads, which includes H.264; see the "UVC Feature" table on the driver home page. User-space tools like ffmpeg go through this driver, so they are subject to the same limitations on which video formats can be used for the USB transfer.

The good news is that if a camera supports H.264, it almost certainly also supports MJPEG, which is supported by the UVC driver and compresses well enough to allow 1280x720 at 30 fps over USB 2.0. You can list the video formats supported by your camera with v4l2-ctl -d 0 --list-formats-ext. For a Microsoft Lifecam Cinema, for example, 1280x720 is supported at only 10 fps for YUV 4:2:2 but at 30 fps for MJPEG.

For reading from the camera, I have good experience with OpenCV. In one of my projects, I have 24(!) Lifecams connected to a single Ubuntu 6-core i7 machine, which does real-time tracking of fruit flies using 320x240 at 7.5 fps per camera (and also saves an MJPEG AVI for each camera to have a record of the experiment). Since OpenCV uses the V4L2 APIs directly, it should be faster than a solution using ffmpeg, GStreamer, and two pipes.

Bare bones (no error checking) code to read from the camera using OpenCV and create PIL images looks like this:

import cv2
from PIL import Image

cap = cv2.VideoCapture(0)   # /dev/video0
while True:
  ret, frame = cap.read()
  if not ret:
    break
  pil_img = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))  # OpenCV delivers BGR, PIL expects RGB
  ...   # do something with PIL image

Final note: you likely need to build the v4l version of OpenCV to get compression (MJPEG), see this answer.
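If the camera does not deliver MJPEG by default, a minimal sketch of how one might request it through OpenCV's capture properties (property names assume OpenCV 3.x; whether the camera honors the request should be verified with cap.get()):

import cv2

cap = cv2.VideoCapture(0)
# Ask the V4L2 backend for MJPEG at 1280x720, 30 fps; the camera
# may silently ignore combinations it does not support.
cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*"MJPG"))
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)
cap.set(cv2.CAP_PROP_FPS, 30)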

answered Nov 12 '22 by Ulrich Stern