My configuration:
Ubuntu 16.04
OpenCV 3.3.1
gcc version 5.4.0 20160609
ffmpeg version 3.4.2-1~16.04.york0
and I built OpenCV with:
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D PYTHON_EXECUTABLE=$(which python) -D OPENCV_EXTRA_MODULES_PATH=/home/xxx/opencv_contrib/modules -D WITH_QT=ON -D WITH_OPENGL=ON -D WITH_IPP=ON -D WITH_OPENNI2=ON -D WITH_V4L=ON -D WITH_FFMPEG=ON -D WITH_GSTREAMER=OFF -D WITH_OPENMP=ON -D WITH_VTK=ON -D BUILD_opencv_java=OFF -D BUILD_opencv_python3=OFF -D WITH_CUDA=ON -D ENABLE_FAST_MATH=1 -D WITH_NVCUVID=ON -D CUDA_FAST_MATH=ON -D BUILD_opencv_cnn_3dobj=OFF -D FORCE_VTK=ON -D WITH_CUBLAS=ON -D CUDA_NVCC_FLAGS="-D_FORCE_INLINES" -D WITH_GDAL=ON -D WITH_XINE=ON -D BUILD_EXAMPLES=OFF -D BUILD_DOCS=ON -D BUILD_PERF_TESTS=OFF -D BUILD_TESTS=OFF -D BUILD_opencv_dnn=OFF -D BUILD_PROTOBUF=OFF -D opencv_dnn_BUILD_TORCH_IMPORTER=OFF -D opencv_dnn_PERF_CAFFE=OFF -D opencv_dnn_PERF_CLCAFFE=OFF -DBUILD_opencv_dnn_modern=OFF -D CUDA_ARCH_BIN=6.1 ..
and I use this Python code to read and show the stream:
import cv2
from com.xxx.cv.core.Image import Image

capture = cv2.VideoCapture("rtsp://192.168.10.184:554/mpeg4?username=xxx&password=yyy")
while True:
    grabbed, content = capture.read()
    if grabbed:
        Image(content).show()
        doSomething()
    else:
        print "nothing grabbed.."
Every time, after reading about 50 frames, it gives an error like:
[h264 @ 0x8f915e0] error while decoding MB 53 20, bytestream -7
and then nothing more can be grabbed. The strange thing is that if I either:
1. comment out doSomething(), or
2. keep doSomething(), but record the stream from the same IP camera first and run the code against the recorded video,
then in both cases the code works fine. Can anyone tell me how to solve this problem? Thanks in advance!
OpenCV doesn't decode H264 directly; it uses third-party backends, and FFmpeg is probably the one used in your case. Please try the ffmpeg / ffplay utilities on your stream without OpenCV first and check for similar messages. Also provide information about the IP camera used, since some firmware versions can be buggy. We really can't help with fixing networking issues; OpenCV doesn't work at that level.
Try rebuilding OpenCV without FFmpeg support but with GStreamer enabled. Then make sure you have all of the GStreamer plugins installed (good, bad, ugly) plus gstreamer-ffmpeg (on Linux; I'm not sure how to get these plugins/libraries on other OSes).
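If you go the GStreamer route, OpenCV's VideoCapture can be given a GStreamer pipeline string directly (this requires a build with GStreamer support, i.e. cv2.CAP_GSTREAMER). A minimal sketch of building such a pipeline; the element chain and the camera URL here are assumptions you will need to adapt to your own camera:

```python
def rtsp_gst_pipeline(url, latency_ms=100):
    """Build a GStreamer pipeline string for an H264 RTSP source.

    The chain rtspsrc -> rtph264depay -> h264parse -> avdec_h264 ->
    videoconvert -> appsink is a common pattern; adapt it as needed.
    """
    return (
        "rtspsrc location={url} latency={latency} ! "
        "rtph264depay ! h264parse ! avdec_h264 ! "
        "videoconvert ! appsink"
    ).format(url=url, latency=latency_ms)

# Hypothetical camera URL; replace it with your own stream.
pipeline = rtsp_gst_pipeline("rtsp://admin:[email protected]")
print(pipeline)
# With a GStreamer-enabled OpenCV build you would then open it with:
#   cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
```

Using appsink at the end of the pipeline is what lets OpenCV pull decoded frames out of GStreamer.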
Let's first look at a simple sample program for reading an RTSP stream:
import cv2

cap = cv2.VideoCapture("rtsp://admin:[email protected]")
ret, frame = cap.read()
while ret:
    cv2.imshow("frame", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
    ret, frame = cap.read()   # read after showing, so a failed read never reaches imshow
cv2.destroyAllWindows()
cap.release()
Each frame value is the image of one frame. But if you add a recognition operation for every frame inside the while block, for example running TensorFlow to recognize the small animals in it, an error like the following is reported and the while loop is interrupted by the exception:
[h264 @ 0x55abeda05080] left block unavailable for requested intra mode
[h264 @ 0x55abeda05080] error while decoding MB 0 14, bytestream 104435
The root cause is not really that the FFmpeg backend cannot handle H264 over RTSP: the heavy per-frame processing delays the reads, so the capture falls behind the live stream and the decoder chokes on incomplete data. The solution is therefore to split the work across two threads, one receiving the stream and the other processing the frames.
The idea is as follows: use a queue with a first-in-first-out strategy; one thread receives data and puts frames into the queue, while another thread takes frames out and processes them.
The solution code is shown below:
import cv2
import queue
import threading

q = queue.Queue()

def Receive():
    print("Start Receiving")
    cap = cv2.VideoCapture("rtsp://admin:[email protected]")
    ret, frame = cap.read()
    q.put(frame)
    while ret:
        ret, frame = cap.read()
        q.put(frame)

def Display():
    print("Start Displaying")
    while True:
        if not q.empty():
            frame = q.get()
            cv2.imshow("frame1", frame)
            if cv2.waitKey(1) & 0xFF == ord('q'):
                break

if __name__ == '__main__':
    p1 = threading.Thread(target=Receive)
    p2 = threading.Thread(target=Display)
    p1.start()
    p2.start()
Receive runs as the thread that receives the data, and Display runs as a simple thread that displays the frames.
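One caveat with the code above: Receive puts frames into the queue faster than Display can take them out, so the queue can grow without bound. A sketch (my own addition, not part of the original answer) of keeping only the latest frame with a maxsize=1 queue; put_latest is a hypothetical helper name:

```python
import queue
import threading

def put_latest(q, item):
    """Put item into q, dropping the stale entry if the queue is full.

    This is safe when there is a single producer thread, as in Receive().
    """
    try:
        q.put_nowait(item)
    except queue.Full:
        try:
            q.get_nowait()   # discard the old frame
        except queue.Empty:
            pass
        q.put_nowait(item)

# Demo with integers standing in for frames (no camera needed here).
q = queue.Queue(maxsize=1)

def receive():
    for i in range(100):     # in the real code: ret, frame = cap.read()
        put_latest(q, i)

t = threading.Thread(target=receive)
t.start()
t.join()
print(q.get())  # only the newest "frame" is left: 99
```

With this, the consumer always works on a recent frame instead of an ever-older backlog, at the cost of dropping frames it never got to.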
I am using a Hikvision IP PoE camera with OpenCV 3.4 and Python 3 on a system running Ubuntu 16.04. The camera streams in H264 format.
Streaming from the camera over RTSP using OpenCV's VideoCapture, I sometimes hit the same problem: "[h264 @ 0x8f915e0] error while decoding MB 43 20, bytestream -4"
This problem occurs when you use the captured frames for further processing, which creates a delay in the pipeline while the RTSP stream keeps arriving.
The solution is to run the capture on one thread and use the frames on another thread.
Using multithreading within the same process in Python, you will have something like:
# thread 1
global frame
frame = None
cap = cv2.VideoCapture("rtsp://bla:[email protected]")
while True:
    ret, frame = cap.read()

# thread 2
cv2.imshow("Current frame", frame)
cv2.waitKey(0)
# you can now pass the frame to your application for further processing
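The two fragments above are a sketch rather than runnable code; to try the idea end to end you need actual threads and a lock around the shared frame. Below is a minimal self-contained version of the same pattern. Since there is no camera here, the capture loop is simulated with integer counters; swap the marked lines for cap.read() and cv2.imshow in real use (all names here are my own, hypothetical ones):

```python
import threading
import time

frame = None
lock = threading.Lock()
done = threading.Event()

def capture_loop():
    """Thread 1: keep overwriting the shared frame with the newest one."""
    global frame
    for i in range(50):        # real code: while True: ret, img = cap.read()
        with lock:
            frame = i          # stand-in for the decoded image
        time.sleep(0.001)
    done.set()

def consume_loop():
    """Thread 2: work on whatever the latest frame currently is."""
    seen = []
    while not done.is_set():
        with lock:
            current = frame
        if current is not None:
            seen.append(current)   # real code: cv2.imshow / processing
        time.sleep(0.005)
    with lock:
        seen.append(frame)         # capture finished; frame is the last one
    return seen

t = threading.Thread(target=capture_loop)
t.start()
seen = consume_loop()
t.join()
print(seen[-1])  # the last frame written by the capture thread: 49
```

The consumer may skip frames or see the same frame twice, which is exactly the point: it always works on the freshest data instead of falling behind the live stream.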