I am currently developing a project for my studies where I have to fetch a webcam stream, detect some objects and overlay some additional information on the stream. This is all done on the server side.
Now I also have to provide the modified image stream to the clients. The clients just open an HTML file with the following content:
<html>
<head>
    <title></title>
</head>
<body>
    <h1>It works!</h1>
    <video width="320" height="240" src="http://127.0.0.1:4711/videostream" type="video/quicktime" autoplay controls>
        Your browser does not support the video tag.
    </video>
</body>
</html>
This will result in an HTTP request to the server for /videostream. To handle this request on the server side I will use Boost 1.56.
Currently each frame of my webcam stream is of type IplImage. Do I have to convert the IplImage into a video-MIME-type-specific format?
I have tried to figure out myself how the whole thing works, but I couldn't. I used Wireshark to analyze the communication, but it doesn't make sense to me. For testing purposes I uploaded a video to my webspace and opened the above file locally; the src of the video was the address of my web server. First there is the TCP handshake, followed by this message:
HTTP 765 GET /MOV_4198.MOV HTTP/1.1
This was followed by the following message (it contains Connection: Keep-Alive in the HTTP headers):
HTTP 279 HTTP/1.1 304 Not Modified
Afterwards only TCP ACK and SYN packets follow, but no data.
Where and how is the real video data sent? What have I missed here?
It would be great if you could give me some information about the connection between the browser (video tag) and the C++ socket connection.
Thank you, Stefan
I want to share the experience I gained - maybe it will help others too. To get the stream from the webcam I used OpenCV 2.4.9, and as the protocol I used the MJPEG streaming protocol (see also MJPEG over HTTP) - thanks to @berak, who mentioned MJPEG in his comment under my question.
The following code just gives an overview; I do not go into threading details. Since this is a student project and we are using GitHub, you can find the whole source code here on GitHub - project Swank Rats. I want to mention that I am not a C++, OpenCV or Boost guru. This project is the first time I have used all three of them.
Do something like this (for the full code with threading, search for WebcamService in the repo):
cv::VideoCapture capture(0); // 0 = default webcam; note that "capture()" would declare a function instead of a capture
cv::Mat frame;
cv::Mat lastImage;

while (true) {
    if (!capture.isOpened()) {
        break; // do some logging here or something else - webcam not available
    }

    // Create image frames from capture
    capture >> frame;

    if (!frame.empty()) {
        // do something with your image (e.g. provide it)
        lastImage = frame.clone();
    }
}
Well, I won't go into detail on how to create an HTTP server with C++. There is a nice example provided by Boost for C++11. I copied this code and adapted it to my needs. You can find the source code of my implementation in the repo mentioned above; the code is currently located at infrastructure / networking / videostreaming.
There is no need to use FFMPEG, GStreamer or anything similar. You can create an in-memory JPEG using OpenCV like this (see the code of StreamResponseHandler):
cv::Mat image = webcamService->GetLastImage();

// encode the mat as JPEG and copy it into a string
std::vector<uchar> buf;
cv::imencode(".jpg", image, buf, std::vector<int>());
std::string content(buf.begin(), buf.end()); // this must be sent to the client
Thanks to @codeDr for his post here.
The content variable represents the image as bytes, which will be sent to the client. You have to follow the MJPEG protocol.
On the client side, something like this is enough (as mentioned here):
<html>
<body>
    <h1>Test for simple webcam live streaming</h1>
    <img src="http://127.0.0.1:4711/videostream">
</body>
</html>
You have to change the IP, port and so on to match your server.
I hope this helps.