I'm using WebRTC to send video from a server to a client browser (using the native WebRTC API and an MCU WebRTC server such as Kurento).
Before it is sent to clients, each frame of the video carries metadata (such as subtitles or any other application-specific content). I'm looking for a way to send this metadata to the client so that it stays synchronized with the moment the frame is actually presented. In addition, I would like to be able to access this data from the client side (in JavaScript).
Some options I thought about:

1. Synchronizing on the video's timeupdate event, but I don't know if it will work at frame-level precision, and I'm not sure what it even means for a live video such as WebRTC.
2. Embedding the data in a TextTrack, then using the onenter and onexit cue events to read it synchronously (see the sketch after this list): http://www.html5rocks.com/en/tutorials/track/basics/. It still requires precise timestamps, and I'm not sure how to know what the timestamps are and whether Kurento passes them through as-is.
3. Relying on the timestamps reported by the statistics API (getStats), and hoping that the information provided by this API is precise enough.

What is the best way to do that, and how can the problems I mentioned in each approach be solved?
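To make option 2 concrete, this is roughly what I have in mind. It is only a sketch: the names (addFrameMetadata, handleMetadata) are placeholders of mine, and it assumes I can somehow compute cue start/end times on the video element's timeline, which is exactly what I'm unsure about.

var video = document.querySelector('video');
var track = video.addTextTrack('metadata', 'frame-metadata');
track.mode = 'hidden'; // no rendering, I only want the enter/exit events

// called whenever a piece of metadata and its presentation time are known
function addFrameMetadata(startTime, endTime, payload) {
    var cue = new VTTCue(startTime, endTime, JSON.stringify(payload));
    cue.onenter = function () {
        handleMetadata(JSON.parse(cue.text)); // fires when playback reaches startTime
    };
    track.addCue(cue);
}

function handleMetadata(data) {
    console.log('metadata for the current frame:', data);
}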
EDIT: Precise synchronization of the metadata with the appropriate frame is required, at a resolution of no more than a single frame.
I suspect the amount of data per frame is fairly small. I would look at encoding it into a 2D barcode image and placing it in each frame in a way that it is not removed by compression. Alternatively, just encode a timestamp like this.
Then, on the player side, you look at the image in a particular frame and get the data out of it.
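If the payload is, say, just a frame counter or timestamp, reading it back on the player side could look roughly like the sketch below. This is only an illustration: the barcode geometry (a single row of 16 black/white cells, 8x8 px each, in the top-left corner), the threshold and the polling rate are made-up values, and a real encoding would need cells large and redundant enough to survive compression.

var video = document.querySelector('video');
var canvas = document.createElement('canvas');
var ctx = canvas.getContext('2d');
var CELL = 8;    // pixels per barcode cell
var CELLS = 16;  // bits encoded per frame

function readFrameCode() {
    canvas.width = video.videoWidth;
    canvas.height = video.videoHeight;
    ctx.drawImage(video, 0, 0); // grab the currently displayed frame
    var width = CELL * CELLS;
    var strip = ctx.getImageData(0, 0, width, CELL).data;
    var bits = 0;
    for (var i = 0; i < CELLS; i++) {
        // sample the centre pixel of each cell: dark = 1, light = 0
        var x = i * CELL + CELL / 2;
        var y = CELL / 2;
        bits = (bits << 1) | (strip[(y * width + x) * 4] < 128 ? 1 : 0);
    }
    return bits; // look this value up in the metadata received out of band
}

// poll roughly once per rendered frame
setInterval(function () {
    console.log('frame code:', readFrameCode());
}, 1000 / 30);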
OK, first let's get the video and audio using getUserMedia, and let's turn it into raw data using https://github.com/streamproc/MediaStreamRecorder:
/*
 *
 * Video Streamer
 *
 */
<script src="https://cdn.webrtc-experiment.com/MediaStreamRecorder.js"></script>
<script>
    // FIREFOX
    var mediaConstraints = {
        audio: !!navigator.mozGetUserMedia, // don't forget audio!
        video: true                         // don't forget video!
    };

    navigator.getUserMedia(mediaConstraints, onMediaSuccess, onMediaError);

    function onMediaSuccess(stream) {
        var mediaRecorder = new MediaStreamRecorder(stream);
        mediaRecorder.mimeType = 'video/webm';
        mediaRecorder.ondataavailable = function (blob) {
            // POST/PUT "Blob" using FormData/XHR2
        };
        mediaRecorder.start(3000);
    }

    function onMediaError(e) {
        console.error('media error', e);
    }
</script>
// CHROME
var mediaConstraints = {
    audio: true,
    video: true
};

navigator.getUserMedia(mediaConstraints, onMediaSuccess, onMediaError);

function onMediaSuccess(stream) {
    var multiStreamRecorder = new MultiStreamRecorder(stream);
    multiStreamRecorder.video = yourVideoElement; // to get maximum accuracy
    multiStreamRecorder.audioChannels = 1;
    multiStreamRecorder.ondataavailable = function (blobs) {
        // blobs.audio
        // blobs.video
    };
    multiStreamRecorder.start(3000);
}

function onMediaError(e) {
    console.error('media error', e);
}
Now you can send the chunks through a DataChannel together with your metadata (a sketch of that plumbing follows the receiver code below). On the receiver side:
/*
 *
 * Video Receiver
 *
 */
var ms = new MediaSource();
var video = document.querySelector('video');
video.src = window.URL.createObjectURL(ms);

ms.addEventListener('sourceopen', function (e) {
    var sourceBuffer = ms.addSourceBuffer('video/webm; codecs="vorbis,vp8"');
    sourceBuffer.appendBuffer(/* Video chunks here */);
}, false);
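The DataChannel plumbing and the appendBuffer feed are left as placeholders above; the sketch below shows one way they might be wired together. Everything in it is an assumption for illustration (the pre-negotiated channel, the "JSON metadata message followed by a binary chunk" framing, showMetadata), and it ignores SourceBuffer.updating and DataChannel message-size limits.

// Sender: `channel` is an RTCDataChannel created on an existing
// RTCPeerConnection, e.g. pc.createDataChannel('media')
mediaRecorder.ondataavailable = function (blob) {
    var reader = new FileReader();
    reader.onload = function () {
        // 1) a text message carrying the metadata for this chunk
        channel.send(JSON.stringify({ timestamp: Date.now(), subtitles: '...' }));
        // 2) the binary chunk itself
        channel.send(reader.result); // ArrayBuffer
    };
    reader.readAsArrayBuffer(blob);
};

// Receiver: pair each metadata message with the binary chunk that follows it
channel.binaryType = 'arraybuffer';
var pendingMeta = null;
channel.onmessage = function (event) {
    if (typeof event.data === 'string') {
        pendingMeta = JSON.parse(event.data); // metadata for the next chunk
    } else {
        showMetadata(pendingMeta);            // e.g. render the subtitles
        sourceBuffer.appendBuffer(event.data); // feed the MediaSource
    }
};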