What are the fundamental differences between Media Source Extensions and WebRTC?
If I may project my own understanding for a moment: WebRTC includes RTCPeerConnection, which takes streams from MediaStream objects and passes them into a protocol for streaming to the application's connected peers. It seems that under the hood WebRTC abstracts away a lot of the bigger issues, like codecs and transcoding. Would this be a correct assessment?
Where do Media Source Extensions fit into things? I have limited knowledge, but I have seen examples where developers use them for adaptive streaming. Does MSE only deal with streams from your server?
Help would be much appreciated.
Unfortunately, these new browser-related protocols are being designed and developed by the W3C and IETF in a rather unorganized manner, not completely technically driven, but reflecting battles between Apple, Google, and Microsoft, all trying to standardize their own technologies. Similarly, different browsers choose to adopt only certain standards, or parts of standards, which makes developers' lives extremely hard.
I have implemented both Media Source Extensions and WebRTC, so I think I can answer your question:
Media Source Extensions is just a player inside the browser.
You create a MediaSource object
https://developer.mozilla.org/en-US/docs/Web/API/MediaSource
and assign it to your video element like this:
video.src = URL.createObjectURL(mediaSource);
Then your JavaScript code can fetch media segments from somewhere (your server or a web server) and supply them to a SourceBuffer attached to the MediaSource, for playback. A minimal sketch of that flow follows.
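For illustration, here is a rough sketch of that flow, assuming a single fMP4 segment at a hypothetical URL (/media/segment0.mp4) and a common H.264/AAC codec string; a real player would fetch many segments and manage buffering, but the MediaSource and SourceBuffer mechanics are the same:

const video = document.querySelector('video');
const mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', async () => {
  // The MIME type and codec string must match your actual media.
  const mime = 'video/mp4; codecs="avc1.42E01E, mp4a.40.2"';
  const sourceBuffer = mediaSource.addSourceBuffer(mime);

  // Fetch a media segment and append its bytes for playback.
  // "/media/segment0.mp4" is a hypothetical URL on your server.
  const response = await fetch('/media/segment0.mp4');
  const segment = await response.arrayBuffer();
  sourceBuffer.appendBuffer(segment);

  // Once the append completes, the video element can play.
  sourceBuffer.addEventListener('updateend', () => video.play());
});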
WebRTC is not just a player; it is also a capture, encoding, and sending mechanism. So it is a player too, but you use it a little differently from Media Source Extensions. Here you create another object: the MediaStream object
https://developer.mozilla.org/en-US/docs/Web/API/MediaStream
and assign it to your video element like this:
video.srcObject = mediaStream;
Notice two things here: the MediaStream is assigned to srcObject directly (no object URL is involved, unlike with MediaSource), and you do not create the mediaStream object yourself; it is supplied to you by WebRTC APIs such as getUserMedia.
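As a minimal sketch, assuming a video element already exists on the page:

const video = document.querySelector('video');

// Ask the browser for the user's webcam and microphone.
navigator.mediaDevices.getUserMedia({ video: true, audio: true })
  .then((mediaStream) => {
    // The stream goes straight into srcObject; no object URL needed.
    video.srcObject = mediaStream;
    video.play();
  })
  .catch((err) => console.error('getUserMedia failed:', err));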
So, to summarize: in both cases you use a video element to play, but with Media Source Extensions you have to supply the media segments yourself, while with WebRTC you use the WebRTC API to supply the media. And, once again, with WebRTC you can also capture the user's webcam, encode it, and send it to another browser to play, enabling p2p video chat, for example (sketched below).
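Here is a rough sketch of the sending side of such a p2p call. Signaling (exchanging the offer/answer and ICE candidates) is left to the application; sendToPeer below is a hypothetical stand-in for your own signaling channel (e.g. a WebSocket), and the STUN server URL is just a commonly used public example:

async function startCall(sendToPeer) {
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: 'stun:stun.l.google.com:19302' }],
  });

  // Capture the webcam and hand its tracks to the connection;
  // the browser handles encoding and transport internally.
  const stream = await navigator.mediaDevices.getUserMedia({
    video: true,
    audio: true,
  });
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));

  // ICE candidates and the offer must reach the other peer over
  // your own signaling channel.
  pc.onicecandidate = (event) => {
    if (event.candidate) sendToPeer({ candidate: event.candidate });
  };

  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  sendToPeer({ offer });

  return pc;
}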
Media Source Extensions browsers adoption: http://caniuse.com/#feat=mediasource
WebRTC browsers adoption: http://iswebrtcreadyyet.com/