I want to do several little projects experimenting with video and audio streaming, both from client to server and from client through a server to multiple endpoints. I have several questions:
1) I know that point-to-point streaming with WebRTC, avoiding a server in the middle, is not hard, but is it possible to stream from client to server using WebRTC? Are there benefits to doing this over WebSockets, or is WebRTC's only benefit that it avoids the middleman? And what about streaming video and audio this way?
Streaming video or audio with WebSockets is really simple (see the sketch below), but I can't find any write-up about streaming client-to-server with WebRTC.
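For reference, this is roughly the WebSocket approach I mean: capture the camera with MediaRecorder and push encoded chunks over the socket. A minimal sketch; the wss:// endpoint is just a placeholder:

```typescript
// Minimal sketch: capture camera + mic with MediaRecorder and send
// encoded WebM chunks to the server over a WebSocket.
// "wss://example.com/ingest" is a placeholder endpoint.
async function streamOverWebSocket(): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({
    video: true,
    audio: true,
  });

  const ws = new WebSocket("wss://example.com/ingest");

  ws.onopen = () => {
    // Emit a WebM chunk roughly every 250 ms.
    const recorder = new MediaRecorder(stream, {
      mimeType: "video/webm; codecs=vp8,opus",
    });
    recorder.ondataavailable = (event: BlobEvent) => {
      if (event.data.size > 0 && ws.readyState === WebSocket.OPEN) {
        ws.send(event.data); // each message is one media chunk
      }
    };
    recorder.start(250);
  };
}
```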
2) What about streaming video to multiple endpoints? I know there have recently been some experiments with WebRTC here, because originally this wasn't possible. Does WebRTC performance degrade when it is used one-to-many? Would it be a better idea to stream to the server (maybe using WebRTC) and then stream out to the several endpoints using WebSockets?
Thanks so much, and please don't be rude: my question is not subjective, and I'm not asking which technology is better; I just want to know the limitations of each and where I can use each one. Thanks!
I disagree with MarijnS95: I don't think WebRTC is made especially for browsers. You can use it on any platform and in any server or client application outside a browser. That's the good part.
WebRTC is just a set of protocols that already existed, bundled to provide real-time communications. It's called "web" because Google wanted to make it available and widespread through browsers (and that was a big step in spreading the word).
So, to answer your questions: WebRTC is better than WebSockets for streaming media content, for the obvious reason that its protocols were designed for exactly that. The advantages are clear, but yes, you can also use WebSockets to stream data.
"I can't find any write-up about streaming client-to-server with WebRTC."
Well, WebRTC uses standard protocols, and you can use standard servers to support it. Do a little searching on Asterisk + WebRTC.
Regarding the multi-point question, the answer is the same: you get better results with WebRTC (whether it goes through the server or not). The problems with peer-to-peer conferencing are well known, as you stated, and the solution is indeed to use a server so that each client sends and receives only one stream. In an ideal world you would use an MCU for this job. That's how it's done.
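For the client-to-server direction, the usual pattern looks roughly like this sketch: the browser opens an RTCPeerConnection whose remote peer is the server (Asterisk, an MCU, whatever), and a WebSocket carries only the signaling. The wss:// URL and the JSON message shapes here are assumptions for illustration, not any particular server's API:

```typescript
// Sketch: publish the local camera/mic to a media server over WebRTC,
// using a WebSocket purely as the signaling channel.
async function publishToServer(): Promise<void> {
  const signaling = new WebSocket("wss://example.com/signaling");
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
  });

  // Send our media to the server-side peer.
  const stream = await navigator.mediaDevices.getUserMedia({
    video: true,
    audio: true,
  });
  for (const track of stream.getTracks()) {
    pc.addTrack(track, stream);
  }

  // Trickle our ICE candidates to the server as they appear.
  pc.onicecandidate = ({ candidate }) => {
    if (candidate) {
      signaling.send(JSON.stringify({ type: "candidate", candidate }));
    }
  };

  signaling.onopen = async () => {
    const offer = await pc.createOffer();
    await pc.setLocalDescription(offer);
    signaling.send(JSON.stringify({ type: "offer", sdp: offer.sdp }));
  };

  signaling.onmessage = async (event) => {
    const msg = JSON.parse(event.data);
    if (msg.type === "answer") {
      await pc.setRemoteDescription({ type: "answer", sdp: msg.sdp });
    } else if (msg.type === "candidate") {
      await pc.addIceCandidate(msg.candidate);
    }
  };
}
```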
Peer-to-peer communication: can be done with WebRTC, not with WebSockets. See "Do websockets allow for p2p (browser to browser) communication?"
Browser support:
WebRTC: Chrome + Firefox (+ Opera)
WebSockets: Chrome + Firefox + IE + Safari (+ Opera and some others)
Transport:
WebRTC: UDP (SRTP); a TCP mode is also possible via a TURN server. Hopefully always end-to-end encrypted, though I'm not sure about the TURN case.
WebSockets: TCP; can be secured via HTTPS/WSS, but not end-to-end between peers!
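If you want to check whether a given connection actually ended up relayed through a TURN server, you can inspect the nominated candidate pair with getStats(). A rough sketch, with field names following the standard WebRTC stats API:

```typescript
// Sketch: inspect a live RTCPeerConnection to see whether media is
// flowing directly (host/srflx candidates) or through a TURN relay.
async function isRelayed(pc: RTCPeerConnection): Promise<boolean> {
  const stats = await pc.getStats();
  let relayed = false;

  stats.forEach((report) => {
    // The nominated, succeeded candidate pair is the path in use.
    if (report.type === "candidate-pair" && report.nominated &&
        report.state === "succeeded") {
      const local = stats.get(report.localCandidateId);
      if (local && local.candidateType === "relay") {
        relayed = true; // traffic goes through a TURN server
      }
    }
  });

  return relayed;
}
```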
I don't know whether this question still needs an answer, but I wanted to do similar things.
I personally used Node.js in combination with the node-webrtc plug-in to enable WebRTC on the server side. It's only supported on Linux and Mac OS X right now, but it let me set up a WebRTC server quickly. You can then use the server to distribute your stream to other peers connected via WebSockets, WebRTC, or something else.
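A minimal sketch of what that server-side peer looks like, assuming the wrtc package (node-webrtc's name on npm) and the ws package for signaling; the JSON message format is made up for illustration:

```typescript
// Sketch: a Node.js peer built on node-webrtc ("wrtc" on npm) that answers
// offers sent by browsers over a WebSocket signaling channel ("ws" on npm).
import { RTCPeerConnection } from "wrtc";
import { WebSocketServer } from "ws";

const wss = new WebSocketServer({ port: 8080 });

wss.on("connection", (socket) => {
  const pc = new RTCPeerConnection();

  // Incoming tracks from the browser land here; forward them as you like.
  pc.ontrack = (event) => {
    console.log(`received remote ${event.track.kind} track`);
  };

  pc.onicecandidate = ({ candidate }) => {
    if (candidate) {
      socket.send(JSON.stringify({ type: "candidate", candidate }));
    }
  };

  socket.on("message", async (data) => {
    const msg = JSON.parse(data.toString());
    if (msg.type === "offer") {
      await pc.setRemoteDescription({ type: "offer", sdp: msg.sdp });
      const answer = await pc.createAnswer();
      await pc.setLocalDescription(answer);
      socket.send(JSON.stringify({ type: "answer", sdp: answer.sdp }));
    } else if (msg.type === "candidate") {
      await pc.addIceCandidate(msg.candidate);
    }
  });
});
```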
The WebRTC source code is also freely available from the WebRTC website, so you can build a native application yourself that acts as a server if you want.
Yes, it's possible.
Try using Kurento with WebRTC.
You can find "one-to-many" call applications in its documentation, covering both client-to-server and server-to-many-clients.
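For a rough idea of what the one-to-many pattern looks like with the kurento-client npm package (a hedged sketch; the media server URL is a placeholder, and the SDP offers would arrive over your own signaling channel):

```typescript
// Sketch of Kurento's one-to-many pattern: the presenter streams into a
// WebRtcEndpoint on the media server, and each viewer's endpoint is fed
// from the presenter's endpoint.
import kurento from "kurento-client";

async function oneToMany(presenterOffer: string, viewerOffer: string) {
  const client = await kurento("ws://localhost:8888/kurento");
  const pipeline = await client.create("MediaPipeline");

  // The presenter streams into the server...
  const presenter = await pipeline.create("WebRtcEndpoint");
  const presenterAnswer = await presenter.processOffer(presenterOffer);
  await presenter.gatherCandidates();

  // ...and each viewer gets a new endpoint connected to the presenter's.
  const viewer = await pipeline.create("WebRtcEndpoint");
  const viewerAnswer = await viewer.processOffer(viewerOffer);
  await viewer.gatherCandidates();
  await presenter.connect(viewer);

  // Return the SDP answers to hand back to each browser.
  return { presenterAnswer, viewerAnswer };
}
```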