From the Mozilla site: https://developer.mozilla.org/en-US/docs/Web/API/Media_Streams_API
"A MediaStream consists of zero or more MediaStreamTrack objects, representing various audio or video tracks. Each MediaStreamTrack may have one or more channels. The channel represents the smallest unit of a media stream, such as an audio signal associated with a given speaker, like left or right in a stereo audio track."
That clarifies what a channel is.
Several recent RFCs (e.g. RFC 8108) refer to the need to send multiple streams in one RTP session, with each stream having its own SSRC at the RTP level. The Unified Plan RFC likewise always refers to the stream as the lowest level (not tracks or channels). RFC 3550, the base RTP RFC, makes no reference to channels.
Is the RTP stream as referred to in these RFCs, which treat the stream as the lowest-level source of media, the same as a channel as that term is used in WebRTC, and as referenced above? Is there a one-to-one mapping between the channels of a track (WebRTC) and an RTP stream with an SSRC?
A webcam, for example, generates a media stream, which can have an audio media track and a video media track; each track is transported in RTP packets using a separate SSRC, resulting in two SSRCs. Is that correct? Now what if there is a stereo webcam (or some such device with, let's say, two microphones, i.e. two channels)? Will this generate three RTP streams with three different unique SSRCs?
Is there a single RTP session for a five-tuple connection established after a successful test of ICE candidates? Or can there be multiple RTP sessions over the same IP/port/UDP five-tuple between peers?
Any document that clarifies this would be appreciated.
> That clarifies what a channel is.
Not quite. Only audio tracks have channels. Unless you use Web Audio to split up an audio MediaStreamTrack into individual channels, the track is the lowest level with regards to peer connections. *
That's because multiple audio channels, much like the multiple frames of video, are part of the payload that gets encoded and decoded by codecs. Practically speaking, you can use a Web Audio splitter on the receiver's MediaStreamTrack to split up the audio channels, provided they survived.
*) There are also data channels, but those are different, and have no relation to media streams and tracks.
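As a rough sketch of that splitting approach (illustrative names; receiver is assumed to be an RTCRtpReceiver whose track carries stereo audio):

const ctx = new AudioContext();
// Wrap the received track in a stream so Web Audio can consume it.
const source = ctx.createMediaStreamSource(new MediaStream([receiver.track]));
// A ChannelSplitterNode exposes each audio channel on its own output.
const splitter = ctx.createChannelSplitter(2);
source.connect(splitter);
// Output 0 of the splitter is the left channel, output 1 the right.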
> Is the RTP stream ... the same as channels as that term is used in WebRTC, and as referenced above?
No. Roughly speaking, you can say:

RTP stream = MediaStreamTrack
But that's not the entire story, because of sender.replaceTrack(withTrack). In short, you can replace a track that's being sent with a different track at any time during a live call, without needing to renegotiate your connection. Importantly, the other side's receiver.track does not change in this case; only its output does. This separates the pipe from the content that goes through it.
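For instance, here is a minimal sketch of swapping the outgoing video mid-call (inside an async function; pc is assumed to be an established RTCPeerConnection that already has a video sender):

// Find the sender currently carrying video.
const sender = pc.getSenders().find(s => s.track && s.track.kind === 'video');
// Capture the screen instead of the camera.
const screen = await navigator.mediaDevices.getDisplayMedia({video: true});
// Swap the content without renegotiating; the remote receiver.track stays the same object.
await sender.replaceTrack(screen.getVideoTracks()[0]);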
So on the sending side, it's more fair to say:

RTP stream = sender (pc.getSenders())

...whereas on the receiving side, it's simpler, and always true to say:

RTP stream = receiver.track

Makes sense?
In modern WebRTC, MediaStreams are dumb containers; you may add or remove tracks from them as you please, using stream.addTrack(track) and stream.removeTrack(track). Also, RTCPeerConnection deals solely with tracks. E.g.:
// Hand every track in the stream to the peer connection, passing the
// stream along so the remote side can group the tracks back together.
for (const track of stream.getTracks()) {
  pc.addTrack(track, stream);
}
> Is there a one-to-one mapping between channels of a track and RTP stream with a SSRC?
Between a MediaStreamTrack and SSRC, yes.
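You can observe that mapping yourself with getStats() on a live, connected RTCPeerConnection (a sketch, inside an async function):

const stats = await pc.getStats();
for (const report of stats.values()) {
  // One outbound-rtp entry per RTP stream being sent, each with its own SSRC.
  if (report.type === 'outbound-rtp') {
    console.log(report.kind, 'track -> SSRC', report.ssrc);
  }
}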
> A webcam, [...] can have an audio media track and a video media track, each track is transported in RTP packets using a separate SSRC, resulting in two SSRCs. Is that correct?
Yes in this case, because audio can never be bundled with video or vice versa.
> Now what if there is a stereo webcam
No difference. A stereo audio track is still a single audio track (and a single RTP stream).
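A quick illustration (a sketch; channelCount is a standard constraint, though browser support varies):

const stream = await navigator.mediaDevices.getUserMedia({audio: {channelCount: 2}});
// Stereo audio is still a single track, carried as a single RTP stream.
console.log(stream.getAudioTracks().length); // 1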
> Or can there be multiple RTP sessions over the same IP/port/UDP five-tuple between peers?
Not at the same time. But multiple tracks can share the same session, unless you use the non-default:
new RTCPeerConnection({bundlePolicy: 'max-compat'});
If you don't use it (i.e. you use any other mode), then same-kind tracks may be bundled into a single RTP session.
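For contrast, the spec also defines 'balanced' (the default) and 'max-bundle'; a quick sketch:

// 'max-bundle' goes the other way from 'max-compat': it bundles all
// media onto a single transport whenever the remote peer allows it.
const pc = new RTCPeerConnection({bundlePolicy: 'max-bundle'});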