 

What camera communication standards are used by Navigator.MediaDevices.getUserMedia()?

Does anyone know what communication standards are being used to detect camera hardware for use with getUserMedia?

I presume it's MTP or something similar, and I expect the implementation differs per browser/OS, but after two days of searching I can't find any solid information on this.

Nanhydrin asked Jul 17 '18 09:07


People also ask

What is navigator MediaDevices getUserMedia?

The MediaDevices.getUserMedia() method prompts the user for permission to use a media input which produces a MediaStream with tracks containing the requested types of media.

How do I use Navigator MediaDevices getUserMedia?

When getUserMedia() is invoked, the browser asks the user for permission to use the media inputs (camera, microphone, or both) connected to the device. Syntax: navigator.mediaDevices.getUserMedia(constraints).
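As a minimal sketch of that call (the constraint values below are arbitrary examples, and the buildConstraints helper is just for illustration):

```javascript
// Illustrative constraints: ask for video only, preferring 1280x720.
function buildConstraints(width, height) {
  return {
    audio: false,
    video: { width: { ideal: width }, height: { ideal: height } }
  };
}

// In a browser, calling this prompts the user for camera permission
// and attaches the resulting MediaStream to a <video> element.
async function startCamera() {
  const stream = await navigator.mediaDevices.getUserMedia(
    buildConstraints(1280, 720)
  );
  document.querySelector("video").srcObject = stream;
  return stream;
}
```

The returned promise rejects (e.g. with NotAllowedError) if the user denies permission, so real code should wrap the await in a try/catch.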

Is getUserMedia deprecated?

The legacy Navigator.getUserMedia() is deprecated: this feature is no longer recommended. Though some browsers might still support it, it may already have been removed from the relevant web standards, may be in the process of being dropped, or may be kept only for compatibility purposes. The modern replacement is navigator.mediaDevices.getUserMedia().
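A common compatibility pattern is to prefer the modern promise-based API and only fall back to the deprecated callback-based method. This is a hedged sketch (the function name getUserMediaCompat is made up here; the navigator object is passed in as a parameter so the logic can be exercised outside a browser):

```javascript
// Prefer navigator.mediaDevices.getUserMedia; fall back to the deprecated
// callback-style navigator.getUserMedia (and vendor-prefixed variants).
function getUserMediaCompat(nav, constraints) {
  if (nav.mediaDevices && nav.mediaDevices.getUserMedia) {
    return nav.mediaDevices.getUserMedia(constraints);
  }
  const legacy =
    nav.getUserMedia || nav.webkitGetUserMedia || nav.mozGetUserMedia;
  if (!legacy) {
    return Promise.reject(new Error("getUserMedia is not supported"));
  }
  // Wrap the legacy success/error callbacks in a promise.
  return new Promise((resolve, reject) =>
    legacy.call(nav, constraints, resolve, reject)
  );
}
```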

Does getUserMedia work on Safari?

Older versions of Safari did not support getUserMedia; since Safari 11, navigator.mediaDevices.getUserMedia() is supported.


2 Answers

The webrtc.org library has a set of platform-specific glue modules, which can be found in the webrtc.org source and also in the Chrome tree. On Windows this uses the Media Foundation APIs (with a fallback to DirectShow), Video4Linux (V4L2) on Linux, and AVFoundation on macOS.
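Whichever native capture API the platform glue uses underneath, what surfaces in JavaScript is a flat list of MediaDeviceInfo entries from navigator.mediaDevices.enumerateDevices(). As a sketch, a small helper (the name groupDevicesByKind is hypothetical) can group that list by device kind:

```javascript
// Group MediaDeviceInfo-like entries by their "kind" field
// ("audioinput", "audiooutput", "videoinput"), keeping the label
// when available and falling back to the deviceId.
function groupDevicesByKind(devices) {
  const groups = { audioinput: [], audiooutput: [], videoinput: [] };
  for (const d of devices) {
    if (groups[d.kind]) groups[d.kind].push(d.label || d.deviceId);
  }
  return groups;
}

// In a browser:
// navigator.mediaDevices.enumerateDevices()
//   .then((list) => console.log(groupDevicesByKind(list)));
```

Note that labels are typically empty until the page has been granted media permission, which is why the helper falls back to deviceId.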

Philipp Hancke answered Nov 09 '22 22:11


I searched for a long time for the answer to your question. At first I found this on the w3.org WebRTC site:

This document defines a set of ECMAScript APIs in WebIDL to allow media to be sent to and received from another browser or device implementing the appropriate set of real-time protocols. This specification is being developed in conjunction with a protocol specification developed by the IETF RTCWEB group and an API specification to get access to local media devices developed by the Media Capture Task Force.

Then, in the document "Media transport and use of RTP", I found the following information:

5.2.4. Media Stream Identification:

WebRTC endpoints that implement the SDP bundle negotiation extension will use the SDP grouping framework 'mid' attribute to identify media streams. Such endpoints MUST implement the RTP MID header extension described in [I-D.ietf-mmusic-sdp-bundle-negotiation].

This header extension uses the [RFC5285] generic header extension framework, and so needs to be negotiated before it can be used.
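For illustration, this is roughly what the 'mid' attribute and the negotiated MID header extension look like in an SDP offer. The extmap id (4) and mid values (0, 1) are arbitrary example values, and most lines of a real offer are omitted:

```
a=group:BUNDLE 0 1
m=audio 9 UDP/TLS/RTP/SAVPF 111
a=mid:0
a=extmap:4 urn:ietf:params:rtp-hdrext:sdes:mid
m=video 9 UDP/TLS/RTP/SAVPF 96
a=mid:1
a=extmap:4 urn:ietf:params:rtp-hdrext:sdes:mid
```

The a=extmap lines are the negotiation step the quoted text refers to: each side advertises the MID header extension before RTP packets carrying it are sent.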

12.2.1. Media Source Identification:

Each RTP packet stream is identified by a unique synchronisation source (SSRC) identifier. The SSRC identifier is carried in each of the RTP packets comprising a RTP packet stream, and is also used to identify that stream in the corresponding RTCP reports. The SSRC is chosen as discussed in Section 4.8. The first stage in demultiplexing RTP and RTCP packets received on a single transport layer flow at a WebRTC Endpoint is to separate the RTP packet streams based on their SSRC value; once that is done, additional demultiplexing steps can determine how and where to render the media.

RTP allows a mixer, or other RTP-layer middlebox, to combine encoded streams from multiple media sources to form a new encoded stream from a new media source (the mixer). The RTP packets in that new RTP packet stream can include a Contributing Source (CSRC) list, indicating which original SSRCs contributed to the combined source stream.

As described in Section 4.1, implementations need to support reception of RTP data packets containing a CSRC list and RTCP packets that relate to sources present in the CSRC list. The CSRC list can change on a packet-by-packet basis, depending on the mixing operation being performed.
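To make the SSRC/CSRC layout concrete, here is a hedged sketch of extracting both from a raw RTP packet, following the fixed header layout of RFC 3550 (the function name parseRtpSources is made up for this example):

```javascript
// Per RFC 3550: byte 0 holds version/padding/extension and the CSRC count
// (CC, low 4 bits); bytes 8-11 hold the SSRC; CC 32-bit CSRC entries follow.
function parseRtpSources(bytes) {
  if (bytes.length < 12) throw new Error("too short for an RTP header");
  const csrcCount = bytes[0] & 0x0f; // CC field: number of CSRC entries
  const readU32 = (o) =>
    ((bytes[o] << 24) | (bytes[o + 1] << 16) |
     (bytes[o + 2] << 8) | bytes[o + 3]) >>> 0;
  const ssrc = readU32(8); // SSRC occupies bytes 8-11
  const csrcs = [];
  for (let i = 0; i < csrcCount; i++) csrcs.push(readU32(12 + 4 * i));
  return { ssrc, csrcs };
}
```

A demultiplexer along the lines of the quoted text would use the returned ssrc to pick the packet stream, and the csrcs list to attribute a mixed stream back to its contributing sources.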

Knowledge of what media sources contributed to a particular RTP packet can be important if the user interface indicates which participants are active in the session. Changes in the CSRC list included in packets needs to be exposed to the WebRTC application using some API, if the application is to be able to track changes in session participation. It is desirable to map CSRC values back into WebRTC MediaStream identities as they cross this API, to avoid exposing the SSRC/CSRC name space to WebRTC applications.

If the mixer-to-client audio level extension [RFC6465] is being used in the session (see Section 5.2.3), the information in the CSRC list is augmented by audio level information for each contributing source. It is desirable to expose this information to the WebRTC application using some API, after mapping the CSRC values to WebRTC MediaStream identities, so it can be exposed in the user interface.

(Quoted from the Internet-Draft "RTP for WebRTC", Perkins et al., March 2016.)

All transports used by WebRTC are listed in the IETF document "Transports for WebRTC".

All documents from the IETF RTCWEB group can be found on the working-group site "Real-Time Communication in WEB-browsers (rtcweb)".


For further information:
  • Media Capture (with links to all documents)
  • MediaStream API (all methods which are used in this API)
  • Real-time Transport Protocol (RTP)
  • Session Description Protocol (SDP)


My conclusion:
  1. Session Description Protocol (SDP)
  2. Real-time Transport Protocol (RTP) (likely as well)
Bharata answered Nov 09 '22 22:11