 

Why the difference in native camera resolution -vs- getUserMedia on iPad / iOS?

I've built this web app for iPads that uses getUserMedia and streams the resulting video through to a video element on a website. The model I'm using is an iPad Air with a rear-camera resolution of 1936x2592. Currently the constraints for the getUserMedia method are:

video: {
    facingMode: 'environment',
    width: { ideal: 1936 },
    height: { ideal: 2592 }
}

However, when I pull the video in, it looks fairly grainy. Digging through the console to grab the stream, the video track, and then that track's settings, it appears the video's resolution has been scaled down to 720x1280. Is there any particular reason for this? Is there a max resolution that WebRTC/getUserMedia can handle?
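
For reference, this is roughly how the stream is requested and how the track settings are checked (a minimal sketch; the video element selector is just for illustration):

navigator.mediaDevices.getUserMedia({
    video: {
        facingMode: 'environment',
        width: { ideal: 1936 },
        height: { ideal: 2592 }
    }
}).then(stream => {
    // Attach the stream to the page's video element
    document.querySelector('video').srcObject = stream;

    // Inspect what the browser actually delivered; on the iPad Air this
    // reports a 720x1280 feed rather than the requested 1936x2592
    const [track] = stream.getVideoTracks();
    console.log(track.getSettings());
});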

asked Oct 19 '18 by aschmelyun

1 Answer

Edit - ImageCapture

If 60 FPS video isn't a hard requirement and you have leeway on compatibility, you can poll ImageCapture to emulate a camera feed and receive a much clearer image from the camera.

You would have to check for client-side support and then potentially fall back to MediaCapture.

The API enables control over camera features such as zoom, brightness, contrast, ISO and white balance. Best of all, Image Capture allows you to access the full resolution capabilities of any available device camera or webcam. Previous techniques for taking photos on the Web have used video snapshots (MediaCapture rendered to a Canvas), which are lower resolution than that available for still images.

https://developers.google.com/web/updates/2016/12/imagecapture

and its polyfill:

https://github.com/GoogleChromeLabs/imagecapture-polyfill
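
A rough sketch of that approach, feature-detecting ImageCapture and falling back to a canvas snapshot of the video element (the captureStill helper name is just for illustration):

async function captureStill(videoEl, track) {
    if ('ImageCapture' in window) {
        // takePhoto() uses the camera's full still-image resolution
        const imageCapture = new ImageCapture(track);
        return imageCapture.takePhoto(); // resolves with a Blob
    }

    // Fallback: snapshot the (lower-resolution) MediaCapture video frame
    const canvas = document.createElement('canvas');
    canvas.width = videoEl.videoWidth;
    canvas.height = videoEl.videoHeight;
    canvas.getContext('2d').drawImage(videoEl, 0, 0);
    return new Promise(resolve => canvas.toBlob(resolve, 'image/jpeg'));
}

Polling this in a loop is how you would emulate a live camera feed from still captures, at the cost of frame rate.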


MediaCapture

Bit of a long answer... mostly learned from looking at AR web and native apps over the last few years.

If you have a camera which only supports 1920x1080, 1280x720, and 640x480 resolutions, the browser's Media Capture implementation can emulate a 480x640 feed from the 1280x720 one. From testing (primarily Chrome), the browser typically scales 720 down to 640 and then crops the center. Sometimes, when I have used virtual camera software, I have seen Chrome add artificial black padding around an unsupported resolution. The client sees a success message and a feed of the right dimensions, but a person would see a qualitative degradation. Because of this emulation you cannot guarantee the feed hasn't been scaled, although it will typically have the dimensions you requested.
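
One way to see this in practice is to compare what the browser claims the camera can do with what the track actually reports (a sketch; getCapabilities() is not supported in every browser):

navigator.mediaDevices.getUserMedia({ video: true }).then(stream => {
    const [track] = stream.getVideoTracks();

    // Min/max ranges the browser claims it can satisfy (possibly via emulation)
    if (track.getCapabilities) {
        console.log('capabilities:', track.getCapabilities());
    }

    // The dimensions the (possibly scaled or cropped) feed actually reports
    console.log('settings:', track.getSettings());
});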

You can read about constraints here. It basically boils down to: give me a resolution as close to x as possible. The browser then decides, based on its own implementation, whether to reject the constraints and throw an error, provide that resolution natively, or emulate it.
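
For example (a sketch of the two extremes; ideal is only a preference, while exact is a hard requirement):

// `ideal` is never rejected: the browser may scale or crop another
// mode to approximate it, which is where emulation comes in.
const relaxed = { video: { width: { ideal: 1936 }, height: { ideal: 2592 } } };

// `exact` must be satisfied (natively or by scaling); if it can't be,
// getUserMedia rejects with an OverconstrainedError instead.
const strict = { video: { width: { exact: 4096 }, height: { exact: 2160 } } };

navigator.mediaDevices.getUserMedia(strict)
    .catch(err => console.log(err.name, err.constraint));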

More information on this design is detailed in the Media Capture and Streams specification. Especially:

The RTCPeerConnection is an interesting object because it acts simultaneously as both a sink and a source for over-the-network streams. As a sink, it has source transformational capabilities (e.g., lowering bit-rates, scaling-up / down resolutions, and adjusting frame-rates), and as a source it could have its own settings changed by a track source.

The main reason for this is that n clients may have access to the same media source but require different resolutions, bit rates, etc.; emulation/scaling/transforming attempts to solve this problem. A downside is that you never truly know what the source resolution is.

Thus, to answer your specific question: Apple has determined within Safari which resolutions should be scaled, where, and when. If you are not specific enough you may encounter this grainy appearance. I have found that if you use constraints with min, max, and exact you get a clearer iOS camera feed. If the resolution is not supported, Safari will either try to emulate it or reject it.
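
Something along these lines, for example (a sketch only; substitute the dimensions your device actually supports):

const constraints = {
    video: {
        facingMode: 'environment',
        // Pin the range so Safari can't silently fall back to 720x1280
        width:  { min: 1280, ideal: 1936, max: 1936 },
        height: { min: 720,  ideal: 2592, max: 2592 }
    }
};

navigator.mediaDevices.getUserMedia(constraints)
    .then(stream => { /* attach to the video element as before */ })
    .catch(err => {
        // Rejected rather than emulated: nothing Safari can provide
        // (natively or by scaling) fits within min/max
        console.log(err.name, err.constraint);
    });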

answered Nov 05 '22 by Marcus