I want to make a live face recognition system. My code so far detects a human face. I want to be able to process or scan the frames in the webcam to recognize the faces. I am using getUserMedia to load the webcam. I want to make the recognition process live instead of having to store the image for recognition. Following is the code I am using to start the webcam. I am a beginner so sorry for any confusions, any help is appreciated. Thank you!
function startVideo() {
  document.body.append('Loaded')
  navigator.getUserMedia(
    { video: {} },
    stream => video.srcObject = stream,
    err => console.error(err)
  )
}
You didn't say what format you want your webcam-captured images to be delivered in. It's pretty easy to deliver them from a <video /> element into a <canvas /> element. Here's some example code, based on the "official" WebRTC sample.
const video = document.querySelector('video');
const canvas = document.querySelector('canvas');
canvas.width = 480;
canvas.height = 360;
const button = document.querySelector('button');
See the drawImage() method call... that's what grabs the snapshot from the video preview element.
button.onclick = function() {
  /* set the canvas to the dimensions of the video feed */
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  /* make the snapshot */
  canvas.getContext('2d').drawImage(video, 0, 0, canvas.width, canvas.height);
};
navigator.mediaDevices.getUserMedia({ audio: false, video: true })
  .then(stream => video.srcObject = stream)
  .catch(error => console.error(error));
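Since you want the recognition to be live rather than one-shot, you can run the same drawImage() / getImageData() capture inside a requestAnimationFrame loop and hand each frame's pixel data to your recognition step. Here's a sketch; `recognize` is a placeholder for whatever face-recognition library you end up using, not a real API:

```javascript
// Repeatedly copies the current video frame into the canvas and hands the
// raw pixel data (an ImageData object) to a user-supplied recognize()
// callback. recognize() is hypothetical — swap in your recognition library.
function startRecognitionLoop(video, canvas, recognize) {
  const ctx = canvas.getContext('2d');
  function tick() {
    // grab the current frame from the video preview
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    const frame = ctx.getImageData(0, 0, canvas.width, canvas.height);
    recognize(frame);            // process this frame's pixels
    requestAnimationFrame(tick); // schedule the next frame
  }
  tick();
}
```

In practice you'd call it once the stream is attached, e.g. `startRecognitionLoop(video, canvas, frame => { /* detect faces */ })`. If recognition is expensive, you can throttle the loop (process every Nth frame) instead of running on every animation frame.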
Obviously this is very simple. The parameter you pass to gUM is a MediaStreamConstraints object. It gives you a lot of control over the video (and audio) you want to capture.
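For example, a more detailed constraints object might look like this (the specific resolution, frame-rate, and facing-mode values here are just illustrative; the browser treats `ideal` values as preferences, not requirements):

```javascript
// Illustrative MediaStreamConstraints object — values are examples only
const constraints = {
  audio: false,
  video: {
    width:  { ideal: 1280 },           // preferred capture width in pixels
    height: { ideal: 720 },            // preferred capture height in pixels
    frameRate: { ideal: 30, max: 60 }, // preferred / maximum frames per second
    facingMode: 'user'                 // front-facing camera on mobile devices
  }
};

// navigator.mediaDevices.getUserMedia(constraints)
//   .then(stream => video.srcObject = stream)
//   .catch(error => console.error(error));
```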