I could not find this information in the official sources. There are many examples of how to work with the old Camera API, but hardly anything on the Camera2 API: just a couple of discussions on Stack Overflow. There is a question similar to mine, but my problem is still not solved.
Link to the similar question: Android camera2 face recognition
I took Google's Camera2 sample (Camera2Basic) as a starting point.
Here is what I did, but no faces are detected:
I added the following:
private void createCameraPreviewSession() {
    try {
        SurfaceTexture texture = mTextureView.getSurfaceTexture();
        assert texture != null;

        // We configure the size of default buffer to be the size of camera preview we want.
        texture.setDefaultBufferSize(mPreviewSize.getWidth(), mPreviewSize.getHeight());

        // This is the output Surface we need to start preview.
        Surface surface = new Surface(texture);

        // We set up a CaptureRequest.Builder with the output Surface.
        mPreviewRequestBuilder
                = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
        mPreviewRequestBuilder.addTarget(surface);

        // Here, we create a CameraCaptureSession for camera preview.
        mCameraDevice.createCaptureSession(Arrays.asList(surface, mImageReader.getSurface()),
                new CameraCaptureSession.StateCallback() {

                    @Override
                    public void onConfigured(@NonNull CameraCaptureSession cameraCaptureSession) {
                        // The camera is already closed
                        if (null == mCameraDevice) {
                            return;
                        }

                        // When the session is ready, we start displaying the preview.
                        mCaptureSession = cameraCaptureSession;
                        try {
                            // ---->> Enable the face detection module
                            mPreviewRequestBuilder.set(CaptureRequest.STATISTICS_FACE_DETECT_MODE,
                                    CameraMetadata.STATISTICS_FACE_DETECT_MODE_FULL);

                            // Auto focus should be continuous for camera preview.
                            mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AF_MODE,
                                    CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE);
                            // Flash is automatically enabled when necessary.
                            setAutoFlash(mPreviewRequestBuilder);

                            // Finally, we start displaying the camera preview.
                            mPreviewRequest = mPreviewRequestBuilder.build();
                            mCaptureSession.setRepeatingRequest(mPreviewRequest,
                                    mCaptureCallback, mBackgroundHandler);
                        } catch (CameraAccessException e) {
                            e.printStackTrace();
                        }
                    }

                    @Override
                    public void onConfigureFailed(
                            @NonNull CameraCaptureSession cameraCaptureSession) {
                        System.out.println("Failed at line 757");
                    }
                }, null
        );
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }
}
private CameraCaptureSession.CaptureCallback mCaptureCallback
        = new CameraCaptureSession.CaptureCallback() {

    @Override
    public void onCaptureProgressed(@NonNull CameraCaptureSession session,
                                    @NonNull CaptureRequest request,
                                    @NonNull CaptureResult partialResult) {
        process(partialResult);
    }

    @Override
    public void onCaptureCompleted(@NonNull CameraCaptureSession session,
                                   @NonNull CaptureRequest request,
                                   @NonNull TotalCaptureResult result) {
        process(result);
    }

    private void process(CaptureResult result) {
        // ---> Here I check whether the result contains an array of faces, plus the mode value I don't understand
        Integer mode = result.get(CaptureResult.STATISTICS_FACE_DETECT_MODE);
        Face[] faces = result.get(CaptureResult.STATISTICS_FACES);
        if (faces != null && mode != null)
            System.out.println("tagDDDDDDDDDDDDDDDDDDDDDDDD" + "faces : " +
                    faces.length + " , mode : " + mode);

        switch (mState) {
            case STATE_PREVIEW: {
                // We have nothing to do when the camera preview is working normally.
                // Here I set face detection
                mPreviewRequestBuilder.set(CaptureRequest.STATISTICS_FACE_DETECT_MODE,
                        CameraMetadata.STATISTICS_FACE_DETECT_MODE_FULL);
                break;
            }
            // ... the other capture states are unchanged from the Camera2Basic sample
        }
    }
};
Here I check the maximum number of faces the camera can detect:
private void setUpCameraOutputs(int width, int height) {
    CameraManager manager = (CameraManager) getSystemService(Context.CAMERA_SERVICE);
    try {
        for (String cameraId : manager.getCameraIdList()) {
            CameraCharacteristics characteristics
                    = manager.getCameraCharacteristics(cameraId);

            // We don't use a front facing camera in this sample.
            Integer facing = characteristics.get(CameraCharacteristics.LENS_FACING);
            if (facing != null && facing == CameraCharacteristics.LENS_FACING_FRONT) {
                continue;
            }

            max_count = characteristics.get(
                    CameraCharacteristics.STATISTICS_INFO_MAX_FACE_COUNT);
            modes = characteristics.get(
                    CameraCharacteristics.STATISTICS_INFO_AVAILABLE_FACE_DETECT_MODES);
            System.out.println("!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! max_count " + max_count + " modes " + modes);

            // ... the rest of the method is unchanged from the Camera2Basic sample
        }
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }
}
**Output is:**
I/System.out: !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! max_count 16 modes [I@3e2907e8
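(Side note: [I@3e2907e8 is just the default toString() of a Java int[]. To see which face-detect modes the device actually reports, print the array contents, for example:)

// STATISTICS_INFO_AVAILABLE_FACE_DETECT_MODES returns an int[]; printing the array object
// only shows its identity hash. Arrays.toString shows the actual values,
// where 0 = OFF, 1 = SIMPLE and 2 = FULL.
int[] availableModes = characteristics.get(
        CameraCharacteristics.STATISTICS_INFO_AVAILABLE_FACE_DETECT_MODES);
System.out.println("available face detect modes: "
        + java.util.Arrays.toString(availableModes));

If 2 (FULL) is not in that list, the device does not advertise full face detection, only SIMPLE or OFF.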
**And this is what the log prints:**
03-08 18:34:07.018 7405-7438/com.example.android.camera2basic I/System.out: tagDDDDDDDDDDDDDDDDDDDDDDDDfaces : 0 , mode : 1
03-08 18:34:07.048 7405-7438/com.example.android.camera2basic I/System.out: tagDDDDDDDDDDDDDDDDDDDDDDDDfaces : 0 , mode : 1
03-08 18:34:07.078 7405-7438/com.example.android.camera2basic I/System.out: tagDDDDDDDDDDDDDDDDDDDDDDDDfaces : 0 , mode : 1
03-08 18:34:07.118 7405-7438/com.example.android.camera2basic I/System.out: tagDDDDDDDDDDDDDDDDDDDDDDDDfaces : 0 , mode : 1
03-08 18:34:07.148 7405-7438/com.example.android.camera2basic I/System.out: tagDDDDDDDDDDDDDDDDDDDDDDDDfaces : 0 , mode : 1
03-08 18:34:07.178 7405-7438/com.example.android.camera2basic I/System.out: tagDDDDDDDDDDDDDDDDDDDDDDDDfaces : 0 , mode : 1
03-08 18:34:07.218 7405-7438/com.example.android.camera2basic I/System.out: tagDDDDDDDDDDDDDDDDDDDDDDDDfaces : 0 , mode : 1
03-08 18:34:07.258 7405-7438/com.example.android.camera2basic I/System.out: tagDDDDDDDDDDDDDDDDDDDDDDDDfaces : 0 , mode : 1
03-08 18:34:07.288 7405-7438/com.example.android.camera2basic I/System.out: tagDDDDDDDDDDDDDDDDDDDDDDDDfaces : 0 , mode : 1
03-08 18:34:07.308 7405-7438/com.example.android.camera2basic I/System.out: tagDDDDDDDDDDDDDDDDDDDDDDDDfaces : 0 , mode : 1
03-08 18:34:07.348 7405-7438/com.example.android.camera2basic I/System.out: tagDDDDDDDDDDDDDDDDDDDDDDDDfaces : 0 , mode : 1
Why isn't it returning any faces? If someone has a correct, working example, please share a link. How can I get face detection working with the Camera2 API? I have been stuck on this for a week.
Eigenface-based: the Eigenface algorithm is used for face recognition; it is a method for efficiently representing faces using Principal Component Analysis (PCA).
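For reference, here is a minimal sketch of the Eigenface approach, assuming OpenCV's contrib "face" module (org.opencv.face) is available in your Java build (that depends on how OpenCV was compiled); all image paths and labels below are placeholders:

// Heavily-hedged sketch: assumes the org.opencv.face contrib module is present.
import java.util.Arrays;
import java.util.List;
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.face.EigenFaceRecognizer;
import org.opencv.imgcodecs.Imgcodecs;

public class EigenFaceSketch {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // Training set: grayscale face images, all the same size, with an integer label per image.
        List<Mat> images = Arrays.asList(
                Imgcodecs.imread("person0_a.png", Imgcodecs.IMREAD_GRAYSCALE),
                Imgcodecs.imread("person0_b.png", Imgcodecs.IMREAD_GRAYSCALE),
                Imgcodecs.imread("person1_a.png", Imgcodecs.IMREAD_GRAYSCALE));
        Mat labels = new Mat(3, 1, CvType.CV_32SC1);
        labels.put(0, 0, new int[]{0, 0, 1});

        // PCA-based recognizer: training computes the eigenfaces of the sample set.
        EigenFaceRecognizer model = EigenFaceRecognizer.create();
        model.train(images, labels);

        // Prediction projects the probe image into eigenface space and returns the nearest label.
        Mat probe = Imgcodecs.imread("unknown.png", Imgcodecs.IMREAD_GRAYSCALE);
        int[] predictedLabel = new int[1];
        double[] distance = new double[1];
        model.predict(probe, predictedLabel, distance);
        System.out.println("Predicted label " + predictedLabel[0] + " (distance " + distance[0] + ")");
    }
}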
The OpenCV approach is a common method for face detection. It first extracts Haar features of faces from a large set of sample images and then uses the AdaBoost algorithm to train the face detector.
Both face detection and face recognition can be done with OpenCV. Face recognition is a technique to identify or verify a face from a digital image or video frame. A human can identify faces quickly and without much effort, but it is a difficult task for a computer.
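A minimal sketch of that Haar-cascade detection with the OpenCV Java bindings might look like this (the cascade XML file and the input image path are placeholders you would supply yourself):

// Minimal sketch of Haar-cascade face detection using the OpenCV Java bindings.
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Rect;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.objdetect.CascadeClassifier;

public class HaarFaceDetect {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        // Pre-trained cascade (built with AdaBoost over Haar features); path is a placeholder.
        CascadeClassifier detector =
                new CascadeClassifier("haarcascade_frontalface_default.xml");
        Mat image = Imgcodecs.imread("group_photo.jpg"); // input image is a placeholder
        MatOfRect faces = new MatOfRect();
        detector.detectMultiScale(image, faces); // slides the cascade over the image at multiple scales
        System.out.println("Detected " + faces.toArray().length + " face(s)");
        for (Rect r : faces.toArray()) {
            System.out.println("Face at (" + r.x + ", " + r.y + "), " + r.width + "x" + r.height);
        }
    }
}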
With Play Services 8.3, Google introduced the Mobile Vision APIs. They include an easy-to-use API called the Face API that detects human faces in images and videos. Do not confuse this with face recognition: the Face API currently supports detection only.
Detecting a face
When the API detects a human face, it is returned as a Face object. The Face object provides the spatial data for the face so you can, for example, draw bounding rectangles around a face, or, if you use landmarks on the face, you can add features to the face in the correct place, such as giving a person a new hat.
It also comes with handy methods that report, for example, whether the person is smiling :) or winking ;).
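For illustration, here is a small sketch of running the Face API on a single Bitmap rather than the live camera2 preview. It assumes the play-services-vision dependency is on the classpath, and context/bitmap are whatever you already have; note that this Face class is com.google.android.gms.vision.face.Face, not android.hardware.camera2.params.Face.

import android.content.Context;
import android.graphics.Bitmap;
import android.util.Log;
import android.util.SparseArray;
import com.google.android.gms.vision.Frame;
import com.google.android.gms.vision.face.Face;
import com.google.android.gms.vision.face.FaceDetector;

public class VisionFaceHelper {
    public static void detectFaces(Context context, Bitmap bitmap) {
        FaceDetector detector = new FaceDetector.Builder(context)
                .setTrackingEnabled(false)
                .setLandmarkType(FaceDetector.ALL_LANDMARKS)
                .setClassificationType(FaceDetector.ALL_CLASSIFICATIONS) // smile / eyes-open probabilities
                .build();
        if (!detector.isOperational()) {
            // The native face-detection library may still be downloading on first use.
            Log.w("VisionFaceHelper", "Face detector dependencies not yet available");
            detector.release();
            return;
        }
        Frame frame = new Frame.Builder().setBitmap(bitmap).build();
        SparseArray<Face> faces = detector.detect(frame);
        for (int i = 0; i < faces.size(); i++) {
            Face face = faces.valueAt(i);
            Log.d("VisionFaceHelper", "face " + i
                    + " smiling=" + face.getIsSmilingProbability()
                    + " leftEyeOpen=" + face.getIsLeftEyeOpenProbability());
        }
        detector.release(); // free the native detector when done
    }
}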
Check out the documentation and reference to learn more.
Hope this helps :)