I am trying to make a camera app that detects faces using the Google Mobile Vision API with a custom camera instance, NOT the CameraSource class bundled with the Google API, because I am also processing the frames to detect colors, and CameraSource does not give me access to the camera frames.
After searching for this issue, the only results I've found are about using Mobile Vision with its own CameraSource, not with any custom Camera1 API. I've tried to override the frame processing and then run the detection on the resulting images, like this:
camera.setPreviewCallback(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        Log.d("onPreviewFrame", "" + data.length);
        Camera.Parameters parameters = camera.getParameters();
        int width = parameters.getPreviewSize().width;
        int height = parameters.getPreviewSize().height;

        // Convert the NV21 preview bytes to a JPEG, then decode into a Bitmap
        ByteArrayOutputStream outstr = new ByteArrayOutputStream();
        Rect rect = new Rect(0, 0, width, height);
        YuvImage yuvimage = new YuvImage(data, ImageFormat.NV21, width, height, null);
        yuvimage.compressToJpeg(rect, 20, outstr);
        Bitmap bmp = BitmapFactory.decodeByteArray(outstr.toByteArray(), 0, outstr.size());

        // Build a face detector and run it on the decoded Bitmap
        detector = new FaceDetector.Builder(getApplicationContext())
                .setTrackingEnabled(true)
                .setLandmarkType(FaceDetector.ALL_LANDMARKS)
                .setMode(FaceDetector.FAST_MODE)
                .build();
        detector.setProcessor(
                new MultiProcessor.Builder<>(new GraphicFaceTrackerFactory())
                        .build());

        if (detector.isOperational()) {
            frame = new Frame.Builder().setBitmap(bmp).build();
            mFaces = detector.detect(frame);
            // detector.release();
        }
    }
});
So is there any way to link Mobile Vision with my camera instance for frame processing, so that I can detect faces with it? You can see what I've done so far here: https://github.com/etman55/FaceDetectionSampleApp
**NEW UPDATE**
After finding an open-source version of the CameraSource class, I solved most of my problems, but now when trying to detect faces the detector receives the frames correctly yet can't detect anything. You can see my last commit in the GitHub repo.
I can provide you with some very useful tips.
Building a new FaceDetector for each frame the camera delivers is a very bad idea, and also unnecessary. You only have to create it once, outside the camera frame callback.
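For example, a minimal sketch of that setup (the field name mDetector and doing this in onCreate()/onDestroy() are just one reasonable arrangement, reusing the GraphicFaceTrackerFactory from your question):

private FaceDetector mDetector;

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    // Create the detector once and reuse it for every preview frame
    mDetector = new FaceDetector.Builder(getApplicationContext())
            .setTrackingEnabled(true)
            .setLandmarkType(FaceDetector.ALL_LANDMARKS)
            .setMode(FaceDetector.FAST_MODE)
            .build();
    mDetector.setProcessor(
            new MultiProcessor.Builder<>(new GraphicFaceTrackerFactory()).build());
}

@Override
protected void onDestroy() {
    super.onDestroy();
    // Release the detector's native resources when it is no longer needed
    mDetector.release();
}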
It is also not necessary to take the YUV_420_SP (NV21) preview bytes, wrap them in a YuvImage, compress that to a JPEG, and decode the JPEG into a Bitmap just to build a Frame. If you take a look at the Frame.Builder documentation you can see that setImageData() accepts NV21 directly from the camera preview, like this:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    detector.detect(new Frame.Builder()
            .setImageData(ByteBuffer.wrap(data), previewW, previewH, ImageFormat.NV21)
            .build());
}
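One more detail worth checking: Frame.Builder also has a setRotation() method (the Kotlin version below uses it), and face detection often finds nothing when the frame rotation does not match the actual image orientation, which could explain a detector that receives frames but returns no faces. A sketch, where previewW, previewH, and Frame.ROTATION_90 are placeholders you would derive from your own camera setup:

@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    Frame frame = new Frame.Builder()
            .setImageData(ByteBuffer.wrap(data), previewW, previewH, ImageFormat.NV21)
            // Placeholder rotation: compute the real Frame.ROTATION_* value
            // from the camera sensor orientation and the display rotation
            .setRotation(Frame.ROTATION_90)
            .build();
    SparseArray<Face> faces = detector.detect(frame);
}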
And a Kotlin version (note that this snippet comes from a project using the Fotoapparat camera library with a different Mobile Vision detector, since displayValue is not a property of Face, but the Frame construction is identical):
import android.graphics.ImageFormat
import androidx.core.util.valueIterator
import java.nio.ByteBuffer
import com.google.android.gms.vision.Frame as GoogleVisionFrame
import io.fotoapparat.preview.Frame as FotoapparatFrame

fun recogniseFrame(frame: FotoapparatFrame) = detector.detect(buildDetectorFrame(frame))
    .valueIterator()
    .asSequence()
    .firstOrNull { it.displayValue.isNotEmpty() }
    ?.displayValue

private fun buildDetectorFrame(frame: FotoapparatFrame) =
    GoogleVisionFrame.Builder()
        .setRotation(frame.rotation.toGoogleVisionRotation())
        .setImageData(
            ByteBuffer.wrap(frame.image),
            frame.size.width,
            frame.size.height,
            ImageFormat.NV21
        )
        .build()