I want to implement the face detection described in this blog with CameraX and ML Kit, using a custom overlay that enables the shutter button only when the face is inside the bounding box.
The expected result video and starter source code (with CameraX) are included here:
https://medium.com/onfido-tech/face-detection-and-tracking-on-android-using-ml-kit-part-1-fbee4200d174
Following the Android CameraX codelabs, I was able to capture images and video. However, the ML Kit bounding-box implementation requires a graphic overlay.
import android.content.Context
import android.graphics.Bitmap
import android.graphics.Canvas
import android.graphics.Color
import android.graphics.Paint
import android.graphics.PorterDuff
import android.graphics.PorterDuffXfermode
import android.util.AttributeSet
import android.view.View
import androidx.core.content.ContextCompat
import androidx.core.graphics.ColorUtils

class OverlayPosition(var x: Float, var y: Float, var r: Float)

class OverlayView @JvmOverloads constructor(
    context: Context,
    attrs: AttributeSet? = null,
    defStyleAttr: Int = 0
) : View(context, attrs, defStyleAttr) {

    private val paint: Paint = Paint()
    private val holePaint: Paint = Paint()
    private val border: Paint = Paint()
    private var bitmap: Bitmap? = null
    private var layer: Canvas? = null

    // Position of the hole; setting it triggers a redraw.
    var holePosition: OverlayPosition = OverlayPosition(0.0f, 0.0f, 0.0f)
        set(value) {
            field = value
            invalidate()
        }

    init {
        // Configure the semi-transparent background color.
        val backgroundAlpha = 0.8
        paint.color = ColorUtils.setAlphaComponent(
            ContextCompat.getColor(context, R.color.overlay),
            (255 * backgroundAlpha).toInt()
        )
        // Configure the white border drawn around the hole.
        border.color = Color.parseColor("#FFFFFF")
        border.strokeWidth = 30f
        border.style = Paint.Style.STROKE
        border.isAntiAlias = true
        border.isDither = true
        // Configure the hole paint: CLEAR punches a transparent hole into the bitmap layer.
        holePaint.color = ContextCompat.getColor(context, android.R.color.transparent)
        holePaint.xfermode = PorterDuffXfermode(PorterDuff.Mode.CLEAR)
    }

    override fun onDraw(canvas: Canvas) {
        super.onDraw(canvas)
        if (bitmap == null) {
            configureBitmap()
        }
        // Draw the dimmed background.
        layer?.drawRect(0.0f, 0.0f, width.toFloat(), height.toFloat(), paint)
        // Draw the border, then punch the transparent hole.
        layer?.drawCircle((width / 2).toFloat(), (height / 4).toFloat(), 400f, border)
        layer?.drawCircle((width / 2).toFloat(), (height / 4).toFloat(), 400f, holePaint)
        // Draw the composited bitmap onto the view's canvas.
        canvas.drawBitmap(bitmap!!, 0.0f, 0.0f, paint)
    }

    private fun configureBitmap() {
        // Create the off-screen bitmap and a canvas layer to draw into it.
        bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888)
        layer = Canvas(bitmap!!)
    }
}
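To actually gate the shutter button, one way is to feed frames from an ImageAnalysis use case into ML Kit's face detector and compare the detected bounding box against the hole drawn above. The following is only a rough sketch under my own assumptions: FaceInHoleAnalyzer and the onFaceInHole callback are names I made up, and the image-to-view coordinate mapping is simplified (it ignores PreviewView's scale type and mirroring), so treat it as a starting point rather than a drop-in implementation.
import android.graphics.RectF
import androidx.camera.core.ExperimentalGetImage
import androidx.camera.core.ImageAnalysis
import androidx.camera.core.ImageProxy
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.face.FaceDetection
import com.google.mlkit.vision.face.FaceDetectorOptions

// Sketch only: runs ML Kit face detection on each frame and reports whether the
// (roughly mapped) face bounding box falls inside the circular hole drawn above.
@ExperimentalGetImage
class FaceInHoleAnalyzer(
    private val overlay: OverlayView,
    private val onFaceInHole: (Boolean) -> Unit
) : ImageAnalysis.Analyzer {

    private val detector = FaceDetection.getClient(
        FaceDetectorOptions.Builder()
            .setPerformanceMode(FaceDetectorOptions.PERFORMANCE_MODE_FAST)
            .build()
    )

    override fun analyze(imageProxy: ImageProxy) {
        val mediaImage = imageProxy.image
        if (mediaImage == null) {
            imageProxy.close()
            return
        }
        val input = InputImage.fromMediaImage(mediaImage, imageProxy.imageInfo.rotationDegrees)
        detector.process(input)
            .addOnSuccessListener { faces ->
                val face = faces.firstOrNull()
                if (face == null) {
                    onFaceInHole(false)
                    return@addOnSuccessListener
                }
                // Naive mapping from (rotated) image coordinates to overlay coordinates.
                // A correct mapping must also account for PreviewView's scale type.
                val scaleX = overlay.width.toFloat() / input.height
                val scaleY = overlay.height.toFloat() / input.width
                val box = face.boundingBox
                val mappedBox = RectF(
                    box.left * scaleX, box.top * scaleY,
                    box.right * scaleX, box.bottom * scaleY
                )
                // The hole drawn in OverlayView.onDraw is a circle at (width/2, height/4), radius 400.
                val hole = RectF(
                    overlay.width / 2f - 400f, overlay.height / 4f - 400f,
                    overlay.width / 2f + 400f, overlay.height / 4f + 400f
                )
                onFaceInHole(hole.contains(mappedBox))
            }
            .addOnCompleteListener { imageProxy.close() }
    }
}
Inside the callback you would then enable or disable your capture button on the main thread, e.g. shutterButton.isEnabled = inHole, where shutterButton stands for whatever button you use to take the picture.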
The CameraX team provides a GitHub code sample on how to detect objects with ML Kit and draw an overlay on the Preview. Please take a look and let me know if this works for you. The API is called MlKitAnalyzer.
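For what it's worth, here is a minimal sketch of that approach, assuming a LifecycleCameraController and an ML Kit face detector; with COORDINATE_SYSTEM_VIEW_REFERENCED the bounding boxes come back already mapped to PreviewView coordinates, so they can be compared against an overlay without manual scaling. The function and variable names below are my own placeholders.
import androidx.appcompat.app.AppCompatActivity
import androidx.camera.mlkit.vision.MlKitAnalyzer
import androidx.camera.view.CameraController
import androidx.camera.view.LifecycleCameraController
import androidx.camera.view.PreviewView
import androidx.core.content.ContextCompat
import com.google.mlkit.vision.face.FaceDetection
import com.google.mlkit.vision.face.FaceDetectorOptions

// Sketch only: MlKitAnalyzer runs the detector and hands results back already
// mapped to PreviewView coordinates, so no image-to-view conversion is needed.
fun startCameraWithMlKitAnalyzer(activity: AppCompatActivity, previewView: PreviewView) {
    val faceDetector = FaceDetection.getClient(FaceDetectorOptions.Builder().build())
    val controller = LifecycleCameraController(activity)
    val mainExecutor = ContextCompat.getMainExecutor(activity)

    controller.setImageAnalysisAnalyzer(
        mainExecutor,
        MlKitAnalyzer(
            listOf(faceDetector),
            CameraController.COORDINATE_SYSTEM_VIEW_REFERENCED,
            mainExecutor
        ) { result ->
            // Bounding boxes here are in PreviewView coordinates.
            val faceBox = result?.getValue(faceDetector)?.firstOrNull()?.boundingBox
            // Compare faceBox against your overlay's target area here.
        }
    )

    controller.bindToLifecycle(activity)
    previewView.controller = controller
}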
In the layout file, we will define it as follows:
<androidx.constraintlayout.widget.ConstraintLayout
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <FrameLayout
        android:layout_width="0dp"
        android:layout_height="0dp"
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintTop_toTopOf="parent">

        <androidx.camera.view.PreviewView
            android:id="@+id/viewFinder"
            android:layout_width="match_parent"
            android:layout_height="match_parent"
            app:scaleType="fillStart" />

        <com.example.facedetection.utils.OvalOverlayView
            android:id="@+id/oval_view"
            android:layout_width="match_parent"
            android:layout_height="match_parent" />

    </FrameLayout>

    <!-- other views -->

</androidx.constraintlayout.widget.ConstraintLayout>
And here is the definition of the custom OvalOverlayView:
import android.content.Context
import android.graphics.Canvas
import android.graphics.Color
import android.graphics.Paint
import android.graphics.Path
import android.graphics.RectF
import android.graphics.Region
import android.os.Build
import android.util.AttributeSet
import android.view.View

class OvalOverlayView(context: Context, attrs: AttributeSet?) : View(context, attrs) {

    private val paint = Paint()
    private val ovalRect = RectF()
    private val blurPaint = Paint()
    private val ovalPath = Path()

    init {
        // Stroke paint for the oval outline.
        paint.apply {
            color = Color.RED
            style = Paint.Style.STROKE
            strokeWidth = 15f
        }
        // Semi-transparent black paint that dims everything outside the oval.
        blurPaint.apply {
            isAntiAlias = true
            color = Color.BLACK
            style = Paint.Style.FILL
            alpha = 128
        }
    }

    fun getOvalRect() = ovalRect

    override fun onDraw(canvas: Canvas) {
        super.onDraw(canvas)
        val centerX = width / 2
        val centerY = height / 2.8
        val radiusX = width / 2.8
        val radiusY = height / 3.4
        ovalRect.set(
            (centerX - radiusX).toFloat(),
            (centerY - radiusY).toFloat(),
            (centerX + radiusX).toFloat(),
            (centerY + radiusY).toFloat()
        )
        // Clip out the oval so the inside stays transparent, then dim the rest of the view.
        ovalPath.reset()
        ovalPath.addOval(ovalRect, Path.Direction.CCW)
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
            canvas.clipOutPath(ovalPath)
        } else {
            @Suppress("DEPRECATION")
            canvas.clipPath(ovalPath, Region.Op.DIFFERENCE)
        }
        canvas.drawPaint(blurPaint)
    }
}
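Before the analysis step, the camera still has to be bound to the viewFinder from the layout above. Here is roughly how that can be done with CameraX; this is only a sketch, and faceAnalyzer is a placeholder for whatever ImageAnalysis.Analyzer you end up writing (MediaPipe or ML Kit based).
import androidx.appcompat.app.AppCompatActivity
import androidx.camera.core.CameraSelector
import androidx.camera.core.ImageAnalysis
import androidx.camera.core.Preview
import androidx.camera.lifecycle.ProcessCameraProvider
import androidx.camera.view.PreviewView
import androidx.core.content.ContextCompat

// Sketch only: bind Preview + ImageAnalysis to the lifecycle. `faceAnalyzer`
// is a placeholder for your own analyzer implementation.
fun startCamera(activity: AppCompatActivity, viewFinder: PreviewView, faceAnalyzer: ImageAnalysis.Analyzer) {
    val cameraProviderFuture = ProcessCameraProvider.getInstance(activity)
    cameraProviderFuture.addListener({
        val cameraProvider = cameraProviderFuture.get()

        val preview = Preview.Builder().build().also {
            it.setSurfaceProvider(viewFinder.surfaceProvider)
        }

        val analysis = ImageAnalysis.Builder()
            .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
            .build()
            .also { it.setAnalyzer(ContextCompat.getMainExecutor(activity), faceAnalyzer) }

        cameraProvider.unbindAll()
        cameraProvider.bindToLifecycle(
            activity, CameraSelector.DEFAULT_FRONT_CAMERA, preview, analysis
        )
    }, ContextCompat.getMainExecutor(activity))
}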
Next, you pass the received image to the image analyzer (here I'm using MediaPipe, but ML Kit would be similar). After obtaining the facial landmarks, you can select four of them (for example, the left/right cheeks, the forehead, and the chin). If all four landmarks lie within the oval (you can check with RectF.contains, as sketched below), you can conclude that the face is placed inside the oval.
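Here is a minimal sketch of that check, assuming the landmark points have already been converted into the overlay view's coordinate space (with MediaPipe the landmarks are normalized, so multiply them by the view's width and height first):
import android.graphics.PointF
import android.graphics.RectF

// Sketch: returns true when all selected landmark points (in OvalOverlayView
// coordinates) fall inside the oval's bounding RectF, as described above.
fun isFaceInsideOval(landmarks: List<PointF>, ovalRect: RectF): Boolean =
    landmarks.all { ovalRect.contains(it.x, it.y) }

// RectF.contains only tests the oval's bounding rectangle; for a stricter test,
// use the ellipse equation ((x - cx) / rx)^2 + ((y - cy) / ry)^2 <= 1.
fun isPointInsideOval(p: PointF, oval: RectF): Boolean {
    val dx = (p.x - oval.centerX()) / (oval.width() / 2f)
    val dy = (p.y - oval.centerY()) / (oval.height() / 2f)
    return dx * dx + dy * dy <= 1f
}
The ovalRect can be taken from the view's getOvalRect(), and the shutter button can then be enabled or disabled based on the result.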
I have also developed a similar application, and you can refer to it in this repository.
This is my first answer on the forum, and hopefully it won't be considered a low-quality answer.