 

iOS11 ARKit: Can ARKit also capture the Texture of the user's face?

I've read through the documentation on all of the ARKit classes, top to bottom, and I don't see anything that describes the ability to actually get the texture of the user's face.

ARFaceAnchor contains the ARFaceGeometry (topology and geometry made up of vertices) and the BlendShapeLocation coefficients (values that let you manipulate individual facial features by adjusting the geometry of the face's vertices).
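To make it concrete, a rough sketch of what a session delegate can read off the anchor (geometry and blend-shape values, but nothing texture-related):

```swift
import ARKit

// Rough sketch: everything ARFaceAnchor hands back is geometry and blend-shape
// coefficients; nothing here looks like the face's texture.
final class FaceDataReader: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let faceAnchor as ARFaceAnchor in anchors {
            let vertices = faceAnchor.geometry.vertices       // mesh topology / geometry
            let jawOpen = faceAnchor.blendShapes[.jawOpen]    // one facial-trait coefficient
            print("vertices: \(vertices.count), jawOpen: \(jawOpen ?? 0)")
        }
    }
}
```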

But where can I get the actual texture of the user's face? For example: the actual skin tone / color / texture, facial hair, and other unique traits such as scars or birthmarks? Or is this not possible at all?

asked Nov 10 '17 by FranticRock


People also ask

How are pictures texture-mapped to objects in ARKit?

ARKit generates environment textures by collecting camera imagery during the AR session. Because ARKit cannot see the scene in all directions, it uses machine learning to extrapolate a realistic environment from available imagery.
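Environment texturing is something you opt into on the world-tracking configuration; a minimal sketch (the surrounding ARSCNView/ARView setup is assumed):

```swift
import ARKit

// Minimal sketch: ask ARKit to generate environment textures from the camera
// imagery it collects during the session (iOS 12+).
func runWithEnvironmentTexturing(on session: ARSession) {
    let configuration = ARWorldTrackingConfiguration()
    configuration.environmentTexturing = .automatic
    session.run(configuration)
}
```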

What can you do with ARKit?

Users can move around and view virtual objects from any angle. ARKit also provides environmental understanding: the device can detect the size and location of different types of surfaces, including horizontal, vertical, and angled surfaces such as floors, coffee tables, and walls.
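Plane detection, for instance, is a configuration flag; detected surfaces are then delivered as ARPlaneAnchor objects (a minimal sketch):

```swift
import ARKit

// Minimal sketch: enable plane detection and log each detected surface.
final class PlaneWatcher: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        session.delegate = self
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.horizontal, .vertical]
        session.run(configuration)
    }

    // Each detected surface arrives as an ARPlaneAnchor with its alignment and size.
    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for case let plane as ARPlaneAnchor in anchors {
            print("Detected \(plane.alignment) plane, extent \(plane.extent)")
        }
    }
}
```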

What is ARKit tracking?

ARKit recognizes notable features in the scene image, tracks differences in the positions of those features across video frames, and compares that information with motion sensing data. The result is a high-precision model of the device's position and motion.
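That device pose is exposed on every frame through the session delegate; a minimal sketch of reading it:

```swift
import ARKit

// Minimal sketch: read the device pose ARKit derives from image features
// combined with motion-sensor data, delivered on every frame.
final class PoseReader: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        session.delegate = self
        session.run(ARWorldTrackingConfiguration())
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // camera.transform is the device's position and orientation in world space.
        let t = frame.camera.transform.columns.3
        print("Device position: (\(t.x), \(t.y), \(t.z)), tracking: \(frame.camera.trackingState)")
    }
}
```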

Can you face track on iPhone?

Face tracking supports devices with Apple Neural Engine in iOS 14 and iPadOS 14 and requires a device with a TrueDepth camera on iOS 13 and iPadOS 13 and earlier. To run the sample app, set the run destination to an actual device; the Simulator doesn't support augmented reality.
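Because of that hardware requirement, it's worth checking for support before running the configuration (a minimal sketch):

```swift
import ARKit

// Minimal sketch: face tracking needs TrueDepth / Neural Engine hardware,
// so check for support before starting the session.
func startFaceTracking(on session: ARSession) {
    guard ARFaceTrackingConfiguration.isSupported else {
        print("Face tracking is not supported on this device")
        return
    }
    let configuration = ARFaceTrackingConfiguration()
    configuration.isLightEstimationEnabled = true
    session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}
```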


1 Answer

I've put together a demo iOS app that shows how to accomplish this. The demo captures a face texture map in real time, applying it back to an ARSCNFaceGeometry to create a textured 3D model of the user's face.

Below you can see the real-time textured 3D face model in the top left, overlaid on the AR front-facing camera view:

[Screenshot: textured 3D face model overlaid on the front-facing camera view]
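Wiring the generated texture back onto the face geometry looks roughly like the following (a hedged sketch, not the demo's actual code; the MTLTexture is assumed to come from whatever generates the face texture each frame):

```swift
import ARKit
import SceneKit
import Metal

// Sketch: an SCNNode that displays the face mesh textured with a per-frame
// MTLTexture (produced elsewhere, e.g. by a Metal render pass).
final class TexturedFaceNode: SCNNode {
    private let faceGeometry: ARSCNFaceGeometry

    init?(device: MTLDevice) {
        guard let geometry = ARSCNFaceGeometry(device: device) else { return nil }
        faceGeometry = geometry
        super.init()
        self.geometry = faceGeometry
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    // Call from ARSCNViewDelegate's renderer(_:didUpdate:for:) for the face anchor.
    func update(with anchor: ARFaceAnchor, texture: MTLTexture) {
        faceGeometry.update(from: anchor.geometry)
        faceGeometry.firstMaterial?.diffuse.contents = texture
        faceGeometry.firstMaterial?.lightingModel = .constant
    }
}
```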

The demo works by rendering an ARSCNFaceGeometry, but instead of rendering it normally, it renders the geometry in texture space while still using the original vertex positions to determine where to sample from in the captured camera pixel data.
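To make the sampling idea concrete, here is a CPU-side illustration of the same math (the demo's shaders do this per-fragment on the GPU; viewportSize and orientation here are assumptions about the space you want to sample in):

```swift
import ARKit
import UIKit

// CPU-side illustration of the sampling step: for every raw vertex of the
// face mesh, find the 2D point in the captured camera view that should
// provide its color.
func cameraSamplePoints(for faceAnchor: ARFaceAnchor,
                        in frame: ARFrame,
                        viewportSize: CGSize,
                        orientation: UIInterfaceOrientation = .portrait) -> [CGPoint] {
    return faceAnchor.geometry.vertices.map { vertex in
        // Face vertices are in the anchor's local space; move them into world space.
        let world = faceAnchor.transform * simd_float4(vertex, 1)
        // Project the 3D point into 2D viewport coordinates.
        return frame.camera.projectPoint(simd_float3(world.x, world.y, world.z),
                                         orientation: orientation,
                                         viewportSize: viewportSize)
    }
}
```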

Here are links to the relevant parts of the implementation:

  • FaceTextureGenerator.swift — The main class for generating face textures. This sets up a Metal render pipeline to generate the texture.

  • faceTexture.metal — The vertex and fragment shaders used to generate the face texture. These operate in texture space.

Almost all of the work is done in a Metal render pass, so it runs easily in real time.

I've also put together some notes covering the limitations of the demo.


If you instead want a 2D image of the user's face, you can try doing the following:

  • Render the transformed ARSCNFaceGeometry into a 1-bit buffer to create an image mask: the pixels covered by the face model should be white, and everything else black.

  • Apply the mask to the captured frame image.

This should give you an image containing just the face (although you will likely need to crop the result).
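For the masking step, something along these lines should work (a sketch assuming you've already rendered the white-on-black face mask, e.g. with an SCNRenderer; CIBlendWithMask keeps the input image wherever the mask is white):

```swift
import CoreImage

// Sketch of the masking step: keep the captured frame's pixels only where
// the rendered face mask is white; everything else becomes black.
func maskedFaceImage(cameraImage: CIImage, faceMask: CIImage) -> CIImage? {
    guard let filter = CIFilter(name: "CIBlendWithMask") else { return nil }
    filter.setValue(cameraImage, forKey: kCIInputImageKey)
    filter.setValue(CIImage(color: .black).cropped(to: cameraImage.extent),
                    forKey: kCIInputBackgroundImageKey)
    filter.setValue(faceMask, forKey: kCIInputMaskImageKey)
    return filter.outputImage
}
```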

answered Sep 19 '22 by Matt Bierner