 

How to include & run both Image Tracking Configuration & World Tracking Configuration in AR app

I am experimenting with ARKit and have watched some tutorials online. I know how to do image tracking and world tracking separately, but I don't know how to combine them in the same app: when the camera tracks an image, the device should automatically run the image tracking configuration, but when the image is off camera, it should run the world tracking configuration.

I know I could use world tracking to track images as well, but the image tracking configuration seems to have a more stable tracking result.

Can anyone help?

Tinloy asked Jun 21 '18


1 Answer

You don't need to switch configurations. (Switching is possible, but you probably shouldn't.)

You can use image tracking within a world tracking configuration in ARKit 2 (iOS 12):

import ARKit

let configuration = ARWorldTrackingConfiguration()
// Load your reference images, e.g. from an asset catalog group.
// (The group name "AR Resources" is the Xcode default; use your own.)
configuration.detectionImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources", bundle: nil)
configuration.maximumNumberOfTrackedImages = 1 // or up to 4
// You can also set other options here, like plane detection and environment texturing.
session.run(configuration) // session is your ARSession (e.g. sceneView.session)

Setting maximumNumberOfTrackedImages is what changes your configuration from image detection (tells you only when images first appear) to image tracking (tells you when images move, at 60 fps).
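Once a detection image is found, ARKit adds an ARImageAnchor for it and (with tracking enabled) keeps updating it. Here's a minimal sketch of handling that in an ARSCNViewDelegate, assuming a SceneKit setup; the semi-transparent overlay plane is just an illustration:

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }
    // node is already positioned on the detected image; attach your content to it.
    let plane = SCNPlane(width: imageAnchor.referenceImage.physicalSize.width,
                         height: imageAnchor.referenceImage.physicalSize.height)
    plane.firstMaterial?.diffuse.contents = UIColor.cyan.withAlphaComponent(0.5)
    let planeNode = SCNNode(geometry: plane)
    planeNode.eulerAngles.x = -.pi / 2 // SCNPlane stands upright by default; lay it flat on the image
    node.addChildNode(planeNode)
}

func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    // With maximumNumberOfTrackedImages > 0, this is called as the tracked image moves.
}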

ARKit 2 provides two ways to do image tracking, so you can pick the one that suits your needs:

ARWorldTrackingConfiguration with image tracking (shown above)

All the other features of world tracking (plane detection, environment texturing, hit testing, object detection, world map saving/restoring), with image tracking.

  • Pro: You get all those other features, which is useful if your AR experience has things going on besides image-triggered content.

  • Pro: You can associate AR content with tracked images and have it interact with the world in other ways (like continuing to exist on a table even after the triggering image moves off camera).

  • Con: World tracking needs a stable, feature-rich environment, so it doesn't work well in moving reference frames like when the user is on a bus (motion sensor confusion), with plain or poorly lit environments (nothing to visually track), or with moving backgrounds like a thick crowd or ocean waves (visual tracking confusion).

  • Con: World tracking has CPU/power/thermal costs.

  • Con: Detection images need a known physical size so that image-anchored content is placed correctly with respect to the world tracking frame of reference (see the sketch below).
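The physical size normally comes from the asset catalog, but you can also supply it when building a reference image in code. A small sketch; the cgImage source, the 0.15 m width, and the "poster" name are assumptions for illustration:

// Sketch: supplying physical size when creating a reference image programmatically.
// cgImage is assumed to come from wherever you load your marker artwork.
let marker = ARReferenceImage(cgImage, orientation: .up, physicalWidth: 0.15) // meters
marker.name = "poster" // hypothetical name, handy for identifying the anchor later
configuration.detectionImages = [marker]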

ARImageTrackingConfiguration

Just image tracking: when an image is on camera, you get a live-updated anchor for it (see the sketch after this list).

  • Pro: None of the world tracking limitations apply; you can detect images while on a bus, next to the ocean, or in a poorly lit environment (so long as the image itself is clearly visible), etc.
  • Pro: Reduced CPU/power/thermal cost.
  • Pro(?): Image physical size isn't important. Placing virtual content relative to an image anchor transform sizes that content proportional to the image. If your image is printed on a postcard you can have a tiny virtual dragon come out of it; if the same image is painted on the side of a building, your dragon is big enough to terrorize the city.
  • Con: When your image is on camera, you get an anchor for placing content relative to it... but when it's off camera, you have no frame of reference for placing content in the world. (Well, you can place content relative to the camera, but content glued to camera space isn't very "AR-y".) Virtual content placed relative to a detected image can't "escape" beyond image-based space to interact with the real world.
  • Con: You get none of the other features of a world tracking session (plane detection, environment texturing, hit testing, object detection, world map saving/restoring).
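A minimal sketch of the image-only setup, assuming the same "AR Resources" asset catalog group as above:

let imageConfiguration = ARImageTrackingConfiguration()
// Note the property is trackingImages here, not detectionImages.
imageConfiguration.trackingImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources", bundle: nil) ?? []
imageConfiguration.maximumNumberOfTrackedImages = 1 // or more
session.run(imageConfiguration)
// You still get ARImageAnchor callbacks as above, but only while an image is on camera.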

Which to choose?

It's all about the kind of experience you'd like to create. If you want to make a physical greeting card or poster or billboard appear to "come alive" with virtual content, and you don't care about that content otherwise interacting with the real world, use ARImageTrackingConfiguration. If you're making an AR experience that interacts with the world in other ways, and you want to add image-based content to it, use ARWorldTrackingConfiguration+detectionImages+maximumNumberOfTrackedImages.

What about switching between them?

Probably a bad idea.

Every time you call run(configuration) with a different configuration type than what the session was previously running, all of its tracking state is reset — it doesn't know where any of its anchors from before the switch are.
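If you do switch anyway, it looks roughly like this (just a sketch; referenceImages is assumed to have been loaded earlier):

// Switching configuration types mid-session (generally not recommended, per above).
let imageOnly = ARImageTrackingConfiguration()
imageOnly.trackingImages = referenceImages // assumed to have been loaded earlier
// Optionally also clear out old anchors explicitly:
session.run(imageOnly, options: [.removeExistingAnchors])
// Either way, the world tracking state from the previous configuration is lost,
// so anchors placed before the switch no longer have a reliable position.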

rickster answered Sep 20 '22