
What is the Android Camera2 API equivalent of Camera.Parameters.getHorizontalViewAngle() and Camera.Parameters.getVerticalViewAngle()?

It's all in the title, but in the now-deprecated Android Camera API, there were two methods: Camera.Parameters.getHorizontalViewAngle() and Camera.Parameters.getVerticalViewAngle().

Now, with the current Camera2 API, it seems there is no equivalent to these in the docs. I'm assuming that this is because FOV angles are more complicated and nuanced than a simple horizontal and vertical value, but I can't find any information online about how to calculate the total field of view for an Android device using the newer Camera2 API.

asked Oct 10 '16 by TonyTheJet


People also ask

What is the Camera2 API on Android?

Camera2 is the low-level Android camera package that replaces the deprecated Camera class. Camera2 provides in-depth controls for complex use cases, but requires you to manage device-specific configurations. For more information, see the Camera2 reference documentation.

What is the Camera API?

Google introduced the Camera2 API in Android 5.0 Lollipop as a successor to the original Camera API in order to better define how apps can interact with the individual cameras connected to your smartphone.

How do I know if my phone supports the Camera2 API?

Well, all you need to do is download a simple app called 'Camera2 API probe' from the Google Play Store and run it. The app gives detailed info about both the rear and front camera sensors of your Android phone. From that info, you can easily deduce whether your Android device supports the Camera2 API.
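
If you'd rather check in code than install an app, here is a minimal sketch (the class and method names are just illustrative) that reads INFO_SUPPORTED_HARDWARE_LEVEL from each camera's CameraCharacteristics; LEGACY means the device only emulates Camera2 on top of the old camera HAL:

import android.content.Context;
import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraCharacteristics;
import android.hardware.camera2.CameraManager;
import android.hardware.camera2.CameraMetadata;

public final class Camera2Support {
    // Returns true if any camera reports LIMITED or better Camera2 support.
    // LEGACY devices only emulate Camera2 on top of the old Camera HAL.
    public static boolean isCamera2Supported(Context context) throws CameraAccessException {
        CameraManager manager =
                (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
        for (String id : manager.getCameraIdList()) {
            CameraCharacteristics chars = manager.getCameraCharacteristics(id);
            Integer level = chars.get(CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL);
            if (level != null
                    && level != CameraMetadata.INFO_SUPPORTED_HARDWARE_LEVEL_LEGACY) {
                return true;
            }
        }
        return false;
    }
}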


1 Answer

The basic formula is

FOV.x = 2 * atan(SENSOR_INFO_PHYSICAL_SIZE.x / (2 * LENS_FOCAL_LENGTH))
FOV.y = 2 * atan(SENSOR_INFO_PHYSICAL_SIZE.y / (2 * LENS_FOCAL_LENGTH))

This is an approximation assuming an ideal lens, etc., but it is generally good enough.

This calculates the FOV for the entire sensor pixel array.
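
For reference, a rough Java sketch of that whole-sensor calculation might look like this. It reads SENSOR_INFO_PHYSICAL_SIZE and the first entry of LENS_INFO_AVAILABLE_FOCAL_LENGTHS from CameraCharacteristics; the class and method names are just illustrative, and a real app should null-check the returned values:

import android.hardware.camera2.CameraCharacteristics;
import android.util.SizeF;

public final class SensorFov {
    // Horizontal and vertical FOV, in radians, for the full sensor pixel array.
    public static double[] fullSensorFov(CameraCharacteristics chars) {
        SizeF physicalSize = chars.get(CameraCharacteristics.SENSOR_INFO_PHYSICAL_SIZE); // mm
        float[] focalLengths = chars.get(CameraCharacteristics.LENS_INFO_AVAILABLE_FOCAL_LENGTHS); // mm
        float focalLength = focalLengths[0]; // pick the focal length actually used by your request

        double fovX = 2 * Math.atan(physicalSize.getWidth() / (2 * focalLength));
        double fovY = 2 * Math.atan(physicalSize.getHeight() / (2 * focalLength));
        return new double[] { fovX, fovY };
    }
}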

However, the actual field of view of a given output will be smaller. First, the readout area of the sensor is often smaller than the full pixel array, so instead of using SENSOR_INFO_PHYSICAL_SIZE directly, you need to first scale it by the ratio of the active array pixel count to the full pixel array pixel count (SENSOR_INFO_ACTIVE_ARRAY_SIZE / SENSOR_INFO_PIXEL_ARRAY_SIZE).
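
As a sketch (again with illustrative names), that scaling step could look like this; SENSOR_INFO_ACTIVE_ARRAY_SIZE is a Rect in pixel-array coordinates and SENSOR_INFO_PIXEL_ARRAY_SIZE is a Size:

import android.graphics.Rect;
import android.hardware.camera2.CameraCharacteristics;
import android.util.Size;
import android.util.SizeF;

public final class ActiveArraySize {
    // Physical dimensions (mm) of just the active (readout) area of the sensor.
    public static SizeF activePhysicalSize(CameraCharacteristics chars) {
        SizeF physical = chars.get(CameraCharacteristics.SENSOR_INFO_PHYSICAL_SIZE);
        Rect active = chars.get(CameraCharacteristics.SENSOR_INFO_ACTIVE_ARRAY_SIZE);
        Size pixelArray = chars.get(CameraCharacteristics.SENSOR_INFO_PIXEL_ARRAY_SIZE);
        return new SizeF(
                physical.getWidth() * active.width() / pixelArray.getWidth(),
                physical.getHeight() * active.height() / pixelArray.getHeight());
    }
}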

Then, the field of view depends on the aspect ratio of the output(s) you've configured (a 16:9 output will have a different FOV than a 4:3 output), relative to the aspect ratio of the active array, and on the aspect ratio of the crop region (digital zoom) if it's smaller than the full active array.

Each output buffer is produced by further cropping the capture request's cropRegion just enough to match that output's aspect ratio (http://source.android.com/devices/camera/camera3_crop_reprocess.html has diagrams).

So let's say we have a sensor with a pixel array of (120,120) and an active array rectangle of (10,10)-(110,110), i.e. a width/height of 100x100.

We configure two outputs: output A is (40,30) and output B is (50,50). Let's leave the crop region at the maximum, (0,0)-(100,100).

The horizontal FOV for output A and B will be the same, because the maximum-area crop will result in both outputs using the full active array width:

output_physical_width = SENSOR_INFO_PHYSICAL_SIZE.x * ACTIVE_ARRAY.w / PIXEL_ARRAY.w
FOV_x = 2 * atan(output_physical_width / (2 * LENS_FOCAL_LENGTH))

However, the vertical FOVs will differ; output A uses only 3/4 of the vertical space due to the aspect ratio mismatch:

active_array_aspect = ACTIVE_ARRAY.w / ACTIVE_ARRAY.h
output_a_aspect = output_a.w / output_a.h
output_b_aspect = output_b.w / output_b.h
output_a_physical_height = SENSOR_INFO_PHYSICAL_SIZE.y * ACTIVE_ARRAY.h / PIXEL_ARRAY.h * active_array_aspect / output_a_aspect
output_b_physical_height = SENSOR_INFO_PHYSICAL_SIZE.y * ACTIVE_ARRAY.h / PIXEL_ARRAY.h * active_array_aspect / output_b_aspect
FOV_a_y = 2 * atan(output_a_physical_height / (2 * LENS_FOCAL_LENGTH))
FOV_b_y = 2 * atan(output_b_physical_height / (2 * LENS_FOCAL_LENGTH))

The above works when the output aspect ratio is >= the active array aspect ratio (letterboxing); if that's not true, then the output's vertical dimension covers the whole active array and its horizontal dimension is reduced instead (pillarboxing). The scale factor for the horizontal direction is then output_aspect / active_array_aspect.

If you want to calculate the FOV for a zoomed-in view, then substitute the crop region dimensions/aspect ratio for the active array dimensions/aspect ratio.
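
Putting all of this together, here is a rough Java sketch of the per-output calculation. The class and parameter names are illustrative, not part of the Camera2 API; cropRegion is the SCALER_CROP_REGION you set on the request (or the full active array rectangle when you aren't zooming), and outputAspect is the width/height ratio of the configured output buffer:

import android.graphics.Rect;
import android.hardware.camera2.CameraCharacteristics;
import android.util.Size;
import android.util.SizeF;

public final class OutputFov {
    /**
     * Horizontal and vertical FOV (radians) for one output stream.
     *
     * @param cropRegion   SCALER_CROP_REGION of the request (pass the full active
     *                     array rect when not zooming)
     * @param outputAspect width / height of the configured output buffer
     * @param focalLength  LENS_FOCAL_LENGTH of the request, in mm
     */
    public static double[] fovForOutput(CameraCharacteristics chars,
                                        Rect cropRegion,
                                        float outputAspect,
                                        float focalLength) {
        SizeF physical = chars.get(CameraCharacteristics.SENSOR_INFO_PHYSICAL_SIZE); // mm
        Size pixelArray = chars.get(CameraCharacteristics.SENSOR_INFO_PIXEL_ARRAY_SIZE);

        // Physical size (mm) of the crop region (== active array when not zoomed).
        float cropPhysicalWidth = physical.getWidth() * cropRegion.width() / pixelArray.getWidth();
        float cropPhysicalHeight = physical.getHeight() * cropRegion.height() / pixelArray.getHeight();

        float cropAspect = (float) cropRegion.width() / cropRegion.height();
        float usedWidth = cropPhysicalWidth;
        float usedHeight = cropPhysicalHeight;
        if (outputAspect >= cropAspect) {
            // Letterboxing: full width is used, height is reduced.
            usedHeight *= cropAspect / outputAspect;
        } else {
            // Pillarboxing: full height is used, width is reduced.
            usedWidth *= outputAspect / cropAspect;
        }

        double fovX = 2 * Math.atan(usedWidth / (2 * focalLength));
        double fovY = 2 * Math.atan(usedHeight / (2 * focalLength));
        return new double[] { fovX, fovY };
    }
}

For the worked example above (a 100x100 active array and a 4:3 output A), this applies exactly the 3/4 vertical scale factor discussed earlier.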

answered Oct 06 '22 by Eddy Talvala