 

Calculate surface normals from depth image using neighboring pixels cross product

As the title says, I want to calculate the surface normals of a given depth image by using the cross product of neighboring pixels. I would like to use OpenCV for that and avoid using PCL. However, I do not really understand the procedure, since my knowledge of the subject is quite limited, so I would be grateful if someone could provide some hints. Note that I do not have any other information except the depth image and the corresponding RGB image, so no K camera matrix information.

Thus, let's say that we have the following depth image:

[image: depth image]

and I want to find the normal vector at a given point, with its corresponding depth value, as in the following image:

[image: depth image with the point and its normal vector marked]

How can I do that using the cross product of the neighbouring pixels? I do not mind if the normals are not highly accurate.

Thanks.


Update:

Ok, I was trying to follow @timday's answer and port his code to Opencv. With the following code:

Mat depth = <my_depth_image>; // of type CV_32FC1
Mat normals(depth.size(), CV_32FC3);

// Start at 1 and stop one short of the edge so the +/-1 accesses stay in bounds.
// Note: at<float>(row, col), so x iterates rows and y iterates columns here.
for (int x = 1; x < depth.rows - 1; ++x)
{
    for (int y = 1; y < depth.cols - 1; ++y)
    {
        float dzdx = (depth.at<float>(x + 1, y) - depth.at<float>(x - 1, y)) / 2.0f;
        float dzdy = (depth.at<float>(x, y + 1) - depth.at<float>(x, y - 1)) / 2.0f;

        Vec3f d(-dzdx, -dzdy, 1.0f);
        Vec3f n = normalize(d);

        normals.at<Vec3f>(x, y) = n;
    }
}

imshow("depth", depth / 255);
imshow("normals", normals);

I am getting the correct result shown below. (I had to replace double with float and Vec3d with Vec3f; since the depth image is of type CV_32FC1, the element type really is float, so the float-based access and vector types are the ones that match.)

[image: resulting normals image]

ttsesm asked Jan 06 '16 22:01


People also ask

How do you calculate surface normal?

A surface normal for a triangle can be calculated by taking the vector cross product of two edges of that triangle. The order of the vertices used in the calculation will affect the direction of the normal (in or out of the face w.r.t. winding).

What is surface normal of an image?

A "surface normal" is just a unit vector that is perpendicular to a surface at a specific spot. All surface normals of a planar polygon are parallel, while the surface normals of a sphere point off in all directions.


1 Answer

You don't really need to use the cross product for this, but see below.

Consider your range image as a function z(x,y).

The normal to the surface is in the direction (-dz/dx,-dz/dy,1) (where by dz/dx I mean the partial derivative: the rate of change of z with x). Normals are then conventionally normalized to unit length.

Incidentally, if you're wondering where that (-dz/dx,-dz/dy,1) comes from... if you take the two orthogonal tangent vectors in the planes parallel to the x and y axes, those are (1,0,dzdx) and (0,1,dzdy). The normal is perpendicular to the tangents, so it should be (1,0,dzdx)X(0,1,dzdy) - where 'X' is the cross product - which is (-dzdx,-dzdy,1). So there's your cross-product-derived normal, but there's little need to compute it so explicitly in code when you can just use the resulting expression for the normal directly.

Pseudocode to compute a unit-length normal at (x,y) would be something like

dzdx = (z(x+1, y) - z(x-1, y)) / 2.0;
dzdy = (z(x, y+1) - z(x, y-1)) / 2.0;
direction = (-dzdx, -dzdy, 1.0)
magnitude = sqrt(direction.x**2 + direction.y**2 + direction.z**2)
normal = direction / magnitude

Depending on what you're trying to do, it might make more sense to replace any NaN values (missing depth samples) with just some large number.

Using that approach, from your range image, I can get this:

[image: shaded normals computed from the range image]

(I'm then using the normal directions calculated to do some simple shading; note the "steppy" appearance due to the range image's quantization; ideally you'd have higher precision than 8-bit for the real range data).

Sorry, not OpenCV or C++ code, but just for completeness: the complete code which produced that image (GLSL embedded in a Qt QML file; can be run with Qt5's qmlscene) is below. The pseudocode above can be found in the fragment shader's main() function:

import QtQuick 2.2

Image {
  source: 'range.png'  // The provided image

  ShaderEffect {
    anchors.fill: parent
    blending: false

    property real dx: 1.0/parent.width
    property real dy: 1.0/parent.height
    property variant src: parent

    vertexShader: "
      uniform highp mat4 qt_Matrix;
      attribute highp vec4 qt_Vertex;
      attribute highp vec2 qt_MultiTexCoord0;
      varying highp vec2 coord;
      void main() {
        coord=qt_MultiTexCoord0;
        gl_Position=qt_Matrix*qt_Vertex;
      }"
    fragmentShader: "
      uniform highp float dx;
      uniform highp float dy;
      varying highp vec2 coord;
      uniform sampler2D src;
      void main() {
        highp float dzdx=( texture2D(src,coord+vec2(dx,0.0)).x - texture2D(src,coord+vec2(-dx,0.0)).x )/(2.0*dx);
        highp float dzdy=( texture2D(src,coord+vec2(0.0,dy)).x - texture2D(src,coord+vec2(0.0,-dy)).x )/(2.0*dy);
        highp vec3 d=vec3(-dzdx,-dzdy,1.0);
        highp vec3 n=normalize(d);
        highp vec3 lightDirection=vec3(1.0,-2.0,3.0);
        highp float shading=0.5+0.5*dot(n,normalize(lightDirection));
        gl_FragColor=vec4(shading,shading,shading,1.0);
      }"
  }
}
timday answered Sep 18 '22 00:09