I get images from a camera for which it is not possible to take a chessboard picture and calculate the correction matrix using OpenCV. Up to now I have corrected the images using ImageMagick convert with the option '-distort Barrel "0.0 0.0 -0.035 1.1"', where I found the parameters by trial and error.
Now I want to do this inside OpenCV, but all I find on the web is the automatic correction using a chessboard image. Is there any way to apply a simple manual trial-and-error lens distortion correction, as I did with ImageMagick?
OK, I think I got it. In the matrices cam1 and cam2 the image centers were missing (see the documentation). I added them and changed the focal length to avoid too strong a change of image size. Here is the code:
import numpy as np
import cv2
src = cv2.imread("distortedImage.jpg")
width = src.shape[1]
height = src.shape[0]
distCoeff = np.zeros((4,1),np.float64)
# TODO: add your coefficients here!
k1 = -1.0e-5  # negative to remove barrel distortion
k2 = 0.0
p1 = 0.0
p2 = 0.0
distCoeff[0,0] = k1
distCoeff[1,0] = k2
distCoeff[2,0] = p1
distCoeff[3,0] = p2
# assume unit matrix for camera
cam = np.eye(3,dtype=np.float32)
cam[0,2] = width/2.0 # define center x
cam[1,2] = height/2.0 # define center y
cam[0,0] = 10. # define focal length x
cam[1,1] = 10. # define focal length y
# here the undistortion will be computed
dst = cv2.undistort(src,cam,distCoeff)
cv2.imshow('dst',dst)
cv2.waitKey(0)
cv2.destroyAllWindows()
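Since the coefficient still has to be found by trial and error, a small loop that previews a few candidate k1 values can speed up the search. This is just a sketch under the same assumptions as above; the value range is arbitrary and only meant as a starting point:
import numpy as np
import cv2

src = cv2.imread("distortedImage.jpg")
width = src.shape[1]
height = src.shape[0]

cam = np.eye(3, dtype=np.float32)
cam[0,2] = width/2.0   # define center x
cam[1,2] = height/2.0  # define center y
cam[0,0] = 10.         # define focal length x
cam[1,1] = 10.         # define focal length y

# preview a few candidate k1 values and pick the best one by eye
for k1 in (-5.0e-6, -1.0e-5, -2.0e-5, -5.0e-5):
    distCoeff = np.zeros((4,1), np.float64)
    distCoeff[0,0] = k1
    dst = cv2.undistort(src, cam, distCoeff)
    cv2.imshow('k1 = %g' % k1, dst)
cv2.waitKey(0)
cv2.destroyAllWindows()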
Thank you very much for your assistance.
Here is a method that will undistort an image if you have no chessboard pattern but you know the distortion coefficients.
Since I don't know which coefficients your barrel distortion parameters correspond to (maybe have a look at http://docs.opencv.org/doc/tutorials/calib3d/camera_calibration/camera_calibration.html and http://docs.opencv.org/modules/imgproc/doc/geometric_transformations.html#initundistortrectifymap), you will have to try it out, or maybe someone else can help here.
Another point is that I'm not sure whether OpenCV handles both float and double automatically. If that's not the case, there might be a bug in this code (I don't know whether double or single precision is assumed):
cv::Mat distCoeff;
distCoeff = cv::Mat::zeros(8,1,CV_64FC1);
// indices: k1, k2, p1, p2, k3, k4, k5, k6
// TODO: add your coefficients here!
double k1 = 0;
double k2 = 0;
double p1 = 0;
double p2 = 0;
double k3 = 0;
double k4 = 0;
double k5 = 0;
double k6 = 0;
distCoeff.at<double>(0,0) = k1;
distCoeff.at<double>(1,0) = k2;
distCoeff.at<double>(2,0) = p1;
distCoeff.at<double>(3,0) = p2;
distCoeff.at<double>(4,0) = k3;
distCoeff.at<double>(5,0) = k4;
distCoeff.at<double>(6,0) = k5;
distCoeff.at<double>(7,0) = k6;
// assume unit matrix for camera, so no movement
cv::Mat cam1,cam2;
cam1 = cv::Mat::eye(3,3,CV_32FC1);
cam2 = cv::Mat::eye(3,3,CV_32FC1);
//cam2.at<float>(0,2) = 100; // for testing a translation
// here the undistortion will be computed
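// ('input' is the distorted source image, a cv::Mat loaded elsewhere)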
cv::Mat map1, map2;
cv::initUndistortRectifyMap(cam1, distCoeff, cv::Mat(), cam2, input.size(), CV_32FC1, map1, map2);
cv::Mat distCorrected;
cv::remap(input, distCorrected, map1, map2, cv::INTER_LINEAR);
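If you prefer to stay in Python (as in the accepted answer), the same map-based approach looks roughly like this. It is only a sketch under the same assumptions, with the image center and focal length set as in the answer above:
import numpy as np
import cv2

src = cv2.imread("distortedImage.jpg")
height, width = src.shape[:2]

distCoeff = np.zeros((8,1), np.float64)   # k1, k2, p1, p2, k3, k4, k5, k6
distCoeff[0,0] = -1.0e-5                  # k1, negative to remove barrel distortion

cam1 = np.eye(3, dtype=np.float32)
cam1[0,2] = width/2.0
cam1[1,2] = height/2.0
cam1[0,0] = 10.
cam1[1,1] = 10.
cam2 = cam1.copy()   # same camera for input and output, so no shift or zoom

map1, map2 = cv2.initUndistortRectifyMap(cam1, distCoeff, None, cam2,
                                         (width, height), cv2.CV_32FC1)
corrected = cv2.remap(src, map1, map2, cv2.INTER_LINEAR)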
This is a complementary function to undistort; there may be faster or better ways to do it, but it works:
void distort(const cv::Mat& src, cv::Mat& dst, const cv::Mat& cameraMatrix, const cv::Mat& distCoeffs)
{
    // grid holding, for every destination pixel, its own (x,y) coordinates
    cv::Mat pixel_locations_src = cv::Mat(src.size(), CV_32FC2);
    for (int i = 0; i < src.size().height; i++) {
        for (int j = 0; j < src.size().width; j++) {
            pixel_locations_src.at<cv::Point2f>(i,j) = cv::Point2f(j,i);
        }
    }
    // undistortPoints returns normalized (fractional) coordinates
    cv::Mat fractional_locations_dst = cv::Mat(src.size(), CV_32FC2);
    cv::undistortPoints(pixel_locations_src, fractional_locations_dst, cameraMatrix, distCoeffs);

    cv::Mat pixel_locations_dst = cv::Mat(src.size(), CV_32FC2);
    const float fx = cameraMatrix.at<double>(0,0);
    const float fy = cameraMatrix.at<double>(1,1);
    const float cx = cameraMatrix.at<double>(0,2);
    const float cy = cameraMatrix.at<double>(1,2);

    // project the normalized coordinates back to pixel coordinates
    // (is there a faster way to do this?)
    for (int i = 0; i < fractional_locations_dst.size().height; i++) {
        for (int j = 0; j < fractional_locations_dst.size().width; j++) {
            const float x = fractional_locations_dst.at<cv::Point2f>(i,j).x*fx + cx;
            const float y = fractional_locations_dst.at<cv::Point2f>(i,j).y*fy + cy;
            pixel_locations_dst.at<cv::Point2f>(i,j) = cv::Point2f(x,y);
        }
    }
    cv::remap(src, dst, pixel_locations_dst, cv::Mat(), cv::INTER_LINEAR);
}
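For completeness, a rough Python equivalent of the same forward-distortion idea, in case you want it on the Python side as well. It is a sketch that builds the remap table with cv2.undistortPoints in the same way as the C++ code above:
import numpy as np
import cv2

def distort(src, cameraMatrix, distCoeffs):
    h, w = src.shape[:2]
    # one entry per destination pixel, holding that pixel's own coordinates
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    pts = np.stack((xs, ys), axis=-1).reshape(-1, 1, 2)
    # undistortPoints returns normalized (fractional) coordinates
    fractional = cv2.undistortPoints(pts, cameraMatrix, distCoeffs).reshape(h, w, 2)
    fx, fy = cameraMatrix[0,0], cameraMatrix[1,1]
    cx, cy = cameraMatrix[0,2], cameraMatrix[1,2]
    # project back to pixel coordinates to obtain the remap table
    map_x = (fractional[...,0] * fx + cx).astype(np.float32)
    map_y = (fractional[...,1] * fy + cy).astype(np.float32)
    return cv2.remap(src, map_x, map_y, cv2.INTER_LINEAR)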