I would like to know how to calculate the stride of a cv::Mat.
I have updated the code to the point where I need to calculate the stride, but I don't know what is going wrong with the projective transformation.
I get a cv::Mat, copy it into an unsigned int array, run the transformation on it, and then convert the result back to a cv::Mat to be shown.
// Headers needed by this snippet
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <memory>

cv::Mat3b srcIm = cv::imread("Lenna.png"); // load the source image
// image_t, vertex_t and perspective_transform come from the transformation code (not shown here)
image_t src, dst;
int n_bytes_for_each_row = srcIm.step; // bytes each row of the source Mat occupies
src.width = srcIm.rows;
src.height = srcIm.cols;
src.stride = n_bytes_for_each_row;
dst.width = 350;
dst.height = 350;
dst.stride = n_bytes_for_each_row;
dst.pixels = new unsigned int[350*350];
// Pack each BGR pixel into the low 24 bits of an unsigned int (B | G << 8 | R << 16)
std::unique_ptr<unsigned int[]> videoFrame(new unsigned int[srcIm.rows * srcIm.cols]);
std::transform(srcIm.begin(), srcIm.end(), videoFrame.get(),
               [](cv::Vec3b const& v) {
                   return v[0] | (v[1] << 8) | (v[2] << 16);
               });
// Source (u, v) and destination (x, y) corners handed to the transform
vertex_t vert[4];
vert[0].u = 0;
vert[0].v = 0;
vert[0].x = 0;
vert[0].y = 0;
vert[1].u = 50;
vert[1].v = 0;
vert[1].x = 350;
vert[1].y = 0;
//
vert[2].u = 150;
vert[2].v = 350;
vert[2].x = 350;
vert[2].y = 350;
//
vert[3].u = 0;
vert[3].v = 50;
vert[3].x = 0;
vert[3].y = 350;
src.pixels = videoFrame.get();
perspective_transform(&src, &dst, vert);
// Wrap the transformed buffer in a Mat (one 32-bit value per pixel) so it can be displayed
cv::Mat videoFrameMat(350, 350, CV_32S, dst.pixels);
double min, max;
cv::minMaxIdx(videoFrameMat, &min, &max);
cv::Mat adjMap;
cv::convertScaleAbs(videoFrameMat, adjMap, 255 / max);
cv::imshow("Out", adjMap);
cv::waitKey();
In OpenCV the main matrix class is called Mat and is contained in the OpenCV namespace cv. This matrix is not templated, but it can nevertheless contain different data types, which are indicated by a type number. Additionally, OpenCV provides a templated class called Mat_, which is derived from Mat.
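For illustration, here is a minimal sketch of the difference, using a small hand-made matrix (the variable names are arbitrary):

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat m(4, 4, CV_8UC3, cv::Scalar(10, 20, 30)); // untemplated Mat; the type number CV_8UC3 means 8-bit, 3 channels
    cv::Mat_<cv::Vec3b> typed = m;                    // templated Mat_, derived from Mat, viewing the same data
    cv::Vec3b px = typed(0, 0);                       // element access without spelling the type in at<>()
    (void)px;
    return 0;
}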
CV_32F defines the depth of each element of the matrix, while CV_32FC1 defines both the depth of each element and the number of channels.
That is, an image of type CV_64FC1 is a simple grayscale image with only 1 channel: image[i, j] = 0.5, while an image of type CV_64FC3 is a color image with 3 channels: image[i, j] = (0.5, 0.3, 0.7) (in C++ you can access individual pixels as image.at<double>(i, j)). CV_64F is the same as CV_64FC1.
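A short sketch of the distinction, using tiny hand-made matrices (the values match the example above):

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::Mat gray(2, 2, CV_64FC1, cv::Scalar(0.5));            // 1 channel of 64-bit floats
    cv::Mat color(2, 2, CV_64FC3, cv::Scalar(0.5, 0.3, 0.7)); // 3 channels of 64-bit floats

    std::cout << gray.at<double>(0, 0) << "\n";                 // prints 0.5
    cv::Vec3d px = color.at<cv::Vec3d>(0, 0);                   // one pixel holds three values
    std::cout << px[0] << " " << px[1] << " " << px[2] << "\n"; // prints 0.5 0.3 0.7
    std::cout << gray.channels() << " " << color.channels() << "\n"; // prints 1 3
    return 0;
}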
The Mat class of OpenCV library is used to store the values of an image. It represents an n-dimensional array and is used to store image data of grayscale or color images, voxel volumes, vector fields, point clouds, tensors, histograms, etc.
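As a sketch of that flexibility, the same class can hold several of these kinds of data (the shapes below are chosen arbitrarily):

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat gray(480, 640, CV_8UC1);                // grayscale image
    cv::Mat color(480, 640, CV_8UC3);               // color image
    cv::Mat hist(256, 1, CV_32F, cv::Scalar(0));    // a histogram stored as a 256x1 float matrix
    int dims[] = {64, 64, 64};
    cv::Mat volume(3, dims, CV_32F, cv::Scalar(0)); // a 3-dimensional voxel volume
    return 0;
}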
You can use step:
step – Number of bytes each matrix row occupies
int n_bytes_for_each_row = mat.step;
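A minimal, self-contained sketch of reading the stride this way, next to the related helpers step1() and elemSize() (the image size is arbitrary):

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::Mat3b img(480, 640);            // 480 rows, 640 columns, CV_8UC3
    size_t stride_bytes = img.step;     // bytes each row occupies, including any row padding
    size_t stride_elems = img.step1();  // the same stride counted in single-channel elements
    std::cout << stride_bytes << " bytes per row; cols * elemSize = "
              << img.cols * img.elemSize() << ", any difference is padding\n";
    std::cout << stride_elems << " elements per row\n";
    return 0;
}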