
In the above image, we can see points drawn on the image by an OpenCV algorithm.
I want to draw a UIView at each of those points so that the user can crop the image.
I am not sure how to access those points so that I can add the UIViews.
I tried reading the cv::Point values, but they are different from (larger than) the view's coordinate width and height.
static cv::Mat drawSquares( cv::Mat& image, const std::vector<std::vector<cv::Point> >& squares )
{
    // Track the bounding box of all detected points;
    // start the minimums at INT_MAX (999 would fail on images wider than 999 px).
    int max_X = 0, max_Y = 0;
    int min_X = INT_MAX, min_Y = INT_MAX;
    for( size_t i = 0; i < squares.size(); i++ )
    {
        const cv::Point* p = &squares[i][0];
        int n = (int)squares[i].size();
        for( int j = 0; j < n; j++ )
        {
            max_X = std::max(max_X, squares[i][j].x);
            max_Y = std::max(max_Y, squares[i][j].y);
            min_X = std::min(min_X, squares[i][j].x);
            min_Y = std::min(min_Y, squares[i][j].y);
        }
        NSLog(@"Square with %d points, first point (%d, %d)", n, p->x, p->y);
        polylines(image, &p, &n, 1, true, cv::Scalar(0,255,0), 3, cv::LINE_AA);
    }
    return image;
}
In the above code, the drawSquares method draws the squares. I have logged the point x, y coordinates with NSLog, but these values are not in the device coordinate system.
Can someone help me achieve this, or suggest an alternative approach for my requirement?
Thanks
This is in Swift 3. In the Swift class that you're returning the cv::Points to, divide the x and y dimensions of the UIView you're using to visualize the image by the x and y dimensions of the image coming from your camera's AVCaptureSession, then multiply each point's x and y coordinates by those scale factors:
    let imageScaleX = imgView.bounds.width/(newCameraHelper?.dimensionX)!
    let imageScaleY = imgView.bounds.height/(newCameraHelper?.dimensionY)!
    for point in Squares {
        let x = point.x * imageScaleX
        let y = point.y * imageScaleY
        // (x, y) is now in the UIView's coordinate space;
        // place your crop-handle view at this position.
    }