I'm attempting to scale down an image using the Python OpenCV bindings (CV2, the new bindings):
import cv2

cap = cv2.VideoCapture(0)
ret, frame = cap.read()
print(frame.shape)
# prints (720, 1280, 3)
smallsize = (146, 260)
smallframe = cv2.resize(frame, smallsize)
print(smallframe.shape)
# prints (260, 146, 3)
As you can see, the dimensions somehow end up flipped on the scaled-down image. Instead of getting an image with dimensions (W x H) of 146 x 260, I get 260 x 146.
What gives?
This was answered long ago but never accepted. Let me explain a little more for anyone else who gets confused by this. In Python, OpenCV images are NumPy arrays. NumPy array shapes, indexing, and functions use (height, width) order, while OpenCV functions and methods such as cv2.resize use (width, height). You just need to pay attention to which convention applies:
cv2.anything()
--> use (width, height)
image.anything()
--> use (height, width)
numpy.anything()
--> use (height, width)
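Here is a minimal sketch of that convention, using a plain NumPy zero array as a stand-in for a real captured frame (the specific sizes are just for illustration):

import numpy as np

# OpenCV images in Python are NumPy arrays with shape
# (height, width, channels) -- NumPy order.
frame = np.zeros((720, 1280, 3), dtype=np.uint8)

height, width = frame.shape[:2]
print(height, width)  # 720 1280

# cv2.resize, by contrast, takes its dsize argument in
# (width, height) order, so build the tuple the other way round:
target_h, target_w = 260, 146
dsize = (target_w, target_h)  # this is what you would pass to cv2.resize
print(dsize)  # (146, 260)

# cv2.resize(frame, dsize) would then return an array whose NumPy
# shape is (target_h, target_w, 3), i.e. (260, 146, 3) -- exactly
# the "flipped" result the question observed.

So the resize in the question did exactly what was asked: it produced a 146-wide, 260-tall image, whose NumPy shape is reported as (260, 146, 3).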