
OpenCV and Numpy interacting badly

Can anyone explain why importing cv and numpy would change the behaviour of Python's struct.unpack? Here's what I observe:

Python 2.7.3 (default, Aug  1 2012, 05:14:39) 
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from struct import pack, unpack
>>> unpack("f",pack("I",31))[0]
4.344025239406933e-44

This is correct
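(For context: `pack("I", 31)` writes the raw bit pattern 0x0000001F, and reinterpreting those bits as an IEEE-754 single gives a *subnormal* value, since all the exponent bits are zero. A quick sketch of why the expected answer is 4.344025239406933e-44:)

```python
from struct import pack, unpack

# Reinterpreting the integer 31 as single-precision float bits:
# the exponent field is all zeros, so the value is subnormal,
# equal to 31 * 2**-149 (2**-149 being the smallest positive
# subnormal single).
bits_as_float = unpack("f", pack("I", 31))[0]
print(bits_as_float)                   # 4.344025239406933e-44
print(bits_as_float == 31 * 2**-149)  # True
```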

>>> import cv
libdc1394 error: Failed to initialize libdc1394
>>> unpack("f",pack("I",31))[0]
4.344025239406933e-44

Still ok, after importing cv

>>> import numpy
>>> unpack("f",pack("I",31))[0]
4.344025239406933e-44

And OK after importing cv and then numpy

Now I restart python:

Python 2.7.3 (default, Aug  1 2012, 05:14:39) 
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from struct import pack, unpack
>>> unpack("f",pack("I",31))[0]
4.344025239406933e-44
>>> import numpy
>>> unpack("f",pack("I",31))[0]
4.344025239406933e-44

So far so good, but now I import cv AFTER importing numpy:

>>> import cv
libdc1394 error: Failed to initialize libdc1394
>>> unpack("f",pack("I",31))[0]
0.0

I've repeated this a number of times, including on multiple servers, and it always goes the same way. Calling struct.unpack and struct.pack by their fully qualified names instead of the imported ones also makes no difference.

I can't understand how importing numpy and cv could have any impact at all on the output of struct.unpack (pack remains the same, btw).

The "libdc1394" thing is, I believe, a red-herring: ctypes error: libdc1394 error: Failed to initialize libdc1394

Any ideas?

tl;dr: importing numpy and then opencv changes the behaviour of struct.unpack.

UPDATE: Paulo's answer below shows that this is reproducible. Seborg's comment suggests it's something to do with the way Python handles subnormals, which sounds plausible. I looked into Contexts, but that didn't seem to be the problem, as the context was the same after the imports as it had been before them.
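(If the subnormal theory is right, one way to probe it, without struct at all, is to check whether ordinary arithmetic still produces subnormals, since flush-to-zero mode in the FPU would affect that too. This is just a diagnostic sketch, not a confirmed explanation of what cv does:)

```python
import sys

# sys.float_info.min is the smallest positive *normal* double;
# halving it should land in the subnormal range, not at 0.0,
# unless the FPU has been switched into flush-to-zero mode.
tiny = sys.float_info.min        # 2.2250738585072014e-308
print(tiny / 2)                  # 1.1125369292536007e-308 normally; 0.0 under flush-to-zero
print(tiny / 2 > 0)              # True unless subnormals are being flushed
```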

Asked Aug 28 '13 by Ben

1 Answer

This isn't an answer, but it's too big for a comment. I played with the values a bit to find the limits.

Without loading numpy and cv:

>>> unpack("f", pack("i", 8388608))
(1.1754943508222875e-38,)
>>> unpack("f", pack("i", 8388607))
(1.1754942106924411e-38,)

After loading numpy and cv, the first line is the same, but the second:

>>> unpack("f", pack("i", 8388607))
(0.0,)

You'll notice that the first result is the smallest positive normal 32-bit float; everything below it is subnormal. I then tried the same with the d (double) format.

Without loading the libraries:

>>> unpack("d", pack("xi", 1048576))
(2.2250738585072014e-308,)
>>> unpack("d", pack("xi", 1048575))
(2.2250717365114104e-308,)

And after loading the libraries:

>>> unpack("d",pack("xi", 1048575))
(0.0,)

Now the first result is the smallest positive normal 64-bit double.

It seems that, for some reason, loading the numpy and cv libraries in that order causes unpack to flush anything below the smallest normal 32-bit or 64-bit float to zero; that is, subnormal values come back as 0.0.
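(The cutoffs above line up exactly with the smallest positive normal values: 8388608 is 2²³, the lowest single-precision bit pattern with a nonzero exponent field, and 1048576 is 2²⁰ landing in the high word of the double. A sketch checking this, assuming a little-endian machine with native struct alignment, as in the transcripts:)

```python
from struct import pack, unpack
import sys

# 8388608 == 2**23: as single-precision bits this is the smallest
# positive normal float, 2**-126 = 1.1754943508222875e-38.
assert unpack("f", pack("i", 2**23))[0] == 2.0**-126

# 1048576 == 2**20: with native alignment "xi" pads to 8 bytes, so
# on a little-endian machine this puts 2**20 in the high 32 bits of
# a double, giving the smallest positive normal double, 2**-1022.
assert unpack("d", pack("xi", 2**20))[0] == 2.0**-1022 == sys.float_info.min

# Everything strictly below these thresholds is subnormal, which is
# exactly the range that comes back as 0.0 after numpy-then-cv.
print("thresholds confirmed")
```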

Answered Oct 06 '22 by Paulo Almeida