I'm trying to write a python wrapper for some C++ code that makes use of OpenCV, but I'm having difficulties returning the result, which is an OpenCV C++ Mat object, to the python interpreter.
I've looked at OpenCV's source and found the file cv2.cpp, which has conversion functions to convert between PyObject* and OpenCV's Mat. I used those conversion functions but got a segmentation fault when I tried them.
I basically need some suggestions/sample code/online references on how to interface python and C++ code that makes use of OpenCV, specifically with the ability to return OpenCV's C++ Mat to the python interpreter, or perhaps suggestions on how/where to start investigating the cause of the segmentation fault.
Currently I'm using Boost Python to wrap the code.
Thanks in advance for any replies.
The relevant code:
// This is the function that is giving the segmentation fault.
PyObject* ABC::doSomething(PyObject* image)
{
    Mat m;
    pyopencv_to(image, m); // This line gives the segmentation fault.

    // Some code to create cppObj from the C++ library that uses OpenCV
    cv::Mat processedImage = cppObj->align(m);

    return pyopencv_from(processedImage);
}
The conversion functions taken from OpenCV's source follow. The conversion code gives a segmentation fault at the line commented with "if (!PyArray_Check(o)) ...".
static int pyopencv_to(const PyObject* o, Mat& m, const char* name = "<unknown>", bool allowND=true)
{
    if(!o || o == Py_None)
    {
        if( !m.data )
            m.allocator = &g_numpyAllocator;
        return true;
    }

    if( !PyArray_Check(o) ) // Segmentation fault inside PyArray_Check(o)
    {
        failmsg("%s is not a numpy array", name);
        return false;
    }

    int typenum = PyArray_TYPE(o);
    int type = typenum == NPY_UBYTE ? CV_8U : typenum == NPY_BYTE ? CV_8S :
               typenum == NPY_USHORT ? CV_16U : typenum == NPY_SHORT ? CV_16S :
               typenum == NPY_INT || typenum == NPY_LONG ? CV_32S :
               typenum == NPY_FLOAT ? CV_32F :
               typenum == NPY_DOUBLE ? CV_64F : -1;

    if( type < 0 )
    {
        failmsg("%s data type = %d is not supported", name, typenum);
        return false;
    }

    int ndims = PyArray_NDIM(o);
    if(ndims >= CV_MAX_DIM)
    {
        failmsg("%s dimensionality (=%d) is too high", name, ndims);
        return false;
    }

    int size[CV_MAX_DIM+1];
    size_t step[CV_MAX_DIM+1], elemsize = CV_ELEM_SIZE1(type);
    const npy_intp* _sizes = PyArray_DIMS(o);
    const npy_intp* _strides = PyArray_STRIDES(o);
    bool transposed = false;

    for(int i = 0; i < ndims; i++)
    {
        size[i] = (int)_sizes[i];
        step[i] = (size_t)_strides[i];
    }

    if( ndims == 0 || step[ndims-1] > elemsize )
    {
        size[ndims] = 1;
        step[ndims] = elemsize;
        ndims++;
    }

    if( ndims >= 2 && step[0] < step[1] )
    {
        std::swap(size[0], size[1]);
        std::swap(step[0], step[1]);
        transposed = true;
    }

    if( ndims == 3 && size[2] <= CV_CN_MAX && step[1] == elemsize*size[2] )
    {
        ndims--;
        type |= CV_MAKETYPE(0, size[2]);
    }

    if( ndims > 2 && !allowND )
    {
        failmsg("%s has more than 2 dimensions", name);
        return false;
    }

    m = Mat(ndims, size, type, PyArray_DATA(o), step);

    if( m.data )
    {
        m.refcount = refcountFromPyObject(o);
        m.addref(); // protect the original numpy array from deallocation
                    // (since Mat destructor will decrement the reference counter)
    };
    m.allocator = &g_numpyAllocator;

    if( transposed )
    {
        Mat tmp;
        tmp.allocator = &g_numpyAllocator;
        transpose(m, tmp);
        m = tmp;
    }
    return true;
}

static PyObject* pyopencv_from(const Mat& m)
{
    if( !m.data )
        Py_RETURN_NONE;
    Mat temp, *p = (Mat*)&m;
    if(!p->refcount || p->allocator != &g_numpyAllocator)
    {
        temp.allocator = &g_numpyAllocator;
        m.copyTo(temp);
        p = &temp;
    }
    p->addref();
    return pyObjectFromRefcount(p->refcount);
}
My python test program:
import pysomemodule # My python wrapped library.
import cv2

def main():
    myobj = pysomemodule.ABC("faces.train") # Create python object. This works.
    image = cv2.imread('61.jpg')
    processedImage = myobj.doSomething(image)
    cv2.imshow("test", processedImage)
    cv2.waitKey()

if __name__ == "__main__":
    main()
"Bindings" are implemented either as a pure Python library using ctypes or as a dynamic-link library using Python/C API. The second option is sometimes used with tools like SWIG which make the task easier by taking care of generating the "boiler-plate" code or Boost.
In general, already-written C code will require no modifications to be used by Python. The only work we need to do to integrate C code in Python is on Python's side. The steps for interfacing Python with C using Ctypes.
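For illustration only, here is a minimal sketch of the Python/C API option, assuming Python 2.x (as used in the Makefile further below); the module name "hello" and the function are made up for the example:

// hello.cpp -- hypothetical minimal Python 2 extension module
#include <Python.h>

static PyObject* say_hello(PyObject* self, PyObject* args)
{
    const char* name;
    if (!PyArg_ParseTuple(args, "s", &name))   // convert the python argument to a C string
        return NULL;
    return PyString_FromFormat("Hello, %s!", name);  // build the python return value
}

static PyMethodDef HelloMethods[] = {
    {"say_hello", say_hello, METH_VARARGS, "Greet somebody."},
    {NULL, NULL, 0, NULL}                      // sentinel
};

PyMODINIT_FUNC inithello(void)
{
    Py_InitModule("hello", HelloMethods);
}

Tools such as SWIG and Boost.Python generate this kind of argument-parsing and module-registration code for you.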
Python-OpenCV is just a wrapper around the original C/C++ code. It is normally used to combine the best features of both languages: the performance of C/C++ and the simplicity of Python. So when you call an OpenCV function from Python, what actually runs is the underlying C/C++ code.
I solved the problem, so I thought I'd share it here with others who may run into the same issue.
Basically, to get rid of the segmentation fault, I needed to call numpy's import_array() function.
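As a minimal sketch of the fix (assuming Python 2.x and the NumPy 1.x C API, where import_array() is a macro that simply returns on failure): call import_array() once, before any other NumPy C-API call, for example in the constructor of the wrapped class, exactly as in the full abc.cpp listing further below:

#include <Python.h>
#include "numpy/ndarrayobject.h"
#include "abc.hpp"

ABC::ABC(const std::string& someConfigFile)
{
    // Initializes NumPy's C API. Without this call the internal function-pointer
    // table used by PyArray_Check() and friends is not set up, which is what
    // caused the segmentation fault.
    import_array();

    // ... rest of the initialization ...
}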
The "high level" view for running C++ code from python is this:
Suppose you have a function foo(arg) in python that is a binding for some C++ function. When you call foo(myObj), there must be some code that converts the python object "myObj" to a form your C++ code can act on. This code is generally semi-automatically created using tools such as SWIG or Boost::Python. (I use Boost::Python in the examples below.)
Now, foo(arg) is a python binding for some C++ function. That C++ function receives a generic PyObject pointer as its argument, so you need C++ code to convert this PyObject pointer to an "equivalent" C++ object. In my case, my python code passes a numpy array holding an OpenCV image as the argument to the function. The "equivalent" form in C++ is an OpenCV C++ Mat object. OpenCV provides some code in cv2.cpp (reproduced below) to convert the PyObject pointer (representing the numpy array) to a C++ Mat. Simpler data types such as int and string do not require the user to write these conversion functions, because they are converted automatically by Boost::Python (see the small sketch right after this explanation).
After the PyObject pointer has been converted to a suitable C++ form, the C++ code can act on it. When data has to be returned from C++ to python, an analogous situation arises: C++ code is needed to convert the C++ representation of the data to some form of PyObject. Boost::Python takes care of the rest, converting that PyObject to a corresponding python object. When foo(arg) returns the result in python, it is therefore in a form usable by python. That's it.
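For example, here is a small illustrative sketch (not part of the library above; the module and function names are made up) showing that for plain types such as int and std::string, Boost::Python supplies the PyObject conversions itself, so no hand-written conversion code is needed:

// simple_example.cpp -- hypothetical module relying on Boost::Python's built-in conversions
#include <boost/python.hpp>
#include <string>

// Plain C++ function: takes std::string and int, returns std::string.
std::string greet(const std::string& name, int times)
{
    std::string out;
    for (int i = 0; i < times; ++i)
        out += "hello " + name + "! ";
    return out;
}

BOOST_PYTHON_MODULE(simple_example)
{
    // int and std::string are converted to/from python objects by Boost::Python itself.
    boost::python::def("greet", greet);
}

From python this would be called as simple_example.greet("world", 3). It is only for types Boost::Python does not know about, such as cv::Mat, that conversion code like pyopencv_to/pyopencv_from below is required.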
The code below shows how to wrap a C++ class "ABC" and expose its method "doSomething", which takes a numpy array (for an image) from python, converts it to OpenCV's C++ Mat, does some processing, converts the result to a PyObject*, and returns it to the python interpreter. You can expose as many functions/methods as you wish (see the comments in the code below).
abc.hpp:
#ifndef ABC_HPP
#define ABC_HPP

#include <Python.h>
#include <string>

class ABC
{
    // Other declarations
public:
    ABC();
    ABC(const std::string& someConfigFile);
    virtual ~ABC();

    // We want our python code to be able to call this function to do some
    // processing using OpenCV and return the result.
    PyObject* doSomething(PyObject* image);

    // Other declarations
};

#endif
abc.cpp:
#include "abc.hpp"
#include "my_cpp_library.h" // This is what we want to make available in python. It uses OpenCV to perform some processing.

#include "numpy/ndarrayobject.h"
#include "opencv2/core/core.hpp"

// The following conversion functions are taken from OpenCV's cv2.cpp file inside the modules/python/src2 folder.
static PyObject* opencv_error = 0;

static int failmsg(const char *fmt, ...)
{
    char str[1000];

    va_list ap;
    va_start(ap, fmt);
    vsnprintf(str, sizeof(str), fmt, ap);
    va_end(ap);

    PyErr_SetString(PyExc_TypeError, str);
    return 0;
}

class PyAllowThreads
{
public:
    PyAllowThreads() : _state(PyEval_SaveThread()) {}
    ~PyAllowThreads()
    {
        PyEval_RestoreThread(_state);
    }
private:
    PyThreadState* _state;
};

class PyEnsureGIL
{
public:
    PyEnsureGIL() : _state(PyGILState_Ensure()) {}
    ~PyEnsureGIL()
    {
        PyGILState_Release(_state);
    }
private:
    PyGILState_STATE _state;
};

#define ERRWRAP2(expr) \
try \
{ \
    PyAllowThreads allowThreads; \
    expr; \
} \
catch (const cv::Exception &e) \
{ \
    PyErr_SetString(opencv_error, e.what()); \
    return 0; \
}

using namespace cv;

static PyObject* failmsgp(const char *fmt, ...)
{
    char str[1000];

    va_list ap;
    va_start(ap, fmt);
    vsnprintf(str, sizeof(str), fmt, ap);
    va_end(ap);

    PyErr_SetString(PyExc_TypeError, str);
    return 0;
}

static size_t REFCOUNT_OFFSET = (size_t)&(((PyObject*)0)->ob_refcnt) +
    (0x12345678 != *(const size_t*)"\x78\x56\x34\x12\0\0\0\0\0")*sizeof(int);

static inline PyObject* pyObjectFromRefcount(const int* refcount)
{
    return (PyObject*)((size_t)refcount - REFCOUNT_OFFSET);
}

static inline int* refcountFromPyObject(const PyObject* obj)
{
    return (int*)((size_t)obj + REFCOUNT_OFFSET);
}

class NumpyAllocator : public MatAllocator
{
public:
    NumpyAllocator() {}
    ~NumpyAllocator() {}

    void allocate(int dims, const int* sizes, int type, int*& refcount,
                  uchar*& datastart, uchar*& data, size_t* step)
    {
        PyEnsureGIL gil;

        int depth = CV_MAT_DEPTH(type);
        int cn = CV_MAT_CN(type);
        const int f = (int)(sizeof(size_t)/8);
        int typenum = depth == CV_8U ? NPY_UBYTE : depth == CV_8S ? NPY_BYTE :
                      depth == CV_16U ? NPY_USHORT : depth == CV_16S ? NPY_SHORT :
                      depth == CV_32S ? NPY_INT : depth == CV_32F ? NPY_FLOAT :
                      depth == CV_64F ? NPY_DOUBLE : f*NPY_ULONGLONG + (f^1)*NPY_UINT;
        int i;
        npy_intp _sizes[CV_MAX_DIM+1];
        for( i = 0; i < dims; i++ )
        {
            _sizes[i] = sizes[i];
        }

        if( cn > 1 )
        {
            /*if( _sizes[dims-1] == 1 )
                _sizes[dims-1] = cn;
            else*/
                _sizes[dims++] = cn;
        }

        PyObject* o = PyArray_SimpleNew(dims, _sizes, typenum);

        if(!o)
        {
            CV_Error_(CV_StsError, ("The numpy array of typenum=%d, ndims=%d can not be created", typenum, dims));
        }
        refcount = refcountFromPyObject(o);

        npy_intp* _strides = PyArray_STRIDES(o);
        for( i = 0; i < dims - (cn > 1); i++ )
            step[i] = (size_t)_strides[i];
        datastart = data = (uchar*)PyArray_DATA(o);
    }

    void deallocate(int* refcount, uchar*, uchar*)
    {
        PyEnsureGIL gil;
        if( !refcount )
            return;
        PyObject* o = pyObjectFromRefcount(refcount);
        Py_INCREF(o);
        Py_DECREF(o);
    }
};

NumpyAllocator g_numpyAllocator;

enum { ARG_NONE = 0, ARG_MAT = 1, ARG_SCALAR = 2 };

static int pyopencv_to(const PyObject* o, Mat& m, const char* name = "<unknown>", bool allowND=true)
{
    //NumpyAllocator g_numpyAllocator;
    if(!o || o == Py_None)
    {
        if( !m.data )
            m.allocator = &g_numpyAllocator;
        return true;
    }

    if( !PyArray_Check(o) )
    {
        failmsg("%s is not a numpy array", name);
        return false;
    }

    int typenum = PyArray_TYPE(o);
    int type = typenum == NPY_UBYTE ? CV_8U : typenum == NPY_BYTE ? CV_8S :
               typenum == NPY_USHORT ? CV_16U : typenum == NPY_SHORT ? CV_16S :
               typenum == NPY_INT || typenum == NPY_LONG ? CV_32S :
               typenum == NPY_FLOAT ? CV_32F :
               typenum == NPY_DOUBLE ? CV_64F : -1;

    if( type < 0 )
    {
        failmsg("%s data type = %d is not supported", name, typenum);
        return false;
    }

    int ndims = PyArray_NDIM(o);
    if(ndims >= CV_MAX_DIM)
    {
        failmsg("%s dimensionality (=%d) is too high", name, ndims);
        return false;
    }

    int size[CV_MAX_DIM+1];
    size_t step[CV_MAX_DIM+1], elemsize = CV_ELEM_SIZE1(type);
    const npy_intp* _sizes = PyArray_DIMS(o);
    const npy_intp* _strides = PyArray_STRIDES(o);
    bool transposed = false;

    for(int i = 0; i < ndims; i++)
    {
        size[i] = (int)_sizes[i];
        step[i] = (size_t)_strides[i];
    }

    if( ndims == 0 || step[ndims-1] > elemsize )
    {
        size[ndims] = 1;
        step[ndims] = elemsize;
        ndims++;
    }

    if( ndims >= 2 && step[0] < step[1] )
    {
        std::swap(size[0], size[1]);
        std::swap(step[0], step[1]);
        transposed = true;
    }

    if( ndims == 3 && size[2] <= CV_CN_MAX && step[1] == elemsize*size[2] )
    {
        ndims--;
        type |= CV_MAKETYPE(0, size[2]);
    }

    if( ndims > 2 && !allowND )
    {
        failmsg("%s has more than 2 dimensions", name);
        return false;
    }

    m = Mat(ndims, size, type, PyArray_DATA(o), step);

    if( m.data )
    {
        m.refcount = refcountFromPyObject(o);
        m.addref(); // protect the original numpy array from deallocation
                    // (since Mat destructor will decrement the reference counter)
    };
    m.allocator = &g_numpyAllocator;

    if( transposed )
    {
        Mat tmp;
        tmp.allocator = &g_numpyAllocator;
        transpose(m, tmp);
        m = tmp;
    }
    return true;
}

static PyObject* pyopencv_from(const Mat& m)
{
    if( !m.data )
        Py_RETURN_NONE;
    Mat temp, *p = (Mat*)&m;
    if(!p->refcount || p->allocator != &g_numpyAllocator)
    {
        temp.allocator = &g_numpyAllocator;
        m.copyTo(temp);
        p = &temp;
    }
    p->addref();
    return pyObjectFromRefcount(p->refcount);
}

ABC::ABC() {}

ABC::~ABC() {}

// Note the import_array() from NumPy must be called else you will experience segmentation faults.
ABC::ABC(const std::string &someConfigFile)
{
    // Initialization code. Possibly store someConfigFile etc.
    import_array(); // This is a function from NumPy that MUST be called.
    // Do other stuff
}

// The conversion functions above are taken from OpenCV. The following function is
// what we define to access the C++ code we are interested in.
PyObject* ABC::doSomething(PyObject* image)
{
    cv::Mat cvImage;
    pyopencv_to(image, cvImage); // From OpenCV's source

    MyCPPClass obj; // Some object from the C++ library.
    cv::Mat processedImage = obj.process(cvImage);

    return pyopencv_from(processedImage); // From OpenCV's source
}
The code to use Boost Python to create the python module. I took this and the following Makefile from http://jayrambhia.wordpress.com/tag/boost/:
pysomemodule.cpp:
#include <string>
#include <boost/python.hpp>
#include "abc.hpp"

using namespace boost::python;

BOOST_PYTHON_MODULE(pysomemodule)
{
    class_<ABC>("ABC", init<const std::string &>())
        .def(init<const std::string &>())
        // doSomething is the method in class ABC you wish to expose. One line for
        // each method (or function, depending on how you structure your code).
        // Note: you don't have to expose everything in the library, just the parts
        // you wish to make available to python.
        .def("doSomething", &ABC::doSomething)
    ;
}
And finally, the Makefile (it compiled successfully on Ubuntu, but should work elsewhere, possibly with minimal adjustments).
PYTHON_VERSION = 2.7
PYTHON_INCLUDE = /usr/include/python$(PYTHON_VERSION)

# location of the Boost Python include files and library
BOOST_INC = /usr/local/include/boost
BOOST_LIB = /usr/local/lib

OPENCV_LIB = `pkg-config --libs opencv`
OPENCV_CFLAGS = `pkg-config --cflags opencv`

MY_CPP_LIB = lib_my_cpp_library.so

TARGET = pysomemodule
SRC = pysomemodule.cpp abc.cpp
OBJ = pysomemodule.o abc.o

$(TARGET).so: $(OBJ)
	g++ -shared $(OBJ) -L$(BOOST_LIB) -lboost_python -L/usr/lib/python$(PYTHON_VERSION)/config -lpython$(PYTHON_VERSION) -o $(TARGET).so $(OPENCV_LIB) $(MY_CPP_LIB)

$(OBJ): $(SRC)
	g++ -I$(PYTHON_INCLUDE) -I$(BOOST_INC) $(OPENCV_CFLAGS) -fPIC -c $(SRC)

clean:
	rm -f $(OBJ)
	rm -f $(TARGET).so
After you have successfully compiled the library, you should have a file "pysomemodule.so" in the directory. Put this lib file in a place accessible by your python interpreter. You can then import this module and create an instance of the class "ABC" above as follows:
import pysomemodule

foo = pysomemodule.ABC("config.txt") # This will create an instance of ABC.
Now, given an OpenCV numpy array image, we can call the C++ function using:
processedImage = foo.doSomething(image) # The argument "image" is an OpenCV numpy image.
Note that you will need the Boost Python, NumPy dev, and Python dev libraries to create the bindings.
The NumPy docs in the following two links are particularly useful in helping one understand the methods that were used in the conversion code and why import_array() must be called. In particular, the official numpy doc is helpful in making sense of OpenCV's python binding code.
http://dsnra.jpl.nasa.gov/software/Python/numpydoc/numpy-13.html
http://docs.scipy.org/doc/numpy/user/c-info.how-to-extend.html
Hope this helps.
I hope this helps people looking for a fast and easy way.
Here is the GitHub repo with the open-source C++ code I wrote for exposing code that uses OpenCV's Mat class with as little pain as possible.
[Update] This code now works for OpenCV 2.X and OpenCV 3.X. CMake and experimental support for Python 3.X are now also available.