I have a list list_of_arrays of 3D numpy arrays that I want to pass to a C function with the prototype

double my_func_c(double **data, int **shape, int n_arrays)

such that

data[i] : pointer to the values of the numpy array in list_of_arrays[i]
shape[i] : pointer to the shape of the array in list_of_arrays[i], e.g. [2,3,4]

How can I call my_func_c using a Cython interface function?
My first idea was to do something like below (which works), but I feel there is a better way using numpy arrays directly, without the mallocing and freeing.
# my_func_c.pyx
import numpy as np
cimport numpy as np
cimport cython
from libc.stdlib cimport malloc, free

cdef extern from "my_func.c":
    double my_func_c(double **data, int **shape, int n_arrays)

def my_func(list list_of_arrays):
    cdef int n_arrays = len(list_of_arrays)
    cdef double **data = <double **> malloc(n_arrays*sizeof(double *))
    cdef int **shape = <int **> malloc(n_arrays*sizeof(int *))
    cdef double x
    cdef np.ndarray[double, ndim=3, mode="c"] temp

    for i in range(n_arrays):
        temp = list_of_arrays[i]
        data[i] = &temp[0,0,0]
        shape[i] = <int *> malloc(3*sizeof(int))
        for j in range(3):
            shape[i][j] = list_of_arrays[i].shape[j]

    x = my_func_c(data, shape, n_arrays)

    # Free memory
    for i in range(n_arrays):
        free(shape[i])
    free(data)
    free(shape)

    return x
N.B. To see a working example, we can use a very simple function that calculates the product of all the elements in all the arrays in our list.
// my_func.c
double my_func_c(double **data, int **shape, int n_arrays) {
    int array_idx, i0, i1, i2;
    double prod = 1.0;
    // Loop over all arrays
    for (array_idx=0; array_idx<n_arrays; array_idx++) {
        for (i0=0; i0<shape[array_idx][0]; i0++) {
            for (i1=0; i1<shape[array_idx][1]; i1++) {
                for (i2=0; i2<shape[array_idx][2]; i2++) {
                    // flat row-major (C-contiguous) index
                    prod = prod*data[array_idx][i0*shape[array_idx][1]*shape[array_idx][2]
                                                + i1*shape[array_idx][2] + i2];
                }
            }
        }
    }
    return prod;
}
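Note that the C function walks each array through a single flat index, which assumes the data is C-contiguous (this is exactly what the mode="c" declaration in the .pyx guarantees). As a quick illustration of that index arithmetic, here is a small Python sketch (illustrative only, not part of the build):

# flat index sanity check (illustrative only)
import numpy as np

a = np.random.rand(2, 3, 4)
s0, s1, s2 = a.shape
flat = a.ravel(order='C')  # the C-contiguous buffer, flattened

for i0 in range(s0):
    for i1 in range(s1):
        for i2 in range(s2):
            # same arithmetic as in my_func.c above
            assert flat[i0*s1*s2 + i1*s2 + i2] == a[i0, i1, i2]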
Create the setup.py file:
# setup.py
from distutils.core import setup
from Cython.Build import cythonize
import numpy as np

setup(
    name='my_func',
    ext_modules=cythonize("my_func_c.pyx"),
    include_dirs=[np.get_include()]
)
Compile
python3 setup.py build_ext --inplace
Finally, we can run a simple test:
# test.py
import numpy as np
from my_func_c import my_func
a = [1+np.random.rand(3,1,2), 1+np.random.rand(4,5,2), 1+np.random.rand(1,2,3)]
print('Numpy product: {}'.format(np.prod([i.prod() for i in a])))
print('my_func product: {}'.format(my_func(a)))
using
python3 test.py
One alternative would be to let numpy manage the memory for you. You can do this by using numpy arrays of np.uintp, which is an unsigned integer with the same size as a pointer.
Unfortunately, this does require some type-casting (between "pointer-sized int" and pointers), which is a good way of hiding logic errors, so I'm not 100% happy with it.
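As a quick sanity check of that claim (illustrative only), np.uintp really does match the platform pointer size:

import ctypes
import numpy as np

# np.uintp is an unsigned integer wide enough to hold any pointer
assert np.dtype(np.uintp).itemsize == ctypes.sizeof(ctypes.c_void_p)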
def my_func(list list_of_arrays):
    cdef int n_arrays = len(list_of_arrays)
    # np.empty, not np.array: we want n_arrays elements, not a
    # one-element array containing the value n_arrays
    cdef np.uintp_t[::1] data = np.empty((n_arrays,), dtype=np.uintp)
    cdef np.uintp_t[::1] shape = np.empty((n_arrays,), dtype=np.uintp)
    cdef double x
    cdef np.ndarray[double, ndim=3, mode="c"] temp

    for i in range(n_arrays):
        temp = list_of_arrays[i]
        data[i] = <np.uintp_t>&temp[0,0,0]
        shape[i] = <np.uintp_t>&(temp.shape[0])

    # the shape cast hides an intp-vs-int mismatch -- see the Edit below
    x = my_func_c(<double**>(&data[0]), <int**>(&shape[0]), n_arrays)
    return x
(I should point out that I've only confirmed that this compiles and have not tested it further, but the basic idea should be OK.)
The way you've done it is probably a pretty sensible way. One slight simplification to your original code that should work: use

shape[i] = <int*>&(temp.shape[0])

instead of the malloc and copy (but see the Edit below). I'd also recommend putting the frees in a finally block to ensure they always get run, as sketched after this paragraph.
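A minimal sketch of that cleanup pattern, applied to the question's original code; I've swapped the malloc of the pointer table for calloc (an assumption on my part) so that entries the loop never reached are NULL and therefore safe to free if an exception is raised mid-loop:

from libc.stdlib cimport malloc, calloc, free

def my_func(list list_of_arrays):
    cdef int n_arrays = len(list_of_arrays)
    cdef double x
    cdef np.ndarray[double, ndim=3, mode="c"] temp
    cdef double **data = <double **> malloc(n_arrays*sizeof(double *))
    # calloc zero-initialises, so free(shape[i]) below is a harmless
    # no-op for any entry the loop never filled in
    cdef int **shape = <int **> calloc(n_arrays, sizeof(int *))
    try:
        for i in range(n_arrays):
            temp = list_of_arrays[i]
            data[i] = &temp[0,0,0]
            shape[i] = <int *> malloc(3*sizeof(int))
            for j in range(3):
                shape[i][j] = list_of_arrays[i].shape[j]
        x = my_func_c(data, shape, n_arrays)
    finally:
        # runs even if an exception is raised above
        for i in range(n_arrays):
            free(shape[i])
        free(data)
        free(shape)
    return x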
Edit: @ead has helpfully pointed out that the numpy shape is stored as np.intp_t - i.e. a signed integer big enough to fit a pointer in, which is usually 64-bit - while int is usually 32-bit. Therefore, to pass the shape without copying you'd need to change your C API. Casting makes that mistake harder to spot ("a good way of hiding logic errors"); a sketch of the changed API follows.
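A minimal sketch of that API change, assuming the C side is rewritten to take npy_intp (numpy's pointer-sized signed integer, declared in numpy/ndarraytypes.h):

# sketch only: pass the shapes without copying
cimport numpy as np

cdef extern from "my_func.c":
    # shape now uses numpy's own pointer-sized signed integer
    double my_func_c(double **data, np.npy_intp **shape, int n_arrays)

# ... the loop stays as above, and the call becomes:
#     x = my_func_c(<double**>(&data[0]),
#                   <np.npy_intp**>(&shape[0]), n_arrays)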
I think letting std::vector manage the memory is a good pattern for consuming C functionality from C++ code, and it can also be used here, with two advantages: no handmade memory management, and no giving up of type safety, even for a short moment. To solve your problem you could use std::vector:
# my_func_c.pyx
import numpy as np
cimport numpy as np
from libcpp.vector cimport vector

cdef extern from "my_func.c":
    double my_func_c(double **data, int **shape, int n_arrays)

def my_func(list list_of_arrays):
    cdef int n_arrays = len(list_of_arrays)
    cdef vector[double *] data
    cdef vector[vector[int]] shape_mem  # for storing the copied shapes
    cdef vector[int *] shape            # pointers to the stored shapes
    cdef double x
    cdef np.ndarray[double, ndim=3, mode="c"] temp

    shape_mem.resize(n_arrays)
    for i in range(n_arrays):
        temp = list_of_arrays[i]
        data.push_back(&temp[0,0,0])
        for j in range(3):
            shape_mem[i].push_back(temp.shape[j])
        shape.push_back(shape_mem[i].data())

    x = my_func_c(data.data(), shape.data(), n_arrays)
    return x
Also your setup would need a modification:
# setup.py
from distutils.core import setup, Extension
from Cython.Build import cythonize
import numpy as np

setup(ext_modules=cythonize(Extension(
    name='my_func_c',
    language='c++',
    extra_compile_args=['-std=c++11'],
    # my_func.c is already textually included via 'cdef extern from
    # "my_func.c"', so it must not be listed as a separate source
    sources=["my_func_c.pyx"],
    include_dirs=[np.get_include()]
)))
I prefer to use std::vector::data() over &data[0] because the latter would mean undefined behavior for an empty vector; std::vector::data() was added in C++11, which is the reason we need the -std=c++11 flag. A small illustration:
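# sketch: why .data() is safer than &v[0] for possibly-empty vectors
from libcpp.vector cimport vector

def demo():
    cdef vector[int] v        # empty vector
    cdef int* p = v.data()    # well-defined: may return a null pointer
    # cdef int* q = &v[0]     # undefined behavior while v is empty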
But in the end, it is for you to decide which trade-off to make: the additional complexity of C++ (it has its own pitfalls) vs. handmade memory management vs. letting go of type safety for a short moment.