I have the following shared object:
MyLib.cpp
#include <iostream>

class MyClass
{
public:
    MyClass() {}

    void function()
    {
        std::cout << "hello" << std::endl;
        //var = 10;
    }

private:
    int var;
};

extern "C" {
    MyClass* create()
    {
        return new MyClass();
    }

    void func(MyClass* myclass)
    {
        myclass->function();
    }
}
I compile it with: g++ -fPIC -shared -o MyLib.so MyLib.cpp
I then use it with the following Python script:
script.py
import ctypes
lib = ctypes.cdll.LoadLibrary("./MyLib.so")
MyClass = lib.create()
lib.func(MyClass)
As written, it works perfectly, but if I uncomment the line //var = 10;, Python crashes with a segmentation fault (Python 3.8). This happens every time the MyClass object modifies one of its member variables (except inside the constructor, where it works). It looks like the address of the variable var is wrong, and accessing it causes the segmentation fault. I tried making function virtual, without any change, and I tried loading the shared object from another C++ program using dlfcn, which worked fine. Any idea what is wrong?
A segmentation fault (aka segfault) is a common condition that causes programs to crash; it is often associated with a file named core. Segfaults are caused by a program trying to read or write an illegal memory location: a pointer must point to valid memory before it is dereferenced, and unbounded recursion (one with no base condition to return from) can overflow the stack.
In Python a segfault is a generic symptom with many possible causes, and it almost always comes from native code rather than from pure Python: a C extension (or a ctypes call) accessing memory beyond its reach, fetching a data set larger than the available memory, or a known-buggy extension (import matplotlib is known to segfault on some platforms and take the active Python session down with it). If it really happens in pure Python, it is an interpreter bug; if a C module is involved, the segfault is probably coming from there. Raising the recursion limit too far can also crash the process, since the highest safe limit is platform-dependent; in that case try a smaller stack or a smaller limit. This other answer (stackoverflow.com/a/10035594/25891) suggests how to debug such crashes; one simple way to trace them is shown below.
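For example (a minimal sketch, not part of the original answer), the standard-library faulthandler module prints the Python-level traceback when the process receives a fatal signal such as SIGSEGV:
# Sketch: enable faulthandler so a segfault dumps the Python traceback to stderr.
import ctypes
import faulthandler

faulthandler.enable()  # equivalently, run the script with: python -X faulthandler script.py

lib = ctypes.cdll.LoadLibrary("./MyLib.so")
obj = lib.create()
lib.func(obj)  # if this call segfaults, the dumped traceback points at this line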
The pointers are not the same:
extern "C" {
MyClass* create()
{
MyClass* myclass = new MyClass();
std::cerr << "Returning: " << myclass << "\n";
return myclass;
}
void func(MyClass* myclass)
{
std::cerr << "Calling: " << myclass << "\n";
myclass->function();
}
}
Running this, I get:
Returning: 0x7faab9c06580
Calling: 0xffffffffb9c06580
The lower 32 bits match, but the upper half differs: it looks like somewhere along the way the pointer was truncated to a 32-bit int and then sign-extended back to 64 bits.
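That is what ctypes does by default: a function with no declared restype is assumed to return a C int, so only the low 32 bits of the pointer survive, and the resulting negative value gets sign-extended on its way back into func(). A minimal sketch of that round trip, using only the address printed above (pure arithmetic, nothing beyond the values already shown):
p = 0x7faab9c06580                   # the 64-bit pointer printed by create()
as_c_int = (p & 0xFFFFFFFF) - 2**32  # default restype c_int keeps 32 bits; high bit set, so negative
rewidened = as_c_int & (2**64 - 1)   # the negative int is sign-extended back to 64 bits
print(hex(rewidened))                # 0xffffffffb9c06580 -- the bogus "Calling:" address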
The fix is to tell ctypes the types of the objects being passed around; otherwise it treats them as int and nasty things happen:
import ctypes

lib = ctypes.cdll.LoadLibrary("./MyLib.so")
lib.create.restype = ctypes.c_void_p   # create() returns a pointer, not an int
lib.func.argtypes = [ctypes.c_void_p]  # func() takes a pointer argument

MyClass = lib.create()
lib.func(MyClass)
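With those two declarations in place, the full 64-bit address survives the round trip and the call succeeds. If you prefer to keep the ctypes declarations in one place, a small wrapper class (an optional sketch, not part of the original answer) can do the setup once:
import ctypes

class MyLib:
    """Thin wrapper around MyLib.so that declares the C signatures once."""

    def __init__(self, path="./MyLib.so"):
        self._lib = ctypes.cdll.LoadLibrary(path)
        self._lib.create.restype = ctypes.c_void_p   # opaque pointer, not int
        self._lib.func.argtypes = [ctypes.c_void_p]
        self._obj = self._lib.create()

    def function(self):
        self._lib.func(self._obj)

MyLib().function()  # prints "hello"; note the C++ object is never freed, since no destroy() is exported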