Consider the following straightforward Python extension. When start()-ed, Foo will just add the next sequential integer to a py::list, once a second:
#include <boost/python.hpp>
#include <thread>
#include <atomic>
#include <iostream>

namespace py = boost::python;

struct Foo {
    Foo() : running(false) { }
    ~Foo() { stop(); }

    void start() {
        running = true;
        thread = std::thread([this] {
            while (running) {
                std::cout << py::len(messages) << std::endl;
                messages.append(py::len(messages));
                std::this_thread::sleep_for(std::chrono::seconds(1));
            }
        });
    }

    void stop() {
        if (running) {
            running = false;
            thread.join();
        }
    }

    std::thread thread;
    py::list messages;
    std::atomic<bool> running;
};

BOOST_PYTHON_MODULE(Foo)
{
    PyEval_InitThreads();

    py::class_<Foo, boost::noncopyable>("Foo", py::init<>())
        .def("start", &Foo::start)
        .def("stop", &Foo::stop)
        ;
}
Given the above, the following simple Python script segfaults every time, never even printing anything:
>>> import Foo
>>> f = Foo.Foo()
>>> f.start()
>>> Segmentation fault (core dumped)
With the core pointing to:
namespace boost { namespace python {

    inline ssize_t len(object const& obj)
    {
        ssize_t result = PyObject_Length(obj.ptr());
        if (PyErr_Occurred()) throw_error_already_set(); // <==
        return result;
    }

}} // namespace boost::python
Where:
(gdb) inspect obj
$1 = (const boost::python::api::object &) @0x62d368: {<boost::python::api::object_base> = {<boost::python::api::object_operators<boost::python::api::object>> = {<boost::python::def_visitor<boost::python::api::object>> = {<No data fields>}, <No data fields>}, m_ptr = []}, <No data fields>}
(gdb) inspect obj.ptr()
$2 = []
(gdb) inspect result
$3 = 0
Why does this fail when run in a thread? obj looks fine and result gets set correctly. Why does PyErr_Occurred() report an error? Who sets that?
In short, there is a mutex around the CPython interpreter known as the Global Interpreter Lock (GIL). This mutex prevents parallel operations from being performed on Python objects. Thus, at any point in time, at most one thread, the one that has acquired the GIL, is allowed to perform operations on Python objects. When multiple threads are present, invoking Python code while not holding the GIL results in undefined behavior.
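Concretely, a thread that the interpreter did not create must bracket every touch of a Python object with PyGILState_Ensure() and PyGILState_Release(). The following is only a minimal sketch of that raw pattern (the free function and its messages parameter are illustrative, not part of the original code); the rest of this answer wraps the same two calls in an RAII helper:
#include <boost/python.hpp>

// Illustrative sketch: all work on Python objects happens between
// PyGILState_Ensure() and PyGILState_Release().
void append_length(boost::python::list& messages)
{
    PyGILState_STATE state = PyGILState_Ensure(); // Acquire the GIL.
    messages.append(boost::python::len(messages));
    PyGILState_Release(state);                    // Release the GIL.
}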
C or C++ threads are sometimes referred to as alien threads in the Python documentation. The Python interpreter has no ability to control an alien thread, so alien threads are responsible for managing the GIL in order to permit concurrent or parallel execution with Python threads. With this in mind, let's examine the original code:
while (running) {
    std::cout << py::len(messages) << std::endl;            // Python
    messages.append(py::len(messages));                     // Python
    std::this_thread::sleep_for(std::chrono::seconds(1));   // No Python
}
As noted above, only two of the three lines in the thread body need to run while the thread owns the GIL. One common way to handle this is to use an RAII class to help manage the GIL. For example, with the following gil_lock class, when a gil_lock object is created, the calling thread acquires the GIL. When the gil_lock object is destructed, it releases the GIL.
/// @brief RAII class used to lock and unlock the GIL.
class gil_lock
{
public:
    gil_lock()  { state_ = PyGILState_Ensure(); }
    ~gil_lock() { PyGILState_Release(state_);   }
private:
    PyGILState_STATE state_;
};
Because PyGILState_Ensure() will create a Python thread state for the calling thread if one does not already exist, this works even in threads that the interpreter did not create. The thread body can then use an explicit scope to control the lifetime of the lock.
while (running) {
    // Acquire GIL while invoking Python code.
    {
        gil_lock lock;
        std::cout << py::len(messages) << std::endl;
        messages.append(py::len(messages));
    }
    // Release GIL, allowing other threads to run Python code while
    // this thread sleeps.
    std::this_thread::sleep_for(std::chrono::seconds(1));
}
Here is a complete example based on the original code that demonstrates the program working properly once the GIL is explicitly managed:
#include <thread>
#include <atomic>
#include <iostream>
#include <boost/python.hpp>

/// @brief RAII class used to lock and unlock the GIL.
class gil_lock
{
public:
    gil_lock()  { state_ = PyGILState_Ensure(); }
    ~gil_lock() { PyGILState_Release(state_);   }
private:
    PyGILState_STATE state_;
};

struct foo
{
    foo() : running(false) {}
    ~foo() { stop(); }

    void start()
    {
        namespace python = boost::python;
        running = true;
        thread = std::thread([this]
        {
            while (running)
            {
                {
                    gil_lock lock; // Acquire GIL.
                    std::cout << python::len(messages) << std::endl;
                    messages.append(python::len(messages));
                } // Release GIL.
                std::this_thread::sleep_for(std::chrono::seconds(1));
            }
        });
    }

    void stop()
    {
        if (running)
        {
            running = false;
            thread.join();
        }
    }

    std::thread thread;
    boost::python::list messages;
    std::atomic<bool> running;
};

BOOST_PYTHON_MODULE(example)
{
    // Force the GIL to be created and initialized.  The current caller will
    // own the GIL.
    PyEval_InitThreads();

    namespace python = boost::python;
    python::class_<foo, boost::noncopyable>("Foo", python::init<>())
        .def("start", &foo::start)
        .def("stop", &foo::stop)
        ;
}
Interactive usage:
>>> import example
>>> import time
>>> foo = example.Foo()
>>> foo.start()
>>> time.sleep(3)
0
1
2
>>> foo.stop()
>>>