I'm developing a C# app that runs on Windows. Some of its processing is written in Python and called via pythonnet (Python for .NET). The Python routines are calculation-heavy, so I want to run them in parallel.
They are CPU-bound and can be handled independently.
As far as I know, there are two possible ways to achieve this:
Launch multiple Python runtimes
The first way is launching multiple Python interpreters, but it seems unfeasible, because pythonnet apparently can manage only one interpreter, initialized by the static method PythonEngine.Initialize().
From the Python.NET documentation:
Important Note for embedders: Python is not free-threaded and uses a global interpreter lock to allow multi-threaded applications to interact safely with the Python interpreter. Much more information about this is available in the Python C-API documentation on the www.python.org Website.
When embedding Python in a managed application, you have to manage the GIL in just the same way you would when embedding Python in a C or C++ application.
Before interacting with any of the objects or APIs provided by the Python.Runtime namespace, calling code must have acquired the Python global interpreter lock by calling the PythonEngine.AcquireLock method. The only exception to this rule is the PythonEngine.Initialize method, which may be called at startup without having acquired the GIL.
Use the multiprocessing package in Python
The other way is using the multiprocessing package.
According to the Python documentation, the following guard is necessary when the code runs on Windows, so that spawned child processes do not re-execute the module-level code and spawn new processes endlessly:
if __name__ == "__main__":
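For reference, in a standalone script the guard would be used like the sketch below. This is illustrative only; the busy-work loop mirrors the heavy_calc function further down.

# Python - standalone script with the __main__ guard (illustrative sketch)
import concurrent.futures

def heavy_calc(x):
    for i in range(int(1e7) * x):
        i * 2
    return x

if __name__ == "__main__":
    # On Windows, child processes are spawned by re-importing this module;
    # the guard prevents each child from re-running the pool creation.
    with concurrent.futures.ProcessPoolExecutor(max_workers=8) as executor:
        print(list(executor.map(heavy_calc, range(10))))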
However, when the Python code is embedded in .NET, it is executed as part of a module rather than as the main script, so __name__ never equals "__main__".
For example, the following code is executable, but spawns processes infinitely.
//C#
static void Main(string[] args)
{
    using (Py.GIL())
    {
        PythonEngine.Exec(
            "print(__name__)\n" +             // output is "builtins"
            "if __name__ == 'builtins':\n" +
            "    import test_package\n" +     // imports the Python code below
            "    test_package.async_test()\n"
        );
    }
}
# Python
import concurrent.futures

def heavy_calc(x):
    for i in range(int(1e7) * x):
        i * 2

def async_test():
    # multiprocessing
    with concurrent.futures.ProcessPoolExecutor(max_workers=8) as executor:
        futures = [executor.submit(heavy_calc, x) for x in range(10)]
        (done, notdone) = concurrent.futures.wait(futures)
        for future in futures:
            print(future.result())
Is there a good way to solve the above problem? Any comments would be appreciated. Thanks in advance.
The multiprocessing package offers both local and remote concurrency, effectively side-stepping the Global Interpreter Lock by using subprocesses instead of threads. Due to this, the multiprocessing module allows the programmer to fully leverage multiple processors on a given machine. It runs on both Unix and Windows.
Python multiprocessing Process class
First, we write a function that will be run by the process. Then we instantiate a Process object; nothing happens until we tell it to start processing via its start() method, after which the process runs and produces its result.
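As a minimal sketch of that pattern (the square function and the queue are illustrative, not from the original post):

# Python - minimal multiprocessing.Process example
import multiprocessing

def square(x, queue):
    # Runs in the child process; the result is sent back through a queue.
    queue.put(x * x)

if __name__ == "__main__":
    queue = multiprocessing.Queue()
    p = multiprocessing.Process(target=square, args=(7, queue))
    p.start()           # nothing happens until start() is called
    p.join()            # wait for the child process to finish
    print(queue.get())  # prints 49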
Python's Global Interpreter Lock (GIL) only allows one thread to run at a time under the interpreter, which means you can't get the performance benefit of multithreading when the Python interpreter is required.
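To see this concretely, here is a quick comparison sketch (timings are machine-dependent and the numbers below are illustrative, not measured):

# Python - CPU-bound work: threads vs. processes
import time
import concurrent.futures

def heavy_calc(x):
    total = 0
    for i in range(10_000_000):
        total += i
    return total

if __name__ == "__main__":
    for pool_cls in (concurrent.futures.ThreadPoolExecutor,
                     concurrent.futures.ProcessPoolExecutor):
        start = time.perf_counter()
        with pool_cls(max_workers=4) as executor:
            list(executor.map(heavy_calc, range(8)))
        # Threads serialize on the GIL; processes run truly in parallel.
        print(pool_cls.__name__, time.perf_counter() - start)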
You can join a process pool by calling join() on the pool after calling close() or terminate(), in order to wait for all processes in the pool to be shut down.
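For example, a sketch using multiprocessing.Pool (the function and pool size are illustrative):

# Python - shutting down a pool with close() and join()
import multiprocessing

def heavy_calc(x):
    return x * x

if __name__ == "__main__":
    pool = multiprocessing.Pool(processes=4)
    result = pool.map_async(heavy_calc, range(10))
    pool.close()         # no further tasks will be submitted
    pool.join()          # block until all worker processes have exited
    print(result.get())  # [0, 1, 4, ...]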
For each Python call:
1. Create an AppDomain.
2. Create a task in the AppDomain that will run the Python code asynchronously.
Since these are separate AppDomains, the static methods will be independent.
Creating and using an AppDomain is heavy, so I wouldn't do it if the number of calls you have is extremely large, but it sounds like you might have just a small number of processes to run asynchronously.