 

Memory leak when running python script from C++

The following minimal example of calling a python function from C++ has a memory leak on my system:

script.py:

import tensorflow
def foo(param):
    return "something"

main.cpp:

#include "python3.5/Python.h"

#include <iostream>
#include <string>

int main()
{
    Py_Initialize();

    PyRun_SimpleString("import sys");
    PyRun_SimpleString("if not hasattr(sys,'argv'): sys.argv = ['']");
    PyRun_SimpleString("sys.path.append('./')");

    PyObject* moduleName = PyUnicode_FromString("script");
    PyObject* pModule = PyImport_Import(moduleName);
    PyObject* fooFunc = PyObject_GetAttrString(pModule, "foo");
    PyObject* param = PyUnicode_FromString("dummy");
    PyObject* args = PyTuple_Pack(1, param);
    PyObject* result = PyObject_CallObject(fooFunc, args);

    Py_CLEAR(result);
    Py_CLEAR(args);
    Py_CLEAR(param);
    Py_CLEAR(fooFunc);
    Py_CLEAR(pModule);
    Py_CLEAR(moduleName);

    Py_Finalize();
}

compiled with

g++ -std=c++11 main.cpp $(python3-config --cflags) $(python3-config --ldflags) -o main

and run with valgrind

valgrind --leak-check=yes ./main

produces the following summary

LEAK SUMMARY:
==24155==    definitely lost: 161,840 bytes in 103 blocks
==24155==    indirectly lost: 33 bytes in 2 blocks
==24155==      possibly lost: 184,791 bytes in 132 blocks
==24155==    still reachable: 14,067,324 bytes in 130,118 blocks
==24155==                       of which reachable via heuristic:
==24155==                         stdstring          : 2,273,096 bytes in 43,865 blocks
==24155==         suppressed: 0 bytes in 0 blocks

I'm using Linux Mint 18.2 Sonya, g++ 5.4.0, Python 3.5.2 and TensorFlow 1.4.1.

Removing import tensorflow makes the leak disappear. Is this a bug in TensorFlow or did I do something wrong? (I expect the latter to be true.)


Additionally when I create a Keras layer in Python

#script.py
from keras.layers import Input
def foo(param):
    a = Input(shape=(32,))
    return "str"

and run the call to Python from C++ repeatedly

//main.cpp

#include "python3.5/Python.h"

#include <iostream>
#include <string>

int main()
{
    Py_Initialize();

    PyRun_SimpleString("import sys");
    PyRun_SimpleString("if not hasattr(sys,'argv'): sys.argv = ['']");
    PyRun_SimpleString("sys.path.append('./')");

    PyObject* moduleName = PyUnicode_FromString("script");
    PyObject* pModule = PyImport_Import(moduleName);

    for (int i = 0; i < 10000000; ++i)
    {
        std::cout << i << std::endl;
        PyObject* fooFunc = PyObject_GetAttrString(pModule, "foo");
        PyObject* param = PyUnicode_FromString("dummy");
        PyObject* args = PyTuple_Pack(1, param);
        PyObject* result = PyObject_CallObject(fooFunc, args);

        Py_CLEAR(result);
        Py_CLEAR(args);
        Py_CLEAR(param);
        Py_CLEAR(fooFunc);
    }

    Py_CLEAR(pModule);
    Py_CLEAR(moduleName);

    Py_Finalize();
}

the memory consumption of the application continuously grows ad infinitum during runtime.

So I guess there is something fundamentally wrong with the way I call the python function from C++, but what is it?

asked Jan 11 '18 by Tobias Hermann


1 Answer

There are two different types of "memory leaks" in your question.

Valgrind is telling you about the first type of memory leak. However, it is pretty usual for Python modules to "leak" memory this way: it is mostly globals that are allocated/initialized when the module is loaded. And because a module is loaded only once per interpreter, this is not a big problem.
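This load-once behaviour is easy to see from Python itself. A minimal sketch, using the standard json module as an arbitrary stand-in for any imported module:

```python
import sys

import json  # first import runs json's top-level code and caches the module
first = sys.modules["json"]

import json  # second import is just a cheap lookup in sys.modules

# Both names refer to the same, single module object; any globals the
# module allocated at import time live on until the interpreter shuts down.
print(json is first)  # True
```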

A well-known example is numpy's PyArray_API: it must be initialized via _import_array, is then never deleted, and stays in memory until the Python interpreter is shut down.

So it is a "memory leak" by design. You can argue whether it is a good design or not, but at the end of the day there is nothing you can do about it.

I don't have enough insight into the tensorflow module to pinpoint the places where such memory leaks happen, but I'm pretty sure it's nothing you should worry about.


The second "memory leak" is more subtle.

You can get a lead by comparing the valgrind output for 10^4 and 10^5 iterations of the loop: there will be almost no difference! There is, however, a difference in peak memory consumption.

Unlike C++, Python has a garbage collector, so you cannot know exactly when an object is destroyed. CPython uses reference counting: when a reference count drops to 0, the object is destroyed. However, when there is a cycle of references (e.g. object A holds a reference to object B and object B holds a reference to object A) it is not so simple: the garbage collector needs to iterate through all objects to find such no-longer-used cycles.
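A minimal sketch of such a cycle (the Node class here is hypothetical, just for illustration):

```python
import gc

class Node:
    """Tiny container whose instances can point at each other."""
    def __init__(self):
        self.other = None

# Build a reference cycle: a -> b -> a.
a, b = Node(), Node()
a.other, b.other = b, a

# Drop our own references. The refcounts never reach 0 because the
# objects still reference each other, so plain reference counting
# cannot reclaim them...
del a, b

# ...but the cyclic garbage collector can: it walks the object graph
# and frees the unreachable cycle.
unreachable = gc.collect()
print(unreachable >= 2)  # True: both Nodes were found unreachable
```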

One could think that keras.layers.Input has such a cycle somewhere (and this is true), but this is not the reason for this "memory leak", which can also be observed in pure Python.

Let's use the objgraph package to inspect the references and run the following Python script:

#pure.py
from keras.layers import Input
import gc
import sys
import objgraph


def foo(param):
    a = Input(shape=(1280,))
    return "str"

### MAIN:

print("Counts at the beginning:")
objgraph.show_most_common_types()
objgraph.show_growth(limit=7)

for i in range(int(sys.argv[1])):
    foo(" ")

gc.collect()  # just to be sure

print("\n\n\nCounts at the end:")
objgraph.show_most_common_types()
objgraph.show_growth(limit=7)

import random
objgraph.show_chain(
    objgraph.find_backref_chain(
        random.choice(objgraph.by_type('Tensor')),  # take some random tensor
        objgraph.is_proper_module),
    filename='chain.png')

and run it:

python pure.py 1000

We can see the following: at the end there are exactly 1000 Tensors, which means none of our created objects got disposed!

If we take a look at the chain that keeps a tensor object alive (created with objgraph.show_chain), we see:

[chain.png: the back-reference chain from a Tensor to a tensorflow Graph object]

there is a tensorflow Graph object in which all tensors are registered, and they stay there until the session is closed.

So far the theory. However, neither this:

#close session and free resources:
import keras
keras.backend.get_session().close()#free all resources

print("\n\n\n Counts after session.close():")
objgraph.show_most_common_types()

nor the solution proposed here:

with tf.Graph().as_default(), tf.Session() as sess:
   for step in range(int(sys.argv[1])):
     foo(" ")

works for the current tensorflow version, which is probably a bug.


In a nutshell: you are doing nothing wrong in your C++ code; there are no memory leaks you are responsible for. In fact, you would see exactly the same memory consumption if you called the function foo from a pure Python script over and over again.

All created tensors are registered in a Graph object and aren't automatically released; you must release them by closing the backend session, which however doesn't work due to a bug in the current tensorflow version 1.4.0.

answered Oct 03 '22 by ead