Python Pyramid app memory is not being released at all

How to solve this memory leak?

What measures should I take to clean up old session objects? Isn't session.close() sufficient?

Or is it something to do with Pyramid?

SQLAlchemy setup:
from memory_profiler import profile
from sqlalchemy.orm import sessionmaker


def get_db(request):
    maker = request.registry.dbmaker
    session = maker()

    @profile
    def cleanup(request):
        # Commit or roll back, then close the session once the request is done.
        _session = request.db
        if request.exception is not None:
            _session.rollback()
        else:
            _session.commit()
        _session.close()
        # del _session     # No memory released

    request.add_finished_callback(cleanup)
    return session

def main(global_config, **settings):
    :
    :
    config.registry.dbmaker = sessionmaker(bind=engine)
    config.add_request_method(get_db, name='db', reify=True)
    :
    :

The Pyramid app request handler looks like this:

@view_config(route_name='list_employees', renderer='json')
def employees(request):
    session = request.db
    office = session.query(Office).get(1)
    employees = [x.name for x in office.employees]
    return employees

Now the problem is: on every request to list_employees the memory grows, and the size of the increase is almost equal to the size of office.employees.

Debug:

request 1 starts with memory utilization = 10MB
request 1 ends with memory utilization = 18MB

request 2 starts with memory utilization = 18MB
request 2 ends with memory utilization = 26MB        

request 3 starts with memory utilization = 26MB
request 3 ends with memory utilization = 34MB        
                 :
                 :
           ... and it keeps growing like this
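For reference, per-request figures like the ones above can be collected with a Pyramid tween along these lines. This is a sketch added for illustration; the factory name and its registration are assumptions, since the original post does not show how the numbers were measured. On Linux, ru_maxrss reports the peak RSS in kilobytes.

import resource


def rss_tween_factory(handler, registry):
    """Log the process's peak RSS (MB) before and after each request."""
    def rss_tween(request):
        before = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024.0
        response = handler(request)
        after = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024.0
        print('%s starts at %.1f MB, ends at %.1f MB' % (request.path, before, after))
        return response
    return rss_tween

# registered in main() with something like:
#   config.add_tween('myapp.rss_tween_factory')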

employees = [x.name for x in office.employees]

This is the line where roughly 8-10 MB of memory is used.

To debug, I added a __del__ method to the Employ and Office models, and it looks like they are being deleted.

I also tried session.expunge(office), del office, and gc.collect().
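For reference, a sketch of roughly where those calls sat in the view (the exact placement is my assumption; the post only names the calls):

import gc

from pyramid.view import view_config


@view_config(route_name='list_employees', renderer='json')
def employees(request):
    session = request.db
    office = session.query(Office).get(1)
    employees = [x.name for x in office.employees]
    # Attempted cleanup: detach the instance from the session, drop the
    # local reference, and force a garbage collection. None of these
    # released the memory back to the OS.
    session.expunge(office)
    del office
    gc.collect()
    return employees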

I am measuring memory consumption with https://pypi.python.org/pypi/memory_profiler. I am also using https://pypi.python.org/pypi/transaction in other requests.

I am not using the Pyramid debug toolbar.

EDIT: Found that the memory increase at this line (employees = [x.name for x in office.employees]) shows zero after 6-7 requests, even though the query was still returning the same number of rows.

EDIT: Added a standalone app: https://github.com/Narengowda/pyramid_sqlalchemy_app

EDIT: IT'S NOT RELATED TO SQLALCHEMY AT ALL (my bad). I wrote a simple view function that doesn't run any SQLAlchemy queries:

from memory_profiler import profile
from pyramid.view import view_config


class Test(object):

    def __init__(self):
        self.x = 'sdfklhasdjkfhasklsdkjflksdfksd' *1000
        self.y = 'sdfklhasdjkfhasklsdkjflksdfksd' *1000
        self.z = 'sdfklhasdjkfhasklsdkjflksdfksd' *1000
        self.i = 'sdfklhasdjkfhasklsdkjflksdfksd' *1000
        self.v = 'sdfklhasdjkfhasklsdkjflksdfksd' *1000
        self.o = 'sdfklhasdjkfhasklsdkjflksdfksd' *1000


@view_config(route_name='home', renderer='json')
def my_view(request):
    return test(request)

@profile
def test(request):
    count = request.GET.get('count')
    l = [Test() for i in range(int(count))]
    print l[0]
    return {}

This is what I see; below are the profiler logs for each request:

REQUEST: 1

Line # Mem usage Increment Line Contents


23     37.3 MiB      0.0 MiB   @profile
24                             def test(request):
25     37.3 MiB      0.0 MiB       count = request.GET.get('count')
26    112.4 MiB     75.1 MiB       l = [Test() for i in range(int(count))]
27    112.4 MiB      0.0 MiB       print l[0]
28    112.4 MiB      0.0 MiB       return {}

REQUEST: 2

Line # Mem usage Increment Line Contents


23    111.7 MiB      0.0 MiB   @profile
24                             def test(request):
25    111.7 MiB      0.0 MiB       count = request.GET.get('count')
26    187.3 MiB     75.6 MiB       l = [Test() for i in range(int(count))]
27    187.3 MiB      0.0 MiB       print l[0]
28    187.3 MiB      0.0 MiB       return {}

REQUEST: 3

Line # Mem usage Increment Line Contents


23    184.3 MiB      0.0 MiB   @profile
24                             def test(request):
25    184.3 MiB      0.0 MiB       count = request.GET.get('count')
26    259.7 MiB     75.4 MiB       l = [Test() for i in range(int(count))]
27    259.7 MiB      0.0 MiB       print l[0]
28    259.7 MiB      0.0 MiB       return {}

REQUEST: 4

Line # Mem usage Increment Line Contents


23    255.1 MiB      0.0 MiB   @profile
24                             def test(request):
25    255.1 MiB      0.0 MiB       count = request.GET.get('count')
26    330.4 MiB     75.3 MiB       l = [Test() for i in range(int(count))]
27    330.4 MiB      0.0 MiB       print l[0]
28    330.4 MiB      0.0 MiB       return {}

REQUEST: 5

Line # Mem usage Increment Line Contents


23    328.2 MiB      0.0 MiB   @profile
24                             def test(request):
25    328.2 MiB      0.0 MiB       count = request.GET.get('count')
26    330.5 MiB      2.3 MiB       l = [Test() for i in range(int(count))]
27    330.5 MiB      0.0 MiB       print l[0]
28    330.5 MiB      0.0 MiB       return {}

REQUEST: 6

Line # Mem usage Increment Line Contents


23    330.5 MiB      0.0 MiB   @profile
24                             def test(request):
25    330.5 MiB      0.0 MiB       count = request.GET.get('count')
26    330.5 MiB      0.0 MiB       l = [Test() for i in range(int(count))]
27    330.5 MiB      0.0 MiB       print l[0]
28    330.5 MiB      0.0 MiB       return {}

I have tried this many times with different count query parameters, and the increase in memory utilization stops after exactly 5 requests (magic).

I also printed all the objects and compared their addresses. Take a look at the logs of requests 4 and 5: it looks like a GC run happened, since memory dropped from 330.4 MiB to 328.2 MiB. But you don't see the usual 75.3 MiB increase for creating the new objects (line 26); you see only a 2.3 MiB increase. I then compared the addresses of the objects created in the last two requests and found that about 80% of the addresses are the same.

REQUEST: 4 object addresses

<pyramid_sqa.views.Test object at 0x3a042d0>
<pyramid_sqa.views.Test object at 0x3a04310>
<pyramid_sqa.views.Test object at 0x3a04350>
<pyramid_sqa.views.Test object at 0x3a04390>
<pyramid_sqa.views.Test object at 0x3a043d0>
<pyramid_sqa.views.Test object at 0x3a04410>
<pyramid_sqa.views.Test object at 0x3a04450>
<pyramid_sqa.views.Test object at 0x3a04490>
<pyramid_sqa.views.Test object at 0x3a044d0>
<pyramid_sqa.views.Test object at 0x3a04510>

REQUEST: 5 object addresses

<pyramid_sqa.views.Test object at 0x3a04390>
<pyramid_sqa.views.Test object at 0x3a043d0>
<pyramid_sqa.views.Test object at 0x3a04410>
<pyramid_sqa.views.Test object at 0x3a04450>
<pyramid_sqa.views.Test object at 0x3a04490>
<pyramid_sqa.views.Test object at 0x3a044d0>
<pyramid_sqa.views.Test object at 0x3a04290>
<pyramid_sqa.views.Test object at 0x3a04550>
<pyramid_sqa.views.Test object at 0x3a04590>
<pyramid_sqa.views.Test object at 0x3a045d0>

So new objects are created and Python is reusing the memory (reusing the objects!?).
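To double-check that the previous request's objects really are collected (and only their memory is being recycled), one can ask the garbage collector how many Test instances are alive at the start of each request. This is a hedged sketch added for illustration; count_live_tests is a hypothetical helper, not part of the original code:

import gc


def count_live_tests():
    """Count the Test instances currently tracked by the garbage collector."""
    return sum(1 for obj in gc.get_objects() if isinstance(obj, Test))


@profile
def test(request):
    # If this prints a small number on every request, the previous batch of
    # Test objects was freed and only the raw memory is being reused.
    print('live Test objects: %d' % count_live_tests())
    count = request.GET.get('count')
    l = [Test() for i in range(int(count))]
    print l[0]
    return {}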

Is it OK if my server's memory shoots up like this?

asked Aug 08 '14 by naren

1 Answer

Python does its own memory management for Python objects, and even when the CPython GC frees a Python object, it still will not release that memory to the operating system (the way malloc()/free() might). When the GC frees a Python object, the memory can then be reused for new Python objects. This is the effect you see when the memory consumption does not increase for request number 6: after request 5 the GC freed the deleted objects, and the new objects in request 6 could use the freed memory.

So you do not have a memory leak; you have just discovered how CPython memory management works. The memory consumption does not grow without bound.
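A minimal standalone sketch (added for illustration, not part of the original answer) that shows the same plateau outside Pyramid, using the memory_profiler package already mentioned in the question:

from memory_profiler import memory_usage


class Blob(object):
    """Roughly mimics the Test objects above."""
    def __init__(self, size=200000):
        self.data = 'x' * size            # a fresh ~200 KB string per instance


def allocate_and_drop():
    blobs = [Blob() for _ in range(500)]  # roughly 100 MB of Python objects
    del blobs                             # they all become garbage here


for i in range(5):
    # memory_usage((f, args, kwargs)) runs f in this process and samples its
    # RSS, returning a list of readings in MiB.
    peak = max(memory_usage((allocate_and_drop, (), {})))
    print('round %d: peak RSS %.1f MiB' % (i, peak))

# Expected pattern: RSS jumps on the first round and then stays roughly flat,
# because CPython keeps the freed blocks in its own allocator and reuses them
# for the next round instead of handing them back to the operating system.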

answered Sep 29 '22 by Anton