
Docker does not free memory after creating and deleting files with PHP

I have a PHP daemon script that downloads remote images and stores them locally as temporary files before uploading them to object storage.

PHP's internal memory usage remains stable, but the memory usage reported by Docker/Kubernetes keeps increasing.

I'm not sure whether this is related to PHP, to Docker, or is expected Linux behavior.

Example to reproduce the issue:

Docker image: php:7.2.2-apache

<?php
for ($i = 0; $i < 100000; $i++) {
    $fp = fopen('/tmp/' . $i, 'w+');
    fclose($fp);

    unlink('/tmp/' . $i);

    unset($fp);
}
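A quick way to back up the claim that PHP's own heap stays flat is to measure it directly (this probe is my own sketch, not part of the original post; file names under `sys_get_temp_dir()` are arbitrary). `memory_get_usage(true)` reports memory allocated by the PHP engine, not the kernel page cache that `docker stats` counts:

```php
<?php
// Sketch: confirm PHP's own heap stays flat while files are
// created and deleted. The growing number in `docker stats` is
// the kernel page cache, which this function does not see.
$before = memory_get_usage(true);

for ($i = 0; $i < 1000; $i++) {
    $path = sys_get_temp_dir() . '/probe' . $i;
    $fp = fopen($path, 'w+');
    fwrite($fp, str_repeat('x', 1024));
    fclose($fp);
    unlink($path);
}

$after = memory_get_usage(true);
echo "heap growth: ", ($after - $before), " bytes\n";
```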

Calling free -m inside the container before executing the above script:

          total        used        free      shared  buff/cache   available
Mem:           3929        2276         139          38        1513        1311
Swap:          1023         167         856

And after executing the script:

          total        used        free      shared  buff/cache   available
Mem:           3929        2277         155          38        1496        1310
Swap:          1023         167         856

Apparently the memory is released, but calling docker stats php-apache from the host indicates otherwise:

CONTAINER ID        NAME                CPU %               MEM USAGE / LIMIT     MEM %               NET I/O             BLOCK I/O           PIDS
ccc19719078f        php-apache          0.00%               222.1MiB / 3.837GiB   5.65%               1.21kB / 0B         1.02MB / 4.1kB      7

The initial memory usage reported by docker stats php-apache was 16.04MiB.

What is the explanation? How do I free the memory?

Running this container in a Kubernetes cluster with resource limits causes the pod to fail and restart repeatedly.

asked Mar 06 '23 by mpskovvang

1 Answer

Yes, a similar issue has been reported here.

Here's the answer from coolljt0725, one of the contributors, explaining why the RES column in top output shows something different from docker stats (I'm just gonna quote him as is):

If I understand correctly, the memory usage in docker stats is exactly read from containers's memory cgroup, you can see the value is the same with 490270720 which you read from cat /sys/fs/cgroup/memory/docker/665e99f8b760c0300f10d3d9b35b1a5e5fdcf1b7e4a0e27c1b6ff100981d9a69/memory.usage_in_bytes, and the limit is also the memory cgroup limit which is set by -m when you create container. The statistics of RES and memory cgroup are different, the RES does not take caches into account, but the memory cgroup does, that's why MEM USAGE in docker stats is much more than RES in top

What a user suggested here might actually help you see the real memory consumption:

Try setting the --memory param on docker run, then check /sys/fs/cgroup/memory/docker/<container_id>/memory.usage_in_bytes. It should be right.

--memory or -m is described here:

-m, --memory="" - Memory limit (format: <number>[<unit>]). Number is a positive integer. Unit can be one of b, k, m, or g. Minimum is 4M.
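Since the daemon is PHP anyway, it can read that same cgroup counter itself, which is handy for logging how close it is to the limit. This is my own sketch: the first path assumes the cgroup v1 layout from the quoted answer, and the second is the cgroup v2 equivalent; neither may exist outside a container.

```php
<?php
// Sketch: read the container's own memory-cgroup usage, i.e. the
// same number docker stats reports (which includes page cache).
// Tries cgroup v1 first, then cgroup v2.
function cgroupMemoryUsage(): ?int
{
    $candidates = [
        '/sys/fs/cgroup/memory/memory.usage_in_bytes', // cgroup v1
        '/sys/fs/cgroup/memory.current',               // cgroup v2
    ];
    foreach ($candidates as $path) {
        if (is_readable($path)) {
            return (int) trim(file_get_contents($path));
        }
    }
    return null; // not running under a memory cgroup
}

var_dump(cgroupMemoryUsage());
```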

And now, how to avoid the unnecessary memory consumption. Just as you posted, unlinking a file in PHP does not necessarily drop the page cache immediately. However, if you run the Docker container in privileged mode (with the --privileged flag), you can call echo 3 > /proc/sys/vm/drop_caches or sync && sysctl -w vm.drop_caches=3 periodically to clear the page cache.
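If you'd rather trigger that from the daemon itself instead of a sidecar cron job, a guarded write works. This is a sketch of my own: it assumes the container was started with --privileged (otherwise /proc/sys/vm/drop_caches is not writable and the function just reports failure instead of crashing the daemon).

```php
<?php
// Sketch: drop the kernel page cache from inside a privileged
// container. Writing "3" frees pagecache, dentries and inodes
// (see proc(5)). Without --privileged this is a harmless no-op.
function dropPageCache(): bool
{
    $path = '/proc/sys/vm/drop_caches';
    if (!is_writable($path)) {
        return false; // not privileged; nothing we can do
    }
    return @file_put_contents($path, "3\n") !== false;
}
```

You could call this every few thousand downloads rather than per file, since dropping caches is global and not free.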

And as a bonus: using fopen('php://temp', 'w+') and keeping the temporary file in memory avoids the issue entirely.
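A minimal sketch of that approach (the payload string here is just a stand-in for the downloaded image bytes): php://temp keeps the stream in PHP's own memory until it exceeds a threshold (2 MiB by default), then transparently spills to a real temp file, so short-lived files never accumulate in the kernel page cache.

```php
<?php
// Sketch: buffer a downloaded image in a php://temp stream
// instead of a file under /tmp. Closing the stream releases
// the backing memory (or spill file) immediately.
$fp = fopen('php://temp', 'w+');
fwrite($fp, 'downloaded image bytes'); // stand-in payload
rewind($fp);
$data = stream_get_contents($fp); // hand this to the object-storage client
fclose($fp);
```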

answered Mar 10 '23 by Alex Karshin