I want to resize the postgres container's shared memory from the default 64M. So I added:

build:
  context: .
  shm_size: '2gb'

I'm using version 3.6 of the compose file; here is the postgres service definition:
version: "3.6"
services:
  # other services go here...
  postgres:
    restart: always
    image: postgres:10
    hostname: postgres
    container_name: fiware-postgres
    expose:
      - "5432"
    ports:
      - "5432:5432"
    networks:
      - default
    environment:
      - "POSTGRES_PASSWORD=password"
      - "POSTGRES_USER=postgres"
      - "POSTGRES_DB=postgres"
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
    build:
      context: .
      shm_size: '2gb'
However, this change doesn't take effect even though I restart the service with docker-compose down then up. As soon as I start interacting with postgres to display some data on the dashboard, I get the shared memory issue.

Before launching the dashboard:
$docker exec -it fiware-postgres df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/docker-253:1-107615-1541c55e4c3d5e03a7716d5418eea4c520b6556a6fd179c6ab769afd0ce64d9f 10G 266M 9.8G 3% /
tmpfs 64M 0 64M 0% /dev
tmpfs 1.4G 0 1.4G 0% /sys/fs/cgroup
/dev/vda1 197G 52G 136G 28% /etc/hosts
shm 64M 8.0K 64M 1% /dev/shm
tmpfs 1.4G 0 1.4G 0% /proc/acpi
tmpfs 1.4G 0 1.4G 0% /proc/scsi
tmpfs 1.4G 0 1.4G 0% /sys/firmware
After launching the dashboard:
$docker exec -it fiware-postgres df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/docker-253:1-107615-1541c55e4c3d5e03a7716d5418eea4c520b6556a6fd179c6ab769afd0ce64d9f 10G 266M 9.8G 3% /
tmpfs 64M 0 64M 0% /dev
tmpfs 1.4G 0 1.4G 0% /sys/fs/cgroup
/dev/vda1 197G 52G 136G 28% /etc/hosts
shm 64M 50M 15M 78% /dev/shm
tmpfs 1.4G 0 1.4G 0% /proc/acpi
tmpfs 1.4G 0 1.4G 0% /proc/scsi
tmpfs 1.4G 0 1.4G 0% /sys/firmware
postgres error log:
2019-07-01 17:27:58.802 UTC [47] ERROR: could not resize shared memory segment "/PostgreSQL.1145887853" to 12615680 bytes: No space left on device
What's going on here?
You set shm_size under build, so it only affects the build stage. You need to set it at the service level, like this:
docker-compose.yaml:
version: "3.6"
services:
  # other services go here...
  postgres:
    restart: always
    image: postgres:10
    hostname: postgres
    container_name: fiware-postgres
    expose:
      - "5432"
    ports:
      - "5432:5432"
    networks:
      - default
    environment:
      - "POSTGRES_PASSWORD=password"
      - "POSTGRES_USER=postgres"
      - "POSTGRES_DB=postgres"
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
    build:
      context: .
      shm_size: 256mb
    shm_size: 512mb
Dockerfile:
FROM postgres:10
RUN df -h | grep shm
Then run docker-compose up -d --build to start it and check:
shubuntu1@shubuntu1:~/66$ docker-compose --version
docker-compose version 1.24.0, build 0aa59064
shubuntu1@shubuntu1:~/66$ docker-compose up -d --build
Building postgres
Step 1/2 : FROM postgres:10
---> 0959974989f8
Step 2/2 : RUN df -h | grep shm
---> Running in 25d341cfde9c
shm 256M 0 256M 0% /dev/shm
Removing intermediate container 25d341cfde9c
---> 1637f1afcb81
Successfully built 1637f1afcb81
Successfully tagged postgres:10
Recreating fiware-postgres ... done
shubuntu1@shubuntu1:~/66$ docker exec -it fiware-postgres df -h | grep shm
shm 512M 8.0K 512M 1% /dev/shm
You can see that at build time it shows 256M, while the runtime container shows 512M.
This happened because Postgres wrote more than 64MB to shared memory (/dev/shm under Linux). By default, Docker limits a container's shared memory to 64M.
Verification
0d807385d325:/usr/src# df -h | grep shm
shm 64.0M 0 64.0M 0% /dev/shm
services:
  my_service:
    ....
    tty: true
    shm_size: '4mb'
0d807385d325:/usr/src# df -h | grep shm
shm 4.0M 0 4.0M 0% /dev/shm
# write 4MB (1kb * 4096) to /dev/shm/test file succeeds
0d807385d325:/usr/src# dd if=/dev/zero of=/dev/shm/test bs=1024 count=4096
4096+0 records in
4096+0 records out
4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0042619 s, 984 MB/s
# write 4.001MB (1kb * 4097) to /dev/shm/test file fails
0d807385d325:/usr/src# dd if=/dev/zero of=/dev/shm/test bs=1024 count=4097
dd: error writing '/dev/shm/test': No space left on device
4097+0 records in
4096+0 records out
4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0041456 s, 1.0 GB/s
Fix for this issue
We simply adjust this value to something bigger.
For docker run, we can use the --shm-size flag to adjust it (https://docs.docker.com/engine/reference/commandline/run/).
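As a sketch (the container name pg-shm-demo and the password are illustrative, not from the question):

```shell
# Run postgres with 1 GB of shared memory instead of the 64M default
docker run -d --name pg-shm-demo --shm-size=1g \
  -e POSTGRES_PASSWORD=password postgres:10

# Confirm the new size inside the container
docker exec pg-shm-demo df -h /dev/shm
```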
For docker-compose, we can use the shm_size option in the compose file, as shown above (https://docs.docker.com/compose/compose-file/compose-file-v3/#shm_size).
For Kubernetes, we have to use an emptyDir volume (https://kubernetes.io/docs/concepts/storage/volumes/#emptydir). Basically, we need to:
3.1) Add a new emptyDir volume with "Memory" as the medium:
volumes:
  - name: dshm
    emptyDir:
      medium: Memory
3.2) Mount it at /dev/shm for the stonewave container:
volumeMounts:
  - mountPath: /dev/shm
    name: dshm
According to Kubernetes's documentation (https://kubernetes.io/docs/concepts/storage/volumes/#emptydir), it will use 50% of the memory as the max by default.
If the SizeMemoryBackedVolumes feature gate is enabled, you can specify a size for memory backed volumes. If no size is specified, memory backed volumes are sized to 50% of the memory on a Linux host.
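Putting 3.1 and 3.2 together, a minimal Pod sketch might look like this (the Pod name is a placeholder, and sizeLimit only caps the tmpfs when the SizeMemoryBackedVolumes feature gate is enabled):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: postgres-shm-demo        # placeholder name
spec:
  containers:
    - name: postgres
      image: postgres:10
      volumeMounts:
        - mountPath: /dev/shm    # memory-backed tmpfs replaces the 64M default
          name: dshm
  volumes:
    - name: dshm
      emptyDir:
        medium: Memory
        sizeLimit: 1Gi           # optional cap on the memory-backed volume
```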