By default, Docker gives a container a /dev/shm of 64 MB if nothing is specified, but that can be increased in Docker with, for example, --shm-size=256m.
How should I increase the shm size of a Kubernetes container, or use the equivalent of Docker's --shm-size in Kubernetes?
You can modify the shm size by passing the optional --shm-size flag to the docker run command. The default is 64 MB.
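For plain Docker (outside Kubernetes), this is a minimal sketch of the flag in use; the image name and the size value are placeholders, and you can confirm the resulting size from inside the container:

```shell
# Start a container with a 256 MB /dev/shm instead of the 64 MB default
docker run --rm --shm-size=256m ubuntu df -h /dev/shm
```

The df output inside the container should report /dev/shm with the requested size.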
You can now use the Docker shm-size and tmpfs parameters in Amazon Elastic Container Service (Amazon ECS) task definitions. The shm-size parameter lets you specify the shared memory that a container can use, which allows memory-intensive containers to run faster by giving them access to more shared memory.
In a Kubernetes cluster (e.g. AWS EKS), you can change the ulimit for a Docker container by modifying /etc/docker/daemon.json on the node where your container is running.
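As a sketch, assuming the node uses the Docker daemon, daemon.json accepts a default-ulimits key (and, relatedly, a default-shm-size key) that applies to containers started on that node; the specific limit values below are illustrative only, and the daemon must be restarted for changes to take effect:

```json
{
  "default-shm-size": "256M",
  "default-ulimits": {
    "nofile": {
      "Name": "nofile",
      "Soft": 64000,
      "Hard": 64000
    }
  }
}
```

Note this is a node-level setting, so it affects every container scheduled onto that node, not just one pod.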
Technically a container is an even smaller unit, because a pod can contain multiple containers, but you cannot deploy a single container on its own, only a pod with one container. That is why the pod is considered the smallest deployable unit in Kubernetes.
I originally bumped into this post coming from Google and went through the whole Kubernetes issue thread and the OpenShift workaround, only to find the much simpler solution in another Stack Overflow answer later.
Adding the lines below to deployment.yaml brought up a container that had previously been failing. The idea is to mount an emptyDir volume at /dev/shm and set its medium to Memory:

spec:
  containers:
    - name: solace-service
      image: solace-pubsub-standard:latest
      volumeMounts:
        - mountPath: /dev/shm
          name: dshm
      ports:
        - containerPort: 8080
  volumes:
    - name: dshm
      emptyDir:
        medium: Memory
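A memory-backed emptyDir is unbounded by default (up to the node's memory, counted against the container's memory limit), so if you want behavior closer to Docker's --shm-size you can cap it with the sizeLimit field. A hedged sketch of just the volume definition, with 256Mi as an illustrative value:

```shell
# Verify the mounted shm size from inside the running pod
# (pod and container names are placeholders)
kubectl exec solace-service-pod -- df -h /dev/shm
```

The corresponding volume entry would be:

  volumes:
    - name: dshm
      emptyDir:
        medium: Memory
        sizeLimit: 256Mi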