We have a Python uWSGI REST API server that handles a lot of calls. When API traffic peaks while waiting on an external resource, the listen queue fills up immediately, because the uWSGI listen queue size is set to 100 by default. After some digging we found that this limit is capped by the server's net.core.somaxconn kernel setting, and in the case of Kubernetes, by the setting of the node.
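For context, the kernel silently truncates any socket's listen() backlog to net.core.somaxconn, so raising uWSGI's --listen alone is not enough. A minimal sketch of the effective queue size (the 128 and 4096 values are illustrative assumptions, not taken from our server):

```shell
#!/bin/bash
# The kernel truncates listen(fd, backlog) to net.core.somaxconn,
# so the effective queue is the smaller of the two values.
somaxconn=128        # illustrative node default
uwsgi_listen=4096    # what uWSGI would request via --listen
effective=$(( uwsgi_listen < somaxconn ? uwsgi_listen : somaxconn ))
echo "$effective"    # prints 128: the kernel cap wins
```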
We found this documentation on using sysctl to change net.core.somaxconn: https://kubernetes.io/docs/concepts/cluster-administration/sysctl-cluster/ But that doesn't work on GKE, as it requires Docker 1.12 or newer.
We also found this snippet, but it seems really hacky: https://github.com/kubernetes/contrib/blob/master/ingress/controllers/nginx/examples/sysctl/change-proc-values-rc.yaml Wouldn't a DaemonSet be better than a companion container?
What would be the best practice to set net.core.somaxconn higher than the default on all nodes of a node pool?
A good approach is a privileged DaemonSet, since it will run on all existing and future nodes. You can use the provided startup-script container:
https://github.com/kubernetes/contrib/blob/master/startup-script/startup-script.yml
For your case:
# extensions/v1beta1 has since been removed; apps/v1 requires an explicit selector.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: startup
spec:
  selector:
    matchLabels:
      name: startup
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        name: startup
    spec:
      # hostPID lets the startup-script image run the script in the host's context
      hostPID: true
      containers:
      - name: system-tweak
        image: gcr.io/google-containers/startup-script:v1
        imagePullPolicy: Always
        securityContext:
          privileged: true
        env:
        - name: STARTUP_SCRIPT
          value: |
            #! /bin/bash
            echo 32768 > /proc/sys/net/core/somaxconn
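To check that it took effect, you can apply the manifest and spot-check a node afterwards (the filename below is an assumption; run sysctl on the node itself, e.g. over SSH):

```shell
# Roll out the DaemonSet and confirm a pod is scheduled on every node
kubectl apply -f startup-daemonset.yaml
kubectl get daemonset startup

# On any node: should report 32768 once the startup script has run
sysctl net.core.somaxconn
```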