We have a Linux project where we are pushing struct data over socket buffers. Recently, we found that the kernel parameter optmem_max was too small, and a supervisor asked me to increase it. While I understand how to do this, I don't really understand how to decide how big to make it.
Further, I don't really get what optmem_max is.
Here's what the kernel documentation says:
"Maximum ancillary buffer size allowed per socket. Ancillary data is a sequence of struct cmsghdr structures with appended data."
(I don't really understand what this means in English).
I see many examples on the Internet suggesting that this should be increased for better performance.
To fix the problem, I added this line to /etc/sysctl.conf:
net.core.optmem_max=1020000
After doing this, we saw better performance.
So to summarize my question:
In English, what is optmem_max?
Why is it so low by default in most Linux distros if making it bigger improves performance?
How does one determine a good size for this number?
What are the ramifications of making this really large?
Aside from /etc/sysctl.conf, where is this set in the kernel by default? I grepped the kernel source, but I could find no trace of the default value of optmem_max being set to 20480, which is the default on our system.
- In English, what is optmem_max?

optmem_max is a kernel option that affects the memory allocated to the cmsg list maintained by the kernel that contains "extra" packet information like SCM_RIGHTS or IP_TTL.
Increasing this option allows the kernel to allocate more memory as needed for more control messages that need to be sent for each socket connected (including IPC sockets/pipes).
- Why is it so low by default in most Linux distros if making it bigger improves performance?
Most distributions have normal users in mind and most normal users, even if using Linux/Unix as a server, do not have a farm of servers that have fiber channels between them or server processes that don't need GB of IPC transfer.
A 20KB buffer is large enough for "most" that it minimizes the kernel memory required by default and is also easily enough configured that one can do so if they need.
- How does one determine a good size for this number?

It depends on your usage, but the Arch Wiki suggests 64 KB for optmem_max and 16 MB for rmem_max and wmem_max (the maximum receive and send socket buffer sizes, respectively).
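Expressed as /etc/sysctl.conf entries, those suggested sizes would look like this (starting points to measure against, not universal recommendations):

```
# 64 KB ancillary (cmsg) buffer per socket
net.core.optmem_max = 65536
# 16 MB maximum receive and send socket buffers
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
```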
- What are the ramifications of making this really large?
More kernel memory can be allocated to each connected socket, perhaps unnecessarily, and that memory is then unavailable for other uses.
- Aside from /etc/sysctl.conf, where is this set in the kernel by default? I grepped the kernel, but I could find no trace of the default value of optmem_max being set to 20480, which is the default on our system.

I'm not a Linux kernel source aficionado, but it looks like it could be in net/core/sock.c:318.
Hope that helps.