There are a number of factors that determine the maximum size of a packet that can be sent over a Unix socket:
The wmem_max socket send buffer maximum size kernel setting, which determines the maximum size of the send buffer that can be set using setsockopt(SO_SNDBUF). The current value can be read from /proc/sys/net/core/wmem_max and can be changed using sysctl net.core.wmem_max=VALUE (add the setting to /etc/sysctl.conf to make the change persistent across reboots). Note that this setting applies to all sockets and socket protocols, not just Unix sockets.
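As a check from inside a program, you can read the current limit straight from procfs; a minimal sketch, assuming a Linux system (the path below is the standard procfs location):

```c
#include <stdio.h>

int main(void)
{
    long wmem_max = 0;
    FILE *f = fopen("/proc/sys/net/core/wmem_max", "r");
    if (!f) {
        perror("fopen");
        return 1;
    }
    if (fscanf(f, "%ld", &wmem_max) == 1)
        printf("net.core.wmem_max = %ld bytes\n", wmem_max);
    fclose(f);
    return 0;
}
```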
If multiple packets are sent to a Unix datagram socket (SOCK_DGRAM), then the maximum amount of data that can be sent without blocking depends on both the size of the socket send buffer (see above) and the maximum number of unread packets queued on the Unix socket (kernel parameter net.unix.max_dgram_qlen).
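One way to see these two limits interact is to fill a datagram socket from a non-blocking sender and count how many datagrams queue up before EAGAIN. A rough sketch; the socket path, the 1 KB payload size, and the Linux-specific SOCK_NONBLOCK flag are choices made for illustration:

```c
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    strcpy(addr.sun_path, "/tmp/dgram_probe.sock");
    unlink(addr.sun_path);

    /* Receiver end: bound but never read from, so datagrams pile up. */
    int rx = socket(AF_UNIX, SOCK_DGRAM, 0);
    bind(rx, (struct sockaddr *)&addr, sizeof(addr));

    /* Sender end: non-blocking, so a full queue returns EAGAIN
       instead of blocking the sender. */
    int tx = socket(AF_UNIX, SOCK_DGRAM | SOCK_NONBLOCK, 0);
    connect(tx, (struct sockaddr *)&addr, sizeof(addr));

    char payload[1024] = { 0 };
    int sent = 0;
    while (send(tx, payload, sizeof(payload), 0) == sizeof(payload))
        sent++;

    if (errno == EAGAIN || errno == EWOULDBLOCK)
        printf("queued %d datagrams of %zu bytes before EAGAIN\n",
               sent, sizeof(payload));
    else
        perror("send");

    close(tx);
    close(rx);
    unlink(addr.sun_path);
    return 0;
}
```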
Finally, a datagram (SOCK_DGRAM) requires contiguous kernel memory (as per "What is the max size of AF_UNIX datagram message that can be sent in linux?"). How much contiguous memory is available in the kernel depends on many factors (e.g. the I/O load on the system).
So to maximize the performance of your application, you want a large socket send buffer (to minimize the user/kernel space context switches caused by socket write system calls) and a large Unix socket queue (to decouple the producer and consumer as much as possible). However, the product of the socket send buffer size and the queue length must not be so large that the kernel runs out of contiguous memory areas (causing write failures).
The actual figures will depend on your system configuration and usage. You will need to determine the limits by testing... start, say, with wmem_max at 256 KB and max_dgram_qlen at 32, and keep doubling wmem_max until you notice things start breaking. You will need to adjust max_dgram_qlen to balance the activity of the producer and the consumer to a certain extent (although if the producer is much faster or much slower than the consumer, the queue size won't have much effect).
Note that your producer will have to explicitly set the socket send buffer size to wmem_max bytes with a call to setsockopt(SO_SNDBUF), and will have to split the data into wmem_max-byte chunks (and the consumer will have to reassemble them), as sketched below.
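A hypothetical sketch of that producer side; fd is assumed to be an already-connected SOCK_DGRAM socket, and CHUNK_SIZE and send_chunked are illustrative names standing in for whatever wmem_max value you settled on (consumer-side reassembly is omitted):

```c
#include <stddef.h>
#include <sys/socket.h>
#include <sys/types.h>

#define CHUNK_SIZE (256 * 1024)  /* hypothetical tuned wmem_max value */

/* Send `len` bytes from `buf` over the connected datagram socket `fd`
   in CHUNK_SIZE pieces; returns 0 on success, -1 on error. */
int send_chunked(int fd, const char *buf, size_t len)
{
    int sndbuf = CHUNK_SIZE;
    /* Ask for a send buffer at least one chunk large; the kernel
       clamps the request to wmem_max. */
    if (setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof(sndbuf)) < 0)
        return -1;

    while (len > 0) {
        size_t n = len < CHUNK_SIZE ? len : CHUNK_SIZE;
        /* Each send() is one datagram; the consumer reassembles them. */
        if (send(fd, buf, n, 0) != (ssize_t)n)
            return -1;
        buf += n;
        len -= n;
    }
    return 0;
}
```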
Best guess: the practical limits will be around wmem_max ~8 MB and unix_dgram_qlen ~32.
There are no "packets" per se with domain sockets. The semantics of TCP "streams" or UDP "datagrams" are, to a degree, simulated within the kernel to look similar to user-space apps, but that's about as far as it goes. The mechanics aren't as involved as network sockets using network protocols. What you are really interested in here is how much the kernel will buffer for you.
From your program's perspective it doesn't really matter. Think of the socket as a pipe or a FIFO. When the buffer fills, you are going to block; if the socket is non-blocking, you will get short writes (assuming a stream socket) or fail with EAGAIN. This is true regardless of the size of the buffer. However, you should be able to query the buffer size with getsockopt and increase it with setsockopt, but I doubt you will get anywhere near 10 GB.
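A minimal sketch of that query-and-grow step (note that Linux doubles the value passed to setsockopt(SO_SNDBUF) to allow for bookkeeping overhead, and clamps it at wmem_max unless the process has CAP_NET_ADMIN; the 4 MB target is an arbitrary choice for illustration):

```c
#include <stdio.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    int size = 0;
    socklen_t len = sizeof(size);

    getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &size, &len);
    printf("default send buffer: %d bytes\n", size);

    int want = 4 * 1024 * 1024;  /* arbitrary target for illustration */
    setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &want, sizeof(want));

    /* Linux reports back double the requested value, capped by wmem_max. */
    getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &size, &len);
    printf("after setsockopt:    %d bytes\n", size);

    close(fd);
    return 0;
}
```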
Alternatively, you might look at sendfile.
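For instance, a hedged sketch of the sendfile(2) route, which copies a file's contents to a connected stream socket entirely in the kernel; stream_file is an illustrative name and sock_fd is assumed to be already connected:

```c
#include <fcntl.h>
#include <sys/sendfile.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

/* Stream the file at `path` to the connected socket `sock_fd` without
   copying through user space; returns 0 on success, -1 on error. */
int stream_file(int sock_fd, const char *path)
{
    int in_fd = open(path, O_RDONLY);
    if (in_fd < 0)
        return -1;

    struct stat st;
    if (fstat(in_fd, &st) < 0) {
        close(in_fd);
        return -1;
    }

    off_t offset = 0;
    while (offset < st.st_size) {
        /* sendfile() advances `offset` by the number of bytes sent. */
        ssize_t n = sendfile(sock_fd, in_fd, &offset, st.st_size - offset);
        if (n <= 0)
            break;  /* error or unexpected end of file */
    }
    close(in_fd);
    return offset == st.st_size ? 0 : -1;
}
```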