I am using the following code on CentOS to change a raw socket's buffer size to 400 KB, but I get the same result as when I set the buffer size to 256 KB. Is anything wrong, or is this a limitation of the socket layer? The kernel version is 2.6.34. Thanks!
int rawsock;
socklen_t socklen;
int optval;
int err;
int bufsize = 400 * 1024;

rawsock = socket(PF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
if (rawsock < 0) {
    my_log(LOG_ERR, "error creating raw socket");
    return rawsock;
}

/* Report the default buffer sizes. */
optval = 0;
socklen = sizeof(optval);
err = getsockopt(rawsock, SOL_SOCKET, SO_RCVBUF, &optval, &socklen);
bail_error(err);
my_log("socket RX original buffer size = %d", optval);

optval = 0;
socklen = sizeof(optval);
err = getsockopt(rawsock, SOL_SOCKET, SO_SNDBUF, &optval, &socklen);
bail_error(err);
my_log("socket TX original buffer size = %d", optval);

/* Request 400 KB in both directions. */
err = setsockopt(rawsock, SOL_SOCKET, SO_RCVBUF, &bufsize, sizeof(bufsize));
bail_error(err);
err = setsockopt(rawsock, SOL_SOCKET, SO_SNDBUF, &bufsize, sizeof(bufsize));
bail_error(err);

/* Read the sizes back to see what the kernel actually applied. */
optval = 0;
socklen = sizeof(optval);
err = getsockopt(rawsock, SOL_SOCKET, SO_RCVBUF, &optval, &socklen);
bail_error(err);
my_log("socket RX new buffer size = %d", optval);

optval = 0;
socklen = sizeof(optval);
err = getsockopt(rawsock, SOL_SOCKET, SO_SNDBUF, &optval, &socklen);
bail_error(err);
my_log("socket TX new buffer size = %d", optval);
After running, the result is:
socket RX original buffer size = 110592
socket TX original buffer size = 110592
socket RX new buffer size = 524288
socket TX new buffer size = 524288
You're just hitting your system's current sysctl limits, net.core.wmem_max and net.core.rmem_max. setsockopt() silently caps an SO_RCVBUF or SO_SNDBUF request at those limits, and the kernel then doubles the (capped) value to leave room for bookkeeping overhead, which is why you see 524288 (2 × 262144) whether you ask for 256 KB or 400 KB.
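To confirm this, you can read the current limits straight from /proc before calling setsockopt(). Here is a minimal, self-contained sketch; the /proc paths are standard on Linux, but the helper name is just for illustration and error handling is kept to a minimum:

#include <stdio.h>

/* Read a single integer from a /proc/sys file (e.g. net.core.rmem_max).
 * Returns -1 on failure. Illustrative helper, not part of any library. */
static long read_proc_limit(const char *path)
{
    long value = -1;
    FILE *f = fopen(path, "r");
    if (f != NULL) {
        if (fscanf(f, "%ld", &value) != 1)
            value = -1;
        fclose(f);
    }
    return value;
}

int main(void)
{
    long rmem_max = read_proc_limit("/proc/sys/net/core/rmem_max");
    long wmem_max = read_proc_limit("/proc/sys/net/core/wmem_max");

    /* SO_RCVBUF/SO_SNDBUF requests above these values are silently capped;
     * the kernel then doubles the stored value, so getsockopt() reports
     * up to 2 * rmem_max (524288 in the output above). */
    printf("net.core.rmem_max = %ld\n", rmem_max);
    printf("net.core.wmem_max = %ld\n", wmem_max);
    return 0;
}

If these print 262144, the numbers in your log follow directly from the clamping and doubling described above.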
If the process has superuser privileges (specifically CAP_NET_ADMIN), it can use the SO_SNDBUFFORCE and SO_RCVBUFFORCE socket options to override those limits. If there is a real reason why your service requires larger buffers, that is, any reason other than poor development or design choices, then I recommend this approach. Usually there is no such reason, in which case I recommend fixing the application/service code instead.
You can modify the limits system-wide, but they'll affect all processes. Normally the defaults work fine, but in some specialized cases (embedded servers with very wide but long-latency network connections, perhaps?) you might wish to modify them.
To do this temporarily (until the next boot), run sysctl -w net.core.wmem_max=bytes and sysctl -w net.core.rmem_max=bytes as root, where bytes is the new limit as a decimal number of bytes.
To make the change permanent, add

net.core.rmem_max=bytes
net.core.wmem_max=bytes

to your /etc/sysctl.conf file, or to a new file in the /etc/sysctl.d/ directory if your Linux distribution provides one. The latter is the better approach, because it keeps your change out of the distribution's default configuration files and so does not conflict with package updates to them.
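For example, a drop-in file could look like the following (the file name and the 4 MB values are just illustrative, not values from the question):

# /etc/sysctl.d/90-socket-buffers.conf  (example name)
# Raise the per-socket buffer ceilings to 4 MB.
net.core.rmem_max = 4194304
net.core.wmem_max = 4194304

You can apply it without rebooting by running sysctl -p followed by the file path as root.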
If you want to delve deeper into these and other socket options, take a look at the kernel's net/core/sock.c file and the sock_setsockopt() function therein.