Is there a standard call for flushing the transmit side of a POSIX socket all the way through to the remote end, or does this need to be implemented as part of the user-level protocol? I looked around the usual headers but couldn't find anything.
TCP_NODELAY ... setting this option forces an explicit flush of pending output ...

This should work on Linux at least: when you set TCP_NODELAY, it flushes the current buffer, which, according to the source comments, should flush even when TCP_CORK is set. I can confirm that it works.
A POSIX socket, or simply a socket, is a communication endpoint. For example, if two parties, A and B, intend to communicate with each other, both must establish a connection between their respective endpoints.
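As a rough illustration, here is a minimal sketch of party A creating its endpoint and connecting to party B's; the address 127.0.0.1 and port 5000 are just placeholders, and party B would need a matching listening socket:

/* Minimal sketch: endpoint A connects to endpoint B (example address). */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_STREAM, 0);      /* create endpoint A */
    if (sock < 0) { perror("socket"); return 1; }

    struct sockaddr_in peer;
    memset(&peer, 0, sizeof(peer));
    peer.sin_family = AF_INET;
    peer.sin_port = htons(5000);                     /* example port */
    inet_pton(AF_INET, "127.0.0.1", &peer.sin_addr); /* example host */

    if (connect(sock, (struct sockaddr *)&peer, sizeof(peer)) < 0) {
        perror("connect");                           /* endpoint B must be listening */
        close(sock);
        return 1;
    }

    close(sock);
    return 0;
}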
You can re-use the socket handle. Once you call close(), regardless of what happens to the actual socket, the handle can be reused: your next call to socket() or accept() may return the same handle. (But don't assume it does; just store the handle.)
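A quick sketch of what that means in practice; the printed numbers are only illustrative, and the kernel makes no promise about which handle you get back:

/* Sketch: after close(), the numeric handle may be handed out again. */
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int a = socket(AF_INET, SOCK_STREAM, 0);
    printf("first handle: %d\n", a);
    close(a);                                    /* handle number is now free */

    int b = socket(AF_INET, SOCK_STREAM, 0);     /* often, but not always, the same number */
    printf("second handle: %d\n", b);
    close(b);
    return 0;
}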
A socket has two buffers and some other information associated with it. In the context of sockets programming, a socket is your app's interface to one TCP connection (or UDP flow). Your app doesn't read or write data from the network interface card (NIC) directly; everything goes through the kernel's network stack.
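If you want to see those two buffers, you can query their sizes with getsockopt(); a minimal sketch:

/* Sketch: querying the kernel's per-socket send and receive buffers. */
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    int sndbuf = 0, rcvbuf = 0;
    socklen_t len = sizeof(int);

    getsockopt(sock, SOL_SOCKET, SO_SNDBUF, &sndbuf, &len);
    len = sizeof(int);
    getsockopt(sock, SOL_SOCKET, SO_RCVBUF, &rcvbuf, &len);

    printf("send buffer: %d bytes, receive buffer: %d bytes\n", sndbuf, rcvbuf);
    close(sock);
    return 0;
}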
What about setting TCP_NODELAY and then resetting it? It could probably be done just before sending important data, or when we are done sending a message.
send(sock, "notimportant", ...); send(sock, "notimportant", ...); send(sock, "notimportant", ...); int flag = 1; setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, (char *) &flag, sizeof(int)); send(sock, "important data or end of the current message", ...); flag = 0; setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, (char *) &flag, sizeof(int));
As the Linux man page says:
TCP_NODELAY ... setting this option forces an explicit flush of pending output ...
So it would probably be better to set it after the message, but I'm not sure how it works on other systems.
For Unix-domain sockets, you can use fflush(), but I'm thinking you probably mean network sockets. There isn't really a concept of flushing those. The closest things are:
At the end of your session, calling shutdown(sock, SHUT_WR) to close out writes on the socket (a sketch follows this list).
On TCP sockets, disabling the Nagle algorithm with the TCP_NODELAY socket option, which is generally a terrible idea that will not reliably do what you want, even if it seems to take care of it on initial investigation.
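For the shutdown() option, a minimal sketch of an orderly write-side close might look like this (finish_session is just a hypothetical helper name):

/* Sketch: finish writing, signal EOF with shutdown(SHUT_WR),
   then drain whatever the peer still has to send. */
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

void finish_session(int sock)
{
    shutdown(sock, SHUT_WR);          /* no more writes; peer's read() sees EOF */

    char buf[4096];
    ssize_t n;
    while ((n = recv(sock, buf, sizeof(buf), 0)) > 0)
        ;                             /* discard remaining inbound data */

    close(sock);
}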
It's very likely that handling whatever issue is calling for a 'flush' at the user protocol level is going to be the right thing.
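In practice, a user-level "flush" usually means an application acknowledgment. A minimal sketch, assuming a hypothetical protocol where the peer replies with a single ACK byte after processing each payload (that convention is an assumption, not anything built into sockets):

/* Sketch: send the payload, then block until the peer acknowledges it.
   The one-byte ACK is a made-up application-protocol convention. */
#include <sys/socket.h>
#include <sys/types.h>

int send_and_wait_ack(int sock, const void *buf, size_t len)
{
    size_t off = 0;
    while (off < len) {               /* loop to handle partial sends */
        ssize_t n = send(sock, (const char *)buf + off, len - off, 0);
        if (n < 0)
            return -1;
        off += (size_t)n;
    }

    char ack;
    if (recv(sock, &ack, 1, MSG_WAITALL) != 1)   /* peer replies with one byte */
        return -1;

    return 0;  /* data is known to have reached the remote application */
}

Unlike TCP_NODELAY tricks, this confirms that the remote application actually consumed the data, which is what a "flush all the way through to the remote end" really requires.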