I'm programming a C/C++ client/server sockets application. At this point, the client connects to the server every 50 ms and sends a message.
Everything seems to work, but the data flow is not continuous: suddenly, the server stops receiving anything, and then gets 5 messages at once... And sometimes everything works fine...
Does anyone have an idea where this strange behaviour comes from?
Some parts of the code:
Client:
while (true)
{
    if (SDL_GetTicks() - time >= 50)
    {
        // Open a new connection for every single message
        socket = new Socket();
        socket->write("blah");
        message.clear();
        message = socket->read();
        socket->close();
        delete socket;
        time = SDL_GetTicks();
    }
}
Server:
while (true) {
    fd_set readfs;
    struct timeval timeout = {0, 0};   // zero timeout: poll without blocking
    FD_ZERO(&readfs);
    FD_SET(sock, &readfs);
    select(sock + 1, &readfs, NULL, NULL, &timeout);
    if (FD_ISSET(sock, &readfs))
    {
        // The listening socket is readable: a client is waiting to connect
        SOCKADDR_IN csin;
        socklen_t crecsize = sizeof csin;
        SOCKET csock = accept(sock, (SOCKADDR *) &csin, &crecsize);
        sock_err = send(csock, buffer, 32, 0);
        closesocket(csock);
    }
}
Edits:
1. I tried to do
    int flag = 1;
    setsockopt(socket, IPPROTO_TCP, TCP_NODELAY, (char *) &flag, sizeof flag);
in both the client and the server, but the problem is still there.
2. Yes, those connections/disconnections are very inefficient, but when I try to write
socket = new Socket();
while (true)
{
    if (SDL_GetTicks() - time >= 50)
    {
        // Reuse the same connection for every message
        socket->write("blah");
        message.clear();
        message = socket->read();
        time = SDL_GetTicks();
    }
}
then the message is only sent (or received) once...
Finally:
I had forgotten to apply TCP_NODELAY to the client socket on the server side. Now it works perfectly! I moved the processing into threads so that the sockets stay open. Thank you all :)
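For reference, here is a minimal sketch of what that fix looks like, assuming csock is the socket returned by accept() as in the server snippet above; TCP_NODELAY may not be inherited from the listening socket on every platform, so it is set explicitly on the connected socket:

SOCKET csock = accept(sock, (SOCKADDR *) &csin, &crecsize);

// Disable the Nagle algorithm on the *accepted* socket (the
// per-connection socket), not just the listening one.
int flag = 1;
if (setsockopt(csock, IPPROTO_TCP, TCP_NODELAY,
               (char *) &flag, sizeof flag) != 0)
{
    // Handle the error, e.g. via WSAGetLastError() on Windows
}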
This is what is called the "Nagle delay". The algorithm waits in the TCP stack for more data to arrive before actually sending anything to the network, until some timeout expires. So you should either modify the Nagle timeout (http://fourier.su/index.php?topic=249.0) or disable the Nagle delay altogether (http://www.unixguide.net/network/socketfaq/2.16.shtml), so that the data is sent per send call.
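To make the second option concrete, here is a minimal sketch of a small helper that disables Nagle on a connected socket (disable_nagle is just an illustrative name; the char * cast is there because Winsock declares the option value as char * while POSIX uses void *):

#ifdef _WIN32
#  include <winsock2.h>
#  include <ws2tcpip.h>
   typedef SOCKET sock_t;
#else
#  include <netinet/in.h>
#  include <netinet/tcp.h>
#  include <sys/socket.h>
   typedef int sock_t;
#endif

/* Disable the Nagle algorithm so that each send() is handed to the
 * network immediately instead of being buffered. Returns 0 on
 * success, non-zero on failure (check errno / WSAGetLastError()). */
static int disable_nagle(sock_t s)
{
    int flag = 1;
    return setsockopt(s, IPPROTO_TCP, TCP_NODELAY,
                      (char *) &flag, sizeof flag);
}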
As others have already replied, the delays you see are due to TCP's built-in Nagle algorithm, which can be disabled by setting the TCP_NODELAY socket option.
I would like to point out that your socket communications are very inefficient due to the constant connects and disconnects. Every time the client connects to the server, a three-way handshake takes place, and the connection tear-down requires four packets to complete. Basically you lose most of the benefits of TCP but incur all of its drawbacks.
It would be much more efficient for each client to maintain a persistent connection to the server. select(2), or even better, epoll(7) on Linux or kqueue(2) on FreeBSD and Mac, are very convenient mechanisms for handling I/O on multiple sockets.
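To illustrate that approach, here is a minimal sketch of a select(2)-based server loop that keeps every client connection open and echoes back whatever it receives. It is POSIX-style code and assumes listen_fd is an already bound and listening socket; it is meant as a starting point, not a drop-in replacement for the code above:

#include <sys/types.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

void serve(int listen_fd)
{
    fd_set master;                  /* every socket we are watching */
    FD_ZERO(&master);
    FD_SET(listen_fd, &master);
    int max_fd = listen_fd;

    for (;;)
    {
        fd_set readfs = master;     /* select() modifies the set in place */
        if (select(max_fd + 1, &readfs, NULL, NULL, NULL) < 0)
            break;                  /* real code would inspect errno here */

        for (int fd = 0; fd <= max_fd; fd++)
        {
            if (!FD_ISSET(fd, &readfs))
                continue;

            if (fd == listen_fd)
            {
                /* New client: accept once and keep the socket around */
                int csock = accept(listen_fd, NULL, NULL);
                if (csock >= 0)
                {
                    FD_SET(csock, &master);
                    if (csock > max_fd)
                        max_fd = csock;
                }
            }
            else
            {
                /* Existing client sent data: reply on the same
                 * connection instead of closing it after each message */
                char buf[256];
                ssize_t n = recv(fd, buf, sizeof buf, 0);
                if (n <= 0)
                {
                    close(fd);          /* client went away or error */
                    FD_CLR(fd, &master);
                }
                else
                {
                    send(fd, buf, (size_t) n, 0);
                }
            }
        }
    }
}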