Faster detection of a broken socket in Java/Android

Background

My application gathers data from the phone and sends it to a remote server.
The data is first stored in memory (or on file when it's big enough) and every X seconds or so the application flushes that data and sends it to the server.
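For illustration, here's a minimal sketch of that buffer-and-flush scheme; all names here are mine, not from the actual app, and the network write itself is left out:

import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical class; only illustrates the scheme described above.
class DataBuffer {
    private final ConcurrentLinkedQueue<String> pending = new ConcurrentLinkedQueue<>();
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    // buffer a sample in memory until the next flush
    void record(String data) {
        pending.add(data);
    }

    // every X seconds, drain the buffer and push it to the server
    void start(long intervalSeconds) {
        scheduler.scheduleWithFixedDelay(() -> {
            String data;
            while ((data = pending.poll()) != null) {
                // sendToServer(data); // the actual network write, shown further down
            }
        }, intervalSeconds, intervalSeconds, TimeUnit.SECONDS);
    }
}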

It's mission critical that every single piece of data is sent successfully; I'd rather send the data twice than not at all.

Problem

As a test I set up the app to send data with a timestamp every 5 seconds, which means that every 5 seconds a new line appears on the server.
If I kill the server I expect the lines to stop; the data should now be written to memory instead.

When I enable the server again I should be able to confirm that no events are missing.

The problem, however, is that when I kill the server it takes about 20 seconds for IO operations to start failing. During those 20 seconds the app happily sends the events and removes them from memory, but they never reach the server and are lost forever.

I need a way to make certain that the data actually reaches the server.

This is possibly one of the more basic TCP questions, but nonetheless I haven't found any solution to it.

Stuff I've tried

  • Setting Socket.setTcpNoDelay(true)
  • Removing all buffered writers and just using OutputStream directly
  • Flushing the stream after every send

Additional info

I cannot change how the server responds, meaning I can't make the server acknowledge the data (beyond the mechanics of TCP, that is); the server will just silently accept the data without sending anything back.

Snippet of code

Initialization of the class:

socket = new Socket(host, port);
// disable Nagle's algorithm so small writes go out immediately
socket.setTcpNoDelay(true);

Where data is sent:

while (!dataList.isEmpty()) {
    String data = dataList.removeFirst();
    inMemoryCount -= data.length();
    try {
        OutputStream os = socket.getOutputStream();
        os.write(data.getBytes());
        // flush() returns successfully even if the bytes only ever reach
        // the local TCP send buffer and never arrive at the server
        os.flush();
    }
    catch (IOException e) {
        // put the data back so it is retried on the next flush
        inMemoryCount += data.length();
        dataList.addFirst(data);
        socket = null;
        return false;
    }
}

return true;

Update 1

I'll say this again: I cannot change the way the server behaves.
It receives data over TCP and UDP and does not send any data back to confirm receipt. This is a fact; sure, in a perfect world the server would acknowledge the data, but that simply will not happen.


Update 2

The solution posted by Fraggle works perfectly (closing the socket and waiting for the input stream to be closed).
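For reference, a minimal sketch of that approach; it assumes the server closes its end of the connection once it has consumed everything, and the helper name is mine:

import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;

// Hypothetical helper sketching Fraggle's approach: half-close the socket,
// then block until the peer closes its side, which only happens after it
// has read everything we wrote.
static void flushByClosing(Socket socket) throws IOException {
    socket.getOutputStream().flush();
    socket.shutdownOutput();                 // sends FIN: "no more data from us"
    InputStream in = socket.getInputStream();
    while (in.read() != -1) {
        // drain and discard anything still in flight until EOF
    }
    socket.close();                          // EOF reached: clean close confirmed
    // if the connection broke instead, read() throws IOException and the
    // caller treats the whole batch as unsent
}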

This solution, however, comes with a new set of problems.
Since I'm on a phone I have to assume that the user cannot send an unlimited amount of data, so I would like to keep all traffic to a minimum if possible.

I'm not worried by the overhead of opening a new socket; those few bytes will not make a difference. What I am worried about, however, is that every time I connect to the server I have to send a short string identifying who I am.

The string itself is not that long (around 30 characters) but that adds up if I close and open the socket too often.

One solution is to "flush" the data only every X bytes; the problem is that I have to choose X wisely. If it's too big, too much duplicate data will be resent when the socket goes down; if it's too small, the reconnection overhead is too big.


Final update

My final solution is to "flush" the socket by closing it every X bytes; if anything went wrong, those X bytes are sent again.

This will possibly create some duplicate events on the server but that can be filtered there.
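A rough sketch of how that final scheme could look; connectAndIdentify() and flushByClosing() are hypothetical helpers (the latter is the close-based flush from Update 2), and BATCH_LIMIT is the X to tune:

import java.io.IOException;
import java.net.Socket;

// Illustrative only; stub bodies stand in for code shown or described elsewhere.
class BatchingSender {
    private static final int BATCH_LIMIT = 4096; // bigger X: fewer reconnects,
                                                 // more duplicates on failure
    private final StringBuilder batch = new StringBuilder();

    void record(String data) throws InterruptedException {
        batch.append(data);
        if (batch.length() < BATCH_LIMIT) {
            return; // keep buffering until the batch is worth a connection
        }
        while (true) {
            try {
                Socket socket = connectAndIdentify();     // sends the ~30-char id
                socket.getOutputStream().write(batch.toString().getBytes("UTF-8"));
                flushByClosing(socket);                   // close-based flush (Update 2)
                batch.setLength(0);                       // confirmed; drop the batch
                return;
            } catch (IOException e) {
                // the server may have seen part of the batch: resend all of it
                // and let the server filter the duplicates
                Thread.sleep(1000);                       // crude retry back-off
            }
        }
    }

    private Socket connectAndIdentify() throws IOException {
        throw new UnsupportedOperationException("open socket, send identity string");
    }

    private void flushByClosing(Socket socket) throws IOException {
        throw new UnsupportedOperationException("see the sketch under Update 2");
    }
}

Whether 4096 is a sensible limit depends entirely on the data rate; the trade-off is exactly the one described above.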

Asked Jun 11 '11 by Nicklas A.


2 Answers

Dan's solution is the one I'd suggest right after reading your question; he's got my up-vote.

Now can I suggest working around the problem? I don't know if this is possible with your setup, but one way of dealing with badly designed software (this is your server, sorry) is to wrap it, or in fancy design-pattern talk provide a facade, or in plain talk put a proxy in front of your pain-in-the-behind server. Design a meaningful ack-based protocol, have the proxy keep enough data samples in memory to detect and tolerate broken connections, and so on. In short, have the phone app connect to a proxy residing somewhere on a "server-grade" machine using the "good" protocol, then have the proxy connect to the server process using the "bad" protocol. The client is responsible for generating data; the proxy is responsible for dealing with the server.
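As a purely illustrative sketch of the phone-to-proxy leg of such a protocol (every name and message format here is made up): the phone sends "<seq> <payload>\n" and the proxy replies "ACK <seq>\n" once the record is safely in its hands, after which the proxy forwards the payload to the real server over the ack-less protocol.

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.net.Socket;

// Hypothetical client side of an ack-based phone-to-proxy protocol.
class ProxyClient {
    private final Writer out;
    private final BufferedReader in;
    private long seq = 0;

    ProxyClient(Socket proxy) throws IOException {
        out = new OutputStreamWriter(proxy.getOutputStream(), "UTF-8");
        in = new BufferedReader(new InputStreamReader(proxy.getInputStream(), "UTF-8"));
    }

    void send(String payload) throws IOException {
        long id = seq++;
        out.write(id + " " + payload + "\n");
        out.flush();
        String reply = in.readLine();          // block until the proxy acks
        if (!("ACK " + id).equals(reply)) {
            throw new IOException("record " + id + " was not acknowledged");
        }
        // only now is it safe to drop the record from the phone's buffer
    }
}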

Just another idea.

Edit 0:

You might find this one entertaining: The ultimate SO_LINGER page, or: why is my tcp not reliable.

Answered by Nikolai Fetissov


The bad news: You can't detect a failed connection except by trying to send or receive data on that connection.

The good news: As you say, it's OK if you send duplicate data. So your solution is not to worry about detecting failure in less than the 20 seconds it now takes. Instead, simply keep a circular buffer containing the last 30 or 60 seconds' worth of data. Each time you detect a failure and then reconnect, you can start the session by resending that saved data.

(This could get to be problematic if the server repeatedly cycles up and down in less than a minute; but if it's doing that, you have other problems to deal with.)
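A sketch of that circular-buffer idea; the class name, window size, and methods are illustrative, not from the answer:

import java.util.ArrayDeque;
import java.util.Deque;

// Remember roughly the last 60 seconds of sent records and replay them
// after every reconnect; duplicates get filtered on the receiving side.
class ReplayBuffer {
    private static final long WINDOW_MS = 60_000;

    private static final class Entry {
        final long sentAt;
        final String data;
        Entry(long sentAt, String data) { this.sentAt = sentAt; this.data = data; }
    }

    private final Deque<Entry> recent = new ArrayDeque<>();

    // call after every successful write
    synchronized void remember(String data) {
        long now = System.currentTimeMillis();
        recent.addLast(new Entry(now, data));
        while (!recent.isEmpty() && now - recent.peekFirst().sentAt > WINDOW_MS) {
            recent.removeFirst();              // age out entries older than the window
        }
    }

    // call right after reconnecting, before any new data is sent
    synchronized Deque<String> toReplay() {
        Deque<String> copy = new ArrayDeque<>();
        for (Entry e : recent) {
            copy.addLast(e.data);
        }
        return copy;
    }
}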

Answered by Dan Breslau