When working with sockets in Java, how can you tell whether the client has finished sending all (binary) data before you can start processing it? Consider for example:
istream = new BufferedInputStream(socket.getInputStream());
ostream = new BufferedOutputStream(socket.getOutputStream());

byte[] buffer = new byte[BUFFER_SIZE];
int count;
while (istream.available() > 0 && (count = istream.read(buffer)) != -1)
{
    // do something..
}
// assuming all input has been read
ostream.write(getResponse());
ostream.flush();
I've read similar posts on SO such as this, but couldn't find a conclusive answer. While my solution above works, my understanding is that you can never really tell whether the client has finished sending all data. If, for instance, the client sends a few chunks of data and then blocks waiting on another data source before it can send more, the code above may well conclude that the client is done, since istream.available() will return 0 for the bytes received so far.
Keep a single ServerSocket outside of the loop -- the loop should begin at accept(). Put the ServerSocket creation into a separate try/catch block. Otherwise, each iteration will open a new socket that tries to listen on the same port, even though only a client connection was closed, not the serverSocket.
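A minimal sketch of that structure (the port number and the handleClient method are just placeholders):

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class Server {
    public static void main(String[] args) throws IOException {
        // Create the ServerSocket once, outside the accept loop.
        try (ServerSocket serverSocket = new ServerSocket(8080)) {
            while (true) {
                // Each iteration accepts one client on the same listening socket.
                try (Socket client = serverSocket.accept()) {
                    handleClient(client); // hypothetical per-connection handler
                } catch (IOException e) {
                    // A failure on one connection should not kill the listener.
                    e.printStackTrace();
                }
            }
        }
    }

    private static void handleClient(Socket client) throws IOException {
        // read from client.getInputStream(), write to client.getOutputStream()
    }
}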
Stream sockets allow processes to communicate using TCP. A stream socket provides bidirectional, reliable, sequenced, and unduplicated flow of data with no record boundaries. After the connection has been established, data can be read from and written to these sockets as a byte stream.
You never got end of stream (a null from readLine(), or -1 from read()) because the peer never closed the connection. That's what 'end of stream' means. It doesn't mean 'no more data for the time being'.
Yes, you're right - using available() like this is unreliable. Personally I very rarely use available(). If you want to read until you reach the end of the stream (as per the question title), keep calling read() until it returns -1. That's the easy bit. The hard bit is if you don't want the end of the stream, but the end of "what the server wants to send you at the moment."
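For the easy bit, a sketch of reading until end of stream (the buffer size and method name are just illustrative):

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

// Reads the stream fully: read() only returns -1 once the peer has closed its side.
static byte[] readToEndOfStream(InputStream in) throws IOException {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    byte[] buffer = new byte[8192];
    int count;
    while ((count = in.read(buffer)) != -1) {
        out.write(buffer, 0, count);
    }
    return out.toByteArray();
}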
As the others have said, if you need to have a conversation over a socket, you must make the protocol explain where the data finishes. Personally I prefer the "length prefix" solution to the "end of message token" solution where it's possible - it generally makes the reading code a lot simpler. However, it can make the writing code harder, as you need to work out the length before you send anything. This is a pain if you could be sending a lot of data.
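A minimal sketch of length-prefix framing using DataInputStream/DataOutputStream (the 4-byte int prefix is just one possible choice):

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Writer side: send the length first, then the payload.
static void writeMessage(DataOutputStream out, byte[] payload) throws IOException {
    out.writeInt(payload.length); // 4-byte length prefix
    out.write(payload);
    out.flush();
}

// Reader side: the prefix tells us exactly how many bytes make up one message.
static byte[] readMessage(DataInputStream in) throws IOException {
    int length = in.readInt();
    byte[] payload = new byte[length];
    in.readFully(payload); // blocks until the whole message has arrived
    return payload;
}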
Of course, you can mix and match solutions - in particular, if your protocol deals with both text and binary data, I would strongly recommend length-prefixing strings rather than null-terminating them (or anything similar). Decoding string data tends to be a lot easier if you can pass the decoder a complete array of bytes and just get a string back - you don't need to worry about reading to half way through a character, for example. You could use this as part of your protocol but still have overall "records" (or whatever you're transmitting) with an "end of data" record to let the reader process the data and respond.
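Length-prefixed UTF-8 strings might look like this (a sketch; the charset and prefix size are assumptions, not part of any particular protocol):

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

// Encode the whole string up front so the exact byte count is known before writing.
static void writeString(DataOutputStream out, String s) throws IOException {
    byte[] bytes = s.getBytes(StandardCharsets.UTF_8);
    out.writeInt(bytes.length);
    out.write(bytes);
}

// Read the complete byte array, then decode it in one go -
// no risk of stopping half way through a multi-byte character.
static String readString(DataInputStream in) throws IOException {
    byte[] bytes = new byte[in.readInt()];
    in.readFully(bytes);
    return new String(bytes, StandardCharsets.UTF_8);
}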
Of course, all of this protocol design stuff is moot if you're not in control of the protocol :(
I think this is more a task for the protocol, assuming you are the one writing both the transmitting and receiving sides of the application. For example, you could implement a simple logical protocol and divide your data into packets, then divide each packet into two parts: a head and a body. The head consists of a predefined starting sequence and contains the number of bytes in the body. Or forget about the starting sequence and simply transfer the number of bytes in the body as the first byte of the packet. That would solve your problem.