I am developing a client-server application in C#, where the server sends a byte array containing a bitmap to the client, the client displays it on the screen and sends an "OK" back to the server, the server sends another image, and so on.
The length of the image buffer varies, usually between 60 KB and 90 KB, but I've found that it doesn't seem to matter. If I run the client and the server on the same computer, using localhost, everything works fine: the server calls BeginSend, the client calls EndReceive, and the whole buffer is transmitted.
However, I am now testing this over a wireless network, and the data no longer arrives in one piece. It's always the same: a first packet with 1460 bytes is received, and then a second packet contains the rest.
I can work around this by concatenating the two byte arrays I receive, but that doesn't seem right. I'm not even sure why this is happening. Is it some restriction of the network? And why doesn't C# wait for all the data to be transmitted? I mean, it's TCP; I shouldn't have to worry about this, right?
Anyway, any help would be great!
Cheers
It's TCP - you should treat the data as a stream. You shouldn't care how the stream is broken up into packets, or make assumptions about it.
If you need to receive a single "block" of data, the simplest way to do that reliably is to prefix it with the length (e.g. as a 32-bit value). You read the length (noting that even those bytes could be split across multiple packets) and then repeatedly read (whether synchronously or asynchronously) taking note of how much you read each time, until you've read all the data.
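Here is a minimal synchronous sketch of that length-prefix idea using NetworkStream. The helper names ReceiveBlock and ReadExact are just illustrative, and it assumes the sender writes the 4-byte length with BitConverter so the byte order matches on both ends:

```csharp
using System;
using System.IO;
using System.Net.Sockets;

static class LengthPrefixedReader
{
    // Reads one length-prefixed block: a 4-byte length, then that many bytes of payload.
    public static byte[] ReceiveBlock(NetworkStream stream)
    {
        byte[] lengthBytes = ReadExact(stream, 4);
        int length = BitConverter.ToInt32(lengthBytes, 0);
        return ReadExact(stream, length);
    }

    // Keeps calling Read until exactly 'count' bytes have arrived,
    // because a single Read may return only part of the data.
    static byte[] ReadExact(NetworkStream stream, int count)
    {
        byte[] buffer = new byte[count];
        int offset = 0;
        while (offset < count)
        {
            int read = stream.Read(buffer, offset, count - offset);
            if (read == 0)
                throw new EndOfStreamException("Connection closed before the full block arrived.");
            offset += read;
        }
        return buffer;
    }
}
```

The sending side would do the mirror image: write BitConverter.GetBytes(data.Length) first, then the image bytes themselves.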
Have a read of section 9.2.4:
When dissecting an application-layer protocol, you cannot assume that each TCP packet contains exactly one application-layer message. One application-layer message can be split across several TCP packets.
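To make that concrete in the asynchronous style the question uses, here is a rough sketch of accumulating EndReceive results until the whole message has arrived. The ReceiveState class and the fixed expectedLength parameter are assumptions for illustration; in practice the length would come from a length prefix or some other framing agreed with the server:

```csharp
using System;
using System.Net.Sockets;

// Illustrative state object carried between BeginReceive callbacks.
class ReceiveState
{
    public Socket Socket;
    public byte[] Buffer;      // full message buffer, sized to the expected length
    public int BytesReceived;  // how much has arrived so far
}

static class AsyncReceiver
{
    public static void StartReceive(Socket socket, int expectedLength)
    {
        var state = new ReceiveState
        {
            Socket = socket,
            Buffer = new byte[expectedLength],
            BytesReceived = 0
        };
        socket.BeginReceive(state.Buffer, 0, expectedLength, SocketFlags.None,
                            OnReceive, state);
    }

    static void OnReceive(IAsyncResult ar)
    {
        var state = (ReceiveState)ar.AsyncState;
        int read = state.Socket.EndReceive(ar);
        if (read == 0) return; // connection closed by the peer

        state.BytesReceived += read;
        if (state.BytesReceived < state.Buffer.Length)
        {
            // Only part of the message has arrived; ask for the rest.
            state.Socket.BeginReceive(state.Buffer, state.BytesReceived,
                                      state.Buffer.Length - state.BytesReceived,
                                      SocketFlags.None, OnReceive, state);
        }
        else
        {
            // The whole message is now in state.Buffer; hand it to the application here.
        }
    }
}
```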