I'm currently building an application that is intended to run on an embedded system hooked up to a cellular data card. I've been made aware of some low-data plans from several carriers, and our application only generates about 5 bytes/second, lending itself to such plans.
However, I can't figure out whether the TCP/IP header overhead (about 40 bytes per packet, give or take) counts toward data usage. Since I need real-time delivery, I've disabled Nagle's algorithm, which means each 5-byte burst goes out in its own packet with its own headers. If TCP/IP headers are factored into the data-usage pricing, the header overhead will dwarf the actual data I'm sending.
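For reference, disabling Nagle's algorithm is done with the `TCP_NODELAY` socket option. This is a minimal sketch in Python; the host and port are placeholders, not part of the original question:

```python
import socket

# Create a TCP socket and disable Nagle's algorithm so each small
# write is transmitted immediately instead of being coalesced.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# Placeholder endpoint -- with Nagle disabled, each 5-byte sendall()
# below would leave the host as its own packet:
# sock.connect(("example.com", 9000))
# sock.sendall(b"12345")
```

The trade-off is exactly the one described above: lower latency, but one full set of headers per tiny payload.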
Some background on where the overhead comes from: each TCP segment is carried inside an IP packet, so every transmission includes both an IP header and a TCP header in addition to the payload. The TCP header has 10 mandatory fields totaling 20 bytes (octets), holding information about the connection and the data being sent: source port, destination port, sequence number, acknowledgement number, data offset, reserved bits, flags, window size, checksum, and urgent pointer. The IPv4 header adds another 20 bytes minimum. As data travels down the stack, each layer prepends its own header (and the link layer also appends a trailer); the receiver strips them off in reverse order on the way up.
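Putting numbers on that for the 5-byte bursts described in the question (minimum header sizes assumed; options would add more):

```python
# Back-of-envelope overhead for one 5-byte application burst
payload = 5        # application bytes per segment
tcp_header = 20    # minimum TCP header
ip_header = 20     # minimum IPv4 header

on_wire = payload + tcp_header + ip_header
overhead_fraction = (tcp_header + ip_header) / on_wire

print(on_wire)             # bytes actually transmitted per burst
print(overhead_fraction)   # fraction of each packet that is headers
```

At one burst per second, roughly 45 bytes go on the wire for every 5 bytes of data, so headers account for about 89% of the traffic before counting acknowledgements.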
I can't answer definitively, but I would assume they must; otherwise the scheme could be exploited by stuffing extra data into the headers. With TCP, every 5-byte send goes out as a roughly 45-byte packet, and you also receive a roughly 40-byte acknowledgement packet in return. You could try UDP instead of TCP so that you don't spend data on the acknowledgement packets.
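A sketch of that UDP alternative, assuming a placeholder endpoint: the 8-byte UDP header replaces the 20-byte TCP header, and nothing is sent back in acknowledgement. The cost is that UDP gives no delivery guarantee, so lost bursts simply vanish.

```python
import socket

# Datagram socket: each sendto() is one packet with an 8-byte UDP
# header plus the IP header -- no handshake, no ACK traffic back.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# Placeholder endpoint for a 5-byte burst:
# sock.sendto(b"12345", ("example.com", 9000))
```

Per 5-byte burst that is 5 + 8 + 20 = 33 bytes one way, versus about 45 bytes out plus an ACK back with TCP.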
According to an email from Sprint network engineering, "Any data that goes through our network, including network Header [sic.] would be billed or count towards your plan."