
How to prevent packet fragmentation for an HttpWebRequest

I am having a problem using HttpWebRequest against an HTTP daemon on an embedded device. The problem appears to be that there is enough of a delay between the HTTP headers being written to the socket stream and the HTTP payload (a POST) that the socket releases what's in the socket buffer to the server. This results in the HTTP request being split over two packets (fragmentation).

This is perfectly valid, of course; however, the server at the other end doesn't cope with it if the packets are split by more than about 1.8 ms. So I am wondering if there are any realistic ways to control this (on the client).

There do not appear to be any properties on HttpWebRequest that give this level of control over the socket used for the send, and one doesn't appear to be able to access the socket itself (i.e. via reflection), because it is only created during the send and released afterwards (as part of the outbound HTTP connection pooling). The AllowWriteStreamBuffering property just buffers the body content within the web request (so it's still available for redirects etc.), and doesn't appear to affect the way the overall request is written to the socket.

So what to do?

(I'm really trying to avoid having to rewrite the HTTP client from the socket up.)

One option might be to write some kind of proxy that the HttpWebRequest sends to (maybe via the ServicePoint), and have that implementation buffer the entire TCP request. But that seems like a lot of hard work.
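For what it's worth, the buffering core of such a proxy is fairly small. Here is a rough sketch in Python rather than .NET (and `read_full_request` is my own name, not a library call): it accumulates a complete request, headers plus `Content-Length` worth of body, so a proxy could then forward it to the device in a single send, and the header/body split would never reach the embedded server.

```python
import socket

def read_full_request(conn):
    """Read one complete HTTP request (headers plus body) from conn.

    A buffering proxy would accept the client's connection, call this to
    accumulate the whole request, then forward the returned bytes to the
    device in one send().
    """
    data = b""
    # Read until the blank line that terminates the headers arrives.
    while b"\r\n\r\n" not in data:
        chunk = conn.recv(4096)
        if not chunk:
            return data  # peer closed before the headers completed
        data += chunk
    head, _, body = data.partition(b"\r\n\r\n")
    # Find Content-Length so we know how much body remains to read.
    length = 0
    for line in head.split(b"\r\n")[1:]:
        name, _, value = line.partition(b":")
        if name.strip().lower() == b"content-length":
            length = int(value.strip())
    while len(body) < length:
        chunk = conn.recv(4096)
        if not chunk:
            break
        body += chunk
    return head + b"\r\n\r\n" + body
```

This ignores chunked encoding and pipelining, which is probably fine against a simple embedded daemon, but a real proxy would need to handle them.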

It also works fine when I'm running Fiddler (for the same reason), but that's not really an option in our production environment...

[PS: I know it's definitely the interval between the fragmented packets that's the problem, because I knocked up a socket-level test where I explicitly controlled the fragmentation using a NoDelay socket.]

piers7 asked Feb 05 '10

People also ask

How do you avoid network fragmentation?

In order to avoid IP fragmentation, you must determine the size of the IP packets to send over the network. Two approaches are generally used: path MTU discovery and setting the maximum segment size (MSS). Path MTU discovery is used to identify the MTU end-to-end, to prevent packet fragmentation.
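On Linux, the MSS approach can be applied per socket before connecting. A minimal Python sketch, assuming 1400 bytes is a conservative value for the path:

```python
import socket

# Clamp the TCP maximum segment size before connecting, so that no
# single segment the stack emits exceeds the smallest MTU expected on
# the path. 1400 is an assumed conservative value, safely below the
# common 1500-byte Ethernet MTU minus IP/TCP headers.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_MAXSEG, 1400)
# ... connect and send as usual; the kernel advertises this MSS to the
# peer in the TCP MSS option during connection setup.
```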

How do you deal with packet fragmentation?

A node can prevent its packets from being fragmented by setting the Don't Fragment (DF) flag in those packets to 1. Packets that must be fragmented but have the DF bit set are discarded.
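On Linux this maps to the IP_MTU_DISCOVER socket option. A minimal Python sketch; the numeric fallbacks are the values from `<linux/in.h>`, since CPython doesn't always export these constants by name:

```python
import socket

# Linux-specific: IP_MTU_DISCOVER = 10, IP_PMTUDISC_DO = 2 come from
# <linux/in.h>; fall back to the raw numbers if the socket module
# doesn't export them.
IP_MTU_DISCOVER = getattr(socket, "IP_MTU_DISCOVER", 10)
IP_PMTUDISC_DO = getattr(socket, "IP_PMTUDISC_DO", 2)  # always set DF

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)
# With DF set, an oversized datagram fails locally (EMSGSIZE) or is
# dropped by any router that would otherwise have fragmented it.
```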

What causes packet fragmentation?

If a packet is too big to travel between two routing devices, it gets broken into fragments. These fragments look like IP packets in their own right and can traverse the network; they are reassembled when they reach their destination.

Can a packet be fragmented twice?

Yes: fragmented packets, travelling further along the path, can be fragmented again.


1 Answer

In the end the vendor pushed out a firmware upgrade that included a new version of HTTPD, and the problem went away. They were using BusyBox Linux, and apparently there was some other problem with the HTTPD implementation that they had suffered from.

In terms of my original question, I don't think there is any reliable way of doing it, apart from writing a socket proxy. Some of the workarounds I played with above worked by luck, not design (because they happened to make .NET send the whole request in one go).

piers7 answered Oct 22 '22