WCF HttpTransport: streamed vs buffered TransferMode


I have a self-hosted WCF service (.NET Framework 4) that is exposed through an HttpTransport-based custom binding. The binding uses a custom MessageEncoder that is essentially a BinaryMessageEncoder with gzip compression added.

A Silverlight and a Windows client consume the web service.

Problem: in some cases the service had to return very large objects and occasionally threw OutOfMemory exceptions when responding to several concurrent requests (even though Task Manager reported only ~600 MB for the process). The exception happened in the custom encoder, when the message was about to be compressed, but I believe this was just a symptom and not the cause. The exception stated "failed to allocate x MB" where x was 16, 32 or 64, not an overly huge amount; for this reason I believe something else had already pushed the process close to some limit before that.

The service endpoint is defined as follows:

var transport = new HttpTransportBindingElement(); // quotas omitted for simplicity
var binaryEncoder = new BinaryMessageEncodingBindingElement(); // ReaderQuotas omitted for simplicity
var customBinding = new CustomBinding(new GZipMessageEncodingBindingElement(binaryEncoder), transport);

Then I did an experiment: I changed TransferMode from Buffered to StreamedResponse (and modified the client accordingly). This is the new service definition:

var transport = new HttpTransportBindingElement()
{
    TransferMode = TransferMode.StreamedResponse // <-- this is the only change
};
var binaryEncoder = new BinaryMessageEncodingBindingElement(); // ReaderQuotas omitted for simplicity
var customBinding = new CustomBinding(new GZipMessageEncodingBindingElement(binaryEncoder), transport);
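
For completeness, the corresponding client-side change boils down to the same switch on the client's transport binding element (a sketch only, assuming the Windows client mirrors the service's custom binding):

// Sketch of the corresponding client-side binding (assumption: the client mirrors the
// service's custom binding; the Silverlight client is configured separately).
var clientTransport = new HttpTransportBindingElement()
{
    TransferMode = TransferMode.StreamedResponse // so the client also streams the response instead of buffering it
};
var clientEncoder = new BinaryMessageEncodingBindingElement();
var clientBinding = new CustomBinding(new GZipMessageEncodingBindingElement(clientEncoder), clientTransport);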

Magically, no more OutOfMemory exceptions. The service is a bit slower for small messages, but the difference gets smaller and smaller as message size grows. The behavior (both for speed and for OutOfMemory exceptions) is reproducible: I ran several tests with both configurations and the results are consistent.

Problem solved, BUT: I cannot explain to myself what is happening here. My surprise stems from the fact that I did not change the contract in any way, i.e. I did not create a contract with a single Stream parameter, etc., as you usually do for streamed messages. I am still using my complex classes with the same DataContract and DataMember attributes. I just modified the endpoint, that's all.

I thought that setting TransferMode was just a way to enable streaming for properly formed contracts, but obviously there is more to it than that. Can anybody explain what actually happens under the hood when you change the TransferMode?

asked Oct 28 '10 by Francesco De Vittori


2 Answers

As you use 'GZipMessageEncodingBindingElement', I assume you are using the MS GZIP sample.

Have a look at DecompressBuffer() in GZipMessageEncoderFactory.cs and you will understand what's going on in buffered mode.

For the sake of example, let's say you have a message of uncompressed size 50M, compressed size 25M.

DecompressBuffer will receive an 'ArraySegment<byte> buffer' parameter of (1) 25M size. The method will then create a MemoryStream and uncompress the buffer into it, using (2) 50M. Then it will do a MemoryStream.ToArray(), copying the memory stream buffer into a new (3) 50M byte array. Then it takes another byte array from the BufferManager of AT LEAST (4) 50M+; in reality it can be a lot more. In my case it was always 67M for a 50M array.

At the end of DecompressBuffer, (1) will be returned to the BufferManager (which seems never to get cleared by WCF), while (2) and (3) are subject to GC (which is asynchronous, and if you produce garbage faster than the GC collects it, you might get OOM exceptions even though there would be enough memory once everything is cleaned up). (4) will presumably be given back to the BufferManager in your BinaryMessageEncodingBindingElement.ReadMessage().
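
To make those four allocations easier to follow, here is a condensed, annotated paraphrase of the sample's DecompressBuffer (a sketch, not the verbatim sample code; the message-offset handling is omitted):

using System;
using System.IO;
using System.IO.Compression;
using System.ServiceModel.Channels;

static class GZipDecompressSketch
{
    // Paraphrase of GZipMessageEncoderFactory.DecompressBuffer, annotated with the
    // allocations (1)-(4) described above.
    public static ArraySegment<byte> DecompressBuffer(ArraySegment<byte> buffer, BufferManager bufferManager)
    {
        // (1) 'buffer' holds the compressed message (25M in the example), taken from the BufferManager.
        var compressedStream = new MemoryStream(buffer.Array, buffer.Offset, buffer.Count);

        // (2) the decompressed bytes accumulate in a growing MemoryStream (50M).
        var decompressedStream = new MemoryStream();
        using (var gzip = new GZipStream(compressedStream, CompressionMode.Decompress))
        {
            gzip.CopyTo(decompressedStream); // the sample uses an explicit read/write loop instead
        }

        // (3) ToArray() copies the stream's internal buffer into yet another 50M byte[].
        byte[] decompressedBytes = decompressedStream.ToArray();

        // (4) a pooled array of AT LEAST the decompressed size is taken (often larger, e.g. 67M).
        byte[] pooled = bufferManager.TakeBuffer(decompressedBytes.Length);
        Array.Copy(decompressedBytes, 0, pooled, 0, decompressedBytes.Length);

        // (1) goes back to the pool; (2) and (3) are left for the (asynchronous) GC.
        bufferManager.ReturnBuffer(buffer.Array);

        return new ArraySegment<byte>(pooled, 0, decompressedBytes.Length);
    }
}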

To sum up, for your 50M message the buffered scenario will temporarily take up 25 + 50 + 50 + e.g. 65 = 190M of memory, some of it subject to asynchronous GC, some of it managed by the BufferManager, which in the worst case means it keeps lots of unused arrays in memory that are neither usable for a subsequent request (e.g. because they are too small) nor eligible for GC. Now imagine you have multiple concurrent requests: the BufferManager will create separate buffers for all of them, and those will never be cleaned up unless you manually call BufferManager.Clear(), and I don't know of a way to do that with the buffer managers used by WCF. See also this question: How can I prevent BufferManager / PooledBufferManager in my WCF client app from wasting memory?
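
For illustration, a BufferManager you create yourself can be cleared explicitly; the pooled managers that WCF creates internally are not exposed, which is exactly the problem (an illustrative snippet, not part of the GZip sample):

using System.ServiceModel.Channels;

class BufferManagerDemo
{
    static void Main()
    {
        // Create a standalone pool: max total pool size, then max size of a single buffer.
        BufferManager manager = BufferManager.CreateBufferManager(512 * 1024 * 1024, 64 * 1024 * 1024);

        byte[] buffer = manager.TakeBuffer(50 * 1024 * 1024); // may hand back a larger array than requested
        manager.ReturnBuffer(buffer); // returns it to the pool; the memory stays allocated
        manager.Clear();              // releases the pooled arrays so the GC can reclaim them
    }
}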

Update: After migrating to IIS7 HTTP compression (wcf conditional compression), memory consumption, CPU load and startup time dropped (I don't have the numbers handy). After then migrating from buffered to streamed TransferMode (How can I prevent BufferManager / PooledBufferManager in my WCF client app from wasting memory?), memory consumption of my WCF client app dropped from 630M (peak) / 470M (continuous) to 270M (both peak and continuous)!

answered Sep 20 '22 by Evgeniy Berezovsky


I've had some experience with WCF and streaming.

Basically, if you don't set the TransferMode to Streamed, it defaults to Buffered. So if you are sending large pieces of data, WCF builds up the whole message in memory on your end and sends it only once all the data is loaded and ready to go. This is why you were getting out-of-memory errors: the data was very large and needed more memory than the process could allocate.

Now if you use Streamed, it will immediately start sending chunks of data to the other endpoint instead of buffering the whole message, keeping memory usage minimal.

But this doesn't mean that the receiver also has to be set up for streaming: it could be set up to buffer, and it will then run into the same problem the sender did if it does not have sufficient memory for your data.

For the best results, both endpoints should be set up to handle streaming (for large data files).
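
As a sketch of what a contract built around streaming typically looks like (hypothetical names, not the asker's contract):

using System.IO;
using System.ServiceModel;

// Hypothetical streaming-oriented contract: the Stream return value is what actually
// gets streamed to the caller.
[ServiceContract]
public interface IFileService
{
    [OperationContract]
    Stream GetFile(string name);
}

static class StreamingBindingSketch
{
    // Both endpoints then enable streaming on the transport, for example:
    public static BasicHttpBinding Create()
    {
        return new BasicHttpBinding
        {
            TransferMode = TransferMode.Streamed,
            MaxReceivedMessageSize = 1024L * 1024 * 1024 // raise quotas as needed for large payloads
        };
    }
}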

Typically, for streaming, you use MessageContracts instead of DataContracts because they give you more control over the SOAP structure.
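
For example, a streamed download is often modeled with a MessageContract so that metadata travels in SOAP headers while the Stream is the only body member (hypothetical names, shown purely as an illustration):

using System.IO;
using System.ServiceModel;

// Hypothetical MessageContract for a streamed download: metadata goes into SOAP headers,
// and the Stream must be the sole member of the message body.
[MessageContract]
public class FileDownloadResponse
{
    [MessageHeader]
    public string FileName { get; set; }

    [MessageHeader]
    public long Length { get; set; }

    [MessageBodyMember]
    public Stream Content { get; set; }
}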

See the MSDN articles on MessageContracts and DataContracts for more info, as well as the MSDN documentation on buffered versus streamed transfers.

answered Sep 19 '22 by Bryan Denny