
When messages get bigger, IpcChannel Remoting gets slower

I'm evaluating various interprocess communication methods for a couple of .NET 2.0 processes residing on the same machine. Naturally, .NET Remoting is a candidate, and theoretically the fastest configuration should be IpcChannel (named pipes) + BinaryFormatter.

My benchmarks do indeed show that Remoting over IpcChannel is mostly faster than over TcpChannel, but IpcChannel's throughput drops steeply once messages get big (around 30 MB):

Message Size    30 MB       3 MB        300 KB      3 KB
Remoting / TCP  120 MB/s    115.4 MB/s  109.5 MB/s  13.7 MB/s
Remoting / IPC  55 MB/s     223.3 MB/s  218.5 MB/s  20.3 MB/s

Does anyone have any idea why, or how to optimize the performance of either channel? I do need to pass 30 MB BLOBs around, and would like to avoid dealing with shared memory / memory-mapped files. Also, I can't afford to write these to disk (much slower).


The following method was used for the benchmarks: it was called repeatedly, the total time was measured, and throughput was computed as total payload size divided by total time.

private byte[] _bytes = null;

// Returns a cached byte array of the requested size; the buffer is
// (re)allocated only when the requested size changes.
public byte[] HelloWorld(long size)
{
    if (_bytes == null || _bytes.Length != size)
        _bytes = new byte[size];
    return _bytes;
}
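
For context, a minimal client-side harness along these lines reproduces the measurement; the channel registration, object URL, IBlobService interface name and iteration count below are illustrative assumptions, not the exact setup used.

// Sketch of the benchmark client (illustrative; the IPC URL, the
// IBlobService interface and the iteration count are assumed).
using System;
using System.Diagnostics;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Ipc;

interface IBlobService
{
    byte[] HelloWorld(long size);
}

class BenchmarkClient
{
    static void Main()
    {
        ChannelServices.RegisterChannel(new IpcClientChannel(), false);
        IBlobService service = (IBlobService)Activator.GetObject(
            typeof(IBlobService), "ipc://BlobServer/BlobService");

        const long size = 30L * 1024 * 1024;   // 30 MB messages
        const int iterations = 20;

        Stopwatch watch = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            byte[] payload = service.HelloWorld(size);   // one round trip
            if (payload.Length != size)
                throw new InvalidOperationException("Short payload");
        }
        watch.Stop();

        double totalMegabytes = (double)size * iterations / (1024 * 1024);
        Console.WriteLine("{0:F1} MB/s", totalMegabytes / watch.Elapsed.TotalSeconds);
    }
}
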
Asked by Yodan Tauber, Dec 02 '10

3 Answers

A tool less heavyweight than shared memory but still powerful enough for the job would be sockets. When the remote procedure is invoked, have it create a listening socket on some fixed or ad-hoc port number, connect to it from the client, and use a NetworkStream to write the data from one side to the other.

It will work like a charm, I'm sure.

This article should get you started.

And even though you don't mention needing the server and client on separate machines, you would still have that ability, which vanishes if you use shared memory.
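
A minimal sketch of that hand-off (the loopback address, port handling and blocking read loop are my own illustrative choices):

// Rough sketch of streaming a blob over a local socket (illustrative;
// the port is negotiated out of band, e.g. via the existing Remoting call).
using System;
using System.IO;
using System.Net;
using System.Net.Sockets;

static class BlobSocket
{
    // Server side: listen, accept one client, stream the blob across.
    public static void Send(byte[] blob, int port)
    {
        TcpListener listener = new TcpListener(IPAddress.Loopback, port);
        listener.Start();
        using (TcpClient client = listener.AcceptTcpClient())
        using (NetworkStream stream = client.GetStream())
        {
            stream.Write(blob, 0, blob.Length);
        }
        listener.Stop();
    }

    // Client side: connect and read until the expected number of bytes arrives.
    public static byte[] Receive(int port, int expectedLength)
    {
        using (TcpClient client = new TcpClient("localhost", port))
        using (NetworkStream stream = client.GetStream())
        {
            byte[] buffer = new byte[expectedLength];
            int offset = 0;
            while (offset < expectedLength)
            {
                int read = stream.Read(buffer, offset, expectedLength - offset);
                if (read == 0)
                    throw new IOException("Connection closed before the full blob arrived");
                offset += read;
            }
            return buffer;
        }
    }
}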

Answered by Daniel Mošmondor


Why do you want to avoid shared memory? It is the most obvious choice for moving large BLOBs.

Answered by zvrba


The "strange" behaviour for big messages sizes (30MB) does most certainly orginate from GC pressure. By the way BinaryFormatter should be the slowest of all possible formatters. DataContractFormatter might be much better or a hand written one like this beauty http://codebetter.com/blogs/gregyoung/archive/2008/08/24/fast-serialization.aspx should be about 16 times faster. How did you measure the times? Was the sending and receiving process the same one? I think 120 MB/s send receive are quite good for .net with a very busy garbage collector. You should have a look a the % GC Time Performance counter to check if it is high. If it is > 95% you should use memory more sparingly. As other commenters have already pointed out memory mapped files are the way to go if you need to pass huge amounts of data between processes. There are many free implementations around like

http://www.codeproject.com/KB/recipes/MemoryMappedGenericArray.aspx

and

http://msdn.microsoft.com/en-us/library/ff650497.aspx (the Smart Client Offline Application Block contains a DLL with a nice implementation).
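
For illustration only, here is a rough sketch of handing a blob across via a memory-mapped file using the System.IO.MemoryMappedFiles API that shipped later in .NET 4; on .NET 2.0 you would need one of the wrappers linked above or P/Invoke to do the equivalent. The map name and capacity are assumed values.

// Illustration only: uses the .NET 4 MemoryMappedFiles API; on .NET 2.0
// substitute one of the wrappers linked above or P/Invoke.
using System;
using System.IO.MemoryMappedFiles;

static class SharedBlob
{
    const string MapName = "MyBlobMap";                      // must match in both processes
    const long Capacity = sizeof(int) + 30L * 1024 * 1024;   // length prefix + 30 MB blob

    // Producer: create the map, write a length prefix plus the blob, and
    // return the map so the caller keeps it alive until the consumer reads it.
    public static MemoryMappedFile Write(byte[] blob)
    {
        MemoryMappedFile map = MemoryMappedFile.CreateNew(MapName, Capacity);
        using (MemoryMappedViewAccessor view = map.CreateViewAccessor())
        {
            view.Write(0, blob.Length);
            view.WriteArray(sizeof(int), blob, 0, blob.Length);
        }
        return map;
    }

    // Consumer: open the existing map and copy the blob out.
    public static byte[] Read()
    {
        using (MemoryMappedFile map = MemoryMappedFile.OpenExisting(MapName))
        using (MemoryMappedViewAccessor view = map.CreateViewAccessor())
        {
            int length = view.ReadInt32(0);
            byte[] blob = new byte[length];
            view.ReadArray(sizeof(int), blob, 0, length);
            return blob;
        }
    }
}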

Yours, Alois Kraus

Answered by Alois Kraus