Performance of ReceiveAsync vs. BeginReceive

I'm currently programming a client application and I'm wondering whether I should use the Socket class' ReceiveAsync or BeginReceive method. I have been using the latter so far, however, I found that it seems to stress the CPU quite a bit. Here is what my receive loop basically looks like:

private void socket_ReceiveCallback(IAsyncResult result_)
{
    // complete the pending receive (does nothing else with the data at the moment)
    int bytesReceived = socket.EndReceive(result_);
    byte[] buffer = (byte[])result_.AsyncState;

    // post the next receive with a freshly allocated buffer
    byte[] newBuffer = new byte[1024];
    socket.BeginReceive(newBuffer, 0, newBuffer.Length, SocketFlags.None, 
                        socket_ReceiveCallback, newBuffer);
}

Now I'm wondering if I am doing something wrong here, since other networking applications hardly stress the CPU at all. I'm also wondering whether I would be better off using SocketAsyncEventArgs and ReceiveAsync.

So here are my questions:

  • Why is my loop stressing the CPU so much?
  • Should I use SocketAsyncEventArgs and ReceiveAsync instead of BeginReceive?

asked Mar 28 '12 by haiyyu

4 Answers

BeginReceive and EndReceive are remnants of the legacy asynchronous pattern that was used before the introduction of the modern async and await keywords in C# 5.

So you should prefer ReceiveAsync over BeginReceive and EndReceive for asynchronous programming.
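
For example, a minimal awaitable receive loop might look like this (a sketch, assuming a runtime where Socket exposes Task-returning ReceiveAsync overloads; the method name ReceiveLoopAsync is made up for illustration):

private async Task ReceiveLoopAsync(Socket socket)
{
    byte[] buffer = new byte[1024];
    while (true)
    {
        // awaits without blocking a thread while the receive is pending
        int bytesRead = await socket.ReceiveAsync(
            new ArraySegment<byte>(buffer), SocketFlags.None);
        if (bytesRead == 0)
            break; // the remote end closed the connection gracefully

        // process buffer[0..bytesRead] here
    }
}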

For really high performance scenarios you should use SocketAsyncEventArgs. This was designed for high performance and is used by the Kestrel web server.

From the remarks section of the SocketAsyncEventArgs documentation:

The SocketAsyncEventArgs class is part of a set of enhancements to the System.Net.Sockets.Socket class that provide an alternative asynchronous pattern that can be used by specialized high-performance socket applications. This class was specifically designed for network server applications that require high performance. An application can use the enhanced asynchronous pattern exclusively or only in targeted hot areas (for example, when receiving large amounts of data).

The main feature of these enhancements is the avoidance of the repeated allocation and synchronization of objects during high-volume asynchronous socket I/O. The Begin/End design pattern currently implemented by the System.Net.Sockets.Socket class requires a System.IAsyncResult object be allocated for each asynchronous socket operation.

In the new System.Net.Sockets.Socket class enhancements, asynchronous socket operations are described by reusable SocketAsyncEventArgs objects allocated and maintained by the application. High-performance socket applications know best the amount of overlapped socket operations that must be sustained. The application can create as many of the SocketAsyncEventArgs objects that it needs. For example, if a server application needs to have 15 socket accept operations outstanding at all times to support incoming client connection rates, it can allocate 15 reusable SocketAsyncEventArgs objects for that purpose.
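
In practice the reuse described above looks roughly like this (a sketch; the buffer size, method names, and handler wiring are assumptions, not part of the quoted documentation):

// per connection, set up one reusable args object once:
// var args = new SocketAsyncEventArgs();
// args.SetBuffer(new byte[8192], 0, 8192);
// args.Completed += OnReceiveCompleted;

private void StartReceive(Socket socket, SocketAsyncEventArgs args)
{
    // ReceiveAsync returns false if the operation completed synchronously,
    // in which case the Completed event will not be raised
    if (!socket.ReceiveAsync(args))
        OnReceiveCompleted(socket, args);
}

private void OnReceiveCompleted(object sender, SocketAsyncEventArgs args)
{
    if (args.SocketError != SocketError.Success || args.BytesTransferred == 0)
        return; // error or connection closed

    // process args.Buffer[args.Offset .. args.Offset + args.BytesTransferred]

    // reuse the same args object for the next receive - no per-operation allocation
    StartReceive((Socket)sender, args);
}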

answered Nov 16 '22 by Fred


I did a comparative max-load benchmark; the results are in GB/s (gigabytes per second):

  • ReceiveAsync: ~1.2 GB/s
  • BeginReceive: ~1.1 GB/s
  • Receive (in a thread loop): ~1.4 GB/s

Notes:

  • All results were measured over the loopback address (localhost), with a dedicated thread driving the sending socket
  • Buffer size was 8192 bytes

For a big bulk transfer I would suggest using a blocking Receive in a dedicated thread, but for better CPU usage across many connections I would use ReceiveAsync or BeginReceive.
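
For reference, the Receive-in-a-thread variant might look like this (a sketch; socket is assumed to be an already connected Socket, and the 8192-byte buffer matches the benchmark setting above):

var receiveThread = new Thread(() =>
{
    byte[] buffer = new byte[8192];
    int bytesRead;
    // the blocking Receive returns 0 when the remote end closes the connection
    while ((bytesRead = socket.Receive(buffer)) > 0)
    {
        // process buffer[0..bytesRead]
    }
});
receiveThread.IsBackground = true;
receiveThread.Start();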

answered Sep 18 '22 by Magus


I have been benchmarking synchronous vs. asynchronous sockets on a localhost loopback connection. My results were that the asynchronous version was about 30% slower. That was surprising to me, considering that async IO is all the rage now. It didn't matter how many threads I used: I could use 128 threads and synchronous IO was still faster.

The reason for that is, I believe, that async IO requires more allocations and more kernel mode transitions.

So you could just switch to synchronous IO, if you don't expect hundreds of simultaneous connections.

answered Nov 16 '22 by usr


To answer this you'd have to profile your application. What I wonder is:

  • why the result of EndReceive is ignored
  • why you don't use the received buffer at all
  • why you allocate new buffers time and time again - this is the only operation here that should take any resources (CPU/memory); a buffer-reusing version is sketched below

Have a look at this: http://msdn.microsoft.com/de-de/library/dxkwh6zw.aspx
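
A buffer-reusing version of the question's loop might look like this (a sketch; it keeps the question's names and changes only the allocation pattern):

private readonly byte[] _buffer = new byte[1024];

private void StartReceive()
{
    socket.BeginReceive(_buffer, 0, _buffer.Length, SocketFlags.None,
                        socket_ReceiveCallback, null);
}

private void socket_ReceiveCallback(IAsyncResult result_)
{
    int bytesReceived = socket.EndReceive(result_);
    if (bytesReceived == 0)
        return; // connection closed

    // process _buffer[0..bytesReceived] before posting the next receive,
    // since the same buffer is reused for every operation
    StartReceive();
}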

answered Nov 16 '22 by Random Dev