I'm considering moving parts of a .NET application to other computers. The obvious way to do this is simply using WCF with a binary TCP protocol, for example as described in "Easiest way to get fast RPC with .NET?".
I'll be making a vast number of calls, and latency is a big issue. Basically, one computer will be running a physics simulator and the others will be interacting with it through an API of several hundred commands.
I'm thinking the best way is to make a custom binary protocol where each API command is identified by an int16 and a sequence number, followed by the required parameters. Hardwiring the send and receive classes would remove any unnecessary overhead.
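To make the idea concrete, here is a minimal sketch of what one such hardwired frame might look like. The command name, ID value, and parameters are all hypothetical; only the layout (int16 command ID, sequence number, then raw parameters) comes from the description above:

```csharp
using System;
using System.IO;

// Hypothetical hardwired framing: [int16 command id][int32 sequence][parameters].
// Each command gets its own hand-written writer, so nothing is looked up at runtime.
static class ProtocolWriter
{
    public const short CmdSetPosition = 0x0001; // illustrative command id

    public static byte[] WriteSetPosition(int sequence, float x, float y, float z)
    {
        using (var ms = new MemoryStream())
        using (var w = new BinaryWriter(ms))
        {
            w.Write(CmdSetPosition);            // int16 command identifier
            w.Write(sequence);                  // int32 sequence number
            w.Write(x); w.Write(y); w.Write(z); // command-specific parameters
            return ms.ToArray();
        }
    }
}
```

Multiplied by several hundred commands, each needing a matching writer and reader, this is where the "LOT of work" below comes from.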
But that is a LOT of work since we are talking several hundred API commands.
Any thoughts on the best way to implement it?
Edit: To clarify: AFAIK, serialization in .NET is not optimized. There is a relatively high penalty in serializing and deserializing objects, for example from the internal use of reflection. This is what I want to avoid, and hence my thought about directly mapping (hardwiring) methods.
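The "hardwiring" above amounts to replacing a reflection-based serializer with hand-written read code whose layout is fixed at compile time. A rough sketch of the receiving side for one hypothetical command (names and layout are illustrative, matching nothing in particular):

```csharp
using System.IO;

// Hand-written deserialization for one hypothetical command: no reflection,
// no type metadata on the wire -- the reader knows the exact field layout
// (int32 sequence, then three floats) at compile time.
static class ProtocolReader
{
    public static (int Sequence, float X, float Y, float Z) ReadSetPosition(Stream s)
    {
        var r = new BinaryReader(s);
        int sequence = r.ReadInt32();
        return (sequence, r.ReadSingle(), r.ReadSingle(), r.ReadSingle());
    }
}
```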
After some searching I found one app I had a vague recollection of: http://www.protocol-builder.com/
Reducing the total traffic (which would be the main benefit to a custom protocol vs. using WCF) is not going to have a large effect on latency. The main issue would be keeping the amount of data for the "required parameters" to a minimum, so each request is relatively small. The default serialization in WCF is fairly efficient already, especially when using TCP on a local network.
In a scenario like the one you're describing, with many clients connecting to a centralized server, it is unlikely that the transport of the messages itself will be the bottleneck; processing the messages will be, and the transport mechanism won't matter as much.
Personally, I would just use WCF, and not bother with trying to build a custom protocol. If, once you have it running, you find that you have a problem, you can easily abstract out the transport to a custom protocol at that time, and map it to the same interfaces. This will require very little extra code vs. just doing a custom protocol stack up front, and likely keep the code much cleaner (since the custom protocol will be isolated into a single mapping to the "clean" API). If, however, you find the transport is not a bottleneck, you will have saved yourself a huge amount of labor and effort.
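A sketch of that approach, assuming a simple service contract (the interface, operation, and endpoint address are all illustrative): the physics API lives behind an interface, WCF provides the transport today via NetTcpBinding's binary encoding, and the same interface could later be re-implemented over a custom protocol without touching callers:

```csharp
using System.ServiceModel;

// Illustrative WCF contract: callers depend only on this interface, so the
// transport behind it (WCF now, a custom protocol later) can be swapped out.
[ServiceContract]
public interface IPhysicsService
{
    [OperationContract]
    void SetPosition(int bodyId, float x, float y, float z);
}

// Client side: a channel proxy over binary TCP, built from the same interface.
public static class PhysicsClient
{
    public static IPhysicsService Connect(string address)
    {
        var binding = new NetTcpBinding(SecurityMode.None); // binary encoding over TCP
        var factory = new ChannelFactory<IPhysicsService>(
            binding, new EndpointAddress(address));
        return factory.CreateChannel();
    }
}
```

If profiling later shows the transport really is the problem, a hand-rolled implementation of `IPhysicsService` can replace the channel proxy in one place.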