I'm looking for a good way to transfer non-trivial amounts of data (10 MB < x < 10 GB) from one machine to another, potentially over multiple sessions.
I have looked briefly at
Are there any other protocols out there that might fit the bill a little better? Most of the above aren't very fault tolerant in and of themselves, but rather rely on client/server apps to pick up the slack. At this stage I care much more about the protocol itself than about any particular client/server implementation that works well.
(And yes, I know I could write my own over UDP, but I'd prefer almost anything else!)
I use rsync (over SSH) to transfer anything that I think might take more than a minute.
It's easy to rate-limit, suspend/resume and get progress reports. You can automate it with SSH keys. It's (usually) already installed (on *nix boxes, anyway).
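For illustration, a resumable, rate-limited push over SSH might look something like this (the host, paths, and rate cap are all made-up placeholders):

    # -a: archive mode; -z: compress in transit
    # --partial: keep partially-transferred files so a re-run can resume
    # --progress: show per-file progress
    # --bwlimit=5000: cap the transfer at roughly 5000 KiB/s
    rsync -avz --partial --progress --bwlimit=5000 -e ssh \
        ./big-dataset.tar user@remote-host:/data/incoming/

Dropping an SSH public key into the remote ~/.ssh/authorized_keys lets you run this from cron without a password prompt.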
Depending on what you need, rsync can probably adapt. If you're distributing to a lot of users, FTP/HTTP might be better for firewall concerns; but rsync is great for one-to-one or one-to-a-few transfers.
rsync is almost always the best bet.
Since it transfers only the differences, an interrupted transfer doesn't start over: the next run finds the partial file at the destination and has far less left to send than the first run did (when there was no file at the destination at all).
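A quick sketch of that resume behavior (remote-host and the paths are hypothetical):

    # first attempt, interrupted partway through (network drop, Ctrl-C, etc.)
    rsync --partial --progress ./big-dataset.tar user@remote-host:/data/
    # run the exact same command again: rsync's delta algorithm compares
    # against the partial file already at the destination and sends only
    # the blocks that are still missing or differ
    rsync --partial --progress ./big-dataset.tar user@remote-host:/data/

Note that --partial matters here: without it, rsync deletes the incomplete destination file when interrupted, and the next run starts from scratch.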