I'm doing some performance tuning and capacity planning for a low-latency application and have the following question:
What is the theoretical minimum round-trip time for a packet sent between a host in London and one in New York connected via optical fiber?
According to the FCC's tenth Measuring Fixed Broadband Report, typical latency differs by connection type:

- Fiber: 10 to 12 ms
- DSL: 11 to 40 ms
- Cable: 13 to 27 ms
Round-trip time (RTT) is the length of time it takes for a data packet to be sent to a destination plus the time it takes for an acknowledgment of that packet to be received back at the origin. The RTT between a host and a server can be measured with the ping command.
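If you'd rather measure it programmatically, here's a minimal Python sketch that approximates RTT by timing a TCP three-way handshake instead of sending ICMP pings (the hostname below is just a placeholder; swap in your own target):

```python
import socket
import time

def tcp_rtt_ms(host: str, port: int = 443, timeout: float = 3.0) -> float:
    """Approximate RTT by timing a TCP handshake.

    connect() returns after one round trip (SYN out, SYN-ACK back),
    so the elapsed time is close to the network RTT plus a small
    amount of kernel overhead.
    """
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000.0

# Placeholder target; use a host you actually care about.
print(f"RTT: {tcp_rtt_ms('example.com'):.1f} ms")
```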
Since the path between any two points is usually not direct, real-world cross-country latency tends to sit around 100-120 ms, as opposed to the roughly 22 ms a straight-line fiber run would give.
In modern networks, the primary source of latency is distance; this component is called propagation delay. The speed of light in fiber is roughly 200,000 km per second, which works out to 5 ms per 1,000 km one-way and gives the mnemonic rule of 1 ms of round-trip time per 100 km.
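As a quick sketch of that rule of thumb in Python (the 200,000 km/s figure is the approximation from above, not an exact constant):

```python
# Light in silica fiber travels at roughly c / 1.5 ~= 200,000 km/s.
FIBER_SPEED_KM_PER_S = 200_000

def propagation_delay_ms(distance_km: float, round_trip: bool = True) -> float:
    """Propagation delay over fiber for a given distance."""
    one_way_ms = distance_km / FIBER_SPEED_KM_PER_S * 1000
    return one_way_ms * 2 if round_trip else one_way_ms

# Mnemonic check: 100 km should cost about 1 ms of round-trip time.
print(propagation_delay_ms(100))                      # -> 1.0 (ms RTT)
print(propagation_delay_ms(1000, round_trip=False))   # -> 5.0 (ms one-way)
```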
I believe the index of refraction of fiber is around 1.5, and the internet reports it's around 5,600 km from NY to London, so the theoretical minimum one-way is 5600 km / (c/1.5) ≈ 28 ms. Round-trip is double that, 56 ms.
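Here's that arithmetic worked through in Python, assuming the ~5,600 km great-circle distance and a refractive index of 1.5 (both approximate figures from above):

```python
C_KM_PER_S = 299_792.458   # speed of light in vacuum
REFRACTIVE_INDEX = 1.5     # typical for silica fiber
DISTANCE_KM = 5_600        # approximate NY-London great-circle distance

fiber_speed = C_KM_PER_S / REFRACTIVE_INDEX     # ~200,000 km/s
one_way_ms = DISTANCE_KM / fiber_speed * 1000   # ~28 ms
print(f"one-way: {one_way_ms:.1f} ms, round-trip: {2 * one_way_ms:.1f} ms")
# -> one-way: 28.0 ms, round-trip: 56.0 ms
```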
Up to you to do the real work of estimating latency through your routers and all.
P.S. The cables might not be straight :p
Edit: The Wikipedia article on optical fiber pretty much contains all this information.
Just ask Hibernia: they are currently at 72 ms and looking at 60 ms by mid-2012.
http://www.a-teamgroup.com/article/andrews-blog-laying-cable-and-the-low-latency-gauntlet/