When the code is waiting for some condition whose delay time is not deterministic, many people seem to choose exponential backoff, i.e. wait N seconds, check whether the condition is satisfied; if not, wait 2N seconds, check again, and so on. What is the benefit of this over checking at a constant or linearly increasing interval?
Exponential backoff is commonly utilised as part of rate limiting mechanisms in computer systems such as web services, to help enforce fair distribution of access to resources and prevent network congestion.
The idea behind using exponential backoff with retry is that instead of retrying after waiting for a fixed amount of time, we increase the waiting time between retries after each failure. For example, when the request fails the first time, we retry after one second; if it fails again, we wait two seconds, then four, and so on.
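As a rough sketch of that retry loop (the function name, the five-attempt limit, and the one-second base delay are illustrative assumptions, not any particular library's API):

```python
import time

def retry_with_backoff(operation, max_attempts=5, base_delay=1.0):
    """Call `operation` until it succeeds, doubling the wait after each failure."""
    delay = base_delay
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts, propagate the last error
            time.sleep(delay)  # wait 1s, then 2s, then 4s, ...
            delay *= 2
```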
Exponential backoff, or truncated binary exponential backoff, refers to an algorithm used to space out repeated retransmissions of the same block of data, often as part of network congestion avoidance. Examples include the retransmission of frames in carrier-sense multiple access (CSMA) networks.
The back-off algorithm is a collision-resolution mechanism commonly used to schedule retransmissions after collisions on Ethernet. The time that a station waits before attempting to retransmit the frame is called the back-off time.
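A minimal sketch of how that truncated binary exponential back-off picks a waiting time, assuming the classic 10 Mbit/s Ethernet slot time of 51.2 µs and the usual cap of 10 on the exponent:

```python
import random

SLOT_TIME_US = 51.2   # slot time for classic 10 Mbit/s Ethernet (512 bit times)
MAX_EXPONENT = 10     # the exponent is "truncated": it stops growing after 10 collisions

def backoff_time_us(collisions):
    """Return how long (in microseconds) a station waits after `collisions` collisions."""
    k = min(collisions, MAX_EXPONENT)
    slots = random.randint(0, 2**k - 1)  # pick a random slot count in [0, 2^k - 1]
    return slots * SLOT_TIME_US
```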
Exponential back-off is useful in cases where simultaneous attempts to do something will interfere with each other such that none succeed. In such cases, having devices randomly attempt an operation in a window which is too small will result in most attempts failing and having to be retried. Only once the window has grown large enough will attempts have any significant likelihood of success.
If one knew in advance that 16 devices would be wanting to communicate, one could select the size of window that would be optimal for that level of loading. In practice, though, the number of competing devices is generally unknown. The advantage of an exponential back-off where the window size doubles on each retry is that regardless of the number of competing entities:
The window size where most operations succeed will generally be within a factor of two of the smallest window size where most operations would succeed,
Most of the operations which fail at that window size will succeed on the next attempt (since most of the earlier operations will have succeeded, that will leave less than half of them competing for a window which is twice as big), and
The total time spent across all the attempts will be within a small constant factor of the final window size, since the sum of the doubling windows is at most about twice the largest one.
If, instead of doubling each time, the window were simply increased by a constant amount, then the time spent retrying an operation until the window reached a usable size would be proportional to the square of whatever window size was required. While the final window size might be smaller than would have been used with exponential back-off, the total cost of all the attempts would be much greater.
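A back-of-the-envelope sketch of that comparison (the window sizes below are arbitrary illustrations): summing the doubling windows up to a usable size W costs roughly 2W, while summing constant increments costs roughly W^2/2.

```python
def total_wait_doubling(required_window, start=1):
    """Total of the window sizes 1, 2, 4, ... until one reaches `required_window`."""
    total, w = 0, start
    while w < required_window:
        total += w
        w *= 2
    return total + w          # about 2 * required_window

def total_wait_linear(required_window, step=1):
    """Total of the window sizes step, 2*step, ... until one reaches `required_window`."""
    total, w = 0, step
    while w < required_window:
        total += w
        w += step
    return total + w          # about required_window**2 / (2 * step)

print(total_wait_doubling(1000))  # 2047
print(total_wait_linear(1000))    # 500500
```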