I'm having a hard time understanding the RetryExponential class that is used in conjunction with QueueClients (and I assume SubscriptionClients as well).
The properties are listed here, but I don't think my interpretation of their descriptions is correct.
Here's my interpretation...
var minBackoff = TimeSpan.FromMinutes(5); // wait 5 minutes for the first attempt?
var maxBackoff = TimeSpan.FromMinutes(15); // all attempts must be done within 15 mins?
var deltaBackoff = TimeSpan.FromSeconds(30); // the time between each attempt?
var terminationTimeBuffer = TimeSpan.FromSeconds(90); // the length of time each attempt is permitted to take?
var retryPolicy = new RetryExponential(minBackoff, maxBackoff, deltaBackoff, terminationTimeBuffer, 10);
My worker role has only attempted to process a message off the queue twice in the past hour, even though, based on the configuration above, I think it should fire more frequently (every 30 seconds plus whatever processing time was used during the previous attempt, up to 90 seconds). I assumed these settings would force a retry roughly every 2 minutes. However, I don't see how this interpretation is exponential at all.
Are my interpretations for each property (in comments above) correct? If not (and I assume they're not correct), what do each of the properties mean?
As you suspected, the values you included do not make sense for the meaning of these parameters. Here is my understanding of the parameters:
It will always retry up to the maxRetryCount (in your case 10), unless the terminationTimeBuffer limit is hit first.
It will also not keep trying for a period greater than the terminationTimeBuffer, which in your case is 90 seconds, even if it hasn't hit the max retry count yet.
The minBackoff is the minimum amount of time you will wait between retries, and maxBackoff is the maximum amount of time you will wait between retries.
The deltaBackoff value is the amount by which each retry interval grows exponentially. Note that this isn't an exact time: a value a little less or a little more is chosen at random so that multiple threads all retrying aren't doing so at exactly the same moment; the randomness staggers them slightly. Between the first actual attempt and the first retry there will be the minBackoff interval only. Since you set your deltaBackoff to 30 seconds, if it made it to a second retry the wait would be roughly 30 seconds plus the minBackoff. The third retry would be 90 seconds plus the minBackoff, and so on for each retry until it hits the maximum backoff.
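To make the growth concrete, here's a rough sketch of how the intervals would work out for the values in your question. It simply mirrors the description above (minBackoff only on the first retry, then + 30 seconds, then + 90 seconds, and so on, capped at maxBackoff); it is not the library's exact internal formula, and the real implementation also randomizes deltaBackoff slightly.

// Rough sketch of how the retry intervals grow, using the values from the question.
// Not the library's exact internal formula; real retries also add random jitter.
var minBackoff = TimeSpan.FromMinutes(5);
var maxBackoff = TimeSpan.FromMinutes(15);
var deltaBackoff = TimeSpan.FromSeconds(30);
int maxRetryCount = 10;

for (int retry = 1; retry <= maxRetryCount; retry++)
{
    // Exponential growth: 0, 1, 3, 7, 15, ... multiples of deltaBackoff.
    double multiplier = Math.Pow(2, retry - 1) - 1;
    var interval = minBackoff + TimeSpan.FromMilliseconds(deltaBackoff.TotalMilliseconds * multiplier);

    // The wait is never allowed to exceed maxBackoff.
    if (interval > maxBackoff)
        interval = maxBackoff;

    Console.WriteLine("Retry {0}: wait roughly {1}", retry, interval);
}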
One thing I would make sure to point out is that this is a retry policy, meaning that if an operation receives an exception it will follow this policy to attempt it again. If operations such as Receive, DeadLetter, Defer, etc. fail, then this retry policy is what kicks in. These are operations against the service bus, not exceptions in your own processing.
I could be wrong on this, but my understanding is that this isn't directly tied to the actual receipt of a message for processing unless the call to receive fails. Continuous processing is handled through the Receive method and your own code loop, or through the OnMessage action (which behind the scenes also uses Receive). As long as there isn't an error actually attempting to receive, this retry policy doesn't get applied. The interval used between calls to receive is set either by your own use of the Receive overload that takes a TimeSpan, or by setting MessagingFactory.OperationTimeout before creating the QueueClient object. If a receive call reaches its wait limit, either because you used the overload that provides a TimeSpan on Receive or because it hits the default, null is simply returned. This is not considered an exception, and therefore the retry policy won't kick in.
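To illustrate the distinction, here's a minimal receive loop using the classic Microsoft.ServiceBus.Messaging API. The connection string and queue name are placeholders, and ProcessMessage stands in for whatever your worker role actually does with the message.

// Minimal sketch of a receive loop (Microsoft.ServiceBus.Messaging).
// Connection string and queue name are placeholders.
var factory = MessagingFactory.CreateFromConnectionString("<your-connection-string>");
var queueClient = factory.CreateQueueClient("myqueue");

while (true)
{
    // Waits up to 30 seconds for a message; returns null if none arrives.
    // A null result is NOT an exception, so the retry policy is not invoked.
    BrokeredMessage message = queueClient.Receive(TimeSpan.FromSeconds(30));
    if (message == null)
        continue;

    try
    {
        // Your own processing; exceptions thrown here are NOT covered by the retry policy.
        ProcessMessage(message);

        // Complete/Abandon are service bus operations, so they ARE covered by the policy.
        message.Complete();
    }
    catch (Exception)
    {
        message.Abandon();
    }
}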
Sadly, I think you have to code your own exponential back-off for the actual processing. There are tons of examples out there, though.
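A hand-rolled back-off on empty receives might look roughly like this, reusing the queueClient from the sketch above; the delays are arbitrary.

// Rough sketch of a hand-rolled exponential back-off for polling an empty queue.
TimeSpan delay = TimeSpan.FromSeconds(1);
TimeSpan maxDelay = TimeSpan.FromMinutes(2);

while (true)
{
    BrokeredMessage message = queueClient.Receive(TimeSpan.FromSeconds(10));
    if (message == null)
    {
        // Nothing to do: wait, then double the delay up to the cap.
        Thread.Sleep(delay);
        delay = TimeSpan.FromTicks(Math.Min(delay.Ticks * 2, maxDelay.Ticks));
        continue;
    }

    // Got a message: reset the back-off and process as usual.
    delay = TimeSpan.FromSeconds(1);
    ProcessMessage(message);
    message.Complete();
}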
And yes, you can set this retry policy on both QueueClient and SubscriptionClient.
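Assigning it might look something like this, with placeholder entity names and the same constructor arguments as in your question.

// Sketch: assigning the same policy to a QueueClient and a SubscriptionClient.
// Entity names are placeholders.
var retryPolicy = new RetryExponential(minBackoff, maxBackoff, deltaBackoff, terminationTimeBuffer, 10);

var queueClient = factory.CreateQueueClient("myqueue");
queueClient.RetryPolicy = retryPolicy;

var subscriptionClient = factory.CreateSubscriptionClient("mytopic", "mysubscription");
subscriptionClient.RetryPolicy = retryPolicy;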