
Azure service bus read performance

I am trying to improve the throughput of a Windows service that uses Azure Service Bus. What I have noticed is that if I have code like this:

    client.OnMessageAsync(async message =>
    {
        var timer = new Stopwatch();
        timer.Start();
        bool shouldAbandon = false;
        try
        {
            // asynchronous processing of the message
            await messageProcessor.ProcessAsync(message);
            Interlocked.Increment(ref SimpleCounter);
            // complete only after successful processing
            await message.CompleteAsync();
        }
        catch (Exception ex)
        {
            shouldAbandon = true;
            Console.WriteLine(ex);
        }

        if (shouldAbandon)
        {
            await message.AbandonAsync();
        }
        timer.Stop();
        messageTimes.Add(timer.ElapsedMilliseconds);
    },
    options);

where options is defined as:

    OnMessageOptions options = new OnMessageOptions
    {
        MaxConcurrentCalls = maxConcurrent,
        AutoComplete = false
    };

Increasing MaxConcurrentCalls has little effect after a certain number (usually 12-16 for my workload).

But creating multiple clients (QueueClient), each with the same MaxConcurrentCalls, does increase performance, almost linearly.

So what I have been doing is making the number of QueueClients and MaxConcurrentCalls configurable, but I am wondering whether having multiple QueueClients is the best approach.

So my question is: is having multiple QueueClients, each running its own message pump, good or bad practice for a Windows service and Azure Service Bus?
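For reference, the multi-client setup I'm describing looks roughly like this. This is only a sketch assuming the WindowsAzure.ServiceBus SDK; connectionString, queueName, numberOfClients, maxConcurrent, and messageProcessor are placeholders for my own configuration and processing code:

```csharp
// Sketch: one message pump per QueueClient, all reading the same queue.
var clients = new List<QueueClient>();
for (int i = 0; i < numberOfClients; i++)
{
    var client = QueueClient.CreateFromConnectionString(connectionString, queueName);
    client.OnMessageAsync(async message =>
    {
        // same handler logic as the single-client version above
        await messageProcessor.ProcessAsync(message);
        await message.CompleteAsync();
    },
    new OnMessageOptions
    {
        MaxConcurrentCalls = maxConcurrent,
        AutoComplete = false
    });
    clients.Add(client);
}

// On shutdown, close each client so in-flight handlers can finish cleanly.
foreach (var client in clients) client.Close();
```

Each client gets its own pump and its own MaxConcurrentCalls budget, which is what appears to produce the near-linear scaling.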

Asked Nov 09 '22 by Josh

1 Answer

I know this is really old now - but I thought I'd contribute my own findings.

I'm seeing an increase in queue processing performance just by running multiple processes on the same machine. In my case, I'm using console apps, but the principle is identical.

I think it's because the MaxConcurrency value ultimately controls how many messages the Service Bus will hand to a consuming client: when it reaches that limit, the client effectively goes to sleep for a period of time (around one second in my experience) before the bus tries to push more messages down.

So if you have a very simple message handler, you're incredibly unlikely to reach capacity even if you set MaxConcurrency to 2x/3x/4x the logical core count, but processing of multiple messages will still be very slow if you push more messages than the client is configured to handle at once. Running another process with the same MaxConcurrency gives you twice the available capacity, even on the same machine, but it doesn't actually give you any more processing power.

Ultimately, the right configuration is going to depend on the processor usage profile of your queue tasks. If they are long-running and tend to chew through processor cycles, then too great a MaxConcurrency will likely slow you down rather than speed you up, and scaling out to other machines really will be the only solution.

If, however, your queue tasks are 'sparse' and spend most of their time waiting, then you will be able to get away with a higher MaxConcurrency than there are logical cores in your processor - because they won't all be busy all the time.
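As a rough starting point (my own heuristic, not an official guideline), you could derive the concurrency from the core count and the workload type; cpuBound here is a hypothetical flag you'd set based on profiling your handlers:

```csharp
// Heuristic sketch: pick MaxConcurrentCalls from the workload profile.
// CPU-bound handlers: stay near the logical core count.
// I/O-bound ("sparse") handlers: oversubscribe, e.g. 4x the cores.
int cores = Environment.ProcessorCount;
int maxConcurrent = cpuBound ? cores : cores * 4;

var options = new OnMessageOptions
{
    MaxConcurrentCalls = maxConcurrent,
    AutoComplete = false
};
```

Measure and adjust from there; the right multiplier depends entirely on how much of each handler's time is spent waiting.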

Answered Nov 15 '22 by Andras Zoltan