
Azure Functions Event Hub trigger bindings

Just have a couple of questions regarding the usage of Azure Functions with an EventHub in an IoT scenario.

  • EventHub has partitions, and messages from a specific device typically go to the same partition. How are the instances of an Azure Function distributed across EventHub partitions? Is it based on performance? If one instance of an Azure Function manages to process events from all partitions, is that single instance enough, or might you end up with one Function instance per EventHub partition?
  • What about the read offset? Does this binding somehow record where it stopped reading the event stream? I thought Functions are meant to be stateless, yet here we have some state.

Thanks

asked Dec 02 '22 by Helikaon

1 Answer

Each instance of an Event Hub-triggered Function is backed by only one EventProcessorHost (EPH) instance. Event Hub ensures that only one EPH can hold a lease on a given partition.
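The lease exclusivity described above can be sketched in a few lines. This is a hypothetical model, not the real Event Processor Host API: partitions are leases that at most one EPH instance can hold at a time.

```python
# Hypothetical sketch (not the real EPH API): each partition is a lease
# that at most one EPH instance can own at any moment.
class LeaseManager:
    def __init__(self, partition_count):
        # None means the partition lease is currently unowned.
        self.owners = {p: None for p in range(partition_count)}

    def try_acquire(self, partition, eph_id):
        """An EPH gets the lease only if no other EPH currently holds it."""
        if self.owners[partition] is None:
            self.owners[partition] = eph_id
            return True
        return False

leases = LeaseManager(partition_count=10)
assert leases.try_acquire(0, "EPH_0") is True   # first claim succeeds
assert leases.try_acquire(0, "EPH_1") is False  # exclusive: second claim is refused
```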

Answer to Question 1: Let's elaborate on this with a contrived example. Suppose we begin with the following setup and assumptions for an EventHub:

  1. 10 partitions.
  2. 1000 events distributed evenly across all partitions => 100 messages in each partition.

When your Function is first enabled, there is only one instance of the Function. Let's call this instance Function_0. Function_0 has a single EPH, call it EPH_0, which acquires a lease on all 10 partitions and starts reading events from partitions 0-9. From this point forward, one of the following will happen:

  1. Only 1 Function instance is needed - Function_0 is able to process all 1000 events before the Azure Functions scaling logic kicks in. Hence, all 1000 messages are processed by Function_0.

  2. Add 1 more Function instance - Azure Functions' scaling logic determines that Function_0 seems sluggish, so a new instance Function_1 is created, resulting in EPH_1. Event Hub detects that a new EPH instance is trying to read messages and starts load balancing the partitions across the EPH instances, e.g., partitions 0-4 are assigned to EPH_0 and partitions 5-9 are assigned to EPH_1.

    If all Function executions succeed without errors, both EPH_0 and EPH_1 checkpoint successfully and all 1000 messages are processed. Once check-pointing succeeds, those 1000 messages will never be retrieved again.

  3. Add N more Function instances - Azure Functions' scaling logic determines that Function_0 and Function_1 are still sluggish and repeats workflow 2 for Function_2 through Function_N, where N > 9. Since there are only 10 partitions, Event Hub will load balance the partitions across at most 10 of these instances.

    Unique to Azure Functions' current scaling logic is that N can exceed the number of partitions. This is done to ensure that there are always EPH instances readily available to quickly acquire a lease on a partition. As a customer, you are only charged for the resources used when your Function instance executes; you are not charged for this over-provisioning.
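The end state of the load balancing above can be sketched as follows. This is a hedged simplification: Event Hub's actual lease-balancing protocol is more involved, but the outcome is that partitions spread roughly evenly across the available EPH instances, and any instance beyond the partition count holds no lease.

```python
# Simplified sketch of the load-balanced end state, assuming a plain
# round-robin assignment (the real protocol is lease-stealing, not this).
def balance(partitions, eph_instances):
    """Assign each partition to an EPH instance round-robin."""
    assignment = {eph: [] for eph in eph_instances}
    for i, p in enumerate(partitions):
        assignment[eph_instances[i % len(eph_instances)]].append(p)
    return assignment

partitions = list(range(10))
print(balance(partitions, ["EPH_0"]))           # one instance reads all 10 partitions
print(balance(partitions, ["EPH_0", "EPH_1"]))  # 5 partitions per instance
```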

Answer to Question 2: EPH uses a check-pointing mechanism to mark the last known successfully read message. An EventHub-triggered Function can be set up to process one message or a batch of messages at a time. The option you choose needs to consider the following:

1. Speed of message processing - Processing messages in batches instead of one at a time is one of the factors that helps your Azure Function workflow keep up with the incoming messages in your Event Hub.

2. Tolerance for duplicates - If check-pointing fails, due to errors in your Function code, a timeout, or a lost partition lease (updated Aug 24th, 2017), then the next EPH that acquires a lease on that partition will start retrieving messages from the last known checkpoint. Event Hub guarantees at-least-once delivery, not at-most-once delivery, and Azure Functions will not attempt to change that behavior. If avoiding duplicate messages is a priority, you will need to mitigate it in your workflow. Consequently, when check-pointing fails, there are more duplicate messages to manage if your Function processes messages in batches.
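The duplicate scenario above can be sketched as follows. This is an illustrative model, not the real Event Hubs SDK: the key point is that a reader resumes from the last *persisted* checkpoint, so any messages processed after a failed checkpoint are delivered again, and idempotent handling (e.g., deduplicating by a message identifier) is the usual mitigation.

```python
# Hedged sketch of why a lost checkpoint produces duplicates. The offsets
# here are illustrative integers, not real Event Hub sequence numbers.
def read_batch(stream, checkpoint, batch_size):
    """Return the next batch starting just after the checkpointed offset."""
    start = checkpoint + 1
    return stream[start:start + batch_size]

stream = list(range(100))   # 100 messages in one partition
checkpoint = -1             # nothing checkpointed yet

batch = read_batch(stream, checkpoint, 10)   # messages 0-9 get processed...
# ...but suppose check-pointing fails, so checkpoint stays at -1.
retry = read_batch(stream, checkpoint, 10)
assert retry == batch       # the whole batch is delivered a second time

# Mitigation: deduplicate by message id so reprocessing is idempotent.
seen = set()
for msg in batch + retry:
    if msg in seen:
        continue            # skip the duplicate on redelivery
    seen.add(msg)
assert len(seen) == 10      # each message handled exactly once
```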

answered Dec 27 '22 by Ling Toh