Forcing an Azure Function to process Queue Storage messages one at a time

Last updated on September 3, 2023

By default, Azure Functions process Azure Storage queue messages in parallel. This is normally great as it tremendously decreases the time required for processing all of the items. However, some occasions might require you to process the messages one by one.

As I needed to do this for a customer case, which I’ll explain later, I discovered that it was not as straightforward as I had thought. Or rather, no single internet resource detailed all the steps I needed to take. Hence, I decided to write this blog post that covers everything, and hopefully, you will find it useful if you are attempting to achieve one-by-one queue processing in your functions.

Note that even though the queue items will be processed one by one after performing the configurations, there is no guarantee that they’ll be processed in the order they are added to the queue. If you also need to ensure the queue items are processed in first-in-first-out (FIFO) order, you should look into Azure Service Bus queues instead of Azure Storage queues.

A bit of background

I have a customer who has automation in place for inviting guest users in large batches. After the invitation, the guests are added to security groups that grant them access to specific resources. Due to “reasons”, these security groups are mail-enabled security groups, which causes one problem: we can’t add the users to the groups via Microsoft Graph because the Add member operation only supports Office 365 groups and regular Azure AD security groups.

Luckily, we can connect to Exchange Online in PowerShell and add the users to mail-enabled security groups with the Add-DistributionGroupMember cmdlet. So, what I did was the following:

  1. Created a new Azure Storage queue.
  2. Made the primary guest invitation logic send the user ID and the group ID to the queue to wait to be processed.
  3. Created a new PowerShell Azure Function with a queue trigger bound to the created queue.
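
For reference, the queue-triggered function (step 3) ends up looking roughly like the sketch below. Note that this is only a minimal sketch: the userId/groupId property names, the organization name, and the managed-identity connection are my illustrative assumptions, not details taken from the actual solution.

```powershell
# run.ps1 of the queue-triggered function (minimal sketch).
# Assumes the ExchangeOnlineManagement module is loaded via requirements.psd1 and
# that the Function App's managed identity is allowed to manage Exchange Online.
param($QueueItem, $TriggerMetadata)

# The queue message is assumed to be JSON with userId and groupId properties.
$userId  = $QueueItem.userId
$groupId = $QueueItem.groupId

# Connect with the Function App's managed identity (the organization is a placeholder).
Connect-ExchangeOnline -ManagedIdentity -Organization "contoso.onmicrosoft.com"

try {
    # Add the invited guest to the mail-enabled security group.
    Add-DistributionGroupMember -Identity $groupId -Member $userId -ErrorAction Stop
}
finally {
    Disconnect-ExchangeOnline -Confirm:$false
}
```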

The Exchange Online concurrency problem

When the PowerShell script in my Azure Function processed one message at a time, it worked perfectly. However, by default, Azure Functions process queue messages in parallel, and that’s when we started to run into problems.

The Exchange Online sessions have a concurrency limit: you can have a maximum of 10 connections open at a time. In practice, this means that the script works fine only if the queue receives, and the Azure Function processes, a maximum of 10 messages at any one time. However, I did mention large batches, right? And by large batches, I mean there can be hundreds of messages hitting the queue in a very short period of time. When that happens, the Azure Function attempts to process a much larger number of them at once, and we get a bunch of these exceptions:

Fail to create a runspace because you have exceeded the maximum number of connections allowed : 10 for the policy party : MaxConcurrency. Please close existing runspace and try again.

When that happens many times, the messages end up in the poison queue and will no longer be retried. By the way, there is a great tool called Azure Storage Explorer that allows you to move large amounts of queue messages from the poison queue to the processing queue with a single click of a button. I highly recommend it, and I have no idea why the functionality is not available in Azure Portal.

Parallelism is great when you need a bunch of things processed quickly. However, in this case, I’m more interested in reliability than speed. So, how can we force the Azure Function to process the queue messages one at a time?

Forcing the Azure Function to process a single queue message at a time

We need to tweak a couple of things to make our Azure Function process queue messages one at a time. You can do both of these configurations via Azure Portal.

Note that these configurations affect all the functions within the Function App. If you have more than one function in the app and you need different behaviour for each one of them, you need to split them into separate Azure Function Apps.

host.json

In your Azure Function, go to App files, and add the below queues configuration to the host.json file (note that you might also have other settings in the file, so adjust your changes accordingly):
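
The batchSize, newBatchThreshold, and maxPollingInterval values in the snippet below match what is explained underneath it; the visibilityTimeout and maxDequeueCount values are only example values I’ve picked for illustration, so tune them to your own retry needs:

```json
{
  "version": "2.0",
  "extensions": {
    "queues": {
      "batchSize": 1,
      "newBatchThreshold": 0,
      "maxPollingInterval": "00:00:30",
      "visibilityTimeout": "00:01:00",
      "maxDequeueCount": 5
    }
  }
}
```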

Setting batchSize to 1 makes the function fetch only one message from the queue at a time, and newBatchThreshold tells the function to fetch a new batch only when the number of messages still being processed drops to this value (0 = no messages are being processed any longer). Together, these settings ensure that a single Azure Function instance only processes one queue message at a time.

maxPollingInterval, in turn, is the maximum amount of time we allow our function to “procrastinate” before it checks the queue for new messages. As I mentioned before, I’m not in a rush here, so I can allow the function to mentally prepare for 30 seconds. It can start processing the messages sooner, though, if it feels like it.

The last two settings are related to errors and reprocessing. visibilityTimeout essentially defines the delay between retries, and maxDequeueCount sets the maximum number of times the function tries to process the queue message before moving it to the poison queue (including the first run). As I mentioned before, if that happens, you can move the messages from the poison queue to the processing queue by using the Azure Storage Explorer desktop application.

Application settings

Then, still in the Azure Function, go to Configuration, and add the following application settings:

Name                                        Value
WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT   1
FUNCTIONS_WORKER_PROCESS_COUNT              1

The WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT setting ensures that the Function App never scales out to more than one instance, and FUNCTIONS_WORKER_PROCESS_COUNT limits that instance to a single worker process.
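
If you prefer scripting these over clicking through the Portal, the same settings can also be set with PowerShell. A minimal sketch, assuming the Az.Functions module and placeholder resource names:

```powershell
# Requires the Az.Functions module and an authenticated Az session (Connect-AzAccount).
# The Function App and resource group names below are placeholders.
Update-AzFunctionAppSetting -Name "my-function-app" -ResourceGroupName "my-resource-group" -AppSetting @{
    "WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT" = "1"
    "FUNCTIONS_WORKER_PROCESS_COUNT"            = "1"
}
```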

Ensuring the same behavior in Durable Functions

Normally, Azure Functions running on the Consumption plan are limited to a maximum of 10 minutes of runtime. When you estimate you might be running into that cap, you start looking into making your function durable.

Azure Durable Functions have a fan-out/fan-in pattern, which is normally used for processing items in parallel. To execute certain logic for N items, you most likely want to utilize this pattern.

But what if you don’t want the items to be processed in parallel but rather one-by-one? You might want to consider doing this if you are, e.g., calling Microsoft Graph in your activity function and experiencing a lot of throttling due to all the different threads bombarding the same endpoint at once.

Behind the scenes, Durable Functions also utilize Azure Storage queues in their processing, and hence the activity functions behave in a very similar manner to regular queue-triggered functions. However, configuring them to process the queue items one by one happens slightly differently.

First, you need to configure the application settings the same way as above. Then, to ensure there can only ever be one activity function (and one orchestrator) in execution at any one time, you add the following to the function host.json. Again, remember that these configurations affect all the functions in the Function App. If you need to have different behaviour for different functions, you need to split them into separate Function Apps.
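
The relevant knobs live under the durableTask section of host.json: maxConcurrentActivityFunctions and maxConcurrentOrchestratorFunctions, both set to 1. A minimal example (merge it with whatever else you already have in the file):

```json
{
  "version": "2.0",
  "extensions": {
    "durableTask": {
      "maxConcurrentActivityFunctions": 1,
      "maxConcurrentOrchestratorFunctions": 1
    }
  }
}
```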

Afterword

That’s it! Ensuring queue items are processed one by one is quite a simple thing to do once you know how to do it. Heh, isn’t that always the case? 🙂

If you have any questions or comments regarding this topic, feel free to write them down below. And if you enjoyed reading this article or found it useful, you might want to consider the following:

  • Share this article with your friends!
  • Add my blog to your RSS feed, e.g., on Feedly
  • Sign up for my Insider (news)letter. I write a recap of things that are going on semi-regularly, typically once per month.
  • Follow me on your favorite social media platform(s). I post a bit different content on each one of them.
    • Twitter (most active, tech related, occasional fun tidbits)
    • LinkedIn (primarily for the Finnish audience)
    • Github (code! when I have time…)
    • YouTube (still a bit of a work in progress, but feel free to sub if you want to get notified immediately when I do post videos)

Other than that, I wish you happy times in your Azure development adventures, and I’ll write something interesting for you again soon enough! Ciao!

Laura



18 thoughts on “Forcing an Azure Function to process Queue Storage messages one at a time”

  • Hello

    Thanks for this it’s very helpful

    A couple of questions

    How can this approach be used in a partitioned queue?

    If I have 2 customers I want to partition my queue using customer id and only allow 1 message per customer not just one message overall

    Also how can we apply these settings just to a specific function and not the entire function app?

    Paul

    • Hi Paul,

      1. I have not played with the kind of setup you describe and hence am unable to answer your question at this time.
      2. It is not possible to apply these settings to a single function instead of the entire app. I recommend you separate your functions between two different Function App resources: one that uses these settings and one that processes messages in parallel.

      Laura

  • Hi Laura,
    Thank you for your post.
    Would you know any method to go deeper and configure a specific queue and not all of them?

  • Hi Laura,

    Thank you for the detailed and well-explained steps to configure this specific behavior.
    It’s really not easy to get everything right, especially with the durable functions.
    It helped me a lot!!

    Luka

  • Great post (and blog all around!).

    Does this method allow scale-out based on memory threshold rules, where **each instance** of a Function processes a single queue message at once?

    Example:
    – Queue message triggers function, memory hits 3.5GB threshold and Function App Scales-out to 3 instances
    – Will each instance grab a single queue message to process?
