Writing to Azure EventHub using EventHubBufferedProducerClient

The Azure Event Hub is a big data queue system capable of handling millions of events per second, and it packs nifty tricks at both the read and write end. I have written plenty of articles on how to read from and write to the Azure Queue, and the Azure Queue is often the better choice for your basic queue needs, but this time we are rolling out the big guns.


  • The Event Hub likes to receive data in batches. This reduces the number of connections created per write. The EventDataBatch class was invented for this.
  • Partitioning is a thing, especially on the read end. Storing your data with the right partition key can increase performance considerably.

Now, the EventHubProducerClient is the basic client for writing to the Event Hub, and it will handle writing in batches. But each batch can only write to one partition.
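For comparison, a plain EventHubProducerClient write could look roughly like this (a sketch; the connection string and hub name are placeholders, and "some-key" is a hypothetical partition key):

```csharp
using Azure.Messaging.EventHubs;
using Azure.Messaging.EventHubs.Producer;

// A sketch of the non-buffered client. Each EventDataBatch targets a single
// partition: with CreateBatchOptions.PartitionKey set, every event in the
// batch lands in the same partition.
await using var producer = new EventHubProducerClient("connectionstring", "event hub name");

var batchOptions = new CreateBatchOptions { PartitionKey = "some-key" };
using EventDataBatch batch = await producer.CreateBatchAsync(batchOptions);

batch.TryAdd(new EventData("event 1"));
batch.TryAdd(new EventData("event 2"));

await producer.SendAsync(batch);
```

If you need to spread a set of events across several partitions, you have to build and send one batch per partition yourself.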

To overcome the partitioning problem, Microsoft has the EventHubBufferedProducerClient class. It’s currently only available in the Azure.Messaging.EventHubs v5.7.0-beta.2 package, but I assume it will survive the beta stage.

The difference between EventHubProducerClient and EventHubBufferedProducerClient is that with the latter you just add an event to an internal queue, and the class does the writing in the background at intervals. This allows you to batch-drop events to the Event Hub, and the client will then handle the partitioning and the writing for you.

But enough talk, let’s code:


You need to allow prerelease packages in NuGet, as you (currently) need the beta package:
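In the project file, the reference would look like this (version taken from the beta mentioned above):

```xml
<!-- Check "Include prerelease" in Visual Studio's NuGet manager, or add the
     beta package reference directly to your .csproj: -->
<PackageReference Include="Azure.Messaging.EventHubs" Version="5.7.0-beta.2" />
```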


using System.Threading.Tasks;
using Azure.Messaging.EventHubs;
using Azure.Messaging.EventHubs.Producer;

namespace MyCode
{
  public class EventHubRepository
  {
    private static EventHubBufferedProducerClient _producerClient;

    public EventHubRepository()
    {
      _producerClient = new EventHubBufferedProducerClient("connectionstring", "event hub name");
      _producerClient.SendEventBatchFailedAsync += args =>
      {
        // Do something if an error happens
        return Task.CompletedTask;
      };
      _producerClient.SendEventBatchSucceededAsync += args =>
      {
        // Do something if all goes right. You can
        // log the number of writes per partition, for example:
        // Console.WriteLine(args.PartitionId);
        // Console.WriteLine(args.EventBatch.Count);
        return Task.CompletedTask;
      };
    }

    public async Task AddMessage(string partitionKey, string message)
    {
      var enqueueOptions = new EnqueueEventOptions { PartitionKey = partitionKey };
      await _producerClient.EnqueueEventAsync(new EventData(message), enqueueOptions);
    }
  }
}

Notice how there is no explicit write method? The class will flush the events to the Event Hub when it’s time to do so. If you have 8 partitions, the class will do 8 batch writes. You can control the timing using the EventHubBufferedProducerClientOptions class.
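As a sketch, the options could be set like this. The property names below (MaximumWaitTime, MaximumEventBufferLengthPerPartition) come from the beta package and may change before the final release:

```csharp
using System;
using Azure.Messaging.EventHubs.Producer;

// A sketch, assuming the v5.7.0-beta.2 API. MaximumWaitTime controls how long
// the client waits before flushing a partially filled buffer to the Event Hub;
// MaximumEventBufferLengthPerPartition caps how many events queue up per
// partition before enqueue calls start waiting.
var options = new EventHubBufferedProducerClientOptions
{
    MaximumWaitTime = TimeSpan.FromSeconds(5),
    MaximumEventBufferLengthPerPartition = 500
};

var producerClient = new EventHubBufferedProducerClient(
    "connectionstring", "event hub name", options);
```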


The partition key keeps related events together in the same partition, in the exact order in which they arrive. The partition key is some string that defines a relation between your data. Let’s say you are storing page views per session id. The order of views per session is important, so you use the session id as partition key to ensure that all page views from the same session are stored in the same partition. This preserves their sort order, and makes it faster to process the page views for one session on the other end.
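Using the repository from the code above, the page-view scenario could look like this (the session id and JSON payloads are hypothetical, for illustration only):

```csharp
// A sketch: enqueue page views for a session, using the session id as
// partition key so all views for one session end up in the same partition,
// in arrival order.
var repository = new EventHubRepository();

string sessionId = "session-42"; // hypothetical session id
await repository.AddMessage(sessionId, "{ \"page\": \"/home\" }");
await repository.AddMessage(sessionId, "{ \"page\": \"/products\" }");
```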

That’s it. You are now an Azure Event Hub expert.


About briancaos

Developer at Pentia A/S since 2003. Has developed web applications using Sitecore since Sitecore 4.1.