Advanced Scenarios in Azure Functions

Durable Function Patterns

This video segment discusses how to address and implement various problems and design patterns with Durable Functions.


  • Azure
  • Azure Functions
  • Durable Functions

About this video

Sahil Malik
First online
20 December 2019
Online ISBN
Copyright information
© Sahil Malik 2019

Video Transcript

Sahil Malik: Let’s start by understanding various Durable Function patterns, code patterns or architectural patterns that you can implement easily using Durable Functions, starting with function chaining. Function chaining is where you want a sequence of functions executed in a particular order, and frequently the output of one function becomes the input to the next function, or some combination thereof. So as you can see over here, the values F1, F2, F3, F4 are the names of other functions in the function app. And control flow can be implemented using the simple coding constructs that you’re used to in normal C#. Code executes top down; there is no weird nesting going on, no promise chains, anything like that. Basic conditionals, loops, et cetera, are easy to implement here, so as things get complicated, your code remains simple.
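The chaining code shown on screen follows roughly this shape. This is a sketch against the Durable Functions 1.x C# API current at the time of recording; the activity names F1 through F4 come from the slide, and the use of `object` payloads is an assumption:

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;

public static class ChainingExample
{
    // Orchestrator function: calls F1..F4 in order, feeding each
    // function's output into the next call. Plain top-down C#.
    [FunctionName("Chaining")]
    public static async Task<object> Run(
        [OrchestrationTrigger] DurableOrchestrationContext ctx)
    {
        var x = await ctx.CallActivityAsync<object>("F1", null);
        var y = await ctx.CallActivityAsync<object>("F2", x);
        var z = await ctx.CallActivityAsync<object>("F3", y);
        return await ctx.CallActivityAsync<object>("F4", z);
    }
}
```

Because each step is an ordinary `await`, conditionals and loops around these calls are just regular C# control flow.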

And you can use try/catch, so if a function throws an error, you can use standard C# constructs to catch that exception. The important thing is that the ctx parameter, of type DurableOrchestrationContext, provides the necessary methods for invoking other functions, passing parameters, returning function output, and so on and so forth. Each time the code calls await, the Durable Functions framework checkpoints the progress of the current function instance. And if the process or VM recycles midway through the execution, the function instance will simply resume from the previous await call.
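A minimal error-handling sketch, assuming the same 1.x API; the activity names "F1" and "Cleanup" here are hypothetical placeholders:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;

public static class ErrorHandlingExample
{
    [FunctionName("ChainingWithErrorHandling")]
    public static async Task Run(
        [OrchestrationTrigger] DurableOrchestrationContext ctx)
    {
        try
        {
            var result = await ctx.CallActivityAsync<string>("F1", null);
            await ctx.CallActivityAsync("F2", result);
        }
        catch (Exception)
        {
            // A failed activity surfaces here as a regular .NET exception,
            // so compensation logic is ordinary C#. "Cleanup" is a
            // hypothetical compensating activity.
            await ctx.CallActivityAsync("Cleanup", null);
        }
    }
}
```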

So the next code pattern that we should know about is fan-out/fan-in. This refers to the pattern of executing multiple functions in parallel and then waiting for all of them to finish. Generally speaking, this is when some sort of aggregation work needs to be done on the results returned from each one of these functions. With normal functions, you can do the fan-out by sending multiple messages to a queue, and then from the queue, different functions will pick up the messages and execute. But when you want to fan back in, you have to wait for all of those functions to finish, and that becomes a little challenging.

How would you do that? You’d probably have to poll or track when these queue-triggered functions end and where they store their output. Not easy, right? I mean, it’s doable, but the Durable Functions extension handles this pattern very easily with some simple code, as you can see over here. I create a list called parallelTasks, I call CallActivityAsync for each work item and add the resulting task to parallelTasks, and then I just say await Task.WhenAll. So Task Parallel Library, Task.WhenAll, we’re quite used to that pattern, and we can use it with Durable Functions.
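The fan-out/fan-in code described above looks roughly like this sketch (1.x API assumed; the activity names "GetWorkItems" and "ProcessItem", and the integer payloads, are hypothetical):

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;

public static class FanOutFanInExample
{
    [FunctionName("FanOutFanIn")]
    public static async Task<int> Run(
        [OrchestrationTrigger] DurableOrchestrationContext ctx)
    {
        var parallelTasks = new List<Task<int>>();

        // Fan out: start one activity per work item. The tasks run in
        // parallel because we don't await them individually here.
        var workBatch = await ctx.CallActivityAsync<int[]>("GetWorkItems", null);
        foreach (var item in workBatch)
        {
            parallelTasks.Add(ctx.CallActivityAsync<int>("ProcessItem", item));
        }

        // Fan in: wait for every activity to finish, then aggregate.
        await Task.WhenAll(parallelTasks);
        return parallelTasks.Sum(t => t.Result);
    }
}
```

The framework checkpoints at the `await Task.WhenAll`, so the fan-in survives process recycles just like a single `await` does.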

Another pattern we should be familiar with is async HTTP APIs, where sometimes you have long-running operations with external clients. HTTP could be one example, but there could be others as well. So you have a long-running operation with external clients, generally triggered by an HTTP call, the idea being that the client calls an HTTP endpoint and then has to continually poll an HTTP endpoint for status updates. Again, this can be simplified by Durable Functions. So in the code example you see here, the starter parameter, of type DurableOrchestrationClient, is a value from the orchestrationClient output binding, which is part of the Durable Functions extension. And this gives you all the methods necessary for starting, sending events to, terminating, and querying new or existing orchestrator function instances.

So in this example, the HTTP-triggered function takes a function name value from the incoming URL and passes it to StartNewAsync. The binding API then returns a response that contains a Location header and additional information about the instance, which can later be used to look up the status of the started instance or terminate it.
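The HTTP-triggered starter described above can be sketched like this, assuming the 1.x binding names; the route template is a common convention from the documentation, not necessarily the exact one shown in the video:

```csharp
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class HttpStartExample
{
    [FunctionName("HttpStart")]
    public static async Task<HttpResponseMessage> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post",
            Route = "orchestrators/{functionName}")] HttpRequestMessage req,
        [OrchestrationClient] DurableOrchestrationClient starter,
        string functionName)
    {
        // Start the named orchestrator function with no input.
        string instanceId = await starter.StartNewAsync(functionName, null);

        // Returns 202 Accepted with a Location header plus status-query,
        // raise-event, and terminate URLs for this instance.
        return starter.CreateCheckStatusResponse(req, instanceId);
    }
}
```

The client then polls the status-query URL from the response until the orchestration reports completion.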

And the opposite of async HTTP APIs is monitoring, where things are reversed: you’re polling until certain conditions are met. This is also possible with Durable Functions. As you see in this code, I can set up an arbitrary number of monitors until a condition is met, and I can even control the interval at which I’m polling using the pollingInterval variable. As well, you can see I have a line there saying if jobStatus is equal to Completed, the idea being that whenever the condition is met, I simply exit the loop with a break statement. Again, this really simplifies my code.
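A monitor orchestrator along the lines described might look like this sketch (1.x API assumed; "GetJobStatus", the job-id input, and the 60-second pollingInterval value are all hypothetical):

```csharp
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;

public static class MonitorExample
{
    [FunctionName("MonitorJob")]
    public static async Task Run(
        [OrchestrationTrigger] DurableOrchestrationContext ctx)
    {
        string jobId = ctx.GetInput<string>();
        int pollingInterval = 60; // seconds between checks (hypothetical)

        while (true)
        {
            var jobStatus = await ctx.CallActivityAsync<string>("GetJobStatus", jobId);
            if (jobStatus == "Completed")
            {
                break; // condition met: exit the monitoring loop
            }

            // Durable timer: the orchestration sleeps without occupying
            // (or billing for) a worker between polls. Note the use of
            // ctx.CurrentUtcDateTime rather than DateTime.UtcNow, which
            // keeps the orchestrator replay-safe.
            var nextCheck = ctx.CurrentUtcDateTime.AddSeconds(pollingInterval);
            await ctx.CreateTimer(nextCheck, CancellationToken.None);
        }
    }
}
```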

Durable Functions can also be used to integrate with external parties or systems, like human intervention, or perhaps another system that needs to give us some input before we can proceed. As you can see here, we need to wait for an external event, so you have a method called ctx.WaitForExternalEvent. That allows the orchestrator function to asynchronously wait and listen for an external event. The listening orchestrator function declares the name of the event and the shape of the data it expects to receive. So basically, it lets you represent any long-running task in an orchestration a lot more easily.
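A human-interaction sketch in the same spirit, assuming the 1.x API; the event name "Approval", its bool payload, and the activity names are hypothetical:

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;

public static class HumanInteractionExample
{
    [FunctionName("ApprovalWorkflow")]
    public static async Task Run(
        [OrchestrationTrigger] DurableOrchestrationContext ctx)
    {
        // Ask a person (or another system) for input, e.g. send an email.
        await ctx.CallActivityAsync("RequestApproval", null);

        // The orchestration checkpoints and sleeps here until an event
        // named "Approval" carrying a bool payload is raised against
        // this instance (e.g. via the DurableOrchestrationClient's
        // RaiseEventAsync or the raise-event webhook).
        bool approved = await ctx.WaitForExternalEvent<bool>("Approval");

        if (approved)
        {
            await ctx.CallActivityAsync("ProcessApproval", null);
        }
        else
        {
            await ctx.CallActivityAsync("Escalate", null);
        }
    }
}
```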

And there could be any number of patterns; really, imagination is the limit. I think the key takeaway here is orchestration. It is important to realize that Durable Functions merely make it easy for you to represent long-running orchestrations in Azure Functions. Could you have implemented all this in plain vanilla functions? Absolutely, but then you would have to provision storage and write some sort of complicated coordination logic, and it becomes hard to debug, diagnose, et cetera. Not to mention, when you have something running and polling all the time, you’re paying for it; you’re paying for every poll. Whereas in this case, not only does it cost you less to implement these patterns, it will also cost you less to run them.