After starting with the database, we moved on to how to expose this data and add our business logic. Our traditional approach would have been to use a Web API, but Azure Functions offered a way to remove infrastructure management entirely and to scale instantly. It also offered the potential of micro-billing, paying only for the calls being made, although at a performance cost – more on that later.
Whilst this blog series is focussed on an application developed for one of our clients, it became apparent that this series would be a lot easier to discuss if we could share that application. However, due to the nature of the application, we did not want to directly share the source code so we have taken the core elements being discussed and ported those to a sample which we have open-sourced. You can view this at https://github.com/BallardChalmers/BCServerlessDemo.DataAndFunctions.
What is Azure Functions?
Azure Functions is a serverless product from Microsoft that allows developers to execute code when triggered by an event. The benefit of this is the very low cost per call, often reducing the cost of running infrastructure that is not in constant use: the owner pays for the service only when it is used. For developers, it reduces the complexity of deploying code and allows immediate scaling without additional infrastructure considerations.
Functions are not restricted to Microsoft languages and you can write functions in many different languages. At the time of writing, the core languages are C#, JavaScript and F#, but there is also experimental support for Batch, PowerShell, Python and TypeScript, with Java also available and functions able to be hosted on Linux too.
If your function is triggered by another Azure service, there are plenty of pre-defined templates that trigger actions based on events from those services, such as when a blob is added to a specific container or when a document is changed in Cosmos DB. This reduces the effort needed for developers to set up integration functions.
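As an illustration, a blob-triggered function generated from one of those templates looks roughly like the sketch below; the "uploads" container name and the logging are placeholders rather than code from our application.

```csharp
using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;

public static class BlobAddedFunction
{
    // Fires whenever a blob lands in the "uploads" container; the {name}
    // token in the path is bound to the blob's file name.
    [FunctionName("BlobAdded")]
    public static void Run(
        [BlobTrigger("uploads/{name}")] Stream blob,
        string name,
        TraceWriter log)
    {
        log.Info($"New blob received: {name} ({blob.Length} bytes)");
    }
}
```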
How do you develop with Azure Functions?
Writing code
Code can either be written directly in the portal, deployed through Continuous Integration processes, or published manually. Writing directly in the portal lends itself to script-style functions that automate processes currently triggered manually, especially governance-style processes such as removing under-used services. For larger pieces of code, such as the Web API for our application, the functions can be developed in an IDE and published. Visual Studio and Visual Studio Code both have extensions for creating new functions and easily publishing them to Azure. They can also be run in a local emulator, which makes testing free and debugging easier.
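For the script-style scenario, a timer-triggered function is often all that is needed. The sketch below is a placeholder example rather than anything from our application, with the schedule and the clean-up logic left as illustrative comments.

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;

public static class NightlyGovernance
{
    // The CRON expression (second minute hour day month day-of-week)
    // runs the function at 02:00 every day.
    [FunctionName("NightlyGovernance")]
    public static void Run(
        [TimerTrigger("0 0 2 * * *")] TimerInfo timer,
        TraceWriter log)
    {
        log.Info("Nightly governance run triggered.");
        // Placeholder: checks for under-used services would go here.
    }
}
```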
Deploying
Our preferred method, to bring it in line with our other development processes, is to develop changes against the local emulator and then check the code in to Azure DevOps (formerly VSTS). Azure Pipelines then builds and tests the code on each check-in, running on a hosted build agent. The code is then deployed to Azure using the release process in Azure Pipelines, utilising the Azure service task – more details on this will be available in part 6 of this blog series.
Dependency Injection with Azure Functions
What much of the standard documentation does not discuss is that most developers will want to unit test their code and will make use of interfaces to allow mocking of services. The default set-up of functions can make this hard, so to work around it we created an Inject attribute, shown below; the full file is at https://github.com/BallardChalmers/BCServerlessDemo.DataAndFunctions/blob/master/BCServerless.DataAndFunctions.Functions/HttpTriggers/Journeys.cs.
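The snippet below is a simplified sketch of the pattern used in that file rather than the exact code: the [Inject] attribute on the function parameter resolves the API interface from the Simple Injector container. The IJourneysAPI interface and its Handle method are illustrative names; see the repository for the full implementation.

```csharp
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Azure.WebJobs.Host;

public static class Journeys
{
    [FunctionName("Journeys")]
    public static async Task<HttpResponseMessage> Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", "post", "put", "delete", Route = "journeys")]
        HttpRequestMessage req,
        [Inject] IJourneysAPI journeysApi, // resolved from the Simple Injector container
        TraceWriter log)
    {
        log.Info("Journeys function called.");

        // The API wrapper works out which HTTP method was used and extracts
        // any parameters before calling the core services.
        return await journeysApi.Handle(req);
    }
}
```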
To help explain this better, I will take the example of setting up the sample data. There is a SampleData function which receives an HttpRequestMessage containing body content and query strings. The attribute on the function binds the associated interface through the container registrar using Simple Injector. Each function has an associated API (such as the Sample Data API) held in the Functions project, with a method defined for each HTTP method (GET, POST, PUT, etc.), and the API also extracts any required properties from either the body or the query string, such as the ID of the Journey to retrieve for a GET or the journey details for a POST.
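Sticking with the journeys example, one of these API classes looks roughly like the sketch below. The service, type and method names here are illustrative assumptions rather than the exact repository code, but they show the per-method dispatch and parameter extraction described above.

```csharp
using System;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

public class JourneysAPI : IJourneysAPI
{
    private readonly IJourneyService _journeyService;

    public JourneysAPI(IJourneyService journeyService)
    {
        _journeyService = journeyService;
    }

    public async Task<HttpResponseMessage> Handle(HttpRequestMessage req)
    {
        if (req.Method == HttpMethod.Get)
        {
            // Pull the journey ID from the query string.
            var id = req.GetQueryNameValuePairs()
                .FirstOrDefault(q => string.Equals(q.Key, "id", StringComparison.OrdinalIgnoreCase))
                .Value;
            return req.CreateResponse(HttpStatusCode.OK, await _journeyService.Get(id));
        }

        if (req.Method == HttpMethod.Post)
        {
            // Deserialise the journey from the request body and save it.
            var journey = await req.Content.ReadAsAsync<Journey>();
            return req.CreateResponse(HttpStatusCode.OK, await _journeyService.Upsert(journey));
        }

        return req.CreateResponse(HttpStatusCode.MethodNotAllowed);
    }
}
```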
The API then calls the appropriate service, such as the call to the User Service in the Core project on line 132 of the Sample Data API, which ensures that all users from Azure B2C are created in the database. Separating the services into the Core project allows them to be tested without needing to invoke much of the function wrapping. There is one more function-specific object that we do pass on to the service: the HttpRequestMessage. The reason is to add role-based permissions around the function calls, e.g. so that certain features can only be accessed by admins. To be completely honest, this is an area we would like to improve, as it is currently only used for local development, but this is covered below in one of our challenges.
Testing
Once the interfaces are defined, we have a clean separation in place, so the core logic of our data layer can be tested with unit tests and we can also write integration tests to ensure that data is correctly written into Cosmos DB.
As you may already have seen from our code, there are not that many unit tests. This is because most of the logic is held in the web tier and the services primarily pull and push data from Cosmos DB using the DBRepository pattern noted in part 2 of this blog series. There are a couple of areas around the search query builders where we have added tests to ensure that the filters are correctly applied, and these show how tests run against the services. We also have a test to validate the Container Registrar, as we often hit issues where we had forgotten to register one of the interface instances.
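As a flavour of what that registrar test does, the sketch below leans on Simple Injector's Verify() method; the ContainerRegistrar helper and the use of xUnit are assumptions rather than the exact code in the repository.

```csharp
using SimpleInjector;
using Xunit;

public class ContainerRegistrarTests
{
    [Fact]
    public void All_registrations_can_be_resolved()
    {
        var container = new Container();

        // Hypothetical helper mirroring the registrations used by the functions.
        ContainerRegistrar.Register(container);

        // Throws if any registered interface is missing a concrete implementation
        // or cannot be constructed, catching forgotten registrations early.
        container.Verify();
    }
}
```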
The bigger area that we have found useful is the test that sets up the sample data. This ensures that the broad sweep of objects is created correctly in the database and can be used. It also shows how sample data supports our full end-to-end UI testing and how values are only created if they do not already exist, so that the sample data creation can be run multiple times if required.
How is Azure Functions priced?
Functions can be hosted in two ways – consumption plan or app service plan.
Consumption plan
These are priced based on three main criteria:
- Number of executions – number of times a function is called
- Execution time – calculation based on the memory used and the time taken for a function in gigabyte seconds (GB-s)
- Storage used – used for maintaining information about the functions and could also be used for logs if set up
The pricing is set to be very low and, at the time of writing, you receive a free grant of 400,000 GB-s and 1 million executions per month. For example, a function that uses 512 MB of memory and runs for two seconds consumes 1 GB-s, so the free grant alone covers around 400,000 executions of that size each month. However, for a busy application, the very low-sounding prices can soon add up. The real benefit comes in being able to see a direct financial reward for writing efficient code that runs quickly and uses as little memory as possible.
App service plan
Functions can also be run under an app service where they can be set to be always on. However, you then pay for the app service and will be charged for the whole time that the app service exists.
Which should you use?
This will depend on your scenario. If you are using functions as a Web API in a web application that is constantly in use, the always-on behaviour of the App Service plan will bring a performance benefit. If you are writing a function that executes daily, then the Consumption plan will be far more cost-efficient. If it is a longer-running job, you may want to consider Durable Functions.
Challenges
Below are some of the challenges we faced when getting started with functions. Dependency injection, covered above, was certainly the first of them, along with how to structure the functions.
Initial hit performance for Consumption plan functions
Our initial aim with the customer application was to make use of the Consumption plan and accept the performance hit if the application was not used much during the day. We would use a scheduled function to wake up each of the functions in the morning and possibly at different periods during the day. However, during development and testing, we found this inconsistent and slow for users, with the functions too often not staying warm for long enough.
Instead, we made the decision to use an App Service plan, largely because we were already using one to host the web tier of the application and so could host both apps on the same plan. We would like at some point to investigate the Consumption plan further and compare the two costs for our particular application. Chris O’Brien has written a great blog post on keeping functions alive to improve performance, but you should always weigh the cost of running these keep-alive functions regularly against the cost of an App Service plan.
Statics in Cosmos and Functions
From the initial set-up of our DBRepository pattern for Cosmos DB, we hit a problem when running in Azure with the function running out of sockets. If we ran the full sample data upload, the function would fail halfway through with the message “Only one usage of each socket address”. The cause was that we were initialising the connection to Cosmos DB for each instance of a data object, so the socket limit of 10 would soon be reached. This only caused a noticeable issue when connecting to a large number of different types of object, as we did when creating the sample data, but fixing it also brought a significant performance improvement. The static client is now only created if it does not already exist, reducing the memory used and the risk of running out of sockets. The use of dependency injection also helped to remove this issue.
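The resulting pattern looks roughly like the sketch below, sharing a single DocumentClient across invocations rather than creating one per data object; the app setting names are placeholders.

```csharp
using System;
using Microsoft.Azure.Documents.Client;

public static class CosmosClientHolder
{
    // Lazy<T> ensures a single DocumentClient per function host instance,
    // so sockets and connections are reused across invocations instead of
    // being opened for every data object.
    private static readonly Lazy<DocumentClient> _client = new Lazy<DocumentClient>(() =>
        new DocumentClient(
            new Uri(Environment.GetEnvironmentVariable("CosmosDbEndpoint")),
            Environment.GetEnvironmentVariable("CosmosDbAuthKey")));

    public static DocumentClient Client => _client.Value;
}
```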
Role based permissions with local development
The development process of using the local emulator for Azure Functions has worked very well for the most part. We expected to hit issues with functions behaving differently locally from how they behave in Azure, but did not face this at all. The only issue that we did hit is that it is not currently possible to enable authentication on the local emulator in the same way that you can with the Azure-hosted version.
We were aiming to ensure that each function could only be called with a valid Azure B2C token and that authorised users would also hold their role within the claims passed by Azure B2C. This meant that developing locally would not work at all because it would not consider the user authenticated.
On investigating, we found that when running in the emulator the claims passed to the function would include “LOCAL AUTHORITY”, so we built in logic to identify the current user from a userID header passed with the request. This allowed us to log in locally as different users with Azure B2C and pass different UserIDs, which were then checked against the database.
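In outline, the check looks something like the sketch below; the header name, claim types and method shape are illustrative rather than the exact repository code.

```csharp
using System.Linq;
using System.Net.Http;
using System.Security.Claims;

public static class UserResolver
{
    public static string GetUserId(HttpRequestMessage req, ClaimsPrincipal principal)
    {
        // When running in the local emulator, the claims carry "LOCAL AUTHORITY"
        // rather than the Azure B2C issuer.
        var isLocal = principal?.Claims.Any(c => c.Issuer == "LOCAL AUTHORITY") ?? false;

        if (isLocal && req.Headers.TryGetValues("userID", out var values))
        {
            // Local development: take the userID header so we can impersonate
            // different users; the value is still checked against the database.
            return values.FirstOrDefault();
        }

        // Running in Azure: use the object identifier claim issued by Azure B2C.
        return principal?
            .FindFirst("http://schemas.microsoft.com/identity/claims/objectidentifier")?
            .Value;
    }
}
```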
It is not ideal to have code that is only used during development, but we have kept it to a single area and it has significantly reduced the time taken to develop as different users. It would be great if this were to change, but at the time of writing we are not aware of any plans for it to.
Deploying App Settings
When deploying the functions to Azure using Azure DevOps, the App Settings are not set by the release process. Initially, this meant that we had to manually ensure that these settings were entered via the Azure Portal itself. This was later changed to be included in the ARM deployment that we will discuss further in part 7 of this series.
Caching
One thing that Functions do not inherently have is caching. You can hold statics in memory and they will exist for the duration of the function timeout (five minutes), but after that the data will need to be retrieved again. It is also possible to implement your own cache in Blob Storage or another service, but the performance will not be as good as you would find in an ASP.Net Web API.
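If you do lean on statics for caching, a minimal sketch might look like the one below – nothing more than a dictionary with an expiry, which only survives while the instance stays alive.

```csharp
using System;
using System.Collections.Concurrent;

public static class SimpleCache
{
    private static readonly ConcurrentDictionary<string, (DateTime Expires, object Value)> _items =
        new ConcurrentDictionary<string, (DateTime Expires, object Value)>();

    public static T GetOrAdd<T>(string key, TimeSpan ttl, Func<T> factory)
    {
        // Return the cached value if it exists and has not expired.
        if (_items.TryGetValue(key, out var entry) && entry.Expires > DateTime.UtcNow)
        {
            return (T)entry.Value;
        }

        // Otherwise create it, cache it with an expiry, and return it.
        var value = factory();
        _items[key] = (DateTime.UtcNow.Add(ttl), value);
        return value;
    }
}
```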
REST API formatting
There is a lot of messy code needed to pull parameters from requests in Azure Functions when compared with Web API. If you are looking for clean code that binds parameters by convention rather than hard-coding the values to look for, Azure Functions is not the place to go at the moment. It would be good to see if there are any projects looking to address this, but it is not something we have yet found.
Summary
If you have read this far, either through reading the above or scrolling to the bottom, you are probably wondering whether we would use Azure Functions for a Serverless Web API again. The answer is a definite yes. There are considerations to weigh up, but the cost benefit for APIs that are not in constant use is massive. Functions deliver many integrations with other Azure services with little or no extra work and allow scheduling and triggers to be set up in one place. There are limitations, as noted above, and the ecosystem does not yet have the projects around it that make creating APIs as easy as with an ASP.Net Web API or similar. However, when you can change your code to reduce the memory used and immediately see the cost reduction in your live environment, it is a hugely enticing option.
By Kevin McDonnell, Senior Technical Architect at Ballard Chalmers
UPDATE: The next in the series is available here: Modern Serverless Development Part 4 – Web Application Using Angular 5
About the author
Kevin McDonnell is a respected Senior Technical Architect at Ballard Chalmers. With a Master of Engineering (MEng), Engineering Science degree from the University of Oxford he specialises in .NET & Azure development and has a broad understanding of the wider Microsoft stack. He listens to what clients are looking to achieve and helps identify the best platform and solution to deliver on that. Kevin regularly blogs on Digital Workplace topics and is a regular contributor to the monthly #CollabTalk discussions on Twitter.