
Part 3: Azure Functions on Kubernetes - have your cake and eat it too

Posted by Aaron Newton on 22 June 2021

Azure Functions, Cloud Strategy, Microservices, Serverless, Kubernetes, Serverless Hosting

Introduction

If you’ve read the first two posts in this series, welcome back. So far in our story, our heroes – microservices and serverless functions – save us from a monolithic meltdown. Go have a read if you’d like to laugh, cringe, and learn from my adventures.

This final post covers a different project. We had proposed using serverless functions as backend workers, but there were some hesitations: would using Azure Functions lock us into Azure? We were already using Kubernetes for other parts of the project, so I started to investigate the possibility of running Azure Functions under Kubernetes.

I have also written a tutorial on how to run Azure Functions under Kubernetes. That tutorial leans heavily on the Kubernetes Event-Driven Autoscaler (KEDA) project. One of the benefits of the native Azure FaaS (Functions as a Service) environment is that the platform monitors and scales the function app based on events and metrics. A function running under Kubernetes needs an equivalent mechanism for scaling on events and metrics, and this is exactly what KEDA provides.
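As a small taste of what's involved, here is one way to get KEDA into a cluster using Helm – a sketch assuming Helm 3 and a KEDA 2.x chart; the "keda" namespace is just a common convention:

# Register the official KEDA chart repository and install KEDA
helm repo add kedacore https://kedacore.github.io/charts
helm repo update
helm install keda kedacore/keda --namespace keda --create-namespace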

[Image: delicious-looking cupcakes. Often we want to have our cake and eat it too.]

Kubernetes, the cloud, and orchestrators

Before we try to run a function on Kubernetes, we should understand the motivation for doing so. One selling point of the cloud is that, for a premium, you can outsource the effort and staffing cost required to keep your hosting environment online. However, the experience of managing and deploying apps to each cloud provider – Azure, Amazon Web Services, Google Cloud Platform – is a little different, as each has slightly different conventions and quirks. I’d compare this to driving on the right-hand or left-hand side of the road in different countries. You can still drive the car, but if you’re not paying attention, cars will be hurtling towards you at great speed.

Kubernetes helps to abstract away the hosting environment and its underlying quirks by replacing them with its own conventions and quirks. However, once your team becomes familiar with Kubernetes objects and commands, deployments become homogeneous, be they for Kubernetes in the cloud, in your own datacentre or on a bunch of Raspberry Pis. You can also run a Kubernetes cluster on a development machine with relatively little effort, as sketched below.
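For example, assuming Docker and the kind CLI are installed, a disposable local cluster is only a couple of commands (minikube and k3d offer similar one-command clusters):

# Create a local single-node cluster named "dev"
kind create cluster --name dev

# Verify the node is up
kubectl get nodes

# Tear it down when finished
kind delete cluster --name dev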

Running Functions in Azure Kubernetes Service

One interesting capability in the Azure Functions framework is the ability to deploy function apps into Kubernetes. This involves creating a function app which, when built, produces a Docker image. The Azure Functions CLI generates the necessary scaffolding, so minimal Docker knowledge is required. The resulting image includes all the dependencies necessary to run the function app, so we can run it anywhere a Docker container can run, including Kubernetes. There are detailed instructions on how to produce a Docker-ready function here, or you can check out my tutorial on how to run Azure Functions under Kubernetes.
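In outline, the flow looks something like this – the app name, function name and registry below are placeholders, and template names vary by worker runtime:

# Scaffold a new function app along with a Dockerfile
func init MyFunctionApp --worker-runtime dotnet --docker
cd MyFunctionApp

# Add a queue-triggered function to the app
func new --name ProcessOrder --template "QueueTrigger"

# Build the image; it bundles the Functions host and all dependencies
docker build -t myregistry.azurecr.io/my-function-app:v1 .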

Some triggers and metrics – such as HTTP triggers – are supported by features already in Kubernetes, so they are not included in the KEDA project. This is because an HTTP-triggered function is essentially a web API, and we can run it in Kubernetes with minimal customisation. However, cloud-specific metrics – such as the number of messages in an Azure Storage queue – don’t come out of the box in Kubernetes. KEDA can provide autoscaling based on Azure Storage blob count, Azure Log Analytics queries, MSSQL queries and other metrics, and it also supports metrics from many non-Azure vendors. The function triggers currently supported by the KEDA project are listed on the KEDA homepage.
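To make this concrete, here is a minimal ScaledObject sketch that scales a hypothetical Deployment on the depth of an Azure Storage queue. All names and thresholds are illustrative, and the Functions CLI's "func kubernetes deploy" command can generate a similar manifest for you:

kubectl apply -f - <<'EOF'
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-function-app-scaler
spec:
  scaleTargetRef:
    name: my-function-app        # the Deployment running the function image
  minReplicaCount: 0             # scale to zero when the queue is empty
  maxReplicaCount: 10
  triggers:
    - type: azure-queue
      metadata:
        queueName: orders
        queueLength: "5"         # target number of messages per replica
        connectionFromEnv: AzureWebJobsStorage
EOF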

There are some great benefits to running functions in Kubernetes. Organisations that already use Kubernetes can migrate their functions to this hosting environment to standardise their operations, and potentially leverage underutilised Kubernetes hosting. As AKS has auto-scaling at both the application and node level, many of the scaling benefits of a PaaS or FaaS style hosting system are preserved. Functions hosted in a Kubernetes environment like AKS can also be configured to avoid cold starts by leveraging orchestration mechanisms such as readiness and start-up probes, as sketched below.
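For illustration, here is a minimal Deployment sketch for an HTTP-triggered function image with start-up and readiness probes, so traffic is only routed to an instance once the Functions host is warm. The image name is a placeholder; the official Azure Functions base images listen on port 80:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-function-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-function-app
  template:
    metadata:
      labels:
        app: my-function-app
    spec:
      containers:
        - name: my-function-app
          image: myregistry.azurecr.io/my-function-app:v1
          ports:
            - containerPort: 80
          startupProbe:           # give the Functions host time to boot
            httpGet:
              path: /
              port: 80
            failureThreshold: 30
            periodSeconds: 5
          readinessProbe:         # only route traffic once the host responds
            httpGet:
              path: /
              port: 80
            periodSeconds: 10
EOF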

More mature products with well-understood operational needs can see their costs significantly reduced by using reserved instances in Azure. In exchange for an up-front commitment over a longer-term contract, Azure discounts hosting costs – in some cases by more than 70%. This is true for many popular services in Azure, including IaaS virtual machines and Azure Kubernetes Service (AKS). Other vendors have similar offerings.

Leveraging FaaS on the native Azure platform is a great way to achieve “speed to value”. Once the usage patterns are better understood, it may be justified to migrate functions to an orchestrator like Kubernetes for operational streamlining and cost reduction. The trend towards containerisation and container orchestration means that the lines between vendors’ platforms are blurring – with less lock-in and better interoperability. In short, you can have your cake and eat it too.

 

[Image: some very happy looking balloons in front of a cloud. Kind of the way I'm happy in the cloud.]

Managed Kubernetes offerings

While Kubernetes handles the abstraction of machines into a cohesive cluster, there is no wizardry to do away with machine maintenance. If you self-manage your Kubernetes cluster, that means patching and upgrading the operating system on each node, as well as installing and upgrading Kubernetes itself.

Teams who want to focus less on server maintenance could benefit from a managed Kubernetes service. Examples include Azure Kubernetes Service (AKS), Amazon Elastic Kubernetes Service (EKS) and Google Kubernetes Engine (GKE).

We used AKS for this project. One awesome AKS feature is cluster auto-scaling. While Kubernetes itself handles auto-scaling at the application level, each application instance consumes machine resources, and eventually more machines need to be added to the cluster. Cluster auto-scaling adds and removes virtual machines based on demand. This is a huge win if your goal is to spend more time adding value to your product and less on maintenance.
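Enabling it is a single flag at cluster creation – a sketch with placeholder resource names; the autoscaler flags are standard az CLI options, and the same flags can later be applied to an existing cluster with "az aks update":

# Create an AKS cluster whose node pool scales between 1 and 5 VMs on demand
az aks create \
  --resource-group my-rg \
  --name my-aks-cluster \
  --node-count 2 \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 5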

Will hosting in Kubernetes completely decouple my functions from Azure?

At the time of writing, Azure Functions require an Azure Storage Account to operate.

This means that even when hosting functions in a Kubernetes environment, an Azure Storage account will need to be provisioned. It is worth noting that Azure Storage is relatively cheap compared to hosting, so this may be an acceptable caveat for many organisations. If compliance is a concern, please note that Microsoft publishes a list of compliance certifications and attestations here.
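In practice, wiring this up is only a couple of CLI calls. A sketch with placeholder names – AzureWebJobsStorage is the connection setting the Functions host looks for:

# Provision the storage account the Functions host requires
az storage account create \
  --name myfuncstorage \
  --resource-group my-rg \
  --sku Standard_LRS

# Fetch its connection string
CONNECTION=$(az storage account show-connection-string \
  --name myfuncstorage \
  --resource-group my-rg \
  --query connectionString --output tsv)

# Surface the connection string to the cluster as a secret
kubectl create secret generic my-function-app-secrets \
  --from-literal=AzureWebJobsStorage="$CONNECTION"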

Conclusion

We have reached the end of our journey into microservices, Azure Functions, serverless hosting and Kubernetes. Each post built upon the last – we split up a monolith, leveraged cloud services to improve speed to value, and surveyed the possibility of running containerised functions in Kubernetes. I hope you feel energised to build something extraordinary!

 

If you like what you read, join our team as we seek to solve wicked problems within Complex Programs, Process Engineering, Integration, Cloud Platforms, DevOps & more!

 

Have a look at our open positions at Deloitte. You can search to see which ones are in Cloud & Engineering.

 

Have more enquiries? Reach out to our Talent Team directly – they will be best placed to support you.
