
Running RTF on AKS with AGIC as ingress controller

Running Kubernetes-based workloads is almost a must in a modern company, but running different versions of Kubernetes with different setups can be a nightmare for the operations team. That’s one of the reasons why it was great news when MuleSoft started supporting BYOK8s (bring-your-own-Kubernetes) scenarios. As usual there’s a dark side to good news: the prepackaged and preconfigured ingress controller was removed from the Runtime Fabric components, so ingress now has to be solved on the customer’s side.


Runtime Fabric can run on different PaaS Kubernetes solutions, and customers can choose from a number of different ingress controllers.


One such scenario is using AKS (Azure’s managed Kubernetes) with the Application Gateway Ingress Controller (AGIC). Together they provide a fully managed infrastructure with great configurability and potential for IaC deployments. The quickstart guide below helps you start using these tools fast; after that you can learn to walk and run on your own.

Without further ado let’s jump (or rather crawl – pun intended) into the example setup.


Prerequisites:


What do you need to set up a similar solution in your own environment?

  • Azure Subscription with the necessary resource providers registered (e.g. Microsoft.ContainerService; see the registration example after this list)

  • A Virtual Network and two Subnets pre-configured

  • An Azure Log Analytics deployed to be the target of the monitoring add-on (see later)

  • Cloud Shell access (shell.azure.com) or any other shell from which you can manage Azure resources. Note that managing AKS with kubectl via Cloud Shell is not possible due to authentication library changes, so I suggest using an alternative shell.

  • Anypoint Control Plane access with permission to create and manage Runtime Fabrics

  • An app that you can deploy into the new environment to test the ingress config (this also needs a valid MuleSoft license)

If you have all of the above, you can follow along with the steps below.
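If you are not sure whether the required resource providers are registered in your subscription, a quick check and registration can be done from any shell with the Azure CLI. A minimal sketch, assuming Microsoft.ContainerService is the provider you are missing:

#check the current registration state of the provider
az provider show --namespace Microsoft.ContainerService --query registrationState -o tsv
#register it if the state is not "Registered"
az provider register --namespace Microsoft.ContainerService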


Desired state:


A Runtime Fabric registered in Anypoint Control Plane, hosted in an AKS cluster, with AGIC configured to manage incoming traffic to the deployed Mule applications.


Implementation flow:

  1. Deploy an AKS cluster

  2. Set up AKS management

  3. Create Runtime Fabric in Anypoint Control Plane and deploy RTF components onto AKS

  4. Deploy an Azure Application Gateway

  5. Set up the Ingress Controller feature between AGIC and AKS

  6. Deploy sample application to RTF

  7. Configure ingress

  8. Test and validate

Step 1: Deploy an AKS cluster


In this sample scenario an existing subnet is used to deploy the AKS nodes into. Several flags are also set, e.g. AAD integration, the monitoring add-on and managed identity, which are useful and worth considering.


Here is a sample script that you can modify, or simply fill out and run in your environment (the trailing backticks are PowerShell line continuations, so strip the inline # comments before running):

az aks create `
--name <name of the AKS cluster> `
--resource-group <name of the AKS cluster’s resource group> ` #must already exist
--dns-service-ip <valid IP address> ` #must be within the service CIDR
--enable-aad `
--enable-addons monitoring `
--enable-azure-rbac `
--enable-managed-identity `
--kubernetes-version 1.24.9 ` #note that this cannot be higher than the RTF-supported version
--load-balancer-sku standard `
--location <region> ` #must be in a valid format, e.g. westeurope
--network-plugin kubenet ` #Azure CNI is advised; in case of kubenet watch step 5
--node-count 1 ` #this should be at least 3 in production
--node-osdisk-type Ephemeral ` #check that your VM SKU supports ephemeral disks
--node-vm-size <VM SKU> `
--nodepool-name <name of the nodepool> `
--outbound-type loadBalancer ` #you can define UDR instead if your security requires it
--pod-cidr <valid CIDR range> ` #only kubenet requires this, don’t use it with Azure CNI
--service-cidr <valid CIDR range> ` #note that dns-service-ip must be within this range
--uptime-sla `
--vnet-subnet-id /subscriptions/<subscriptionId>/resourceGroups/<RGName>/providers/Microsoft.Network/virtualNetworks/<VnetName>/subnets/<subnetName> ` #deployment can create this for you if omitted
--workspace-resource-id /subscriptions/<subscriptionId>/resourceGroups/<RGName>/providers/microsoft.operationalinsights/workspaces/<LogAnalyticsWorkspaceName> ` #deployment can create this for you if omitted
--yes


After deployment you should have an AKS resource and a managed resource group that holds all other managed resources (e.g. load balancer, public IP, route table, NSG, managed identities, etc.).

In the example below:

Resource group = MuleSoft_AKS_test

Managed resource group = MC_MuleSoft_AKS_test_MuleSoftAKStest_westeurope
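
A quick way to double-check the result is the CLI; a sketch assuming the cluster is named MuleSoftAKStest (inferred from the managed resource group name above):

#the cluster should report Succeeded
az aks show --resource-group MuleSoft_AKS_test --name MuleSoftAKStest --query provisioningState -o tsv
#list the managed resources created alongside the cluster
az resource list --resource-group MC_MuleSoft_AKS_test_MuleSoftAKStest_westeurope -o table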


And here is the AKS resource created:


Step 2: Set up AKS management


With every new K8s cluster you need to update your kubeconfig file so you can reach the freshly deployed API server. As AAD integration has been enabled, it is wise to set an admin AAD group under Cluster configuration (see pic below) and then update the kubeconfig file by running the following in Cloud Shell:


az aks get-credentials --resource-group <AKS resource group name> --name <AKS cluster name>
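
If you run into the authentication library issue mentioned in the prerequisites, converting the kubeconfig with kubelogin from your alternative shell usually solves it; a minimal sketch, assuming kubelogin and kubectl are installed:

#convert the kubeconfig to authenticate with the Azure CLI token
kubelogin convert-kubeconfig -l azurecli
#verify the API server is reachable and the nodes are Ready
kubectl get nodes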

Step 3: Create Runtime Fabric in Anypoint Control Plane and deploy RTF components onto AKS


Deploying Runtime Fabric is straightforward and you can follow the steps outlined in the MuleSoft documentation: https://docs.mulesoft.com/runtime-fabric/latest/install-self-managed
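
After the installation finishes, a quick sanity check from the shell does not hurt; a sketch assuming the Runtime Fabric components were installed into the default rtf namespace:

#all Runtime Fabric components should reach Running state
kubectl get pods -n rtf

The Runtime Fabric should also show as Active on the Runtime Fabrics page in Anypoint Control Plane.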


Step 4: Deploy an Azure Application Gateway


Deploying an Application Gateway is no different whether or not you want to use the ingress controller feature; however, if you do want to use it, then you must deploy a v2 SKU. In the example below an existing subnet is used, but you can always let the deployment create one for you if you’d prefer.


az network application-gateway create `
--name <AGIC name> `
--resource-group <AGIC resource group name> ` #must already exist
--location <region> `
--public-ip-address <name of the PIP resource> ` #you can reference an existing public IP, otherwise a new one is created with this name
--sku Standard_v2 ` #you can opt for WAF_v2 instead
--subnet /subscriptions/<subscriptionId>/resourceGroups/<RGName>/providers/Microsoft.Network/virtualNetworks/<VnetName>/subnets/<subnetName> #deployment can create this for you if omitted


After deployment you should have an Azure Application Gateway resource; however, there is no connection yet between it and the previously deployed AKS cluster. That is set up in the next step. Note that you can also create the Application Gateway and AKS resources together, in which case the connection between them would already be set; to do so you need to modify the az aks create script and add flags to define the AGIC resource, as sketched below.
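
For completeness, here is a minimal sketch of that combined option, assuming you let AKS create a brand new Application Gateway in a dedicated subnet (the subnet CIDR below is only a placeholder):

az aks create `
--name <name of the AKS cluster> `
--resource-group <name of the AKS cluster’s resource group> `
--network-plugin azure `
--enable-managed-identity `
--enable-addons ingress-appgw `
--appgw-name <name of the new Application Gateway> `
--appgw-subnet-cidr "10.225.0.0/16"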


Here is the Application Gateway that was created (note that a new public IP is also assigned):

Step 5: Set up the Ingress Controller feature between AGIC and AKS


If you check the AKS resource, you will see under Networking that no Application Gateway ingress controller is set up yet. Two steps are needed to fully set up the connection between the two resources:

  1. Associate the newly created route table with the AGIC subnet (only relevant for kubenet!)

  2. Either on the portal or from code, enable the ingress controller add-on on AKS

Code example:

$appgwId=$(az network application-gateway show -n <AGIC name> -g <AGIC resource group> -o tsv --query "id")

az aks enable-addons -n <AKS name> -g <AKS resource group> -a ingress-appgw --appgw-id $appgwId


After successfully setting up the ingress controller you will see that on the portal as well. In case of kubenet do NOT forget to associate the route table with the AGIC’s subnet too (see the sketch below), otherwise traffic will not be routed and nothing will reach your AKS cluster.
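
A minimal sketch of that association, assuming the route table was created by AKS in the managed resource group and that the Application Gateway has its own subnet in the same VNet:

#grab the id of the route table AKS created for kubenet
$routeTableId=$(az network route-table list -g <managed resource group name> -o tsv --query "[0].id")
#associate it with the Application Gateway’s subnet
az network vnet subnet update -g <VNet resource group name> --vnet-name <VnetName> -n <Application Gateway subnet name> --route-table $routeTableId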


Step 6: Deploy sample application to RTF


Here an extremely basic application was used that has no policies and no authentication at all. If the correct endpoint is called, it responds with HTTP 200 and the following message:


{

"Message": "Application is up and running"

}


Deploying an application is no different from any other scenario you can do via Anypoint Control Plane. Do NOT forget to associate your newly created RTF with an environment first, then deploy an app manually or via a CD pipeline; it’s up to you, but you should have a running application at the end.


Here is the one used:

Step 7: Configure ingress


This step also has multiple stages:

  1. You need to upload a certificate to the Application Gateway so that secure HTTPS traffic can be established

  2. You need to specify an ingress definition and deploy it to the AKS cluster

Note that it is possible to use ingress templates to create ingresses for each application you deploy.

For the first stage, the upload is possible with the following script:

az network application-gateway ssl-cert create `
--gateway-name <AGIC name> `
--name <cert name> ` #note that this has to be referenced in the ingress yaml
--resource-group <AGIC resource group> `
--cert-file <path to your pfx file> `
--cert-password $Password #$Password is set to contain the pfx cert chain password


After the required certificate chain has been uploaded successfully, it becomes available for listeners to use.
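
If you prefer to verify the upload from the CLI, listing the certificates on the gateway should show it (same placeholders as above):

az network application-gateway ssl-cert list --gateway-name <AGIC name> --resource-group <AGIC resource group> -o table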

For the second stage, a valid ingress definition has to be created and applied in the AKS cluster. As there are a ton of options for how an ingress can be configured, only a sample is provided here; in the end the ingress has to be adjusted to your company policies and standards.

Here is the ingress example used:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demoagic
  namespace: <namespace id>
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/health-probe-path: "/api/healthCheck" #example path
    appgw.ingress.kubernetes.io/health-probe-port: "8081" #example port
    appgw.ingress.kubernetes.io/appgw-ssl-certificate: "<cert name>"
spec:
  rules:
    - host: <hostname>
      http:
        paths:
          - path: /api/healthCheck #example path
            backend:
              service:
                name: healthcheck #example service name
                port:
                  number: 8081 #example port
            pathType: Prefix


Take note of the annotation appgw-ssl-certificate, which references the name of the uploaded cert (these must match!), and kubernetes.io/ingress.class: azure/application-gateway, which specifies that AGIC is used and that the add-on deployed to AKS will manage the configuration of the Application Gateway.
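
Applying the definition is the usual kubectl flow; a minimal sketch assuming the YAML above was saved as demoagic.yaml:

#apply the ingress definition
kubectl apply -f demoagic.yaml
#after a short while the ingress should show the Application Gateway’s public IP as its address
kubectl get ingress -n <namespace id>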


Step 8: Test and validate


This is the final step and the true validation of whether everything was set up correctly. As mentioned before, the desired state is secure traffic flowing to the app via the Application Gateway’s public IP through to the pod running in the AKS cluster.

In the picture below you can see that a secure connection is established (note the HTTPS lock) and the app responds as expected.
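
The same check can be run from the command line; a sketch assuming the hostname used in the ingress resolves to the Application Gateway’s public IP (use --resolve if DNS is not in place yet):

#call the example endpoint over HTTPS through the Application Gateway
curl -v https://<hostname>/api/healthCheck
#without DNS, pin the hostname to the Application Gateway’s public IP for the test
curl -v --resolve <hostname>:443:<Application Gateway public IP> https://<hostname>/api/healthCheck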



Finally, we can validate that there is a healthy flow on the Application Gateway’s backend health page.
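
The same backend health information is also available from the CLI (same placeholders as before):

az network application-gateway show-backend-health --name <AGIC name> --resource-group <AGIC resource group>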


Hopefully this was easy to follow and understand, and you are now a step closer to running all your MuleSoft workloads on a self-managed K8s cluster in Azure with a dedicated external ingress controller.


Please let us know if you have any questions and we’ll surely try to answer those as soon as possible!


