In this blog, we will discuss the importance of mutual TLS (mTLS) in securing communication between clients and servers within an AKS cluster. We will explore how mTLS adds an extra layer of security by requiring both the client and the server to authenticate each other’s certificates.

Overview:

Azure Kubernetes Service (AKS) provides a managed Kubernetes environment, which simplifies the deployment, management, and operations of Kubernetes. An essential feature of AKS is its support for ingress, which allows external users to access HTTP and HTTPS routes to services within an AKS cluster.

In my previous post, we explored three options to secure the connection between API Management and AKS, along with their use cases, pros, and cons:

1. Expose services publicly.

2. Install an ingress controller (mTLS).

3. Deploy APIM inside the cluster VNet.

For all the details, refer to the link: Options to secure the connection between API Management and AKS.

In this post, we will see how to implement Option 2: install an ingress controller (mTLS).

Install an Ingress Controller:

The basic ingress in AKS involves the following steps:

  • Creating an AKS cluster with Azure CLI.
  • Deploying an application to run in the cluster.
  • Configuring an Ingress Controller, like NGINX, to manage external access to the services.
  • Defining Ingress Resources that specify the URL paths and backend services to route traffic to.

Solution: Deploy AKS and API Management with mTLS

This solution demonstrates how to integrate Azure Kubernetes Service (AKS) and Azure API Management via mutual TLS (mTLS) in an architecture that provides end-to-end encryption. 

Ref: https://learn.microsoft.com/en-us/azure/architecture/solution-ideas/articles/mutual-tls-deploy-aks-api-management

Scenario details

You can use this solution to integrate AKS and API Management via mTLS in an architecture that provides end-to-end encryption.

Potential use cases

  • AKS integration with API Management and Application Gateway, via mTLS.
  • End-to-end mTLS between API Management and AKS.
  • High security deployments for organizations that need end-to-end TLS. For example, organizations in the financial sector can benefit from this solution.

You can use this approach to manage the following scenarios:

  • Deploy API Management in internal mode and expose APIs by using Application Gateway.
  • Configure mTLS and end-to-end encryption for high security and traffic over HTTPS.
  • Connect to Azure PaaS services by using an enhanced security private endpoint.
  • Implement Defender for Containers security.

Dataflow

  1. A user makes a request to the application endpoint from the internet.
  2. Azure Application Gateway receives traffic as HTTPS and presents a PFX certificate previously loaded from Azure Key Vault to the user.
  3. Application Gateway uses private keys to decrypt traffic (SSL offload), performs web application firewall inspections, and re-encrypts traffic by using public keys (end-to-end encryption).
  4. Application Gateway applies rules and backend settings based on the backend pool and sends traffic to the API Management backend pool over HTTPS.
  5. API Management is deployed in internal virtual network mode (Developer or Premium tier only) with a private IP address. It receives traffic as HTTPS with custom domain PFX certificates.
  6. Microsoft Entra ID provides authentication and applies API Management policies via OAuth and client certificate validation. To receive and verify client certificates over HTTP/2 in API Management, you need to enable Negotiate client certificate on the Custom domains blade in API Management.
  7. API Management sends traffic via HTTPS to an ingress controller for an AKS private cluster.
  8. The AKS ingress controller receives the HTTPS traffic and verifies the PEM server certificate and private key. Most enterprise-level ingress controllers support mTLS. Examples include NGINX and AGIC.
  9. The ingress controller processes TLS secrets (Kubernetes Secrets) by using cert.pem and key.pem. The ingress controller decrypts traffic by using a private key (offloaded). For enhanced-security secret management that’s based on requirements, CSI driver integration with AKS is available.
  10. The ingress controller re-encrypts traffic by using private keys and sends traffic over HTTPS to AKS pods. Depending on your requirements, you can configure AKS ingress as HTTPS backend or passthrough.
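
To make steps 8–10 concrete, here is a minimal sketch of how client-certificate verification (mTLS) can be enabled on the community NGINX ingress controller. The host name, backend service name, namespace, and file paths below are illustrative only: the server certificate and key (cert.pem/key.pem) go into a TLS secret, the CA used to validate client certificates presented by API Management goes into a second secret, and the ingress is annotated with the auth-tls-* settings.

# Server certificate/key used for TLS termination at the ingress (step 9)
kubectl create secret tls aks-ingress-tls --cert=cert.pem --key=key.pem -n ingress-basic

# CA bundle used to verify the client certificate presented by API Management (step 8)
kubectl create secret generic ca-secret --from-file=ca.crt=ca.pem -n ingress-basic

# Ingress with client-certificate verification (mTLS) enabled
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mtls-ingress
  namespace: ingress-basic
  annotations:
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
    nginx.ingress.kubernetes.io/auth-tls-secret: "ingress-basic/ca-secret"
    nginx.ingress.kubernetes.io/auth-tls-verify-depth: "1"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - api.contoso.internal
    secretName: aks-ingress-tls
  rules:
  - host: api.contoso.internal
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-api-service
            port:
              number: 80
EOF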

Components

  • Application Gateway. Application Gateway is a web traffic load balancer that you can use to manage traffic to web applications.
  • AKS. AKS provides fully managed Kubernetes clusters for deployment, scaling, and management of containerized applications.
  • Azure Container Registry. Container Registry is a managed, private Docker registry service on Azure. You can use Container Registry to store private Docker images, which are deployed to the cluster.
  • Microsoft Entra ID. When AKS is integrated with Microsoft Entra ID, you can use Microsoft Entra users, groups, or service principals as subjects in Kubernetes RBAC to manage AKS resources.
    • Managed identities. Microsoft Entra managed identities eliminate the need to manage credentials like certificates, secrets, and keys.
  • Azure SQL Database. SQL Database is a fully managed and intelligent relational database service that’s built for the cloud. You can use SQL Database to create a high-availability, high-performance data storage layer for your modern cloud applications.
  • Azure Cosmos DB. Azure Cosmos DB is a fully managed NoSQL database service for building and modernizing scalable, high-performance applications.
  • API Management. You can use API Management to publish APIs to your developers, partners, and employees.
  • Azure Private Link. Private Link provides access to PaaS services that are hosted on Azure, so you can keep your data on the Microsoft network.
  • Key Vault. Key Vault can provide enhanced security for keys and other secrets.
  • Defender for Cloud. Defender for Cloud is a solution for cloud security posture management and cloud workload protection. It finds weak spots across your cloud configuration, helps strengthen the security of your environment, and can protect workloads across multicloud and hybrid environments from evolving threats.
  • Azure Monitor. You can use Monitor to collect, analyze, and act on telemetry data from your Azure and on-premises environments. Monitor helps you maximize the performance and availability of your applications and proactively identify problems.
    • Log Analytics. You can use Log Analytics to edit and run log queries with data in Azure Monitor logs.
    • Application Insights. Application Insights is an extension of Azure Monitor. It provides application performance monitoring.
  • Microsoft Sentinel. Microsoft Sentinel is a cloud-native security information and event manager platform that uses built-in AI to help you analyze large volumes of data.
  • Azure Bastion. Azure Bastion is a fully managed service that provides RDP and SSH access to VMs without any exposure through public IP addresses. You can provision the service directly in your local or peered virtual network to get support for all VMs in that network.
  • Azure Private DNS. You can use Private DNS to manage and resolve domain names in a virtual network without adding a custom DNS solution.

Solution Options: AKS with TLS Ingress vs. App Gateway Ingress with APIM

There are other alternatives to Option 2 (AKS with TLS Ingress). You can use App Gateway Ingress with APIM.

Let’s look at a comparison, shown below.

Security and Authentication
  • AKS with TLS Ingress: Supports mutual TLS authentication natively. Authentication can be performed at the ingress controller.
  • App Gateway Ingress with APIM: Offers end-to-end encryption, integrates with Azure services for authentication, and applies API Management policies.

Traffic Management
  • AKS with TLS Ingress: Manages traffic directly to the AKS cluster, avoiding multiple hops before requests reach the cluster.
  • App Gateway Ingress with APIM: Performs SSL offload, web application firewall inspections, and re-encrypts traffic with rules based on the backend pool.

Deployment and Operation
  • AKS with TLS Ingress: Requires knowledge of Kubernetes Services and APIs, and mapping these services to APIs in API Management.
  • App Gateway Ingress with APIM: Involves deploying API Management in internal virtual network mode with a private IP address, receiving HTTPS traffic with custom domain PFX certificates.

Microservices Exposure
  • AKS with TLS Ingress: Suitable for microservices exposed to internal or external clients, addressing cross-cutting concerns like authentication and monitoring.
  • App Gateway Ingress with APIM: Ideal for scenarios where microservices are not exposed publicly and other services in the Kubernetes cluster can use the same API Management instance.

Demo Overview

This lab is divided into two parts:

Part 1:
  • Create AKS
  • Attach AKS to ACR
  • Configure a basic ingress controller
  • Validate the ingress controller/load balancer
  Ref: Create an unmanaged ingress controller – Azure Kubernetes Service | Microsoft Learn

Part 2:
  • Use TLS with your own certificates with the Secrets Store CSI Driver
  • Use TLS with Let’s Encrypt certificates
  • Ingress controller configuration options
  • Add an A record to your DNS zone
  • Configure an FQDN for your ingress controller
  • Install cert-manager
  • Create a CA cluster issuer
  • Update your ingress routes
  • Verify a certificate object has been created
  • Test the ingress configuration
  Ref: Use TLS with an ingress controller on Azure Kubernetes Service (AKS) – Azure Kubernetes Service | Microsoft Learn

Part1: Create an Unmanaged Ingress Controller:

An ingress controller is a piece of software that provides reverse proxy, configurable traffic routing, and TLS termination for Kubernetes services. Kubernetes ingress resources are used to configure the ingress rules and routes for individual Kubernetes services. When you use an ingress controller and ingress rules, a single IP address can be used to route traffic to multiple services in a Kubernetes cluster.

This post shows you how to deploy the NGINX ingress controller in an Azure Kubernetes Service (AKS) cluster. Two applications are then run in the AKS cluster, each of which is accessible over a single IP address.

Ref: Create an unmanaged ingress controller – Azure Kubernetes Service | Microsoft Learn

Part2: Use TLS with an ingress controller on Azure Kubernetes Service (AKS)

The transport layer security (TLS) protocol uses certificates to provide security for communication, encryption, authentication, and integrity. Using TLS with an ingress controller on AKS allows you to secure communication between your applications and experience the benefits of an ingress controller.

You can bring your own certificates and integrate them with the Secrets Store CSI driver. Alternatively, you can use cert-manager, which automatically generates and configures Let’s Encrypt certificates. Two applications run in the AKS cluster, each of which is accessible over a single IP address.
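
For the Let’s Encrypt path mentioned above, cert-manager is typically installed from its own Helm chart and paired with a cluster issuer. The following is a minimal sketch; the issuer name and email are placeholders, and the exact chart version and settings used in the lab should be taken from the linked Microsoft Learn article.

# Install cert-manager from the Jetstack Helm repository (including its CRDs)
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --set installCRDs=true

# A Let's Encrypt ClusterIssuer that answers HTTP-01 challenges through the nginx ingress class
kubectl apply -f - <<'EOF'
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com
    privateKeySecretRef:
      name: letsencrypt
    solvers:
    - http01:
        ingress:
          class: nginx
EOF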

There are two open-source ingress controllers for Kubernetes based on NGINX:

  • one maintained by the Kubernetes community (kubernetes/ingress-nginx), and
  • one maintained by NGINX, Inc. (nginxinc/kubernetes-ingress).

In this post, we will use the Kubernetes community ingress controller.

Ref: Use TLS with an ingress controller on Azure Kubernetes Service (AKS) – Azure Kubernetes Service | Microsoft Learn

 Demo: Create an unmanaged ingress controller

Let’s jump into the demo and explore all the details.

Prerequisites:

Helm 3:

This post uses Helm 3 to install the NGINX ingress controller on a supported version of Kubernetes.

ACR:  

An Azure Container Registry (ACR) attached to the AKS cluster for storing container images.

Kubernetes API health endpoint

The healthz endpoint was deprecated in Kubernetes v1.16; use the livez and readyz endpoints instead.
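
You can query these endpoints directly against the API server to confirm cluster health, for example:

kubectl get --raw='/readyz?verbose'
kubectl get --raw='/livez?verbose'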

Azure CLI

 Azure CLI version 2.0.64 or later

Azure PowerShell

Azure PowerShell version 5.9.0 or later

Demo (Summary)

  1. Create AKS
  2. Create a new AKS cluster and integrate with an existing ACR
  3. Import an image into your ACR
  4. Deploy the sample image from ACR to AKS:
  5. Create Ingress Controller
  6. Check the load balancer service
  7. Run demo applications.
  8. Create an ingress route
  9. Test the ingress controller

Demo (Step-by-step): Create AKS and configure Unmanaged Ingress controller

1. Create a new ACR

MYACR=mycontainerregistry 
az acr create -n $MYACR -g myContainerRegistryResourceGroup --sku basic

2. Create a new AKS cluster and integrate with an existing ACR

Create a new AKS cluster and integrate it with an existing ACR using the az aks create command with the --attach-acr parameter.

 This command allows you to authorize an existing ACR in your subscription and configures the appropriate AcrPull role for the managed identity.

MYACR=mycontainerregistry 
az aks create -n myAKSCluster -g myResourceGroup --generate-ssh-keys --attach-acr $MYACR

If you’re using an ACR located in a different subscription from your AKS cluster or would prefer to use the ACR resource ID instead of the ACR name, you can do so using the following syntax:

# Attach using acr-resource-id
az aks create -n myAKSCluster -g myResourceGroup --generate-ssh-keys --attach-acr <acr-resource-id>
 
# Attach an ACR to an existing cluster using acr-resource-id
az aks update -n myAKSCluster -g myResourceGroup --attach-acr <acr-resource-id>

 Example:

az aks create -n myAKSCluster -g myResourceGroup --generate-ssh-keys --attach-acr /subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.ContainerRegistry/registries/mycontainerregistryrks
 
az aks update -n myAKSCluster -g myResourceGroup --attach-acr /subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.ContainerRegistry/registries/mycontainerregistryrks

3. Import an image into your ACR:

az acr import  -n <acr-name> --source docker.io/library/nginx:latest --image nginx:v1 
 
az acr import  -n mycontainerregistryrks --source docker.io/library/nginx:latest --image nginx:v1

4. Deploy the sample image from ACR to AKS:

Ensure you have the proper AKS credentials using the az aks get-credentials command.
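
For reference, the cluster credentials can be merged into your local kubeconfig with a command like the following (using the cluster and resource group names from the earlier steps):

az aks get-credentials -n myAKSCluster -g myResourceGroup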

Create a file called acr-nginx.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx0-deployment
  labels:
    app: nginx0-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx0
  template:
    metadata:
      labels:
        app: nginx0
    spec:
      containers:
      - name: nginx
        image: <acr-name>.azurecr.io/nginx:v1
        ports:
        - containerPort: 80

You can create a new file from the Azure CLI using Bash by following these steps:

  • Open Azure Cloud Shell or a local install of the Azure CLI.
  • Use the vi command followed by the filename to create a new file. For example, to create a file named helloworld, you would run: vi helloworld
  • When the file opens, press i to switch to insert mode.
  • Type your content into the file.
  • Press Esc to exit insert mode.
  • Type :wq and then press Enter to save the file and quit.
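
Once the manifest is saved, deploy it and confirm the pods start. These are standard kubectl commands; adjust the namespace if you are not deploying to the default one:

kubectl apply -f acr-nginx.yaml
kubectl get pods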

5. Create Ingress Controller

Step B: Create Ingress Controller. Ref: Create an unmanaged ingress controller – Azure Kubernetes Service | Microsoft Learn

5.1. Create Ingress Controller - Basic configuration

To create a basic NGINX ingress controller without customizing the defaults, you’ll use Helm. The following configuration uses the default configuration for simplicity. You can add parameters for customizing the deployment, like --set controller.replicaCount=3.

NAMESPACE=ingress-basic 
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx 
helm repo update 
 
helm install ingress-nginx ingress-nginx/ingress-nginx \ 
--create-namespace \ 
--namespace $NAMESPACE \ 
--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz \ 
--set controller.service.externalTrafficPolicy=Local

Code:

rajeev [ ~ ]$ NAMESPACE=ingress-basic 
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update 
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --create-namespace \
  --namespace $NAMESPACE \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz \
  --set controller.service.externalTrafficPolicy=Local
"ingress-nginx" has been added to your repositories
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "ingress-nginx" chart repository
Update Complete. ⎈Happy Helming!
NAME: ingress-nginx
LAST DEPLOYED: Tue Mar  5 23:19:51 2024
NAMESPACE: ingress-basic
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The ingress-nginx controller has been installed.
It may take a few minutes for the load balancer IP to be available.
You can watch the status by running 'kubectl get service --namespace ingress-basic ingress-nginx-controller --output wide --watch'
 An example Ingress that makes use of the controller:
  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: example
    namespace: foo
  spec:
    ingressClassName: nginx
    rules:
      - host: www.example.com
        http:
          paths:
            - pathType: Prefix
              backend:
                service:
                  name: exampleService
                  port:
                    number: 80
              path: /
    # This section is only required if TLS is to be enabled for the Ingress
    tls:
      - hosts:
        - www.example.com
        secretName: example-tls
 If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:
   apiVersion: v1
  kind: Secret
  metadata:
    name: example-tls
    namespace: foo
  data:
    tls.crt: <base64 encoded cert>
    tls.key: <base64 encoded key>
  type: kubernetes.io/tls

Ref: Welcome – Ingress-Nginx Controller (kubernetes.github.io)

5.2.  Customized configuration

As an alternative to the basic configuration presented in the above section, the next set of steps will show how to deploy a customized ingress controller.

You’ll have the option of using an internal static IP address, or using a dynamic public IP address.

REGISTRY_NAME=<REGISTRY_NAME> 
SOURCE_REGISTRY=registry.k8s.io 
CONTROLLER_IMAGE=ingress-nginx/controller 
CONTROLLER_TAG=v1.8.1 
PATCH_IMAGE=ingress-nginx/kube-webhook-certgen 
PATCH_TAG=v20230407 
DEFAULTBACKEND_IMAGE=defaultbackend-amd64 
DEFAULTBACKEND_TAG=1.5 
 
az acr import --name $REGISTRY_NAME --source $SOURCE_REGISTRY/$CONTROLLER_IMAGE:$CONTROLLER_TAG --image $CONTROLLER_IMAGE:$CONTROLLER_TAG 
az acr import --name $REGISTRY_NAME --source $SOURCE_REGISTRY/$PATCH_IMAGE:$PATCH_TAG --image $PATCH_IMAGE:$PATCH_TAG 
az acr import --name $REGISTRY_NAME --source $SOURCE_REGISTRY/$DEFAULTBACKEND_IMAGE:$DEFAULTBACKEND_TAG --image $DEFAULTBACKEND_IMAGE:$DEFAULTBACKEND_TAG

5.3.  Create an ingress controller

To create the ingress controller, use Helm to install ingress-nginx. The ingress controller needs to be scheduled on a Linux node. Windows Server nodes shouldn’t run the ingress controller. A node selector is specified using the --set nodeSelector parameter to tell the Kubernetes scheduler to run the NGINX ingress controller on a Linux-based node.

For added redundancy, two replicas of the NGINX ingress controllers are deployed with the --set controller.replicaCount parameter. To fully benefit from running replicas of the ingress controller, make sure there’s more than one node in your AKS cluster.

The following example creates a Kubernetes namespace for the ingress resources named ingress-basic and is intended to work within that namespace. Specify a namespace for your own environment as needed. If your AKS cluster isn’t Kubernetes role-based access control enabled, add --set rbac.create=false to the Helm commands.

This involves the following steps:

(1) Import the images used by the Helm chart into your ACR. To control image versions, import them into your own Azure Container Registry. The NGINX ingress controller Helm chart relies on three container images; use az acr import to import those images into your ACR.
(2) Create the ingress controller. To create the ingress controller, use Helm to install ingress-nginx. The ingress controller needs to be scheduled on a Linux node.
(3) Optionally, create the ingress controller using an internal IP address. By default, an NGINX ingress controller is created with a dynamic public IP address assignment. A common configuration requirement is to use an internal, private network and IP address, which allows you to restrict access to your services to internal users with no external access.
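
For reference, a customized Helm install that points the chart at the images imported above might look like the sketch below. The values come from the ingress-nginx chart, ACR_URL is an assumed variable for your registry login server, and the exact chart version and full flag set are in the linked Microsoft Learn article.

ACR_URL=<REGISTRY_NAME>.azurecr.io

helm install ingress-nginx ingress-nginx/ingress-nginx \
  --create-namespace \
  --namespace ingress-basic \
  --set controller.replicaCount=2 \
  --set controller.nodeSelector."kubernetes\.io/os"=linux \
  --set controller.image.registry=$ACR_URL \
  --set controller.image.image=$CONTROLLER_IMAGE \
  --set controller.image.tag=$CONTROLLER_TAG \
  --set controller.image.digest="" \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz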

Note: In this demo, we will skip the customized configuration and use the basic configuration.

6. Check the load balancer service

Check the load balancer service by using kubectl get services.

rajeev [ ~ ]$ kubectl get services --namespace ingress-basic -o wide -w ingress-nginx-controller
NAME                       TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)                      AGE   SELECTOR
ingress-nginx-controller   LoadBalancer   10.0.91.212   4.156.24.80   80:31347/TCP,443:31380/TCP   19m   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx

When the Kubernetes load balancer service is created for the NGINX ingress controller, an IP address is assigned under EXTERNAL-IP, as shown in the following example output:

In this example, the external IP is 4.156.24.80.

If you browse to the external IP address at this stage, you see a 404 page displayed. This is because you still need to set up the connection to the external IP, which is done in the next sections.

7. Run demo applications

To see the ingress controller in action, run two demo applications in your AKS cluster. In this example, you use kubectl apply to deploy two instances of a simple Hello world application.

Create an aks-helloworld-one.yaml file and copy in the following example YAML:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: aks-helloworld-one  
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aks-helloworld-one
  template:
    metadata:
      labels:
        app: aks-helloworld-one
    spec:
      containers:
      - name: aks-helloworld-one
        image: mcr.microsoft.com/azuredocs/aks-helloworld:v1
        ports:
        - containerPort: 80
        env:
        - name: TITLE
          value: "Welcome to Azure Kubernetes Service (AKS)"
---
apiVersion: v1
kind: Service
metadata:
  name: aks-helloworld-one  
spec:
  type: ClusterIP
  ports:
  - port: 80
  selector:
    app: aks-helloworld-one

Use either the vi editor or simply upload the file.
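
The aks-helloworld-two.yaml manifest referenced below follows the same pattern as the first one. Here is a minimal sketch, assuming the same sample image with a different title (written as a Bash heredoc so you can paste it straight into Cloud Shell):

cat <<'EOF' > aks-helloworld-two.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aks-helloworld-two
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aks-helloworld-two
  template:
    metadata:
      labels:
        app: aks-helloworld-two
    spec:
      containers:
      - name: aks-helloworld-two
        image: mcr.microsoft.com/azuredocs/aks-helloworld:v1
        ports:
        - containerPort: 80
        env:
        - name: TITLE
          value: "AKS Ingress Demo"
---
apiVersion: v1
kind: Service
metadata:
  name: aks-helloworld-two
spec:
  type: ClusterIP
  ports:
  - port: 80
  selector:
    app: aks-helloworld-two
EOF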

Run the two demo applications using kubectl apply:

kubectl apply -f aks-helloworld-one.yaml --namespace ingress-basic
kubectl apply -f aks-helloworld-two.yaml --namespace ingress-basic

8. Create an ingress route

Both applications are now running on your Kubernetes cluster. To route traffic to each application, create a Kubernetes ingress resource. The ingress resource configures the rules that route traffic to one of the two applications.

In the following example, traffic to EXTERNAL_IP/hello-world-one is routed to the service named aks-helloworld-one.

Traffic to EXTERNAL_IP/hello-world-two is routed to the aks-helloworld-two service. Traffic to EXTERNAL_IP/static is routed to the service named aks-helloworld-one for static assets.

Create a file named hello-world-ingress.yaml and copy in the following example YAML:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /hello-world-one(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: aks-helloworld-one
            port:
              number: 80
      - path: /hello-world-two(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: aks-helloworld-two
            port:
              number: 80
      - path: /(.*)
        pathType: Prefix
        backend:
          service:
            name: aks-helloworld-one
            port:
              number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ingress-static
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /static/$2
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /static(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: aks-helloworld-one
            port: 
              number: 80

Create the ingress resource using the kubectl apply command.

kubectl apply -f hello-world-ingress.yaml --namespace ingress-basic
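
You can confirm that both ingress resources were created and picked up the controller’s address with a quick check:

kubectl get ingress --namespace ingress-basic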

9. Test the ingress controller

To test the routes for the ingress controller, browse to the two applications. Open a web browser to the IP address of your NGINX ingress controller, such as EXTERNAL_IP. The first demo application is displayed in the web browser, as shown in the following example:

kubectl get services --namespace ingress-basic -o wide -w ingress-nginx-controller

In this example, the external IP is 4.156.24.80.

Now add the /hello-world-two path to the IP address, such as EXTERNAL_IP/hello-world-two. The second demo application with the custom title is displayed:
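
If you prefer the command line, the same routes can be checked with curl against the controller’s external IP (4.156.24.80 in this example):

curl -I http://4.156.24.80/
curl -I http://4.156.24.80/hello-world-two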

Test an internal IP address

Create a test pod and attach a terminal session to it.

kubectl run -it --rm aks-ingress-test --image=mcr.microsoft.com/dotnet/runtime-deps:6.0 --namespace ingress-basic

Install curl in the pod using apt-get.

apt-get update && apt-get install -y curl

Access the address of your Kubernetes ingress controller using curl, such as http://10.224.0.42. Provide your own internal IP address specified when you deployed the ingress controller.
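
For example, from inside the test pod:

curl -L http://10.224.0.42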

Clean up resources

This article used Helm to install the ingress components and sample apps. When you deploy a Helm chart, many Kubernetes resources are created. These resources include pods, deployments, and services. To clean up these resources, you can either delete the entire sample namespace, or the individual resources.

Delete the sample namespace and all resources

To delete the entire sample namespace, use the kubectl delete command and specify your namespace name. All the resources in the namespace are deleted.

kubectl delete namespace ingress-basic

Conclusion

Ingress in Azure Kubernetes Service (AKS) is a pivotal feature for developers and organizations looking to expose their applications to the outside world. This post provides a clear pathway for setting up ingress, enabling secure and efficient access to services running within the cluster.

The process begins with the deployment of an AKS cluster, followed by the installation of an Ingress Controller, such as NGINX. This controller acts as a gateway, directing incoming traffic to the appropriate services based on the rules defined in Ingress Resources. These resources are crucial for defining the URL paths that external users can access, ensuring that traffic is routed correctly and securely.

The post emphasizes the importance of Helm for deploying and managing the Ingress Controller, showcasing its ease of use and flexibility. With Helm, developers can quickly deploy the necessary components and focus on the more critical aspects of their applications.

By following the steps outlined in the post, developers can confidently expose their applications to the internet, leveraging AKS’s robust and scalable infrastructure. The integration of AKS with ingress controllers like NGINX provides a seamless experience for managing external access, offering a balance between flexibility and control.

In conclusion, this AKS ingress basics post is an invaluable resource for anyone looking to understand and implement ingress in a Kubernetes environment. It lays the foundation for building accessible, secure, and scalable web applications, making it an essential guide for modern cloud-native development.

Happy coding! 🚀🔐👨‍💻

 

References:

https://learn.microsoft.com/en-us/azure/architecture/solution-ideas/articles/mutual-tls-deploy-aks-api-management

https://learn.microsoft.com/en-us/azure/aks/ingress-basic?tabs=azure-cli

https://learn.microsoft.com/en-us/azure/aks/ingress-tls?tabs=azure-cli

https://kubernetes.github.io/ingress-nginx
