Overview:
Azure API Management is a solution that facilitates the management of APIs across all environments. It supports the complete API lifecycle and is particularly useful when dealing with microservices architectures deployed in AKS. The service helps to abstract the backend architecture complexity from API consumers and provides a consistent configuration for routing, security, throttling, caching, and observability.
When deploying microservices as APIs, Azure API Management serves as a front door, decoupling clients from the microservices and handling cross-cutting concerns like authentication, authorization, and monitoring. It simplifies the management of communication between microservices and their consumers, whether they are internal or external.
The integration of Azure API Management with AKS allows for the quick deployment and operation of a microservices-based architecture in the cloud. It provides a turnkey solution for creating a modern gateway for microservices and publishing them as APIs. This combination offers a platform for deploying, publishing, securing, monitoring, and managing microservices-based APIs.
The Application Gateway Ingress Controller (AGIC) is a Kubernetes application, which makes it possible for Azure Kubernetes Service (AKS) customers to leverage Azure’s native Application Gateway L7 load-balancer to expose cloud software to the Internet. AGIC monitors the Kubernetes cluster it’s hosted on and continuously updates an Application Gateway, so that selected services are exposed to the Internet.
The Ingress Controller runs in its own pod on the customer’s AKS cluster. AGIC monitors a subset of Kubernetes Resources for changes. The state of the AKS cluster is translated into Application Gateway-specific configuration and applied via Azure Resource Manager (ARM).
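As a rough illustration of the resources AGIC watches, the following is a minimal Ingress manifest of the kind AGIC translates into Application Gateway configuration. The annotation is AGIC's documented ingress class; the service and path names are illustrative assumptions.

```yaml
# Illustrative Ingress resource picked up by AGIC; 'orders-service'
# and the /orders path are hypothetical names, not from the article.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: orders-ingress
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
    - http:
        paths:
          - path: /orders
            pathType: Prefix
            backend:
              service:
                name: orders-service
                port:
                  number: 80
```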
Benefits of Application Gateway Ingress Controller
AGIC helps eliminate the need to have another load balancer/public IP address in front of the AKS cluster and avoids multiple hops in your datapath before requests reach the AKS cluster.
Application Gateway talks to pods using their private IP address directly and doesn’t require NodePort or KubeProxy services.
This capability also brings better performance to your deployments.
Microservices are perfect for building APIs. With Azure Kubernetes Service (AKS), you can quickly deploy and operate a microservices-based architecture in the cloud, and then leverage Azure API Management (API Management) to publish your microservices as APIs for internal and external consumption.
Background
When publishing microservices as APIs for consumption, it can be challenging to manage the communication between the microservices and the clients that consume them. There’s a multitude of cross-cutting concerns such as authentication, authorization, throttling, caching, transformation, and monitoring. These concerns are valid regardless of whether the microservices are exposed to internal or external clients.
The API Gateway pattern addresses these concerns.
Kubernetes Services and APIs
In a Kubernetes cluster, containers are deployed in Pods, which are ephemeral and have a finite lifecycle. When a worker node dies, the Pods running on that node are lost. Because a Pod’s IP address can change at any time, we can’t rely on it to communicate with the Pod.
To solve this problem, Kubernetes introduced the concept of Services. A Kubernetes Service is an abstraction layer that defines a logical group of Pods and enables external traffic exposure, load balancing, and service discovery for those Pods.
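A minimal Service manifest makes the abstraction concrete: the Service selects Pods by label and gives them a stable virtual IP and DNS name. The names and ports below are illustrative assumptions.

```yaml
# Hypothetical Service grouping the Pods labeled app=customer behind
# one stable endpoint; names and ports are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: customer-service
spec:
  selector:
    app: customer
  ports:
    - protocol: TCP
      port: 80        # stable port clients connect to
      targetPort: 8080  # port the Pod's container listens on
```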
From <https://learn.microsoft.com/en-us/azure/api-management/api-management-kubernetes>
When we are ready to publish our microservices as APIs through API Management, we need to think about how to map our Services in Kubernetes to APIs in API Management. There are no set rules. It depends on how you designed and partitioned your business capabilities or domains into microservices at the beginning. For instance, if the pods behind a Service are responsible for all operations on a given resource (for example, Customer), the Service may be mapped to one API. If operations on a resource are partitioned into multiple microservices (for example, GetOrder, PlaceOrder), then multiple Services may be logically aggregated into one single API in API management (See Fig. 1).
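The aggregation described above can be sketched as a small mapping exercise. This is not an API Management SDK call, just an illustration of folding several Kubernetes Services (e.g. GetOrder and PlaceOrder) into one logical API definition; all names are hypothetical.

```python
# Sketch of aggregating multiple Kubernetes Services into one API
# definition, as in Fig. 1. Names are illustrative, not a real SDK.
from typing import Dict, List


def aggregate_services(services: Dict[str, List[str]],
                       api_name: str) -> Dict[str, List[str]]:
    """Merge the operations of several Services under a single API name."""
    operations: List[str] = []
    for service, ops in services.items():
        # Record which Service backs each operation of the aggregated API.
        operations.extend(f"{service}:{op}" for op in ops)
    return {api_name: operations}


# Example: two order microservices become one 'orders' API.
order_services = {
    "get-order-svc": ["GET /orders/{id}"],
    "place-order-svc": ["POST /orders"],
}
print(aggregate_services(order_services, "orders"))
```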
Deploy API Management in front of AKS
There are a few options for deploying API Management in front of an AKS cluster.
While an AKS cluster is always deployed in a virtual network (VNet), an API Management instance isn’t required to be deployed in a VNet.
When API Management doesn’t reside within the cluster VNet, the AKS cluster has to publish public endpoints for API Management to connect to.
In that case, we need to secure the connection between API Management and AKS. In other words, we need to ensure the cluster can be accessed exclusively through API Management.
Let’s go through the options.
Summary of these options:
1. Expose Services publicly.
| Option | Expose Services publicly |
|---|---|
| Description | This might be the easiest option for deploying API Management in front of AKS, especially if you already have authentication logic implemented in your microservices. Services in an AKS cluster can be exposed publicly using Service types of NodePort, LoadBalancer, or ExternalName. In this case, Services are accessible directly from the public internet. After deploying API Management in front of the cluster, we need to ensure all inbound traffic goes through API Management by applying authentication in the microservices. |
| Pros | • Easy configuration on the API Management side because it doesn’t need to be injected into the cluster VNet<br>• No change on the AKS side if Services are already exposed publicly and authentication logic already exists in microservices |
| Cons | • Potential security risk due to public visibility of endpoints<br>• No single entry point for inbound cluster traffic<br>• Complicates microservices with duplicate authentication logic |
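Because option 1 relies on authentication inside each microservice, every service must reject requests that didn’t come through API Management. A minimal sketch of that check follows; `Ocp-Apim-Subscription-Key` is API Management’s default subscription-key header, while the key set and handler are illustrative assumptions.

```python
# Minimal sketch of per-microservice key enforcement (option 1).
# Ocp-Apim-Subscription-Key is API Management's default header name;
# VALID_KEYS and this handler are hypothetical.
VALID_KEYS = {"example-subscription-key"}


def authorize(headers: dict) -> int:
    """Return 200 if the request carries a valid subscription key,
    401 otherwise."""
    key = headers.get("Ocp-Apim-Subscription-Key")
    return 200 if key in VALID_KEYS else 401
```

In practice you would validate a key or token forwarded by API Management policy rather than a shared static key, but the point stands: with publicly exposed Services, this logic must be duplicated in every microservice.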
2. Install an Ingress Controller (mTLS)
| Option | Install an Ingress Controller (mTLS) |
|---|---|
| Use case | If an API Management instance doesn’t reside in the cluster VNet, mutual TLS authentication (mTLS) is a robust way of ensuring the traffic is secure and trusted in both directions between an API Management instance and an AKS cluster. Mutual TLS authentication is natively supported by API Management and can be enabled in Kubernetes by installing an Ingress Controller. As a result, authentication is performed in the Ingress Controller, which simplifies the microservices. Additionally, you can add the IP addresses of API Management to the Ingress allow list to make sure only API Management has access to the cluster. |
| How it works | When you publish APIs through API Management, it’s easy and common to secure access to those APIs by using subscription keys. Developers who need to consume the published APIs must include a valid subscription key in HTTP requests when they make calls to those APIs. Otherwise, the calls are rejected immediately by the API Management gateway and aren’t forwarded to the back-end services. To get a subscription key for accessing APIs, a subscription is required. A subscription is essentially a named container for a pair of subscription keys. |
| Pros | • Easy configuration on the API Management side because it doesn’t need to be injected into the cluster VNet, and mTLS is natively supported<br>• Centralizes protection for inbound cluster traffic at the Ingress Controller layer<br>• Reduces security risk by minimizing publicly visible cluster endpoints |
| Cons | • Increases complexity of cluster configuration due to extra work to install, configure, and maintain the Ingress Controller and manage certificates used for mTLS<br>• Security risk due to public visibility of Ingress Controller endpoint(s) |
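One common way to implement option 2 is client-certificate verification plus an IP allow list at an NGINX Ingress Controller. The annotations below are real NGINX Ingress annotations; the secret, backend service, and the API Management IP are illustrative assumptions.

```yaml
# Sketch of mTLS enforcement at an NGINX Ingress Controller (option 2).
# Secret name, service name, and the allowed IP are hypothetical.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apim-only-ingress
  annotations:
    # Require clients (i.e., API Management) to present a certificate
    # signed by the CA stored in this secret.
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
    nginx.ingress.kubernetes.io/auth-tls-secret: "default/apim-client-ca"
    # Additionally restrict source IPs to the API Management gateway.
    nginx.ingress.kubernetes.io/whitelist-source-range: "203.0.113.10/32"
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: customer-service
                port:
                  number: 80
```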
3. Deploy APIM inside the cluster VNet
In some cases, customers with regulatory constraints or strict security requirements may find Options 1 and 2 not viable due to publicly exposed endpoints. In others, the AKS cluster and the applications that consume the microservices might reside within the same VNet, so there’s no reason to expose the cluster publicly as all API traffic remains within the VNet. For these scenarios, you can deploy API Management into the cluster VNet. The API Management Developer and Premium tiers support VNet deployment.
There are two modes of deploying API Management into a VNet – External and Internal.
If API consumers do not reside in the cluster VNet, the External mode (Fig. 4) should be used. In this mode, the API Management gateway is injected into the cluster VNet but accessible from public internet via an external load balancer. It helps to hide the cluster completely while still allowing external clients to consume the microservices. Additionally, you can use Azure networking capabilities such as Network Security Groups (NSG) to restrict network traffic.
Deploying API Management into a VNet – External:
If all API consumers reside within the cluster VNet, then the Internal mode (see the diagram below) could be used. In this mode, the API Management gateway is injected into the cluster VNet and accessible only from within this VNet via an internal load balancer. There’s no way to reach the API Management gateway or the AKS cluster from the public internet.
Deploying API Management into a VNet – Internal:
| Option | Deploy API Management inside the cluster VNet |
|---|---|
| Pros | • The most secure option because the AKS cluster has no public endpoint<br>• Simplifies cluster configuration since it has no public endpoint<br>• Ability to hide both API Management and AKS inside the VNet using the Internal mode<br>• Ability to control network traffic using Azure networking capabilities such as Network Security Groups (NSG) |
| Cons | • Increases complexity of deploying and configuring API Management to work inside the VNet |
Conclusion
Integrating Azure API Management with Kubernetes, especially Azure Kubernetes Service, offers a seamless way to manage and expose microservices as APIs. This integration is crucial for organizations that rely on microservices architectures, as it simplifies the complexity involved in managing APIs and ensures a secure, scalable, and reliable gateway for API consumption.
Azure API Management acts as a robust front door for microservices, handling essential aspects such as security, load balancing, and cross-cutting concerns, which are vital for maintaining the integrity and performance of the services. By abstracting the backend services, it allows developers to focus on building and evolving their applications without worrying about the underlying infrastructure.
The service’s compatibility with AKS means that developers can leverage the full potential of Kubernetes for orchestrating containerized applications while benefiting from the advanced management capabilities of Azure API Management. This combination not only enhances developer productivity but also accelerates the deployment and scaling of applications in the cloud.
The integration of Azure API Management with AKS represents a powerful alliance that empowers developers to build and manage modern applications more efficiently. It encapsulates the complexities of API management, providing a streamlined and secure approach to exposing microservices as APIs.
For organizations looking to innovate and scale their services, this integration is a key enabler, driving forward the capabilities of cloud-native application development and management. Happy coding! 🚀🔐👨‍💻
References:
https://learn.microsoft.com/en-us/azure/application-gateway/ingress-controller-overview
https://learn.microsoft.com/en-us/azure/api-management/api-management-kubernetes

