Azure Kubernetes Service Networking Deep Dive

Azure Kubernetes Service (AKS) networking can seem complex at first, but understanding the fundamentals is crucial for deploying production-ready applications. In this post, I’ll walk you through the key networking concepts and configurations in AKS.

Network Models in AKS

AKS supports two primary network models: kubenet and Azure CNI (Container Network Interface).

Kubenet (Basic Networking)

Kubenet is the default networking option. With kubenet:

  • Nodes receive an IP address from the Azure virtual network subnet
  • Pods receive an IP address from a logically different address space
  • Network address translation (NAT) is configured so pods can reach resources on the Azure virtual network

# Create an AKS cluster with kubenet networking
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --network-plugin kubenet \
    --vnet-subnet-id /subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/myVnet/subnets/mySubnet \
    --pod-cidr 10.244.0.0/16 \
    --service-cidr 10.0.0.0/16 \
    --dns-service-ip 10.0.0.10 \
    --docker-bridge-address 172.17.0.1/16 \
    --generate-ssh-keys
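
Once the cluster is up, you can see the two address spaces side by side. This is just a quick sanity check with standard kubectl commands, assuming your kubeconfig already points at the cluster:

# Node IPs come from the VNet subnet; pod IPs come from --pod-cidr (10.244.0.0/16)
kubectl get nodes -o wide
kubectl get pods --all-namespaces -o wide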

Azure CNI (Advanced Networking)

With Azure CNI, every pod receives an IP address directly from the virtual network subnet and is reachable on the network without NAT. These IP addresses must be unique across your network space and planned for in advance.

# Create an AKS cluster with Azure CNI
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --network-plugin azure \
    --vnet-subnet-id /subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/myVnet/subnets/mySubnet \
    --service-cidr 10.0.0.0/16 \
    --dns-service-ip 10.0.0.10 \
    --docker-bridge-address 172.17.0.1/16 \
    --generate-ssh-keys
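
To confirm which model an existing cluster uses, you can inspect its network profile. A quick check, assuming the same resource names as above:

# Show the cluster's network configuration
az aks show \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --query networkProfile \
    --output json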

Network Policies

Network policies in Kubernetes allow you to control traffic flow between pods. AKS supports both Azure Network Policy and Calico.
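
Note that the policy engine is chosen with the --network-policy flag when the cluster is created. A minimal sketch, reusing the resource names from earlier and assuming the Azure engine on an Azure CNI cluster (use --network-policy calico for Calico):

# Create an AKS cluster with the Azure network policy engine enabled
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --network-plugin azure \
    --network-policy azure \
    --generate-ssh-keys

With the policy engine in place, you can define standard Kubernetes NetworkPolicy resources. The example below denies all ingress by default and then explicitly allows frontend pods to reach backend pods on port 8080: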

# network-policy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Ingress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080

Apply the network policy:

kubectl apply -f network-policy.yaml
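
To verify that the policies were created as expected, a quick check with standard kubectl commands:

# List and inspect the network policies in the production namespace
kubectl get networkpolicy --namespace production
kubectl describe networkpolicy allow-frontend-to-backend --namespace production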

Ingress Controllers

An ingress controller is essential for routing external traffic to your services. NGINX is a popular choice.

# Install NGINX Ingress Controller using Helm
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update

helm install nginx-ingress ingress-nginx/ingress-nginx \
    --namespace ingress-basic \
    --create-namespace \
    --set controller.replicaCount=2 \
    --set controller.nodeSelector."kubernetes\.io/os"=linux \
    --set defaultBackend.nodeSelector."kubernetes\.io/os"=linux
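
The controller is exposed through an Azure load balancer, so it takes a minute or two to receive a public IP. A quick check, assuming the release was installed into ingress-basic as above:

# Wait for the ingress controller service to get an external IP
kubectl get services --namespace ingress-basic --watch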

Define an ingress resource:

# ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  namespace: production
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - myapp.example.com
    secretName: tls-secret
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend-service
            port:
              number: 80
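
The TLS section references a secret named tls-secret, which must exist in the production namespace before the ingress can serve HTTPS. A minimal sketch, assuming you already have a certificate and key on disk (the file names here are placeholders):

# Create the TLS secret referenced by the ingress, then apply and check the ingress
kubectl create secret tls tls-secret \
    --namespace production \
    --cert=myapp-example-com.crt \
    --key=myapp-example-com.key
kubectl apply -f ingress.yaml
kubectl get ingress my-ingress --namespace production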

Internal Load Balancers

For services that should only be accessible within your virtual network:

# internal-lb-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: internal-app
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: internal-app
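
Apply the manifest and watch the service; the EXTERNAL-IP column should show a private address from the cluster's subnet rather than a public one. A quick check with standard kubectl commands:

# The assigned address should come from the AKS subnet, not a public range
kubectl apply -f internal-lb-service.yaml
kubectl get service internal-app --watch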

For enhanced security, you can create a private AKS cluster:

# Create a private AKS cluster
az aks create \
    --resource-group myResourceGroup \
    --name myPrivateAKSCluster \
    --load-balancer-sku standard \
    --enable-private-cluster \
    --network-plugin azure \
    --vnet-subnet-id /subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/myVnet/subnets/mySubnet \
    --docker-bridge-address 172.17.0.1/16 \
    --dns-service-ip 10.2.0.10 \
    --service-cidr 10.2.0.0/24 \
    --generate-ssh-keys
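
With a private cluster, the API server has no public endpoint, so kubectl only works from a network that can reach it (a peered virtual network, VPN, ExpressRoute, or a jump box). For one-off commands you can also use az aks command invoke, which runs kubectl inside the cluster's network on your behalf:

# Run kubectl against the private API server without direct network access
az aks command invoke \
    --resource-group myResourceGroup \
    --name myPrivateAKSCluster \
    --command "kubectl get nodes"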

Conclusion

AKS networking offers flexibility to match your organization’s requirements. Whether you need simple kubenet networking for development or Azure CNI with network policies for production, AKS has you covered. The key is understanding your requirements around IP address management, security, and integration with existing Azure resources.

In future posts, I’ll cover more advanced topics like service mesh integration and egress traffic control.

Michael John Pena

Senior Data Engineer based in Sydney. Writing about data, cloud, and technology.