
Azure Kubernetes Service: Simplifying Container Orchestration

Azure Kubernetes Service (AKS) continues to evolve as the premier managed Kubernetes offering on Azure. Entering 2021, AKS has matured significantly, with features that make running containers at scale more accessible than ever.

Why AKS in 2021?

With containers becoming the standard deployment unit for modern applications, AKS provides:

  • Managed control plane (no cluster management overhead)
  • Integration with Azure services
  • Enterprise-grade security and compliance
  • Cost-effective scaling options

AKS vs Other Container Options

| Feature    | AKS             | Container Instances | App Service |
|------------|-----------------|---------------------|-------------|
| Kubernetes | Full access     | None                | None        |
| Scaling    | Auto (HPA/KEDA) | Manual              | Auto        |
| Networking | Full control    | Basic               | Managed     |
| Best for   | Microservices   | Simple containers   | Web apps    |

Create AKS Cluster

# Create resource group
az group create --name myRG --location eastus

# Create AKS cluster
az aks create \
    --resource-group myRG \
    --name myAKSCluster \
    --node-count 3 \
    --enable-addons monitoring \
    --generate-ssh-keys

# Get credentials
az aks get-credentials --resource-group myRG --name myAKSCluster

Deploying Applications

# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-api
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
      - name: my-api
        image: myacr.azurecr.io/my-api:v1
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: "100m"
            memory: "128Mi"
          limits:
            cpu: "500m"
            memory: "512Mi"
        env:
        - name: CONNECTION_STRING
          valueFrom:
            secretKeyRef:
              name: app-secrets
              key: connection-string
---
apiVersion: v1
kind: Service
metadata:
  name: my-api-service
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: my-api

# Apply the manifests
kubectl apply -f deployment.yaml

Horizontal Pod Autoscaler

# hpa.yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: my-api-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-api
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80
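
The autoscaler is applied and inspected like any other resource; `kubectl get hpa` shows current versus target utilization and the replica count:

```shell
# Apply the autoscaler and check its status
kubectl apply -f hpa.yaml
kubectl get hpa my-api-hpa
```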

Cluster Autoscaler

# Enable cluster autoscaler
az aks update \
    --resource-group myRG \
    --name myAKSCluster \
    --enable-cluster-autoscaler \
    --min-count 1 \
    --max-count 10

Azure Container Registry Integration

# Create ACR
az acr create --resource-group myRG --name myACR --sku Standard

# Attach ACR to AKS
az aks update \
    --resource-group myRG \
    --name myAKSCluster \
    --attach-acr myACR

# Build and push image
az acr build --registry myACR --image my-api:v1 .

Secrets Management

# Create Kubernetes secret
kubectl create secret generic app-secrets \
    --from-literal=connection-string='Server=myserver;...'

# Or use Azure Key Vault integration
az aks enable-addons \
    --resource-group myRG \
    --name myAKSCluster \
    --addons azure-keyvault-secrets-provider

# Using Key Vault with CSI driver
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: azure-kvname
spec:
  provider: azure
  parameters:
    keyvaultName: "myKeyVault"
    objects: |
      array:
        - |
          objectName: connection-string
          objectType: secret
    tenantId: "<tenant-id>"

Ingress Controller

# Install NGINX ingress controller
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.43.0/deploy/static/provider/cloud/deploy.yaml

# ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-api-service
            port:
              number: 80

Monitoring with Azure Monitor

# Enable monitoring
az aks enable-addons \
    --resource-group myRG \
    --name myAKSCluster \
    --addons monitoring \
    --workspace-resource-id <workspace-id>

# Prometheus annotations for scraping
metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "8080"
    prometheus.io/path: "/metrics"

Network Policies

# Restrict traffic between namespaces
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-from-other-namespaces
  namespace: production
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}
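
Note that network policies are only enforced if the cluster was created with a network policy engine; AKS does not let you enable one on an existing cluster. A hedged sketch of creating a cluster with the Azure network policy plugin (cluster names match the earlier examples):

```shell
# --network-policy must be set at cluster creation time;
# it accepts "azure" or "calico"
az aks create \
    --resource-group myRG \
    --name myAKSCluster \
    --node-count 3 \
    --network-policy azure \
    --generate-ssh-keys
```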

Best Practices

  1. Use namespaces - Separate environments and teams
  2. Set resource limits - Prevent runaway containers
  3. Enable monitoring - Visibility is essential
  4. Use managed identities - Avoid secrets in config
  5. Plan for scaling - Configure HPA and cluster autoscaler
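
As one sketch of practices 1 and 2 combined, a dedicated namespace with a LimitRange gives containers that omit resource settings sensible defaults (the namespace name and values here are illustrative, not from the article):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a               # illustrative namespace per team/environment
---
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: team-a
spec:
  limits:
  - type: Container
    defaultRequest:          # applied when a container sets no requests
      cpu: "100m"
      memory: "128Mi"
    default:                 # applied when a container sets no limits
      cpu: "500m"
      memory: "512Mi"
```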

Cost Optimization

| Tip                  | Benefit                |
|----------------------|------------------------|
| Spot node pools      | 60-80% savings         |
| Right-size nodes     | Match workload needs   |
| Scale to zero (dev)  | No idle costs          |
| Reserved instances   | Predictable workloads  |
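
Spot capacity lands in a separate user node pool. A hedged sketch (the pool name is illustrative; the flags are standard `az aks nodepool add` options):

```shell
# Add a spot-priced user node pool; spot nodes can be evicted at any
# time, so schedule only interruption-tolerant workloads on them
az aks nodepool add \
    --resource-group myRG \
    --cluster-name myAKSCluster \
    --name spotpool \
    --priority Spot \
    --eviction-policy Delete \
    --spot-max-price -1 \
    --enable-cluster-autoscaler \
    --min-count 0 \
    --max-count 5
```

Setting `--spot-max-price -1` means the pool pays up to the current on-demand price and is never evicted for price reasons.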

AKS: enterprise Kubernetes without the operational burden.

Michael John Peña

Senior Data Engineer based in Sydney. Writing about data, cloud, and technology.