
How to deploy a Web App on Kubernetes cluster using Azure Kubernetes Service

I deployed my web app on Azure and found that the ease and simplicity of deploying and managing an Azure Web App Service does not come cheap!

So the next best option was to deploy it on Kubernetes using Azure Kubernetes Service (AKS). It involves a few more steps than Web App Service, but it is cheaper, and if I ever want to move it to Google Container Engine, I won't have to break into a sweat.

So here are the steps I followed:

Created a new Resource Group named firstResourceGroup (the output below shows the resources were created in southindia, so that is the location used here):
az group create --name firstResourceGroup --location southindia

Created a new ACR (Azure Container Registry) in this resource group. Let's call it 'firstContainerRegistry'.
az acr create --resource-group firstResourceGroup --name firstContainerRegistry --sku Basic

On successful completion, the output is like this:
{
  "adminUserEnabled": false,
  "creationDate": "2019-04-24T05:03:32.564208+00:00",
  "id": "/subscriptions/1900743b-c1ab-48cd-9951-eb03f5c2378d/resourceGroups/firstResourceGroup/providers/Microsoft.ContainerRegistry/registries/firstContainerRegistry",
  "location": "southindia",
  "loginServer": "firstcontainerregistry.azurecr.io",
  "name": "firstContainerRegistry",
  "networkRuleSet": null,
  "provisioningState": "Succeeded",
  "resourceGroup": "firstResourceGroup",
  "sku": {
    "name": "Basic",
    "tier": "Basic"
  },
  "status": null,
  "storageAccount": null,
  "tags": {},
  "type": "Microsoft.ContainerRegistry/registries"
}

Note the 'loginServer' from the above output: firstcontainerregistry.azurecr.io. We will need it in the next step.

Created a Docker image of the app by running this command:
docker build --tag waterfox83/personastore-flask-webapp:v1.0.0 .

(We had already pushed this image to Docker Hub using this command:
docker push waterfox83/personastore-flask-webapp:v1.0.0)
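
The actual Dockerfile of the app is not shown in this post; a minimal sketch of what a Flask app's Dockerfile typically looks like (file names `requirements.txt` and `app.py` are assumptions) is:

```dockerfile
# Illustrative only; the real Dockerfile for personastore-flask-webapp may differ.
FROM python:3.7-slim
WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code
COPY . .

# Flask's default development port
EXPOSE 5000
CMD ["python", "app.py"]
```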

To use the personastore-flask-webapp container image with ACR, the image needs to be tagged with the login server address of our container registry. This tag is used for routing when pushing container images to an image registry.

docker tag waterfox83/personastore-flask-webapp:v1.0.0 firstcontainerregistry.azurecr.io/personastore-flask-webapp:v1

Pushed this image to ACR (the registry URL in the tag tells Docker the destination registry):
docker push firstcontainerregistry.azurecr.io/personastore-flask-webapp:v1

So far, we have created a new Container Registry and put our image into it by tagging it with the ACR URL and pushing it.
Now we need to actually create a Kubernetes cluster. Unfortunately this is not a one-step process, and we need to take care of access rights first.

To let the cluster access images stored in ACR, we have to grant the AKS service principal the right to pull images from the registry. The command below gives the resource ID of the ACR:
az acr show --resource-group firstResourceGroup --name firstContainerRegistry --query "id" --output tsv

and it looks like this: /subscriptions/1900743b-c1ab-48cd-9951-eb03f5c2378d/resourceGroups/firstResourceGroup/providers/Microsoft.ContainerRegistry/registries/firstContainerRegistry

Now we create a service principal:
az ad sp create-for-rbac --skip-assignment
{
  "appId": "3e6c413a-4893-437d-9535-ce1d288de5ff",
  "displayName": "azure-cli-2019-04-24-05-43-54",
  "name": "http://azure-cli-2019-04-24-05-43-54",
  "password": "dbc48ad3-ee7c-4e82-b89e-c6d29be23643",
  "tenant": "83faf872-71d4-4def-ba45-29da8b57c092"
}

To grant the correct access for the AKS cluster to pull images stored in ACR, assign the 'AcrPull' role to this service principal for this ACR (notice we are using the resourceId of the firstContainerRegistry we found above):

az role assignment create --assignee 3e6c413a-4893-437d-9535-ce1d288de5ff --scope /subscriptions/1900743b-c1ab-48cd-9951-eb03f5c2378d/resourceGroups/firstResourceGroup/providers/Microsoft.ContainerRegistry/registries/firstContainerRegistry --role acrpull

and the output should look like this:
{
  "canDelegate": null,
  "id": "/subscriptions/1900743b-c1ab-48cd-9951-eb03f5c2378d/resourceGroups/firstResourceGroup/providers/Microsoft.ContainerRegistry/registries/firstContainerRegistry/providers/Microsoft.Authorization/roleAssignments/193c6b05-bf4e-4bf5-9180-0f0bec0432be",
  "name": "193c6b05-bf4e-4bf5-9180-0f0bec0432be",
  "principalId": "a87797f5-924e-418a-b668-b3ee5ab6e119",
  "resourceGroup": "firstResourceGroup",
  "roleDefinitionId": "/subscriptions/1900743b-c1ab-48cd-9951-eb03f5c2378d/providers/Microsoft.Authorization/roleDefinitions/7f951dda-4ed3-4680-a7ca-43fe172d538d",
  "scope": "/subscriptions/1900743b-c1ab-48cd-9951-eb03f5c2378d/resourceGroups/firstResourceGroup/providers/Microsoft.ContainerRegistry/registries/firstContainerRegistry",
  "type": "Microsoft.Authorization/roleAssignments"
}
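
The three steps above (look up the ACR resource ID, create a service principal, assign the role) can also be chained in a small script so nothing has to be copy-pasted by hand. This is a sketch only: it requires an authenticated Azure CLI session plus jq installed, and cannot run offline:

```
# Requires 'az login' beforehand and jq for JSON parsing; not runnable offline.
ACR_ID=$(az acr show --resource-group firstResourceGroup \
    --name firstContainerRegistry --query "id" --output tsv)

# Create the service principal and capture both credentials;
# the password is needed later for 'az aks create'.
SP=$(az ad sp create-for-rbac --skip-assignment)
SP_APPID=$(echo "$SP" | jq -r .appId)
SP_PASSWORD=$(echo "$SP" | jq -r .password)

# Grant the service principal pull rights on the registry
az role assignment create --assignee "$SP_APPID" --scope "$ACR_ID" --role acrpull
```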


Finally, we create the actual cluster:

az aks create \
    --resource-group firstResourceGroup \
    --name firstCluster \
    --node-count 1 \
    --service-principal <appId> \
    --client-secret <password> \
    --generate-ssh-keys
(Replace <appId> and <password> with the values received when creating the service principal.)

{
  "aadProfile": null,
  "addonProfiles": null,
  "agentPoolProfiles": [
    {
      "count": 1,
      "maxPods": 110,
      "name": "nodepool1",
      "osDiskSizeGb": 100,
      "osType": "Linux",
      "storageProfile": "ManagedDisks",
      "vmSize": "Standard_DS2_v2",
      "vnetSubnetId": null
    }
  ],
  "dnsPrefix": "firstCluste-firstResourceGroup-190074",
  "enableRbac": true,
  "fqdn": "firstcluste-firstResourceGroup-190074-2c0deb43.hcp.southindia.azmk8s.io",
  "id": "/subscriptions/1900743b-c1ab-48cd-9951-eb03f5c2378d/resourcegroups/firstResourceGroup/providers/Microsoft.ContainerService/managedClusters/firstCluster",
  "kubernetesVersion": "1.11.9",
  "linuxProfile": {
    "adminUsername": "azureuser",
    "ssh": {
      "publicKeys": [
        {
          "keyData": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDXgw/bZ5spvfVtXGIXHYO8Bm751vQEym+mMyZ2WIco27nY6vSD3oYnZtTBXlaWz87M9IKMhhM/US53f2sdebvqzhkRGM/e8PfkujDh4vAe6h1EPkBrTjuQp/+NJsLoeai4reKBsuSdneV0LKQ2kluJBavCk1xz3NI+jQfqDEIAMZVIB8k+Zt6EmsdujMcg66H+MMKV5zovkeWKalOUPhBGT4bYEH8zs94/k2cPLJstOJfzswzI2VJrgkcG50beSwSWKwU9x1AXaMbcj6p+ZAi27359OF1/wUULhZeRdKxeLJu"
        }
      ]
    }
  },
  "location": "southindia",
  "name": "firstCluster",
  "networkProfile": {
    "dnsServiceIp": "10.0.0.10",
    "dockerBridgeCidr": "172.17.0.1/16",
    "networkPlugin": "kubenet",
    "networkPolicy": null,
    "podCidr": "10.244.0.0/16",
    "serviceCidr": "10.0.0.0/16"
  },
  "nodeResourceGroup": "MC_firstResourceGroup_firstCluster_southindia",
  "provisioningState": "Succeeded",
  "resourceGroup": "firstResourceGroup",
  "servicePrincipalProfile": {
    "clientId": "3e6c786a-4893-437d-9535-ce1d288de5ff",
    "secret": null
  },
  "tags": null,
  "type": "Microsoft.ContainerService/ManagedClusters"
}

The cluster has been created. Now we need to be able to connect to it using kubectl:
az aks get-credentials --resource-group firstResourceGroup --name firstCluster

Added a Kubernetes deployment YAML and created the deployment:
kubectl apply -f kubernetes-deployment.yaml
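
The actual kubernetes-deployment.yaml is not shown in this post; an illustrative manifest for an app like this, assuming the Flask app listens on port 5000, could look like the following. The Deployment pulls the image from ACR, and a LoadBalancer Service exposes it on a public IP:

```yaml
# Illustrative only; the real kubernetes-deployment.yaml may differ.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: personastore-flask-webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: personastore-flask-webapp
  template:
    metadata:
      labels:
        app: personastore-flask-webapp
    spec:
      containers:
      - name: personastore-flask-webapp
        image: firstcontainerregistry.azurecr.io/personastore-flask-webapp:v1
        ports:
        - containerPort: 5000   # assumes the Flask app listens on 5000
---
apiVersion: v1
kind: Service
metadata:
  name: personastore-flask-webapp
spec:
  type: LoadBalancer           # provisions an Azure load balancer with a public IP
  ports:
  - port: 80
    targetPort: 5000
  selector:
    app: personastore-flask-webapp
```

With a Service like this, `kubectl get service personastore-flask-webapp --watch` shows the EXTERNAL-IP once Azure finishes provisioning the load balancer.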


And that's it. The app gets deployed and becomes available on the service's external IP!
