I deployed my web app on Azure and found that the ease and simplicity of deploying and managing an Azure Web App Service does not come cheap!
So the next best option was to deploy it on Kubernetes using Azure Kubernetes Service (AKS). It involves a few more steps than the Web App Service, but it is cheaper, and if I ever want to move it to Google Container Engine, I won't have to break into a sweat.
So here are the steps I followed:
Created a new Resource Group named firstResourceGroup.
az group create --name firstResourceGroup --location southindia
Created a new ACR (Azure Container Registry) in this new resource group. Let's call it 'firstContainerRegistry'.
az acr create --resource-group firstResourceGroup --name firstContainerRegistry --sku Basic
On successful completion, the output looks like this:
{
"adminUserEnabled": false,
"creationDate": "2019-04-24T05:03:32.564208+00:00",
"id": "/subscriptions/1900743b-c1ab-48cd-9951-eb03f5c2378d/resourceGroups/firstResourceGroup/providers/Microsoft.ContainerRegistry/registries/firstContainerRegistry",
"location": "southindia",
"loginServer": "firstcontainerregistry.azurecr.io",
"name": "firstContainerRegistry",
"networkRuleSet": null,
"provisioningState": "Succeeded",
"resourceGroup": "firstResourceGroup",
"sku": {
"name": "Basic",
"tier": "Basic"
},
"status": null,
"storageAccount": null,
"tags": {},
"type": "Microsoft.ContainerRegistry/registries"
}
Note the 'loginServer' from the above output: firstcontainerregistry.azurecr.io. We will need it in the next step.
Created a Docker image of the app by running this command:
docker build --tag waterfox83/personastore-flask-webapp:v1.0.0 .
(We had already pushed this image to Docker Hub using this command:
docker push waterfox83/personastore-flask-webapp:v1.0.0)
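For context, the image is built from a Dockerfile in the project directory. A minimal sketch for a small Flask app might look like this; the file names, base image, and port are assumptions, since the app's actual Dockerfile is not shown in this post:
cat <<'EOF' > Dockerfile
# Hypothetical Dockerfile for a small Flask app (app.py and requirements.txt are assumed names)
FROM python:3.7-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# Flask's default port
EXPOSE 5000
CMD ["python", "app.py"]
EOF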
To use the personastore-flask-webapp container image with ACR, the image needs to be tagged with the login server address of our container registry. This tag is used for routing when pushing container images to an image registry.
docker tag waterfox83/personastore-flask-webapp:v1.0.0 firstcontainerregistry.azurecr.io/personastore-flask-webapp:v1
Pushed this image to ACR (the registry prefix in the tag tells Docker the destination registry).
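The push itself looks like this; az acr login first authenticates the local Docker client against the registry:
az acr login --name firstContainerRegistry
docker push firstcontainerregistry.azurecr.io/personastore-flask-webapp:v1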
So far, we have created a new Container Registry and put our image in it by tagging it with the ACR URL and pushing it.
Now we need to actually create a Kubernetes cluster. Unfortunately, this is not a one-step process; we need to take care of access rights first.
To access images stored in ACR, we have to grant the AKS service principal the right to pull images from ACR. The command below gives the resource ID of the ACR:
az acr show --resource-group firstResourceGroup --name firstContainerRegistry --query "id" --output tsv
and it looks like this: /subscriptions/1900743b-c1ab-48cd-9951-eb03f5c2378d/resourceGroups/firstResourceGroup/providers/Microsoft.ContainerRegistry/registries/firstContainerRegistry
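As a convenience (assuming a bash shell), this resource ID can be captured in a variable so it doesn't have to be copy-pasted into later commands, e.g. as --scope $ACR_ID in the role assignment below:
ACR_ID=$(az acr show --resource-group firstResourceGroup --name firstContainerRegistry --query "id" --output tsv)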
Now we create a service principal:
az ad sp create-for-rbac --skip-assignment
{
"appId": "3e6c413a-4893-437d-9535-ce1d288de5ff",
"displayName": "azure-cli-2019-04-24-05-43-54",
"name": "http://azure-cli-2019-04-24-05-43-54",
"password": "dbc48ad3-ee7c-4e82-b89e-c6d29be23643",
"tenant": "83faf872-71d4-4def-ba45-29da8b57c092"
}
To grant the AKS cluster the correct access to pull images stored in ACR, assign the 'AcrPull' role to this service principal on the ACR (notice we are using the resource ID of firstContainerRegistry that we found above):
az role assignment create --assignee 3e6c413a-4893-437d-9535-ce1d288de5ff --scope /subscriptions/1900743b-c1ab-48cd-9951-eb03f5c2378d/resourceGroups/firstResourceGroup/providers/Microsoft.ContainerRegistry/registries/firstContainerRegistry --role acrpull
and the output should look like this:
{
"canDelegate": null,
"id": "/subscriptions/1900743b-c1ab-48cd-9951-eb03f5c2378d/resourceGroups/firstResourceGroup/providers/Microsoft.ContainerRegistry/registries/firstContainerRegistry/providers/Microsoft.Authorization/roleAssignments/193c6b05-bf4e-4bf5-9180-0f0bec0432be",
"name": "193c6b05-bf4e-4bf5-9180-0f0bec0432be",
"principalId": "a87797f5-924e-418a-b668-b3ee5ab6e119",
"resourceGroup": "firstResourceGroup",
"roleDefinitionId": "/subscriptions/1900743b-c1ab-48cd-9951-eb03f5c2378d/providers/Microsoft.Authorization/roleDefinitions/7f951dda-4ed3-4680-a7ca-43fe172d538d",
"scope": "/subscriptions/1900743b-c1ab-48cd-9951-eb03f5c2378d/resourceGroups/firstResourceGroup/providers/Microsoft.ContainerRegistry/registries/firstContainerRegistry",
"type": "Microsoft.Authorization/roleAssignments"
}
Finally, we create the actual cluster:
az aks create \
--resource-group firstResourceGroup \
--name firstCluster \
--node-count 1 \
--service-principal <appId> \
--client-secret <password> \
--generate-ssh-keys
(Replace <appId> and <password> with the appId and password values received when creating the service principal.)
{
"aadProfile": null,
"addonProfiles": null,
"agentPoolProfiles": [
{
"count": 1,
"maxPods": 110,
"name": "nodepool1",
"osDiskSizeGb": 100,
"osType": "Linux",
"storageProfile": "ManagedDisks",
"vmSize": "Standard_DS2_v2",
"vnetSubnetId": null
}
],
"dnsPrefix": "firstCluste-firstResourceGroup-190074",
"enableRbac": true,
"fqdn": "firstcluste-firstResourceGroup-190074-2c0deb43.hcp.southindia.azmk8s.io",
"id": "/subscriptions/1900743b-c1ab-48cd-9951-eb03f5c2378d/resourcegroups/firstResourceGroup/providers/Microsoft.ContainerService/managedClusters/firstCluster",
"kubernetesVersion": "1.11.9",
"linuxProfile": {
"adminUsername": "azureuser",
"ssh": {
"publicKeys": [
{
"keyData": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDXgw/bZ5spvfVtXGIXHYO8Bm751vQEym+mMyZ2WIco27nY6vSD3oYnZtTBXlaWz87M9IKMhhM/US53f2sdebvqzhkRGM/e8PfkujDh4vAe6h1EPkBrTjuQp/+NJsLoeai4reKBsuSdneV0LKQ2kluJBavCk1xz3NI+jQfqDEIAMZVIB8k+Zt6EmsdujMcg66H+MMKV5zovkeWKalOUPhBGT4bYEH8zs94/k2cPLJstOJfzswzI2VJrgkcG50beSwSWKwU9x1AXaMbcj6p+ZAi27359OF1/wUULhZeRdKxeLJu"
}
]
}
},
"location": "southindia",
"name": "firstCluster",
"networkProfile": {
"dnsServiceIp": "10.0.0.10",
"dockerBridgeCidr": "172.17.0.1/16",
"networkPlugin": "kubenet",
"networkPolicy": null,
"podCidr": "10.244.0.0/16",
"serviceCidr": "10.0.0.0/16"
},
"nodeResourceGroup": "MC_firstResourceGroup_firstCluster_southindia",
"provisioningState": "Succeeded",
"resourceGroup": "firstResourceGroup",
"servicePrincipalProfile": {
"clientId": "3e6c786a-4893-437d-9535-ce1d288de5ff",
"secret": null
},
"tags": null,
"type": "Microsoft.ContainerService/ManagedClusters"
}
The cluster has been created. Now we need to connect to it using kubectl:
az aks get-credentials --resource-group firstResourceGroup --name firstCluster
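This merges the cluster's credentials into the local kubeconfig. To verify that kubectl can now talk to the cluster, list its nodes:
kubectl get nodes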
Added a Kubernetes deployment YAML and created the deployment (a sketch of what kubernetes-deployment.yaml might contain follows below):
kubectl apply -f kubernetes-deployment.yaml
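For reference, a minimal kubernetes-deployment.yaml for this app could look something like the sketch below, written here as a shell heredoc. The container port (5000, Flask's default), the object names, and the LoadBalancer Service that exposes an external IP are all assumptions; the actual manifest isn't shown in this post.
cat <<EOF > kubernetes-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: personastore-flask-webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: personastore-flask-webapp
  template:
    metadata:
      labels:
        app: personastore-flask-webapp
    spec:
      containers:
      - name: personastore-flask-webapp
        # Image we pushed to ACR earlier
        image: firstcontainerregistry.azurecr.io/personastore-flask-webapp:v1
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: personastore-flask-webapp
spec:
  # LoadBalancer gives the app a public, external IP on AKS
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 5000
  selector:
    app: personastore-flask-webapp
EOF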
And that's it. The app will be deployed and available on its external IP!
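To find that external IP (assuming a LoadBalancer Service like the one in the sketch above), watch the service until EXTERNAL-IP changes from pending to a public address:
kubectl get service personastore-flask-webapp --watch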