
Showing posts from 2019

How to deploy a Web App on Kubernetes cluster using Azure Kubernetes Service

I deployed my web app on Azure and found that the ease and simplicity of deploying and managing an Azure Web App Service does not come cheap! So the next best option was to deploy it on Kubernetes using Azure Kubernetes Service. It involves a few more steps than the Web App Service, but it is cheaper, and if I want to move it to Google Kubernetes Engine later, I won't have to break into a sweat. So here are the steps I followed:

Created a new resource group named firstResourceGroup:

    az group create --name firstResourceGroup --location eastus

Created a new ACR in this new resource group. Let's call it 'firstContainerRegistry':

    az acr create --resource-group firstResourceGroup --name firstContainerRegistry --sku Basic

On successful completion, the output looks like this:

    {
      "adminUserEnabled": false,
      "creationDate": "2019-04-24T05:03:32.564208+00:00",
      "id": "/subscriptions/1900743b-c1ab-48cd-9951-eb03f5c237…
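The excerpt above is cut off, but the rest of the flow follows the same pattern. A minimal sketch of the remaining steps, assuming the same resource group and registry names; the cluster name firstAKSCluster and the manifest file app.yaml are hypothetical placeholders:

    # Create an AKS cluster and let it pull images from the ACR
    # ('firstAKSCluster' is a hypothetical name)
    az aks create --resource-group firstResourceGroup --name firstAKSCluster \
        --node-count 1 --attach-acr firstContainerRegistry

    # Point kubectl at the new cluster
    az aks get-credentials --resource-group firstResourceGroup --name firstAKSCluster

    # Deploy the web app from a Kubernetes manifest (app.yaml is hypothetical)
    kubectl apply -f app.yaml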

Hadoop for beginners, by a beginner (Part 2) - Hadoop Architecture explained easy

If you have gone through part 1 of this series, you have used Cloudera's Hadoop QuickStart VM to set up a working instance of Hadoop and have the various Hadoop services running. Now is a good time to go back to a little theory and see how the different pieces fit together. HortonWorks, another Hadoop distributor, has an excellent tutorial for Hadoop and each of its accompanying services, which you can see in the image below (taken from that tutorial). You can go to that page and read about Hadoop in detail. However, I am going to summarize and simplify some of the content and definitions to make it easy for a beginner to quickly understand and proceed. OK, so this is the definition of Hadoop on the HortonWorks site: Apache Hadoop® is an open source framework for distributed storage and processing of large sets of data on commodity hardware. You will agree that the biggest challenges in any computing are very basic: storage and processing. Processing could be any o…
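That one-line definition is easy to see in action on the QuickStart VM from part 1. A minimal sketch, assuming the usual Cloudera paths; words.txt and the HDFS directories are hypothetical placeholders:

    # Storage: copy a local file into HDFS, Hadoop's distributed file system
    hdfs dfs -mkdir -p /user/cloudera/input
    hdfs dfs -put words.txt /user/cloudera/input

    # Processing: run the stock word-count MapReduce example over that file
    # (the examples jar path varies by distribution; this is the usual Cloudera location)
    hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar \
        wordcount /user/cloudera/input /user/cloudera/output

    # Inspect the result
    hdfs dfs -cat /user/cloudera/output/part-r-00000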

How to delete all records from a Google Cloud Datastore Entity?

Google has recently released a new feature known as 'Dataflow Templates', which gives you a number of ready-made tasks you can run without writing any code.

Click the link on top of the Dataflow page. You can select a task from the list of available options: select 'Bulk Delete Entities in Cloud Datastore' from the list. Fill in the required parameters. Instead of a table name, it asks for a GQL query; to select all the entities of a kind, give "select * from <kind name>".

IMPORTANT: In the optional parameters, you should specify the namespace/tenant for which you want to delete the entities. Not specifying a namespace will delete ALL the records of the selected kind across tenants.

Once done, press 'Run Job'. Dataflow will launch and do the cleanup. You can read the documentation here. The source code of all the templates is also on GitHub.
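The same template can also be launched from the command line with gcloud. A minimal sketch, assuming the Google-provided template path and its documented parameter names (datastoreReadGqlQuery, datastoreReadProjectId, datastoreDeleteProjectId, datastoreReadNamespace); my-project, MyKind, and my-namespace are hypothetical placeholders:

    # Launch the Datastore bulk-delete template from the CLI
    # (template path and parameter names are assumptions based on the template docs;
    #  my-project, MyKind and my-namespace are hypothetical)
    gcloud dataflow jobs run delete-mykind-entities \
        --gcs-location gs://dataflow-templates/latest/Datastore_to_Datastore_Delete \
        --parameters datastoreReadGqlQuery="select * from MyKind",datastoreReadProjectId=my-project,datastoreDeleteProjectId=my-project,datastoreReadNamespace=my-namespace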

How to get heapdump for any microservice running in Docker container in Google App Engine

Getting a heap dump of our services is very important for any memory profiling, whether to debug leaks or performance issues. Unfortunately, it is not that straightforward. Fortunately, it is not impossible either!

We need to SSH into the container in which our service is running. Go to the App Engine → Instances page and select your service from the 'Services' dropdown. SSH into any of the instances; ignore the warning and SSH into the VM. Once you SSH, you should see a cloud shell as below. If you are logging in for the first time, there may be some questions asking for an SSH key. You can leave the key blank and continue.

This shell is on the Linux VM on which our Docker image is running. We need to find out more details about it before we can do anything with it. There are a bunch of containers running. We are interested in the first one ('us.gcr.io/.....') running in a container named 'gaeapp'. Now we need to go inside this Docker c…
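The excerpt cuts off here, but the remaining steps follow this shape. A minimal sketch, assuming a JVM-based service whose image includes the JDK tools (jps, jmap); the container name gaeapp comes from the post, while the pid and file paths are hypothetical:

    # On the VM: list running containers and confirm the one named 'gaeapp'
    docker ps

    # Open a shell inside the app container
    docker exec -it gaeapp /bin/bash

    # Inside the container: find the Java process id, then capture a heap dump
    # (assumes the image ships JDK tools; pid 1 is a hypothetical example)
    jps
    jmap -dump:live,format=b,file=/tmp/heap.hprof 1

    # Back on the VM: copy the dump out of the container for analysis
    exit
    docker cp gaeapp:/tmp/heap.hprof /tmp/heap.hprof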