OK, whichever way you prefer to update your remote origin URL, I have got you covered!

Option 1: Open the config file inside the .git folder of your repository and change the url field from url = ssh://git@<old-url> to url = ssh://git@<new-url>.

Option 2: Go to your repo location and, on the command line, run git remote set-url origin ssh://git@<new-url>.

Option 3: Open your repository in SourceTree, click Settings, and on the Remotes tab of the Repository Settings box, select origin and edit it.
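Option 2 is the quickest to verify, since git can print the remote back to you. A minimal sketch (the host and repository path below are hypothetical placeholders, not values from this post):

```shell
# Inside an existing clone: show the current origin URL, change it, confirm.
git remote -v                                                   # before
git remote set-url origin ssh://git@new-host.example.com/team/repo.git
git remote -v                                                   # after
git remote get-url origin                                       # prints just the URL
```

Running git remote -v before and after is a cheap sanity check that you edited the remote you meant to.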
You are probably already aware that Google Cloud Platform gives you multiple options for storing data. Here is the Google documentation on when to use which option:
Google recommends using Google Cloud Storage (GCS) to store static content like files, videos, etc. There is also a service called 'Blobstore' that is used for such content, but it is on its way to being deprecated. This page talks about using GCS to store images.
Look at this page to understand the basic requirements for setting up GCS. In the Cloud Storage Browser below, the following buckets are already available. If you select any bucket, you will be able to see the objects created in it.
Here you can see the image file in the 'jda-pd-slo-sandbox.appspot.com' bucket. You won't be able to add or delete files or folders from the browser unless you have the proper access, but doing it through code (running with the service account) should not be a problem.
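The same add/list/delete operations can be sketched with the gsutil command-line tool that ships with the Cloud SDK. This is a hedged example: it assumes gsutil is installed and authenticated, and the bucket name, file name, and key-file path below are hypothetical, not taken from this post.

```shell
# If running as a service account, authenticate first (key path is hypothetical):
#   gcloud auth activate-service-account --key-file=service-account-key.json

gsutil ls gs://my-example-bucket/                        # list objects in a bucket
echo "hello" > sample.txt
gsutil cp sample.txt gs://my-example-bucket/sample.txt   # upload an object
gsutil rm gs://my-example-bucket/sample.txt              # delete it again
```

With a properly authorized service account, these succeed even when the console browser is read-only for your user.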
Objects on GCS are immutable so you can't edit an ob…
So you have read white papers, blogs, and even some books on what Big Data is and how it is transforming the world by giving us insights into data through advanced analytical strategies. You might also have read about Hadoop and Map/Reduce.
But now what? How do you begin? Theory is not going to cut it, right? You want to get your hands dirty, write some code, and set up some clusters, right? Right. So let's start.
Admittedly, Hadoop is intimidating. Apart from requiring a plethora of software (first of all, you need a Linux box!), you also need a 'cluster' of machines, because Hadoop running on one machine is not really what a real-life Hadoop installation looks like. As a beginner, you would rather quickly write a 'Hello World' of Hadoop than spend your time setting up an environment.
The easiest way to start instantly is to use Cloudera's Quickstart VM for Hadoop. (Cloudera is one of the three biggest Hadoop distributors.) First of all, we need to install a virtualiza…