
How to transfer data from Cloud Datastore to BigQuery in Google Cloud Platform

If you are here, I am assuming you want to migrate data from Cloud Datastore to BigQuery because you want to run some analysis and are frustrated by the limitations of GQL (Google Query Language).

First of all, you need to create a backup of the data in Datastore. Use the Datastore Admin tool provided by Google to take a backup; it is stored automatically in a Cloud Storage bucket.

Select all the entities and press 'Backup Entities'. Give the backup a name, select Google Cloud Storage as the backup storage destination, and specify a bucket name.
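In case you would rather script this step than click through the console, here is a rough sketch of how a similar export can be started with the Datastore Admin client library for Python. The project ID and bucket name are placeholders, not values from this post, and note that the newer managed export service writes export metadata files rather than the .backup_info files you will see later in this post (BigQuery accepts both as a 'Cloud Datastore Backup' source).

# Rough sketch (not from this post): start a Datastore export to Cloud Storage
# programmatically instead of using the Datastore Admin console.
# "my-project-id" and the bucket name are placeholders.
from google.cloud import datastore_admin_v1

client = datastore_admin_v1.DatastoreAdminClient()

operation = client.export_entities(
    request={
        "project_id": "my-project-id",
        "output_url_prefix": "gs://my-datastore-backup-bucket",
        # Limit the export to the kinds you plan to load into BigQuery.
        "entity_filter": {"kinds": ["JobDetailsEntity"]},
    }
)

# The export runs as a long-running operation; wait for it to finish.
response = operation.result()
print("Export written under:", response.output_url)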




Once the backup job completes, you will see the backup listed. You can select a backup and press 'Info' to see its details (entities are masked in the screenshot below).

Go to the bucket mentioned in 'Handle' and you will see the file mentioned above. You will also see many more files with similar names, ending with .backup_info (e.g. ahRzfmpkYS1wZC1zbG8tc2FuZGJveHJBCxIcX0FFX0RhdGFzdG9yZUFkbWluX09wZXJhdGlvbhix_-4DDAsSFl9BRV9CYWNrdXBfSW5mb3JtYXRpb24YAQw.JobDetailsEntity.backup_info)

This is the backup file for a specific entity, which you will need to specify when creating a table in BigQuery.
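If browsing the bucket in the console gets tedious, the sketch below (using the Cloud Storage Python client) lists the bucket and picks out the .backup_info file for a particular entity. The bucket and kind names are the ones from this walkthrough; substitute your own.

# Rough sketch: find the per-entity .backup_info file in the backup bucket.
# The bucket and kind below are the ones used in this post; adjust for yours.
from google.cloud import storage

client = storage.Client()
bucket_name = "jda_so__78700310-e2f9-4cf2-8f20-dd325de09a4d_data_bkup"

for blob in client.list_blobs(bucket_name):
    # Each entity kind gets its own "<id>.<KindName>.backup_info" file.
    if blob.name.endswith(".JobDetailsEntity.backup_info"):
        print(f"gs://{bucket_name}/{blob.name}")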



Head over to BigQuery, create a new dataset, and then create a new table in it.


In the 'Location' field, select 'Google Cloud Storage' and give the location of the backup file for the specific entity. The file format is 'Cloud Datastore Backup'.

Like the one we found earlier: gs://jda_so__78700310-e2f9-4cf2-8f20-dd325de09a4d_data_bkup/ahRzfmpkYS1wZC1zbG8tc2FuZGJveHJBCxIcX0FFX0RhdGFzdG9yZUFkbWluX09wZXJhdGlvbhix_-4DDAsSFl9BRV9CYWNrdXBfSW5mb3JtYXRpb24YAQw.JobDetailsEntity.backup_info.

Here the bucket name (jda_so__78700310-e2f9-4cf2-8f20-dd325de09a4d_data_bkup) comes from the 'Handle' field in the backup information in the Datastore Admin, and the file name is the one you got in the previous step!

Specify the name of the table you want to create in BigQuery in the 'Destination' field. Press 'Create Table' and, if everything is correct, the load job will complete successfully. Select the table from the left panel and click 'Preview' to see the populated data. And you are done!
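If you prefer to start the load from code instead of the web UI, here is a rough sketch using the BigQuery Python client. The dataset and table names below are placeholders I made up; the source URI is the .backup_info path we found above.

# Rough sketch: load the Datastore backup into BigQuery programmatically.
# "my-project-id.datastore_backup.job_details" is a placeholder table ID.
from google.cloud import bigquery

client = bigquery.Client()

table_id = "my-project-id.datastore_backup.job_details"
source_uri = "gs://jda_so__78700310-e2f9-4cf2-8f20-dd325de09a4d_data_bkup/ahRzfmpkYS1wZC1zbG8tc2FuZGJveHJBCxIcX0FFX0RhdGFzdG9yZUFkbWluX09wZXJhdGlvbhix_-4DDAsSFl9BRV9CYWNrdXBfSW5mb3JtYXRpb24YAQw.JobDetailsEntity.backup_info"

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.DATASTORE_BACKUP,
)

load_job = client.load_table_from_uri(source_uri, table_id, job_config=job_config)
load_job.result()  # Wait for the load job to finish.

table = client.get_table(table_id)
print(f"Loaded {table.num_rows} rows into {table_id}")

Either way, the load job reads the backup directly from Cloud Storage, so nothing needs to be downloaded locally.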

Let me know in the comments if you have any questions.

