Using SQL*Loader Utility through E-Business Suite Front End

This brief article explains how to create concurrent programs in Oracle E-Business Suite 11i that use the SQL*Loader utility to load data into backend database tables. I assume you already know how to use SQL*Loader from the command prompt.

As you may already know, concurrent programs (requests) are specific to an application (or responsibility), so you will have to decide which application your request should belong to, such as GL (General Ledger), AP (Accounts Payable), or any other application. This matters because you will be able to see your request only when you are logged in to that particular application (or responsibility).

Place your control file in the bin directory of your application under $APPL_TOP, i.e. $APPL_TOP/<application>/11.5.0/bin. (For example, I am keeping my control file in AP, so the corresponding directory on my system is /u02/apps/visappl/ap/11.5.0/bin; it may be different on yours.) This is all you have to do at the backend.
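For illustration, here is a minimal control file of the kind you might place there. The file name (xxload_suppliers.ctl), staging table (XX_SUPPLIERS_STG), and columns are all hypothetical; replace them with your own. INFILE is hard-coded here for simplicity.

-- xxload_suppliers.ctl (hypothetical example)
LOAD DATA
INFILE 'suppliers.dat'
APPEND
INTO TABLE xx_suppliers_stg
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(
  supplier_name,
  supplier_number,
  creation_date DATE "DD-MON-YYYY"
)

It is a good idea to first test the control file from the command prompt, e.g. sqlldr userid=apps/<password> control=xxload_suppliers.ctl, before wiring it into a concurrent program.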

Log in to E-Business Suite with the 'System Administrator' responsibility.

First we need to define the Executable for the Concurrent Program. Basically, this step specifies the Execution Method and Execution File Name that our Concurrent Program will use.

Go to Concurrent : Program : Executable.

Enter the Executable name (any name you want), Short Name, Application (select the same application where you kept your control file), Execution Method (select SQL*Loader), and Execution File Name (the name of your control file, without the extension; the system looks for this file where you kept it, in the bin directory of the application you specified above). Save the record.
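Continuing the hypothetical example from above, the executable definition might look like this (all names are illustrative, not prescribed):

Executable: XX Load Suppliers
Short Name: XXLOADSUP
Application: Payables
Execution Method: SQL*Loader
Execution File Name: xxload_suppliers

Note that the Execution File Name has no .ctl extension, and that it matches the control file kept in the application's bin directory.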

Go to Concurrent : Program : Define.

OK, time to define the program. Enter the Program Name, Short Name, and Application (select the same application where you kept your control file and which you selected while creating the Executable). Enter the Executable (select the executable you created above). Save the record.
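With the same hypothetical names as before, the program definition might be:

Program: XX Load Suppliers
Short Name: XXLOADSUP
Application: Payables
Executable: XXLOADSUP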

Go to Concurrent : Set.

Now we have to add our Concurrent Program to a request set. A Request Set groups many Concurrent Programs together, and they are executed according to 'stages'. Each stage is given a 'Display Sequence' which, as the name suggests, controls the execution order. Your set may have only one stage, which means it has only one Concurrent Program associated with it.

Enter the Set name, Set Code, and Application (same as selected earlier). Press the 'Define Stages' button. Enter the Display Sequence (give 1), Stage name, and Stage Code. Press the 'Requests' button. Now we associate the program with this stage: enter the Sequence and the Program name (the same name given to the Concurrent Program defined earlier). Save everything.
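For the single-stage case described above, the hypothetical values might be:

Set: XX Load Suppliers Set
Set Code: XXLOADSUP_SET
Application: Payables
Stage / Stage Code / Display Sequence: Stage 1 / STAGE1 / 1
Request Sequence / Program: 1 / XX Load Suppliers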

And you are done! Switch to the responsibility you defined the request for and submit the request to run your Concurrent Program.
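Once the request completes, you can check its status from the front end (View > Requests), and the request's log file should show the familiar SQL*Loader output (rows loaded, rows rejected, and so on). If you prefer SQL, here is a sketch of a status check against the standard FND_CONCURRENT_REQUESTS table; filter it down to your own request as appropriate:

SELECT request_id, phase_code, status_code, completion_text
FROM fnd_concurrent_requests
ORDER BY request_id DESC;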
