
Example of Azure Event Trigger Function implemented in Java

There are very few examples of Azure Functions implemented in Java, and even the official documentation is a little confusing, so I decided to write my own post explaining some of the points.

So here is my function class:
import com.microsoft.azure.functions.ExecutionContext;
import com.microsoft.azure.functions.annotation.EventHubTrigger;
import com.microsoft.azure.functions.annotation.FunctionName;

public class EventHubTriggerFunction {
    /**
     * This function will be invoked when an event is received from Event Hubs.
     */
    @FunctionName("EventHubTrigger-Java")
    public void run(
            @EventHubTrigger(name = "message",
                    eventHubName = "my-event-hubs",
                    connection = "EventHubConnectionString",
                    consumerGroup = "$Default"
            ) String message,
            final ExecutionContext context
    ) {
        context.getLogger().info("Java Event Hub trigger function executed.");
        context.getLogger().info("Message: " + message);
    }
}
The noteworthy things in this snippet are the annotation attributes.
name: Any name to identify the trigger binding.
eventHubName: Name of the Event Hubs instance from which messages will be received.
connection: Name of the application setting that stores the connection string of the Event Hubs Namespace (not the Event Hubs instance). What does this mean? It means that we don't put the connection string here; instead we define it in local.settings.json (when running locally) or on the Configuration page in the portal.
Here is the local.settings.json file; 'EventHubConnectionString' is the key given in the connection attribute above.
{
    "IsEncrypted": false,
    "Values": {
        "AzureWebJobsStorage": "",
        "FUNCTIONS_WORKER_RUNTIME": "java",
        "EventHubConnectionString": "Endpoint=sb://eventhubs.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=accesskeystring"
    }
}
When deploying to Azure, this same key/value pair has to be added to the function app's configuration in the portal:


By default the Event Hub trigger assumes that the message is in JSON format; if the payload is not valid JSON, it will give the following error:

Exception while executing function: Functions.EventHubExample. Microsoft.Azure.WebJobs.Host: Exception binding parameter message. 
Microsoft.Azure.WebJobs.Host: Binding parameters to complex objects (such as Object) uses JSON.NET serialization.
To get rid of this, either the message should be valid JSON or we need to specify the 'dataType' attribute on the trigger:
public void run(
        @EventHubTrigger(name = "message",
                eventHubName = "my-event-hubs",
                connection = "EventHubConnectionString",
                consumerGroup = "$Default",
                dataType = "string"
        ) String message,
        final ExecutionContext context
) {
    // ... same body as before ...
}
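Conversely, when the payload really is valid JSON, the runtime can bind it directly to a POJO instead of a String. Here is a minimal sketch of that variant; OrderEvent, its fields, and the function name are hypothetical names chosen for illustration and must match the JSON payload:

// Hypothetical POJO for illustration; field names must match the JSON keys.
public class OrderEvent {
    public String id;
    public double amount;
}

@FunctionName("EventHubTrigger-Pojo")
public void runPojo(
        @EventHubTrigger(name = "event",
                eventHubName = "my-event-hubs",
                connection = "EventHubConnectionString",
                consumerGroup = "$Default"
        ) OrderEvent event,
        final ExecutionContext context
) {
    context.getLogger().info("Received order " + event.id + " for amount " + event.amount);
}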

The runtime behavior of the trigger is controlled by certain settings. The various settings and their default values are documented in the host.json reference. We can override these values in our host.json file:
{
    "version": "2.0",
    "extensions": {
        "eventHubs": {
            "batchCheckpointFrequency": 5,
            "eventProcessorOptions": {
                "maxBatchSize": 1,
                "prefetchCount": 512
            },
            "initialOffsetOptions": {
                "type": "fromStart",
                "enqueuedTimeUtc": ""
            }
        }
    },
    "extensionBundle": {
        "id": "Microsoft.Azure.Functions.ExtensionBundle",
        "version": "[2.*, 3.0.0)"
    }
}
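Settings like maxBatchSize only pay off if the function can actually consume more than one event per invocation. As a minimal sketch (the batch function itself and the explicit Cardinality.MANY setting are my additions, using the standard azure-functions-java-library annotations), the trigger can bind a whole batch to a List:

import com.microsoft.azure.functions.ExecutionContext;
import com.microsoft.azure.functions.annotation.Cardinality;
import com.microsoft.azure.functions.annotation.EventHubTrigger;
import com.microsoft.azure.functions.annotation.FunctionName;

import java.util.List;

public class EventHubBatchTriggerFunction {
    @FunctionName("EventHubTrigger-Batch")
    public void run(
            @EventHubTrigger(name = "messages",
                    eventHubName = "my-event-hubs",
                    connection = "EventHubConnectionString",
                    consumerGroup = "$Default",
                    dataType = "string",
                    cardinality = Cardinality.MANY
            ) List<String> messages,
            final ExecutionContext context
    ) {
        // Each invocation receives up to maxBatchSize events.
        context.getLogger().info("Received a batch of " + messages.size() + " events.");
        messages.forEach(m -> context.getLogger().info("Message: " + m));
    }
}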
That is all there is to implementing and running a simple Event Hubs trigger function in Java.
Source: GitHub
