Avro Kafka Streaming with HANA SDS


Hi All,

I am trying to explore how to use the Avro Kafka input adapter with HANA SDS.

Scenario:

We are moving towards a service architecture, and Kafka is used to communicate between systems. As part of analytics, we would like to stream the data into a BI system and perform some analysis.

Current progress:

I have installed HANA express edition with the streaming server. The streaming service is up and running, and I have tried the example from SAP (streaming door data into a HANA table). This works seamlessly.

Now, I would like to configure the Avro Kafka input adapter.

Can someone help me with this?

1. I have a Kafka producer configured on my laptop, and HANA express is running in VMware. Can I produce a message on the laptop and send it to a HANA table? What should the bootstrap server be (<ip-address>:9092)?

2. Where should the Avro JSON schema file be placed?

PS: I am quite new to Kafka technology.

Thanks and best regards,

Vikram

Accepted Solutions (0)

Answers (1)


RobertWaywell
Product and Topic Expert

Here is a link to the Kafka Quick Start tutorial from apache.org. It will help you become familiar with the Kafka components and how Kafka works independently of HANA streaming analytics:

Apache Kafka - Quickstart

At a high level, Kafka is a message queuing system that brokers connections between one or more producers and one or more consumers. Both producers and consumers connect to the Kafka server and either publish data into a Kafka topic (or queue) or subscribe to data from a Kafka topic. Producers and consumers do not communicate directly with each other. Other message queuing systems use the terms "publisher" and "subscriber", which are equivalent to the components that Kafka labels "producer" and "consumer".
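To make the producer/consumer decoupling concrete, here is a toy in-memory sketch in plain Python. This is only a conceptual model, not actual Kafka: a topic is an append-only log on the broker, producers append to it, and each consumer reads from its own position, so consumers never talk to producers directly.

```python
class Topic:
    """Toy model of a Kafka topic: an append-only log plus per-consumer offsets."""

    def __init__(self):
        self.log = []       # append-only message log (the "topic")
        self.offsets = {}   # consumer name -> next position to read

    def produce(self, message):
        # A producer only ever appends to the log.
        self.log.append(message)

    def consume(self, consumer):
        # Each consumer tracks its own offset, independent of other consumers.
        pos = self.offsets.get(consumer, 0)
        batch = self.log[pos:]
        self.offsets[consumer] = len(self.log)
        return batch

topic = Topic()
topic.produce("door_open")
topic.produce("door_closed")
print(topic.consume("hana_streaming"))  # ['door_open', 'door_closed']
print(topic.consume("hana_streaming"))  # []  (this consumer is caught up)
print(topic.consume("cli_consumer"))    # ['door_open', 'door_closed']  (independent offset)
```

Note how the second consumer still sees both messages: real Kafka behaves the same way, because the broker keeps the log and a separate offset per consumer group.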

From your post, it sounds like you may not have set up a Kafka server yet. If that is the case, you will need to set one up. The server can run on a Linux or Windows host, but wherever it is located, you will need network access to it from both the producer running on your laptop and the streaming server running in the VM. Your producer will connect to the Kafka server and publish Avro records into a topic/queue that you create on that server. In your HANA streaming analytics project, you will use the Kafka Avro Record Input Adapter to connect to the same Kafka server and topic to consume the Avro records from the topic/queue.



Hi Robert,

Thanks for the reply.

I have a Kafka server installed and running, and I have created a topic "TEST1".

A producer for topic "TEST1" is also set up.

I have a command-line consumer, and it can listen to the data and print it to the command line.

Now I want to record this data in a HANA table.

This is why the streaming server was installed with HANA express edition.

I have the Avro JSON schema file test_Schema.json:

{
  "type": "record",
  "namespace": "com.example",
  "name": "Customer",
  "doc": "Avro Schema for our Customer",
  "fields": [
    {
      "name": "first_name",
      "type": "string",
      "doc": "First Name of Customer"
    }
  ]
}
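The adapter expects the messages on the topic to be Avro-encoded against this schema. As a rough illustration of what one such payload looks like, here is a minimal sketch in plain Python (no Kafka or Avro library; it hand-codes just enough of the Avro binary format for this single-string-field record: a record is its fields' encodings concatenated, and a string is a zigzag-varint length followed by the UTF-8 bytes):

```python
import json

# The schema from test_Schema.json above.
schema = json.loads("""
{"type": "record", "namespace": "com.example", "name": "Customer",
 "doc": "Avro Schema for our Customer",
 "fields": [{"name": "first_name", "type": "string",
             "doc": "First Name of Customer"}]}
""")

def zigzag_varint(n: int) -> bytes:
    """Encode an int as an Avro long: zigzag, then variable-length base-128."""
    z = (n << 1) ^ (n >> 63)
    out = bytearray()
    while True:
        b = z & 0x7F
        z >>= 7
        if z:
            out.append(b | 0x80)  # more bytes follow
        else:
            out.append(b)
            return bytes(out)

def encode_record(record: dict) -> bytes:
    """Encode a record as the concatenation of its field encodings."""
    out = bytearray()
    for field in schema["fields"]:
        if field["type"] == "string":
            data = record[field["name"]].encode("utf-8")
            out += zigzag_varint(len(data)) + data
        else:
            raise NotImplementedError(field["type"])
    return bytes(out)

payload = encode_record({"first_name": "Vikram"})
print(payload)  # b'\x0cVikram'  (0x0c = zigzag-encoded length 6, then UTF-8 bytes)
```

This is only to show what travels over the wire; in practice the producer would use an Avro library rather than hand-rolling the encoding.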

Where do I have to place this file?

Also, I am not sure what needs to be entered here.

Can you please help me with this?

Thanks and best regards,

Vikram

RobertWaywell
Product and Topic Expert

Have you looked at the documentation on configuring the Kafka Avro Record Input adapter?

Kafka Avro Record Input Adapter: Managed Mode Properties

There is also an example installed along with the server:

Adapter Examples

Kafka Avro Record Input: STREAMING_HOME/adapters/framework/instances/kafka_avro_input