Load SAP data via HANA sidecar using SQOOP

Hi folks,

Our Hadoop team wants to load a large number of SAP tables from our HANA sidecar (populated from the SAP source system via SLT) into Hadoop using Sqoop jobs. Initially we planned to model views over these tables as a proof of concept, which would let us control how many columns from each table are exposed and therefore how much memory is consumed. Now the team wants to skip modelling altogether and copy entire tables, all columns included, into Hadoop, which would consume a huge amount of memory. In addition, there is no delta mechanism for getting the data from the sidecar HANA into Hadoop, so in many cases they would be running daily FULL loads.
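For context, the full-table pull described above would look roughly like the Sqoop invocation below. This is only a sketch: the host, port, credentials, schema, and target directory are placeholders, and the HANA JDBC driver jar (ngdbc.jar) would need to be on the Sqoop classpath. MARA (the material master) stands in as an example table.

```sh
# Full load of one HANA table into HDFS via the SAP HANA JDBC driver.
# Host, port, credentials, schema and paths below are placeholders.
sqoop import \
  --connect "jdbc:sap://hana-sidecar-host:30015/" \
  --driver com.sap.db.jdbc.Driver \
  --username SQOOP_USER \
  --password-file /user/sqoop/hana.password \
  --table SAPSCHEMA.MARA \
  --columns "MATNR,MTART,MATKL" \
  --split-by MATNR \
  --num-mappers 4 \
  --target-dir /data/sap/mara \
  --delete-target-dir
```

Here `--columns` limits the pull to selected fields (the same control the modelled views would have provided), and `--delete-target-dir` makes a daily full load re-runnable. For tables that carry a reliable last-changed column, Sqoop's built-in incremental mode could stand in for the missing delta mechanism (the CHANGED_AT column below is hypothetical; most SAP tables would need a suitable change-date or timestamp field):

```sh
# Incremental (delta) import keyed on a last-changed column,
# assuming the replicated table carries one (CHANGED_AT is a placeholder).
sqoop import \
  --connect "jdbc:sap://hana-sidecar-host:30015/" \
  --driver com.sap.db.jdbc.Driver \
  --username SQOOP_USER \
  --password-file /user/sqoop/hana.password \
  --table SAPSCHEMA.MARA \
  --incremental lastmodified \
  --check-column CHANGED_AT \
  --last-value "2019-01-01 00:00:00" \
  --merge-key MATNR \
  --target-dir /data/sap/mara
```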

I'm wondering how many of you have implemented this sort of scenario. What are other companies that have both a HANA sidecar and Hadoop doing? Are they treating their Hadoop environment like a BusinessObjects platform, with giant universes set up so that all data is available to be consumed? Or do you expose data feeds to Hadoop on a case-by-case basis? Are you moving entire tables, or modeled views with a limited set of columns?
