on 05-26-2016 12:53 AM
Hi, I'm trying to extract a large table from an SAP system and it times out.
In the past we have set rdisp/max_wprun_time, but we have just moved to a HANA back end.
We have also tried setting rdisp/schedule/prio_normal/max_runtime.
Any ideas? The ABAP part always times out at exactly 1 hour, regardless of the settings of these RZ11 parameters.
(14.2) 05-25-16 16:57:26 (E) (49197:181131040) R3C-150412: |Data flow SE_ERP_Extraction_ADRC_df
RFC CallReceive error <Function /SAPDS/ABAP_RUN: RFC_ABAP_MESSAGE- Time limit exceeded[SAP NWRFC 720][SAP Partner 740
][RP3][saprp3][BOSRVUSER][4102]>.
(14.2) 05-25-16 16:57:29 (E) (49168:744113952) R3C-150412: |Data flow SE_ERP_Extraction_ADRC_df
RFC CallReceive error <Function /SAPDS/ABAP_RUN: RFC_ABAP_MESSAGE- Time limit exceeded[SAP NWRFC 720][SAP Partner 740
][RP3][saprp3][BOSRVUSER][4102]>.
(14.2) 05-25-16 15:57:26 (49197:181131040) ABAP: Begin executing ABAP program <ZSER_ADRC>.
(14.2) 05-25-16 16:57:29 (49168:744113952) JOB: Job <SE_ERP_tmp> is terminated due to error <150412>.
My apologies.
The system guys had incorrectly set rdisp/max_wprun_time to 3600 (one hour, which explains the exact 1-hour timeout) instead of 7200. Raising it to 7200 allowed ADRC to work; the 7200 setting we used in the old SAP ERP system seems to work the same with HANA.
So ERP and HANA work the same (for me) - in fact the ABAP flow execution time is about the same.
Note: setting rdisp/schedule/prio_normal/max_runtime is a red herring as far as DS is concerned - ignore it.
I will investigate what is required to run in background, array fetch, and partitioning, as suggested, as a long-term solution, so I may have some more questions down the track.
Thanks.
Create an ABAP data flow for extracting data from huge tables, but this will only work if your source is an SAP ERP application table and not an SAP ERP-based HANA schema.
Regards
Arun Sasi
Hey Mike,
Refer to the excerpt below from the SAP Supplement Guide:
4.3 Reading from SAP tables
You can use a regular data flow to process large volumes of data from SAP tables.
To improve the performance while reading large volumes of data, the source table editor includes the following options:
● The Enable partitioning option allows the software to read R/3 table data using the number of partitions in the table as the maximum number of parallel instances. The Partition type option in the Partition tab on the Properties window for a table must be set to Range or List in order to use the Enable partitioning option.
● The Array fetch size option allows the data to be sent in chunks, which avoids large caches on the source side.
● The Execute in background (batch) option lets you run the SAP table reader in batch mode (using a background work process) for time-consuming transactions.
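To make the array-fetch idea concrete, here is a minimal sketch in plain Python. This is not Data Services code; it just illustrates the chunked-read pattern the Array fetch size option implements internally. The function name and the list standing in for an SAP table are made up for illustration.

```python
# Illustration only: a generic chunked-read pattern, analogous to what the
# Array fetch size option does inside Data Services. A plain Python list
# stands in for the SAP table; all names here are hypothetical.

def fetch_in_chunks(rows, array_fetch_size=1000):
    """Yield rows in fixed-size chunks instead of materialising them all at once."""
    for start in range(0, len(rows), array_fetch_size):
        yield rows[start:start + array_fetch_size]

table = [{"ADDRNUMBER": n} for n in range(2500)]  # stand-in for ADRC rows
chunk_sizes = [len(chunk) for chunk in fetch_in_chunks(table, 1000)]
print(chunk_sizes)  # [1000, 1000, 500]
```

The point is that the reader holds at most one chunk in memory at a time, which is why a sensible array fetch size avoids large caches on the source side.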
If you are trying to extract SAP tables which contain huge data volumes, extract only the necessary information rather than pulling the entire table. There should be some input criteria (in SAP ERP terms) when you extract the data.
For example, if you are extracting the ADRC table, join it with related tables such as ADR6 to filter the data set.
ADRC - Addresses (Business Address Services) - SAP table
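A quick sketch of the join-to-filter idea above, using SQLite as a stand-in for pushing the join down to the source system. The table and column names mirror the real SAP tables (ADRC, ADR6) but the data is made up; in a real job the join would run inside the ABAP data flow or the database, not in Python.

```python
# Illustration only: filtering ADRC-style address rows via an inner join,
# done here in SQLite purely as a stand-in for a source-side join in SAP.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ADRC (ADDRNUMBER TEXT, NAME1 TEXT)")
con.execute("CREATE TABLE ADR6 (ADDRNUMBER TEXT, SMTP_ADDR TEXT)")
con.executemany("INSERT INTO ADRC VALUES (?, ?)",
                [("0001", "Acme"), ("0002", "Globex"), ("0003", "Initech")])
# Only two of the three addresses have an e-mail record in ADR6.
con.executemany("INSERT INTO ADR6 VALUES (?, ?)",
                [("0001", "info@acme.example"), ("0003", "hq@initech.example")])

# The inner join keeps only ADRC rows that also exist in ADR6,
# shrinking the extracted data set before it leaves the source.
rows = con.execute(
    "SELECT c.ADDRNUMBER, c.NAME1, e.SMTP_ADDR "
    "FROM ADRC c JOIN ADR6 e ON c.ADDRNUMBER = e.ADDRNUMBER "
    "ORDER BY c.ADDRNUMBER"
).fetchall()
print(rows)
```

Here the join drops the Globex row entirely, which is exactly the effect you want: less data crossing the wire instead of filtering after a full-table pull.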
I am not sure about SAP HANA as a backend. You can refer to the post below, which might be helpful:
https://blogs.saphana.com/2013/04/07/best-practices-for-sap-hana-data-loads/
Regards
Arun Sasi