SAP DS ABAP RFC_ABAP_MESSAGE- Time limit exceeded

former_member212650
Participant
0 Kudos

Hi, I'm trying to extract a large table from an SAP system and it times out. In the past we have set rdisp/max_wprun_time; we have just moved to a HANA back end.

We have also tried setting rdisp/schedule/prio_normal/max_runtime.

Any ideas? The ABAP part always times out at exactly 1 hour, regardless of the settings of these RZ11 parameters.






(14.2) 05-25-16 16:57:26 (E) (49197:181131040) R3C-150412: |Data flow SE_ERP_Extraction_ADRC_df

RFC CallReceive error <Function /SAPDS/ABAP_RUN: RFC_ABAP_MESSAGE- Time limit exceeded[SAP NWRFC 720][SAP Partner 740

][RP3][saprp3][BOSRVUSER][4102]>.

(14.2) 05-25-16 16:57:29 (E) (49168:744113952) R3C-150412: |Data flow SE_ERP_Extraction_ADRC_df

RFC CallReceive error <Function /SAPDS/ABAP_RUN: RFC_ABAP_MESSAGE- Time limit exceeded[SAP NWRFC 720][SAP Partner 740

][RP3][saprp3][BOSRVUSER][4102]>.


(14.2) 05-25-16 15:57:26 (49197:181131040)     ABAP: Begin executing ABAP program <ZSER_ADRC>.

(14.2) 05-25-16 16:57:29 (49168:744113952)      JOB: Job <SE_ERP_tmp> is terminated due to error <150412>.

Accepted Solutions (1)

former_member212650
Participant

My apologies.

The system guys had incorrectly set rdisp/max_wprun_time to 3600 instead of 7200. Correcting it to 7200 allowed ADRC to work (the 7200 setting we used in the old SAP ERP system seems to work the same with HANA).

So ERP and HANA behave the same (for me); in fact the ABAP flow execution time is about the same.

Note: setting rdisp/schedule/prio_normal/max_runtime is a red herring as far as DS is concerned; ignore it.

I will investigate what is required to run in background, array fetch, and partitioning, as suggested, as a long-term solution, so I may have some more questions down the track.

Thanks.


Answers (1)

former_member198401
Active Contributor
0 Kudos

Create an ABAP data flow for extracting data from huge tables. Note that this will only work if your source is an SAP ERP application table, not an SAP ERP-based HANA schema.

Regards

Arun Sasi

former_member212650
Participant
0 Kudos

Thanks for your answer. Why do you say that with SAP HANA you can't extract huge tables?

Any particular reason?

Also what are the suggested workarounds?
thanks

mike

former_member198401
Active Contributor
0 Kudos

Hey Mike,

Refer to the excerpt below from the SAP Supplement Guide:

4.3 Reading from SAP tables

You can use a regular data flow to process large volumes of data from SAP tables.

To improve the performance while reading large volumes of data, the source table editor includes the following options:

● The Enable partitioning option allows the software to read R/3 table data using the number of partitions in the table as the maximum number of parallel instances. The Partition type option in the Partition tab on the Properties window for a table must be set to Range or List in order to use the Enable partitioning option.

● The Array fetch size option allows the data to be sent in chunks, which avoids large caches on the source side.

● The Execute in background (batch) option lets you run the SAP table reader in batch mode (using a background work process) for time-consuming transactions.
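As a rough illustration of what the Array fetch size option does, here is a minimal Python sketch of paged reading. Everything here is hypothetical (the `fetch_rows` stand-in, the fake 25-row table, the page size); Data Services implements this internally in the SAP table reader, typically via an RFC that accepts row-skip/row-count arguments.

```python
# Hypothetical sketch of an "array fetch" style read: instead of asking
# for the whole table in one call, rows are requested in fixed-size
# pages, which avoids building one huge result set on the source side.

def fetch_rows(table, row_skips, row_count):
    # Stand-in for a paged RFC read; here we just fake a 25-row table
    # with an ADRC-like key column.
    all_rows = [{"ADDRNUMBER": f"{n:010d}"} for n in range(25)]
    return all_rows[row_skips:row_skips + row_count]

def read_in_chunks(table, chunk_size=10):
    skip = 0
    while True:
        page = fetch_rows(table, skip, chunk_size)
        if not page:  # an empty page means the table is exhausted
            break
        yield from page
        skip += chunk_size

rows = list(read_in_chunks("ADRC", chunk_size=10))
print(len(rows))  # all 25 rows arrive, but in pages of at most 10
```

The same loop shape is why a larger array fetch size trades memory for fewer round trips, and a smaller one does the opposite.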

If you are trying to extract SAP tables that contain huge amounts of data, extract only the necessary information rather than pulling the entire table. There should be some input criteria (an SAP ERP term) when you extract the data.

For example, if you are extracting the ADRC table, join it with a related table such as ADR6 to filter the data set.
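To make the join-as-filter idea concrete, here is a small Python sketch with made-up sample rows (the two in-memory lists stand in for ADRC and ADR6; the company names and e-mail address are invented). An inner join in the data flow has the same effect: only ADRC rows with a matching ADR6 entry survive.

```python
# Hypothetical sketch: keep only ADRC addresses that also appear in a
# related table (ADR6, e-mail addresses), instead of pulling all of ADRC.

adrc = [  # sample ADRC rows (address master data)
    {"ADDRNUMBER": "0000000001", "NAME1": "ACME"},
    {"ADDRNUMBER": "0000000002", "NAME1": "Globex"},
    {"ADDRNUMBER": "0000000003", "NAME1": "Initech"},
]
adr6 = [  # sample ADR6 rows (e-mail addresses per address number)
    {"ADDRNUMBER": "0000000002", "SMTP_ADDR": "info@globex.example"},
]

# Inner-join semantics: an ADRC row passes only if its key exists in ADR6.
wanted = {row["ADDRNUMBER"] for row in adr6}
filtered = [row for row in adrc if row["ADDRNUMBER"] in wanted]
print([row["NAME1"] for row in filtered])  # only the joined subset remains
```

In a real data flow the join would be pushed down to the source so the full ADRC table never crosses the network.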

ADRC - Addresses (Business Address Services) - SAP table

I am not sure about SAP HANA as a backend. You can refer to the post below, which might be helpful:

https://blogs.saphana.com/2013/04/07/best-practices-for-sap-hana-data-loads/

Can you please advise?

Regards

Arun Sasi

former_member187605
Active Contributor
0 Kudos

Please refer to my answer to the related question; that covers the extraction possibilities from a datastore of type Applications. Another option would be to define a second datastore, of type Database, on the same source and extract directly from the underlying HANA tables. But I would only recommend this for Z-tables.