Application Development Discussions
Join the discussions or start your own on all things application development, including tools and APIs, programming models, and keeping your skills sharp.

Performance Tuning for a report

Former Member
0 Kudos

Hi,

We have developed a program that updates two fields, Reorder Point and Rounding Value, on the MRP1 tab of the material master (as seen in transaction MM03).

To update the fields, we are using the BAPI BAPI_MATERIAL_SAVEDATA.

The problem is that when we upload the data from a txt file, the program takes a very long time. Recently, a file containing 200,000 records took 27 hours to upload. Below is the main portion of the code (I have omitted the OPEN DATASET handling etc.). Please help us fine-tune this so that we can upload these 200,000 records in 2-3 hours.

select matnr from mara into table t_mara.

select werks from t001w into corresponding fields of table t_t001w.

select matnr werks from marc into corresponding fields of table t_marc.

loop at str_table into wa_table.

  if not wa_table-partnumber is initial.
    call function 'CONVERSION_EXIT_ALPHA_INPUT'
      exporting
        input  = wa_table-partnumber
      importing
        output = wa_table-partnumber.
  endif.

  clear wa_message.

  read table t_mara into wa_mara with key matnr = wa_table-partnumber.
  if sy-subrc is not initial.
    concatenate 'material' wa_table-partnumber 'does not exist'
      into wa_message separated by space.
    append wa_message to t_message.
  endif.

  read table t_t001w into wa_t001w with key werks = wa_table-hostlocid.
  if sy-subrc is not initial.
    concatenate 'plant' wa_table-hostlocid 'does not exist'
      into wa_message separated by space.
    append wa_message to t_message.
  else.

    case wa_t001w-werks.
      when 'DE40' or 'DE42' or 'DE44' or 'CN61' or 'US62' or 'SG70' or 'FI40'.
        read table t_marc into wa_marc with key matnr = wa_table-partnumber
                                                werks = wa_table-hostlocid.
        if sy-subrc is not initial.
          concatenate 'material' wa_table-partnumber 'not extended to plant'
            wa_table-hostlocid into wa_message separated by space.
          append wa_message to t_message.
        endif.

      when others.
        concatenate 'plant' wa_table-hostlocid 'not allowed'
          into wa_message separated by space.
        append wa_message to t_message.
    endcase.
  endif.

  if wa_message is initial.

    data: wa_headdata   type bapimathead,
          wa_plantdata  type bapi_marc,
          wa_plantdatax type bapi_marcx.

    wa_headdata-material = wa_table-partnumber.
    wa_plantdata-plant   = wa_table-hostlocid.
    wa_plantdatax-plant  = wa_table-hostlocid.

    wa_plantdata-reorder_pt  = wa_table-rop.
    wa_plantdatax-reorder_pt = 'X'.

    wa_plantdata-round_val  = wa_table-eoq.
    wa_plantdatax-round_val = 'X'.

    call function 'BAPI_MATERIAL_SAVEDATA'
      exporting
        headdata   = wa_headdata
        plantdata  = wa_plantdata
        plantdatax = wa_plantdatax
      importing
        return     = t_bapiret.

    call function 'BAPI_TRANSACTION_COMMIT'.

    write t_bapiret-message.
  endif.

  clear: wa_mara, wa_t001w, wa_marc.
endloop.

loop at t_message into wa_message.
  write wa_message.
endloop.

Thanks in advance.

Peter

Edited by: kishan P on Sep 17, 2010 4:50 PM

12 REPLIES

Former Member
0 Kudos

Hi,

try this way...


loop at str_table into wa_table.
  " collect all the materials into a range or one internal table...
endloop.

" next, fetch the mara & marc data in one shot using FOR ALL ENTRIES
" on that table, instead of:
"   select matnr from mara into table t_mara.
"   select matnr werks from marc into corresponding fields of table t_marc.
select the mara and marc data into an internal table
  for all entries in the internal table built from the input file
  where matnr = <inputfile>-matnr.

select werks from t001w into corresponding fields of table t_t001w.

call BAPI_MATERIAL_SAVEDATA.   " or call BAPI_MATERIAL_SAVEREPLICA for every new material, or once at the end

Prabhudas

0 Kudos

Thanks Prabhu.

But we need to call the BAPI in a loop, as the BAPI header (HEADDATA LIKE BAPIMATHEAD) can take only one material at a time.

Please help.

Peter

0 Kudos

Hi, try:


loop at str_table into wa_table.
  " collect all the materials into a range or one internal table...
endloop.

select the mara and marc data into an internal table <final table>
  for all entries in the internal table built from the input file
  where matnr = <inputfile>-matnr.

select werks from t001w into corresponding fields of table t_t001w.

" do it for distinct material and plant combinations
loop at t_final into wa_final.

  loop at str_table into wa_table where matnr = wa_final-matnr
                                    and plant = wa_final-werks.
    " apply the checks and append to the BAPI save tables in their respective structures
  endloop.

  " at end of material: move the header data and save the batch
  at end of matnr.
    call function 'BAPI_MATERIAL_SAVEREPLICA'
      ...
  endat.

endloop.

Prabhudas

Former Member
0 Kudos

The "BAPI_MATERIAL_SAVEREPLICA" takes multiple materials. But for this amount of data, only multiple parallel tasks (best number depends on your system) will get you much faster. One way would be to split your input file and run a separate instance for each one of them, or use something like "SPTA_PARA_PROCESS_START_2" (see demo report SPTA_PARA_DEMO_1).

Edit: Oh, and you should use a "HASHED TABLE" or at least "BINARY SEARCH" for "t_mara" and "t_marc".

Edited by: Carsten Grafflage on Sep 17, 2010 1:50 PM

0 Kudos

Carsten, thanks for the response.

Can you explain in more detail.

Regards,

Peter

Former Member
0 Kudos

Hi Peter Grosvenor,

Adding BINARY SEARCH will increase the performance for sure. Recently we had a performance issue in our system with a report that looped over a table with 400,000 records and, for each row, read from a table with 100,000 records without BINARY SEARCH. After adding BINARY SEARCH the execution time dropped to 40% of what it was.

Note: sort your table on the key before using BINARY SEARCH; see the sketch below.
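A minimal sketch against the tables from the original post:

sort t_mara by matnr.              " sort once, right after the select
sort t_marc by matnr werks.

" inside the loop:
read table t_mara into wa_mara
  with key matnr = wa_table-partnumber
  binary search.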

I have seen some suggestions of "FOR ALL ENTRIES". My opinion on "FOR ALL ENTRIES": never use it when you process large volumes of data (as you are doing here). It will kill the performance.

Thanks & Regards,

Rock.

Former Member
0 Kudos

@Peter: For the basics of indexed internal tables you should take a look at the following sample chapter from Hermann's book "ABAP Performance Tuning": http://www.sap-press.de/download/dateien/1880/sappress_abap_performance_tuning.pdf

As for the parallel processes, a look at the example program (which has good source comments) would be the best start, I think. I'll try to answer any specific questions.

Former Member
0 Kudos

Hi Peter,

A few more suggestions...

1. Define T_MARA and T_MARC as HASHED internal tables, with unique keys of MATNR and MATNR+WERKS, respectively.

2. When you read these tables, adjust your READ statements to say WITH TABLE KEY - this forces the HASHED table lookup.

3. In your T_T001W and T_MARC SELECTs, ensure that the structures of these tables include ONLY the fields you need (i.e. WERKS, and MATNR+WERKS, respectively) and get rid of the CORRESPONDING FIELDS clause in those SELECT statements.

4. Define a field symbol <WA_TABLE> structured like your current WA_TABLE variable. Change your LOOP AT STR_TABLE by replacing the INTO WA_TABLE clause with ASSIGNING <WA_TABLE>, and change all other references to WA_TABLE within the loop to <WA_TABLE>.

This should shave a fair bit off the execution time (see the sketch below); I can't vouch for how much of the time is currently being taken by the BAPI itself.
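A minimal sketch pulling points 1, 2 and 4 together; the row types ty_mara and ty_marc are names introduced here (only the key fields), and wa_mara is as in the original post:

types: begin of ty_mara,
         matnr type mara-matnr,
       end of ty_mara,
       begin of ty_marc,
         matnr type marc-matnr,
         werks type marc-werks,
       end of ty_marc.

" 1. hashed tables with unique keys
data: t_mara type hashed table of ty_mara with unique key matnr,
      t_marc type hashed table of ty_marc with unique key matnr werks.

field-symbols: <wa_table> like line of str_table.

" 4. loop with a field symbol instead of copying into a work area
loop at str_table assigning <wa_table>.
  " 2. WITH TABLE KEY forces the hashed lookup
  read table t_mara into wa_mara
    with table key matnr = <wa_table>-partnumber.
  ...
endloop.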

Good luck, Andy

Edited by: abapandy on Sep 18, 2010 3:23 AM

former_member193284
Active Participant
0 Kudos

Hi Peter,

I would suggest a few changes to your code. Please follow the procedure below to optimize it.

Steps:

Run an SE30 runtime analysis to find out whether the ABAP code or the database fetch is taking the time.

Run the extended program check or Code Inspector to remove any errors and warnings.

A few code changes that I would suggest:

For the select queries from t001w & marc, remove the CORRESPONDING clause, as this also reduces performance. (For this you can define an internal table with only the required fields, in the order they are specified in the database table, and execute a select query to fetch those fields - see the sketch below.)
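For instance, a sketch for marc (the row type ty_marc_key is a name introduced here):

types: begin of ty_marc_key,
         matnr type marc-matnr,
         werks type marc-werks,
       end of ty_marc_key.

data: t_marc type standard table of ty_marc_key.

" fields listed in the same order as in the structure - no CORRESPONDING needed
select matnr werks from marc into table t_marc.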

Also check that str_table[] is not initial before you execute the loop.

Wherever you have used READ TABLE, sort those tables first and use BINARY SEARCH.

Clear the work areas after every APPEND statement.

As I don't have an SAP system handy, I would also check whether the BAPI's importing parameter for the structure is a table. In case it is a table, I would pass all the records to that table directly and then pass it to the BAPI, rather than looping over every record and updating one at a time.

Hope this helps resolve your problem.

Have a nice day

Thanks

Clemenss
Active Contributor
0 Kudos

Hi Peter,

some hints have been given already, the most valuable being the use of hashed tables. Further improvement is possible by using field symbols instead of work areas, i.e.

field-symbols:
  <marc> like line of t_marc.
...
read table t_marc assigning <marc> ...

But as you do not even use the data, it should be (using a hashed table t_marc):

loop at t_marc transporting no fields
  where matnr = wa_table-partnumber
    and werks = wa_table-HostLocID.
endloop.
if sy-subrc ne 0.
...

But the biggest performance consumer is the BAPI_TRANSACTION_COMMIT after each call of FUNCTION 'BAPI_MATERIAL_SAVEDATA'.

It is OK to commit only about every 1,000 or even more calls - you will see this helps most.
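A minimal sketch of that batching with a simple counter (lv_count is a name introduced here):

data: lv_count type i.

" inside the loop, after each BAPI_MATERIAL_SAVEDATA call:
lv_count = lv_count + 1.
if lv_count >= 1000.
  call function 'BAPI_TRANSACTION_COMMIT'
    exporting
      wait = 'X'.
  clear lv_count.
endif.

" after the loop: commit whatever is left
call function 'BAPI_TRANSACTION_COMMIT'
  exporting
    wait = 'X'.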

Regards,

Clemens

former_member194613
Active Contributor
0 Kudos

you should at least be aware of the very basics:

Your READs need some kind of key support if you use very large tables (and "large" starts at more than 50 entries).

Check this and use sorted or hashed tables.

Measurements on internal tables: Reads and Loops:

/people/siegfried.boes/blog/2007/09/12/runtimes-of-reads-and-loops-on-internal-tables

Of course a much better starting point for an optimization is an ABAP Trace

SE30

/people/siegfried.boes/blog/2007/11/13/the-abap-runtime-trace-se30--quick-and-easy

It always makes sense to look into a SQL trace

SQL trace:

/people/siegfried.boes/blog/2007/09/05/the-sql-trace-st05-150-quick-and-easy

Siegfried

Former Member
0 Kudos

Hello Peter,

In the code in your original post I can see BAPI_TRANSACTION_COMMIT inside the loop. If you move the commit statement outside of the loop you can get a tremendous improvement in your code.

If you would still like to commit inside the loop, set up a counter and execute the COMMIT WORK every 500/1000 loop passes or so; it will give you a considerable performance improvement.

Let me know if you require further inputs.

Regards,

Bysani.