Application Development Discussions
Join the discussions or start your own on all things application development, including tools and APIs, programming models, and keeping your skills sharp.

Issue with nested loop

Former Member
0 Kudos

Hi,

There is a nested loop in the program that is causing a performance issue. The first internal table (the outer loop) contains 6,700 entries and the second internal table (the inner loop) contains more than 500,000 entries. Can somebody please have a look at the code below and suggest how I can modify it to avoid the performance issue?

LOOP AT lt_zquota_item_skip INTO ls_zquota_item_val.

  READ TABLE i_zquota_item2 TRANSPORTING NO FIELDS
    WITH KEY product     = ls_zquota_item_val-product
             source      = ls_zquota_item_val-source
             destination = ls_zquota_item_val-destination
    BINARY SEARCH.

  IF sy-subrc IS INITIAL.
    v_tabix = sy-tabix.
    LOOP AT i_zquota_item2 INTO wa_zquota_item2 FROM v_tabix
         WHERE product     = ls_zquota_item_val-product
           AND source      = ls_zquota_item_val-source
           AND destination = ls_zquota_item_val-destination.
      v_tabx2 = sy-tabix.
      APPEND wa_zquota_item2 TO lt_zquota_keep.
      DELETE i_zquota_item2 INDEX v_tabx2.
    ENDLOOP.
  ENDIF.

  READ TABLE i_zquota_item_val TRANSPORTING NO FIELDS
    WITH KEY product     = ls_zquota_item_val-product
             source      = ls_zquota_item_val-source
             destination = ls_zquota_item_val-destination
    BINARY SEARCH.

  IF sy-subrc IS INITIAL.
    v_tabix = sy-tabix.
    LOOP AT i_zquota_item_val INTO ls_zquota_item_val FROM v_tabix
         WHERE product     = ls_zquota_item_val-product
           AND source      = ls_zquota_item_val-source
           AND destination = ls_zquota_item_val-destination.
      DELETE i_zquota_item_val INDEX sy-tabix.
    ENDLOOP.
  ENDIF.

ENDLOOP.

13 REPLIES

Former Member
0 Kudos

Hi,

The code already looks quite well optimized, so I'm not sure how much value my suggestion below will add, but check it out anyway:


LOOP AT lt_zquota_item_skip INTO ls_zquota_item_val.
    READ TABLE i_zquota_item2 TRANSPORTING NO FIELDS
    WITH KEY product = ls_zquota_item_val-product
    source = ls_zquota_item_val-source
    destination = ls_zquota_item_val-destination
    BINARY SEARCH.
    IF sy-subrc IS INITIAL.
      v_tabix = sy-tabix.
      LOOP AT i_zquota_item2 INTO wa_zquota_item2 FROM v_tabix
      WHERE product = ls_zquota_item_val-product
      AND source = ls_zquota_item_val-source
      AND destination = ls_zquota_item_val-destination.
        v_tabx2 = sy-tabix.
        APPEND wa_zquota_item2 TO lt_zquota_keep.
        DELETE i_zquota_item2 INDEX v_tabx2.
* Check whether the next entry still satisfies the search key; if not, exit
* the loop. The index stays the same because the previous entry was deleted.
        READ TABLE i_zquota_item2 INTO wa_zquota_item2 INDEX v_tabx2.
        IF sy-subrc EQ 0.
          IF wa_zquota_item2-product     EQ ls_zquota_item_val-product AND
             wa_zquota_item2-source      EQ ls_zquota_item_val-source AND
             wa_zquota_item2-destination EQ ls_zquota_item_val-destination.
          ELSE.
            EXIT. "Exit loop - Dont have to validate the remaining recs
          ENDIF.
        ENDIF.
      ENDLOOP.
    ENDIF.
    READ TABLE i_zquota_item_val TRANSPORTING NO FIELDS
    WITH KEY product = ls_zquota_item_val-product
    source = ls_zquota_item_val-source
    destination = ls_zquota_item_val-destination
    BINARY SEARCH.
    IF sy-subrc IS INITIAL.
      v_tabix = sy-tabix.
      LOOP AT i_zquota_item_val INTO ls_zquota_item_val FROM v_tabix
      WHERE product = ls_zquota_item_val-product
      AND source = ls_zquota_item_val-source
      AND destination = ls_zquota_item_val-destination.
        DELETE i_zquota_item_val INDEX sy-tabix.
* Similarly for this loop too. But are you sure you are passing the
* right parameters in the WHERE clause?
      ENDLOOP.
    ENDIF.
  ENDLOOP.

Regards,

Chen

Former Member
0 Kudos

Hi Benjamin,

Remove the WHERE clause

    WHERE product = ls_zquota_item_val-product
      AND source = ls_zquota_item_val-source
      AND destination = ls_zquota_item_val-destination

from the loop and EXIT when there is a mismatch in the values for product, source, or destination. Also remember to set the value of v_tabix when exiting from the loop.

LOOP AT i_zquota_item2 INTO wa_zquota_item2 FROM v_tabix.
  IF wa_zquota_item2-product <> ls_zquota_item_val-product
     OR wa_zquota_item2-source <> ls_zquota_item_val-source
     OR wa_zquota_item2-destination <> ls_zquota_item_val-destination.
    EXIT.
  ENDIF.
  ...

Regards,

Shyam.

yuri_ziryukin
Employee
0 Kudos

Shyam has given you the right direction with his post above.

Here is the complete piece of code with the changes:



LOOP AT lt_zquota_item_skip INTO ls_zquota_item_val.

  READ TABLE i_zquota_item2 TRANSPORTING NO FIELDS
    WITH KEY product = ls_zquota_item_val-product
             source = ls_zquota_item_val-source
             destination = ls_zquota_item_val-destination
    BINARY SEARCH.

  IF sy-subrc IS INITIAL.
    v_tabix = sy-tabix.
    LOOP AT i_zquota_item2 INTO wa_zquota_item2 FROM v_tabix.
      IF ( wa_zquota_item2-product     NE ls_zquota_item_val-product ) OR
         ( wa_zquota_item2-source      NE ls_zquota_item_val-source ) OR
         ( wa_zquota_item2-destination NE ls_zquota_item_val-destination ).
        EXIT.
      ENDIF.

      APPEND wa_zquota_item2 TO lt_zquota_keep.
      DELETE i_zquota_item2. "index specification not necessary in loop

* v_tabx2 = sy-tabix. "commented out as I don't see the usage reason
    ENDLOOP.
  ENDIF.

  READ TABLE i_zquota_item_val TRANSPORTING NO FIELDS
    WITH KEY product = ls_zquota_item_val-product
             source = ls_zquota_item_val-source
             destination = ls_zquota_item_val-destination
    BINARY SEARCH.

  IF sy-subrc IS INITIAL.
    v_tabix = sy-tabix.
    LOOP AT i_zquota_item_val INTO wa_zquota_item_val FROM v_tabix.
*     a separate work area is needed here (wa_zquota_item_val, assumed to be
*     declared like ls_zquota_item_val); otherwise the outer loop's record
*     would be overwritten and the fields below compared with themselves
      IF ( wa_zquota_item_val-product     NE ls_zquota_item_val-product ) OR
         ( wa_zquota_item_val-source      NE ls_zquota_item_val-source ) OR
         ( wa_zquota_item_val-destination NE ls_zquota_item_val-destination ).
        EXIT.
      ENDIF.
      DELETE i_zquota_item_val. "index specification not necessary in loop
    ENDLOOP.
  ENDIF.

ENDLOOP. 


former_member194613
Active Contributor
0 Kudos

What release is your system on? If it is newer than 6.20, then the solution is much, much simpler:

Use a sorted table and write a DELETE ... WHERE ...; then you save the READ ... BINARY SEARCH, the LOOP, and the EXIT. It is all done for you.

Without the EXIT there is of course no real optimization: the entry is found fast, but the improvement is only a factor of 2. With the EXIT you move from linear scaling to logarithmic scaling.
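In code form the idea is roughly this (just a sketch; the line type ty_s_zquota_item_val and the sort key are assumptions based on the code above):

    DATA i_zquota_item_val TYPE SORTED TABLE OF ty_s_zquota_item_val
         WITH NON-UNIQUE KEY product source destination.

    LOOP AT lt_zquota_item_skip INTO ls_zquota_item_val.
* the kernel locates the matching block via the key and stops scanning
* right after it, so no READ ... BINARY SEARCH and no EXIT are needed
      DELETE i_zquota_item_val WHERE product     = ls_zquota_item_val-product
                                 AND source      = ls_zquota_item_val-source
                                 AND destination = ls_zquota_item_val-destination.
    ENDLOOP.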

All explained here:

Measurements on internal tables: Reads and Loops:

/people/siegfried.boes/blog/2007/09/12/runtimes-of-reads-and-loops-on-internal-tables

Siegfried

0 Kudos

Hello Siegfried,

the idea of the original code is not only to delete entries, but also to collect them into another internal table (see: APPEND wa_zquota_item2 TO lt_zquota_keep).

It means that he cannot simply use the DELETE on a sorted table and get rid of all the loops, etc. The loop has to be done anyway.

The only thing that can be spared is the IF comparison used to exit the loop. If the table is defined as sorted, he can indeed use LOOP AT ... WHERE ... .

But who knows if this table can really be defined as a sorted table. Maybe he uses it in several places in the code with a different sort order. We cannot be sure.

Regards,

Yuri

Former Member
0 Kudos

You can use a hashed internal table, and then instead of LOOP ... ENDLOOP you can use DO ... ENDDO with a READ TABLE and an EXIT inside.

That may solve your problems.

former_member194613
Active Contributor
0 Kudos

Without reading the full code in detail, I can comment on that:

> You can use a hashed internal table and then instead of LOOP ... ENDLOOP.

LOOP and hashed tables do not really fit together. Hashed tables should only be used if there is a unique key and if you address records with the full unique key. That means you work on single records, i.e. with READs.
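For illustration, this is the access pattern hashed tables are designed for (a sketch with assumed declarations; note that the tables in this thread have several rows per key, so a hashed table does not fit here):

    DATA: lt_items TYPE HASHED TABLE OF ty_s_zquota_item2
                   WITH UNIQUE KEY product source destination,
          ls_item  TYPE ty_s_zquota_item2.

* single-record access with the FULL unique key: constant time per READ
    READ TABLE lt_items INTO ls_item
         WITH TABLE KEY product     = ls_zquota_item_val-product
                        source      = ls_zquota_item_val-source
                        destination = ls_zquota_item_val-destination.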

Former Member
0 Kudos

Hi,

Another solution would be to use only sorted tables, which are quite good at solving this kind of issue.

If it's possible, try to define your two main tables this way (I'm using DATA; it could of course be some kind of parameter):


DATA: i_zquota_item2 TYPE SORTED TABLE OF ty_s_zquota_item2 
                     WITH NON-UNIQUE KEY product source destination,
      i_zquota_item_val TYPE SORTED TABLE OF ty_s_zquota_item_val
                     WITH NON-UNIQUE KEY product source destination.

Then, you could simplify your program like that:


  LOOP AT lt_zquota_item_skip INTO ls_zquota_item_val.

    LOOP AT i_zquota_item2 INTO wa_zquota_item2  WHERE product     = ls_zquota_item_val-product
                                                   AND source      = ls_zquota_item_val-source
                                                   AND destination = ls_zquota_item_val-destination.
      APPEND wa_zquota_item2 TO lt_zquota_keep.
    ENDLOOP.

* as in the original code, also remove the collected rows from i_zquota_item2
    DELETE i_zquota_item2
       WHERE product     = ls_zquota_item_val-product
         AND source      = ls_zquota_item_val-source
         AND destination = ls_zquota_item_val-destination.

    DELETE i_zquota_item_val
       WHERE product     = ls_zquota_item_val-product
         AND source      = ls_zquota_item_val-source
         AND destination = ls_zquota_item_val-destination.
  ENDLOOP.

paul_bakker2
Active Contributor
0 Kudos

Hi,

Here's an angle that (surprisingly) no one has mentioned yet: doing multiple DELETEs on a large internal table (especially one with 500,000 entries!) is very expensive.

Rather than doing:

    LOOP AT itab INTO wa WHERE f1 = <something>.
      DELETE itab.
    ENDLOOP.

try this:

    LOOP AT itab INTO wa WHERE f1 <> <something>.
      APPEND wa TO itab_filtered.
    ENDLOOP.

and then replace the original table with the filtered table:

    itab[] = itab_filtered[].

.. and your performance problems will disappear like snow on the equator.

cheers

Paul

0 Kudos

Hello Paul,

first of all, the performance of the DELETE statement practically DOES NOT depend on the size of the table.

Here is my test:


Statement          Executions    Net time     Time per row
DELETE LT_INDEX       314,575      343,429     1.091723754
DELETE LT_INDEX     2,202,025    2,409,981     1.094438528

The first DELETE runs on a table with about 316,000 records.

The second DELETE runs on a table with around 2,250,000 records.

The time per row is practically identical.

Next, your approach is not always applicable. What you advise is actually to make fewer loop passes and delete actions. This does not work when:

a) you have to do something with the records left in the table after the deletion. And this might be the case here, as we don't see what happens with i_zquota_item2 after the main loop.

b) in your proposal "LOOP AT itab WHERE f1 <> <something>", the condition has to be inverted. For example, if I want to throw out all values where field A = 'X', then after rewriting the code according to your proposal one has to write LOOP AT ... WHERE field_a NE 'X'.

As you can imagine, this may destroy the performance improvement if you work with a sorted table in the loop, as the sketch below illustrates.

So you have to check first whether your "magic" improvement is applicable here.
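To illustrate point b) with made-up names (field_a is assumed to be the first key field of a sorted table):

    TYPES: BEGIN OF ty_s_line,
             field_a TYPE c LENGTH 1,
             field_b TYPE i,
           END OF ty_s_line.
    DATA: lt_tab      TYPE SORTED TABLE OF ty_s_line WITH NON-UNIQUE KEY field_a,
          lt_filtered TYPE STANDARD TABLE OF ty_s_line,
          ls_line     TYPE ty_s_line.

* positive condition on the key field: the kernel positions itself via
* binary search and stops right after the matching block
    LOOP AT lt_tab INTO ls_line WHERE field_a = 'X'.
      DELETE lt_tab.
    ENDLOOP.

* negated condition: the key cannot be used, every single row is checked
    LOOP AT lt_tab INTO ls_line WHERE field_a NE 'X'.
      APPEND ls_line TO lt_filtered.
    ENDLOOP.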

Regards,

Yuri

0 Kudos

Yuri,

Thank you very much for that.

I was shocked to see your test results, so I tried some of my own... and you are right!

I had discovered quite a few years ago that DELETE was an expensive operation, and I have been using that assumption ever since. In my notes, I even recorded a quote from SAP Help at the time:

"Any operations that insert into, or delete from, internal tables come at a certain cost because the index has to be updated. The amount of time needed for this increases proportionally according to the number of lines that remain after the insert or delete position. Any changes you make at the beginning of the table thus require more runtime than changes at the end of the table.

The line width of the table is of no consequence in this context"

Is this still true? No. I can no longer find this quote in SAP Help, and the DELETE command no longer seems to have this overhead.

Perhaps it was a recent improvement - we are on ECC 6.0.

cheers

Paul

0 Kudos

Hi Paul,

> "Any operations that insert into, or delete from, internal tables come at a certain cost because the index has to be updated. The amount of time needed for this increases proportionally according to the number of lines that remain after the insert or delete position. Any changes you make at the beginning of the table thus require more runtime than changes at the end of the table.

> The line width of the table is of no consequence in this context"

> Is this still true? No. I can no longer find this quote in SAP Help, and the DELETE command no longer seems to have this overhead.


it IS still true, but it is not very important any more. What is described above is a linear index. This index is only used for small tables, up to a certain size; then the system switches to a tree-like index. Once we have that, the copy costs in the index only occur at leaf level.

That's why it is not very important any more. I'm not sure when this tree-like index was introduced. The quote above sounds like it dates from the times before we had that tree-like index structure.

You can find a description of it here:

http://www.sap-press.de/download/dateien/1880/sappress_abap_performance_tuning.pdf

Kind regards,

Hermann

0 Kudos

Thanks Hermann, that clears it up.

I have your book here - I should read it more carefully!

regards

Paul