Application Development Discussions

Problem with nested loops with huge data

Former Member
0 Kudos

Hi developer experts,

I have a problem with nested loops, and I hope I can find a solution for it here.

My situation is as follows:



LOOP AT itab1.              " ~2,000 records
  LOOP AT itab2.            " ~600 records
    LOOP AT itab3.          " ~160,000 records
      LOOP AT itab4.        " ~30,000 records
        ...
      ENDLOOP.
    ENDLOOP.
  ENDLOOP.
ENDLOOP.

I am getting a dump after executing this:

"No more storage space available for extending an internal table"

This is the dump text. Please suggest how I can handle this.

Thanks in advance

G.S.Naidu

21 Replies

Former Member
0 Kudos

Hi,

Add a WHERE condition to the loops that are inside the other loops, so each inner loop only processes the matching rows.
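(For illustration only, a minimal sketch of what that could look like; the structure, table and field names below are invented and not from the original post:)

* Hypothetical example: restrict the inner loop with WHERE
TYPES: BEGIN OF ty_row,
         key1 TYPE i,
         key2 TYPE i,
       END OF ty_row.
DATA: itab1 TYPE STANDARD TABLE OF ty_row,
      itab2 TYPE STANDARD TABLE OF ty_row,
      wa1   TYPE ty_row,
      wa2   TYPE ty_row.

LOOP AT itab1 INTO wa1.
  " only rows of itab2 that match the current itab1 row are processed
  LOOP AT itab2 INTO wa2 WHERE key1 = wa1-key1.
    " <processing>
  ENDLOOP.
ENDLOOP.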

Former Member
0 Kudos

Hi Naidu,

This is definitely a HUGE problem ..... especially with a nesting level of 4.

Possible solution:

Do a parallel cursor on the internal tables two at a time and consolidate the results into two intermediate internal tables, X and Y ......

and finally do another parallel cursor between X and Y .....

Like .....

SORT: itabA BY f1,
      itabB BY f1,
      itabC BY f1,
      itabD BY f1.

* Parallel cursor between itabA and itabB -> consolidate into itabX
LOOP AT itabA INTO wa_a.
  READ TABLE itabB INTO wa_b WITH KEY f1 = wa_a-f1 BINARY SEARCH.
  IF sy-subrc = 0.
    LOOP AT itabB INTO wa_b FROM sy-tabix.
      IF wa_b-f1 <> wa_a-f1.        " left the matching block
        EXIT.
      ENDIF.
      <logic>
      " consolidate into itabX
    ENDLOOP.
  ENDIF.
ENDLOOP.

* Parallel cursor between itabC and itabD -> consolidate into itabY
LOOP AT itabC INTO wa_c.
  READ TABLE itabD INTO wa_d WITH KEY f1 = wa_c-f1 BINARY SEARCH.
  IF sy-subrc = 0.
    LOOP AT itabD INTO wa_d FROM sy-tabix.
      IF wa_d-f1 <> wa_c-f1.
        EXIT.
      ENDIF.
      <logic>
      " consolidate into itabY
    ENDLOOP.
  ENDIF.
ENDLOOP.

* Finally, a parallel cursor between itabX and itabY
SORT: itabX BY f1,
      itabY BY f1.

LOOP AT itabX INTO wa_x.
  READ TABLE itabY INTO wa_y WITH KEY f1 = wa_x-f1 BINARY SEARCH.
  IF sy-subrc = 0.
    LOOP AT itabY INTO wa_y FROM sy-tabix.
      IF wa_y-f1 <> wa_x-f1.
        EXIT.
      ENDIF.
      <logic>
      " consolidate into the final table
    ENDLOOP.
  ENDIF.
ENDLOOP.

Hope this helps ......

Cheers,

Kripa Rangachari...

0 Kudos

Hi Kripa,

Thank you for the immediate response.

I have used the parallel cursor method, but I am still getting the same dump with the same error, "No more storage space available for extending an internal table." - it just occurs in less time now.

Since it is a problem of internal table memory, can we increase the internal table memory in any way?

Thanks in advance

G.S.Naidu

0 Kudos

hi Naidu,

We should opt for memory and other hardware measures only at the end, once we are sure that we don't have any other optimization techniques left....

I think the nesting levels should be avoided; using binary search, index access, WHERE clauses and a parallel cursor between 2 loops will increase the performance ...

So I STRONGLY suggest first going for code optimization, and then we can have a look into other possible solutions ...

Hope this helps..

Cheers,

Kripa Rangachari...

0 Kudos

Hi Kripa,

Yes you are right. But there is no other option for us.

As per the select query, one internal table has 160,000 records and another internal table has 30,000 records.

We can't reduce these records and can't change the select query, as it is a mandatory requirement.

Since I have used the parallel cursor method, READ statements and binary searches, the performance has improved, but there is still no space left to insert the data into the internal table. That is why the dump now comes sooner: before, I was getting the dump after 10 minutes; now, after applying all these performance improvements, the dump comes within 2 minutes.

So I strongly believe that we have to increase the internal table memory. Please suggest something in this area.

Thanks & Regards

G.S.Naidu

0 Kudos

It is indeed a memory problem and not a performance problem.

If you can't process the data in separate blocks (selecting less data per block), then the only solution is increasing memory.
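(For illustration only, a rough sketch of block processing with PACKAGE SIZE; the table MARA and the block size are just placeholders, not taken from the original requirement:)

* Each pass of the SELECT loop fills lt_block with the next chunk
* instead of loading all records into memory at once.
DATA lt_block TYPE STANDARD TABLE OF mara.

SELECT * FROM mara
       INTO TABLE lt_block
       PACKAGE SIZE 10000.
  " process / aggregate the current block here; its contents are
  " replaced by the next chunk on the next loop pass
ENDSELECT.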

Former Member
0 Kudos

Hi

Use READ TABLE with the BINARY SEARCH addition instead of multiple nested loops. This will reduce the execution time.

Regards,

Sridhar

Former Member
0 Kudos

Hi,

First sort all the tables.

Then use a single loop over internal table 1, and inside it:

read table itab2 with binary search,

read table itab3 with binary search,

read table itab4 with binary search,

endloop.
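(A minimal sketch of that pattern with invented names; note that it assumes each READ finds at most one matching row, i.e. 1:1 relationships:)

* Simplification: all four tables share one line type here
TYPES: BEGIN OF ty_row,
         key1 TYPE i,
         key2 TYPE i,
         key3 TYPE i,
       END OF ty_row.
DATA: itab1 TYPE STANDARD TABLE OF ty_row,
      itab2 TYPE STANDARD TABLE OF ty_row,
      itab3 TYPE STANDARD TABLE OF ty_row,
      itab4 TYPE STANDARD TABLE OF ty_row,
      wa1 TYPE ty_row, wa2 TYPE ty_row,
      wa3 TYPE ty_row, wa4 TYPE ty_row.

SORT: itab2 BY key1,
      itab3 BY key2,
      itab4 BY key3.

LOOP AT itab1 INTO wa1.
  READ TABLE itab2 INTO wa2 WITH KEY key1 = wa1-key1 BINARY SEARCH.
  CHECK sy-subrc = 0.                       " no match: next itab1 row
  READ TABLE itab3 INTO wa3 WITH KEY key2 = wa2-key2 BINARY SEARCH.
  CHECK sy-subrc = 0.
  READ TABLE itab4 INTO wa4 WITH KEY key3 = wa3-key3 BINARY SEARCH.
  CHECK sy-subrc = 0.
  " build the result record from wa1..wa4 here
ENDLOOP.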

rohit_kaikala
Participant
0 Kudos

Hi,

Split the loops up.

First sort all the internal tables (so that the reads can use binary search).

Then use the first two loops to bring the data together into one intermediate internal table, and combine it with the next internal table in the same way.

Regards,

Rohith.

former_member182485
Active Contributor
0 Kudos

Hi,

In any situation, do not use more than 2 levels of loop nesting.

Just try AT NEW / AT END OF control breaks or READ TABLE statements; this will solve your problem.
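(For illustration only, a small control-break sketch with an invented table; AT NEW / AT END OF need the table to be sorted by the fields up to the control level:)

TYPES: BEGIN OF ty_row,
         key1   TYPE i,
         amount TYPE p DECIMALS 2,
       END OF ty_row.
DATA: itab  TYPE STANDARD TABLE OF ty_row,
      wa    TYPE ty_row,
      total TYPE p DECIMALS 2.

SORT itab BY key1.

LOOP AT itab INTO wa.
  AT NEW key1.
    CLEAR total.                 " a new key1 group starts
  ENDAT.
  total = total + wa-amount.
  AT END OF key1.
    " one summarized result per key1 group, no inner loop needed
  ENDAT.
ENDLOOP.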

Regards

Bikas

Former Member
0 Kudos

Hi,

In addition to the WHERE clause for the inner tables, sort the inner tables accordingly and use a BINARY SEARCH read on every inner table to find the index of the first record satisfying the WHERE condition. Use that index as the starting point when looping through the inner tables.

Regards,

V Joshi

Former Member
0 Kudos

Hi,

Use the parallel cursor technique.

For a better understanding, please go through the link below:

http://wiki.sdn.sap.com/wiki/display/Community/AlternativesforNested+Loops

Hope this will help you.

Regards,

Kiran

Subhankar
Active Contributor
0 Kudos

Hi Naidu,

Your main problem is the storage space of an internal table: "No more storage space available for extending an internal table".

For that, the parallel cursor is not helpful; it only helps to make the processing faster.

As I understand it, you are building one final internal table from the four internal tables. If your final table has logic to avoid duplicates, or you know that the data will be unique for certain fields, then you can make that table a hashed table. In that case you need to use INSERT instead of APPEND.

If there is no such logic, and no other way to keep the table size low, you can go for an application server file or a custom database table: insert the records into a temporary application server file or into a custom table instead of holding them all in memory.
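(Two rough sketches of those ideas, with invented names and a made-up file path, not from the original post: a hashed result table filled with INSERT, and parking records in an application server file.)

* Sketch 1: hashed final table - INSERT rejects duplicate keys
TYPES: BEGIN OF ty_result,
         matnr TYPE c LENGTH 18,
         werks TYPE c LENGTH 4,
         value TYPE c LENGTH 15,
       END OF ty_result.
DATA: lt_result TYPE HASHED TABLE OF ty_result
                WITH UNIQUE KEY matnr werks,
      ls_result TYPE ty_result.

INSERT ls_result INTO TABLE lt_result.
IF sy-subrc <> 0.
  " the key already exists - decide whether to skip or aggregate
ENDIF.

* Sketch 2: park intermediate records in an application server file
DATA lv_file TYPE string VALUE '/tmp/z_temp_records.dat'.   " made-up path
OPEN DATASET lv_file FOR OUTPUT IN TEXT MODE ENCODING DEFAULT.
TRANSFER ls_result TO lv_file.
CLOSE DATASET lv_file.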

Thanks

Subhankar

matt
Active Contributor
0 Kudos

You've got four internal tables with data already... so where is the amount of data increasing? What are you doing inside the loops? This is the information we need.

Incidentally - I just changed an old program from using BINARY SEARCH to SORTED and HASHED tables and achieved a 60x performance improvement. I frankly find it depressing that BINARY SEARCH is still being promoted as the solution to all performance issues. BINARY SEARCH is old technology. Let's move on from the 1990s, people!

0 Kudos

>

> Incidentally - I just changed an old program from using BINARY SEARCH to SORTED and HASHED tables and achieved a 60x performance improvement.

Hey Matt,

A 60x performance improvement is pretty cool!! I'm in the habit of using SORTED tables, but I still don't think HASHED tables are my cup of tea.

Could you share any specific reason why you used HASHED tables?

Thanks,

Suhas

matt
Active Contributor
0 Kudos

Let's take the example of the previous post:

DATA: itab2 TYPE HASHED TABLE OF desc WITH UNIQUE KEY key.   
  LOOP AT itab1.
    READ TABLE itab2
      WITH TABLE KEY key = itab1-field.
    CHECK sy-subrc = 0.
    LOOP AT itab2 FROM sy-tabix.
      IF itab2-key NE itab1-field.
        EXIT.
      ENDIF.
        "process data from itab2 here
    ENDLOOP.
  ENDLOOP.

So, whenever you're doing a lookup, and you've got a unique key, use a HASHED table.

Former Member
0 Kudos

Hi,

Have you tried the ABAP statement FREE itabx?

It will not only clear/refresh the internal table but also release the memory occupied by it.
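(A tiny sketch; itab3 and itab4 here just stand for whichever intermediate tables are no longer needed:)

" After the intermediate tables have been merged into the final table:
FREE: itab3, itab4.   " releases the storage, unlike CLEAR, which only empties the tables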

Former Member
0 Kudos

Hi!

Replacing a LOOP AT ... WHERE statement is pretty simple; you just have to do the following:



  SORT itab2 BY key.   "sorting before LOOP is very important
  LOOP AT itab1.
    READ TABLE itab2
      WITH KEY key = itab1-field
      BINARY SEARCH.
    IF sy-subrc = 0.
      LOOP AT itab2 FROM sy-tabix.
        IF itab2-key = itab1-field.
          "process data from itab2 here
        ELSE.
          EXIT.
        ENDIF.
      ENDLOOP.
    ENDIF.
  ENDLOOP.

This is a very useful and pretty quick solution.

Regards

Tamá

Former Member
0 Kudos

If you have a WHERE condition in your inner loops, then use sorted tables with a key:

DATA: itab1 TYPE SORTED TABLE OF <structure> WITH UNIQUE KEY key1 key2,
      itab2 TYPE SORTED TABLE OF <structure> WITH UNIQUE KEY key1 key2.

LOOP AT itab1 ASSIGNING <fs1>.        " use field symbols instead of work areas
  LOOP AT itab2 ASSIGNING <fs2> WHERE key1 = <fs1>-key1.
  ENDLOOP.
ENDLOOP.

This will improve performance by a big margin if you have a similar scenario.

Regards

Vishal Kapoor

ThomasZloch
Active Contributor
0 Kudos

Pretty big thread, allow me to summarize from my point of view:

- forget about the parallel cursor technique and "binary search", use sorted or hashed internal tables instead

- make sure the nested loops access these sorted or hashed tables via their key fields

- this will be even more fun once secondary indexes for internal tables become available (Rel. 7.02) - see the sketch after this list

- your problem is due to memory consumption, not due to the nested loops

- try block processing, and use the FREE statement whenever possible to discard temporary data that is no longer needed
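(For readers on Release 7.02 or later, a hedged sketch of what a secondary table key could look like; all names here are invented:)

* 7.02+ syntax: a standard table with an additional sorted secondary key,
* so a nested loop can use key access instead of a full scan.
TYPES: BEGIN OF ty_item,
         docno  TYPE c LENGTH 10,
         posnr  TYPE c LENGTH 6,
         amount TYPE i,
       END OF ty_item.
DATA: lt_items TYPE STANDARD TABLE OF ty_item
               WITH NON-UNIQUE KEY docno posnr
               WITH NON-UNIQUE SORTED KEY by_doc COMPONENTS docno,
      ls_item  TYPE ty_item.

LOOP AT lt_items INTO ls_item USING KEY by_doc WHERE docno = '0000000001'.
  " processed via the secondary key - no full table scan
ENDLOOP.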

Thomas

former_member194613
Active Contributor
0 Kudos

The key point that should be understood is the difference between

LOOP AT itab1 ASSIGNING <fs>.
  READ TABLE itab2 WITH KEY k1 = <fs>-k1 ...
  ....

and

LOOP AT itab1 ASSIGNING <fs>.
  LOOP AT itab2 WHERE k1 = <fs>-k1.
  ....

Either there is only one record (1:1) that can fulfil the inner condition, or there are several (1:C, i.e. one-to-many). You cannot replace a necessary LOOP by a READ (as recommended above).

A LOOP on a hashed table does not make sense; hashed tables do not support ranges! You need a unique key, and you must use the complete unique key, otherwise your access is not faster!

Sorted tables do support ranges; ranges are leading parts of the key! A sorted table automatically does what you otherwise have to do manually to optimize a standard table. How that manual optimization is done was explained above. The exit from the LOOP is essential, otherwise the performance gain is minimal. If done correctly, it is as fast as the sorted table!

Don't forget to test sy-subrc; there is not always a corresponding entry!

That is why the parallel cursor is much too complicated in the general case.

Use ASSIGNING in all LOOPs, but with READs only if the work area is large!

I doubt that your nesting is correct; please check whether every further LOOP really has to be inside the previous one!
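(To illustrate the 1:n case with a sorted table, a rough sketch with invented names; because k1 is a leading part of itab2's key, the inner loop is restricted to the matching range automatically:)

TYPES: BEGIN OF ty_pos,
         k1  TYPE c LENGTH 10,
         k2  TYPE c LENGTH 6,
         val TYPE i,
       END OF ty_pos.
DATA: itab1 TYPE STANDARD TABLE OF ty_pos,
      itab2 TYPE SORTED TABLE OF ty_pos WITH NON-UNIQUE KEY k1 k2.
FIELD-SYMBOLS: <fs1> TYPE ty_pos,
               <fs2> TYPE ty_pos.

LOOP AT itab1 ASSIGNING <fs1>.
  LOOP AT itab2 ASSIGNING <fs2> WHERE k1 = <fs1>-k1.
    " process the 1:n matches here - no manual BINARY SEARCH / EXIT needed
  ENDLOOP.
ENDLOOP.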