Very slow loop of itab

I have declared an internal table like this:

DATA: wa_collectoraction TYPE zcollectoraction,
      it_collectoraction TYPE STANDARD TABLE OF zcollectoraction.

We have the following records:

200-1000-620-201708-20170819-20170819121212.356 ...
200-1000-620-201708-20170819-20170819121211.356 ...
200-1000-620-201708-20170815-20170815121211.356 ...
200-1000-620-201707-20170710-20170710121200.356 ...
200-1000-620-201707-20170710-20170710121100.356 ...
200-1000-620-201707-20170709-20170709121100.356 ...

Then I fill the table with the following two records:

200-1000-620-201708-20170819
200-1000-620-201707-20170710

SELECT bukrs kunnr yearmonth MAX( dat ) AS dat
FROM zcollectoraction
  INTO CORRESPONDING FIELDS OF TABLE it_collectoraction
WHERE bukrs IN so_bukrs AND
      kunnr IN so_kunnr AND
      dat   IN so_date
GROUP BY bukrs kunnr yearmonth.

Finally, I have the following loop, where I get the MAX time for each record and then use the key fields to read all the remaining fields of the record:

LOOP AT it_collectoraction INTO wa_collectoraction.
  PERFORM progress_bar USING 'Retrieving data...'(035)
                             sy-tabix
                             i_tab_lines.
  "Get the MAX TIME for all lines in order to cover the case we have more than 1 line.
  SELECT SINGLE * FROM zcollectoraction
    INTO CORRESPONDING FIELDS OF wa_collectoraction
    WHERE bukrs = wa_collectoraction-bukrs AND
          kunnr = wa_collectoraction-kunnr AND
          dat   = wa_collectoraction-dat   AND
          time  = ( SELECT MAX( time ) AS time
                      FROM zcollectoraction
                      WHERE bukrs = wa_collectoraction-bukrs AND
                            kunnr = wa_collectoraction-kunnr AND
                            dat   = wa_collectoraction-dat ).

  MODIFY it_collectoraction FROM wa_collectoraction.
ENDLOOP.

This loop takes 5 minutes for 3,000 records, and the internal table it loops over has 30,500 records.

I hope that makes it clearer.

Can someone tell me what to do in order to make this faster?

Thanks in advance


  • A SELECT inside a loop, and you wonder why it is slow?

  • Sorry, I don't understand why you put a subquery on the same table into that SELECT SINGLE. If you are looking for the MAX date value, then just put it in the first SELECT. Of course, you won't be able to use SELECT * with that, but since you use CORRESPONDING I guess you don't even need *. It's convenient, but hardly a reason to sacrifice performance.

    It might make sense to provide a data example to explain what exactly you are looking for and what result you're trying to achieve.

  • Jelena,

    Well, let me explain. In the first SELECT I take the key fields for the MAX date of the month. Then I select all the fields of the record with the MAX time, because it is quite possible that more than one record exists for that max date. Here is an example.

    These are the key fields: MANDT-BUKRS-KUNNR-YEARMONTH-DAT-TIME

    200-1000-620-201708-20170819-20170819121212.356
    200-1000-620-201708-20170819-20170819121211.356
    200-1000-620-201708-20170815-20170815121211.356
    200-1000-620-201707-20170710-20170710121200.356
    200-1000-620-201707-20170710-20170710121100.356
    200-1000-620-201707-20170709-20170709121100.356

    So I want to display the MAX date of each month, and if more than one record exists for that max date, I want to take the one with the MAX time. For the sample above, that means the 20170819 12:12:12.356 record for 201708 and the 20170710 12:12:00.356 record for 201707.

    Well, is there any other way?

    Thanks in advance


7 Answers

  • Best Answer
    Oct 04, 2017 at 02:23 PM

    First of all, I want to thank all of you for your help.

    I changed the logic of the select: I now use an internal table with all records from the database table that match the user's selection criteria.

    So the code became as follows:

    DATA: wa_collectoraction TYPE zcollectoraction,
          it_collectoraction TYPE TABLE OF zcollectoraction,
          itsort_collectoraction TYPE HASHED TABLE OF zcollectoraction
              WITH UNIQUE KEY mandt bukrs kunnr yearmonth dat time.
    
    FIELD-SYMBOLS: <fs_collectoraction> LIKE LINE OF it_collectoraction.
    
        SELECT bukrs kunnr yearmonth MAX( dat ) AS dat
          FROM zcollectoraction
            INTO CORRESPONDING FIELDS OF TABLE it_collectoraction
          WHERE bukrs IN so_bukrs AND
                kunnr IN so_kunnr AND
                dat   IN so_date
          GROUP BY bukrs kunnr yearmonth.
    
        " Keep the total records which will be inserted.
        i_tab_lines = sy-dbcnt.
    
        SELECT * INTO TABLE itsort_collectoraction
          FROM zcollectoraction
          WHERE bukrs IN so_bukrs AND
              kunnr IN so_kunnr AND
              dat   IN so_date.
    
        SORT itsort_collectoraction
                    BY mandt bukrs kunnr yearmonth dat time DESCENDING.
    
        LOOP AT it_collectoraction ASSIGNING <fs_collectoraction>.
          PERFORM progress_bar USING 'Retrieving data...'(035)
                                     sy-tabix
                                     i_tab_lines.
    
          READ TABLE itsort_collectoraction INTO wa_collectoraction
              WITH KEY bukrs = <fs_collectoraction>-bukrs
                        kunnr = <fs_collectoraction>-kunnr
                        yearmonth = <fs_collectoraction>-yearmonth
                        dat   = <fs_collectoraction>-dat.
          <fs_collectoraction> = wa_collectoraction.
        ENDLOOP.

    This code runs 43,000 records in 1 minute.

    The only problem is that after the first 10,000 to 15,000 records the process slows down. I don't know if there is some command to clear something, or what I would need to clear.

    Again, thanks a lot to all of you.

    Regards

    Elias

    PS. In the first 10 seconds it processes 14,000 records, after 1 minute it has processed 38,500, and after 1 minute and 50 seconds it has finished all 54,500 records.

    It gives me the impression that something fills up and slows the process down.

    ANY IDEA?


    • Hmmm... SORT of a hashed table? And READ TABLE on a hashed table without WITH TABLE KEY? I'm afraid that won't perform very well.

      Because it's sorted manually, the internal table should be declared as:

      DATA itsort_collectoraction TYPE STANDARD TABLE OF zcollectoraction.

      Right after the SORT (which is needed because of TIME descending), you should add this line:

      DELETE ADJACENT DUPLICATES FROM itsort_collectoraction COMPARING bukrs kunnr yearmonth dat.

      And the READ TABLE should use BINARY SEARCH.

      OR transfer the internal table to a new hashed table with key bukrs kunnr yearmonth dat (without the time field), right after the DELETE ADJACENT DUPLICATES, and use a READ TABLE ... WITH TABLE KEY on that hashed table.
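
      An untested sketch of that second option (it_latest is just an illustrative name; the other names and the field symbol are reused from the answer above):

      DATA it_latest TYPE HASHED TABLE OF zcollectoraction
           WITH UNIQUE KEY bukrs kunnr yearmonth dat.

      " After the SORT and DELETE ADJACENT DUPLICATES there is exactly one row
      " per bukrs/kunnr/yearmonth/dat, so the unique hashed key cannot clash.
      it_latest = itsort_collectoraction.

      LOOP AT it_collectoraction ASSIGNING <fs_collectoraction>.
        " Hash access with the full table key: constant cost per read,
        " no matter how many records have already been processed.
        READ TABLE it_latest INTO wa_collectoraction
             WITH TABLE KEY bukrs     = <fs_collectoraction>-bukrs
                            kunnr     = <fs_collectoraction>-kunnr
                            yearmonth = <fs_collectoraction>-yearmonth
                            dat       = <fs_collectoraction>-dat.
        IF sy-subrc = 0.
          <fs_collectoraction> = wa_collectoraction.
        ENDIF.
      ENDLOOP.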

  • Oct 03, 2017 at 11:24 AM

    Did you try a subquery like the following?

    SELECT bukrs kunnr yearmonth dat max( time ) AS time
      FROM zcollectoraction AS a 
      INTO CORRESPONDING FIELDS OF TABLE it_collectoraction
      WHERE bukrs IN so_bukrs 
        AND kunnr IN so_kunnr
        AND dat   IN ( SELECT max( dat )
                       FROM zcollectoraction AS b 
                       WHERE b~bukrs EQ a~bukrs
                         AND b~kunnr EQ a~kunnr
                         AND b~dat   IN so_date
                         AND b~yearmonth EQ a~yearmonth )
      GROUP BY bukrs kunnr yearmonth dat.
    

    Of course, check first for adequate index on the database table.

    For those (most of us) who don't have access to your Z-table, you can compare performance against any kind of loop / SELECT SINGLE / ENDLOOP or FOR ALL ENTRIES approach with:

      SELECT bukrs blart cpudt MAX( cputm ) AS cputm
        FROM bkpf AS a
        INTO CORRESPONDING FIELDS OF TABLE itab
        WHERE bukrs IN so_bukrs
          AND blart IN so_blart
          AND cpudt IN ( SELECT MAX( cpudt )
                         FROM bkpf AS b
                         WHERE b~bukrs EQ a~bukrs
                           AND b~blart EQ a~blart
                           AND b~cpudt IN so_cpudt )
        GROUP BY bukrs blart cpudt.

    • This should come from your original code, where more than 98% of the time is spent accessing the database table again and again (the SELECT inside the loop).

      Now look at the explain plan in an SQL trace, both for your loop/SELECT/ENDLOOP and for the subquery statement.

  • Oct 03, 2017 at 09:05 PM

    Reading all this, I'm getting the impression that a better idea is to take a step back, think about what you are trying to achieve (it's still unclear to me), and review the design of your table(s).


    • I get that bit, but what is it you're actually trying to achieve? We have no idea where this table data comes from; the only thing we know is that it's some custom setup.

      By step back I mean review the business scenario and table. The logic you describe above, the table, and the example are inconsistent with one another. Or maybe that’s my interpretation. Your revised example shows a timestamp-type field that includes the date, so why not select max( ) on that straight away? The select doesn’t do a conditional lookup of time - there is no logic in your examples to determine if there are one or many records for the same date (“if for this max date exists more than 1 record then...”).
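
      Something like this, for example (an untested sketch, assuming the TIME column really holds the full timestamp as in your sample data):

      " Because the timestamp embeds the date, its MAX per month already
      " identifies the newest record of that month.
      SELECT bukrs kunnr yearmonth MAX( time ) AS time
        FROM zcollectoraction
        INTO CORRESPONDING FIELDS OF TABLE it_collectoraction
        WHERE bukrs IN so_bukrs
          AND kunnr IN so_kunnr
          AND dat   IN so_date
        GROUP BY bukrs kunnr yearmonth.

      The remaining fields then still have to be picked up once per result row, but that can be done with a READ on a pre-selected internal table instead of a SELECT SINGLE per line.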

  • Oct 03, 2017 at 11:47 AM

    Also check if it isn't the progress bar that is causing the long execution time.


  • Oct 03, 2017 at 11:24 AM

    At first glance it looks like you might be causing a nested SELECT on the database side. Two quick things to try:

    • UP TO 1 ROWS instead of SINGLE
    • Split your SELECT: do the MAX( time ) part into an itab first, and then run a second SELECT using FOR ALL ENTRIES (see the sketch below).
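
    A rough, untested sketch of that second bullet, reusing the names from the question (it_maxtime and it_fulldata are just illustrative names):

    DATA: it_maxtime  TYPE STANDARD TABLE OF zcollectoraction,
          it_fulldata TYPE STANDARD TABLE OF zcollectoraction.

    " Step 1: one aggregate SELECT for the MAX time per company/customer/day.
    SELECT bukrs kunnr dat MAX( time ) AS time
      FROM zcollectoraction
      INTO CORRESPONDING FIELDS OF TABLE it_maxtime
      WHERE bukrs IN so_bukrs
        AND kunnr IN so_kunnr
        AND dat   IN so_date
      GROUP BY bukrs kunnr dat.

    " Step 2: one FOR ALL ENTRIES SELECT for the complete records.
    IF it_maxtime IS NOT INITIAL.
      SELECT * FROM zcollectoraction
        INTO TABLE it_fulldata
        FOR ALL ENTRIES IN it_maxtime
        WHERE bukrs = it_maxtime-bukrs
          AND kunnr = it_maxtime-kunnr
          AND dat   = it_maxtime-dat
          AND time  = it_maxtime-time.
    ENDIF.

    Inside the original loop, a READ TABLE on it_fulldata (sorted by bukrs kunnr dat) then replaces the SELECT SINGLE.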

  • Oct 04, 2017 at 01:06 PM

    You may try the following kind of query before the loop and then use a READ TABLE inside the loop. Of course, you must check the database execution plan and optimize it as much as possible (it may even perform worse). In my example, the columns CARRID, CONNID, FLDATE and DEPTIME correspond to your columns BUKRS, KUNNR, DAT and TIME respectively:

    DATA lt_sflights2 TYPE TABLE OF sflights2.
    SELECT * FROM sflights2 AS a INTO TABLE lt_sflights2
      WHERE fldate IN (
          SELECT MAX( fldate ) AS fldate FROM sflights2 AS c
            WHERE c~carrid = a~carrid
              AND c~connid = a~connid
            GROUP BY carrid connid
          )
        AND deptime IN (
          SELECT MAX( deptime ) FROM sflights2 AS b
            WHERE b~carrid = a~carrid
              AND b~connid = a~connid
              AND b~fldate = a~fldate
            GROUP BY carrid connid fldate
          ).
    

  • Oct 04, 2017 at 01:20 PM

    Hi,

    If there are only 3,000 records in your Z-table, then just select them all into an internal table before the loop, and then use a SORT and a READ to get the information you require (see the sketch below).

    It doesn't make a lot of sense to run 35,000 SELECTs against a table that has fewer entries than that.
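
    An untested sketch of that idea (it_all is just an illustrative name; the rest reuses the names from the question):

    DATA it_all TYPE STANDARD TABLE OF zcollectoraction.

    " One database round trip instead of one SELECT per loop pass.
    SELECT * FROM zcollectoraction
      INTO TABLE it_all
      WHERE bukrs IN so_bukrs
        AND kunnr IN so_kunnr
        AND dat   IN so_date.

    " Newest time first within each key, then keep only that newest row.
    SORT it_all BY bukrs ASCENDING kunnr ASCENDING dat ASCENDING time DESCENDING.
    DELETE ADJACENT DUPLICATES FROM it_all COMPARING bukrs kunnr dat.

    LOOP AT it_collectoraction INTO wa_collectoraction.
      READ TABLE it_all INTO wa_collectoraction
           WITH KEY bukrs = wa_collectoraction-bukrs
                    kunnr = wa_collectoraction-kunnr
                    dat   = wa_collectoraction-dat
           BINARY SEARCH.
      IF sy-subrc = 0.
        MODIFY it_collectoraction FROM wa_collectoraction.
      ENDIF.
    ENDLOOP.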

    Kind regards, Rob Dielemans
