Application Development Discussions

Very slow loop of itab

ekekakos
Participant

I have declared an internal table like this:

DATA: wa_collectoraction TYPE zcollectoraction,
      it_collectoraction TYPE STANDARD TABLE OF zcollectoraction.

We have the following records (key fields shown, remaining columns omitted):

200-1000-620-201708-20170819-20170819121212.356

200-1000-620-201708-20170819-20170819121211.356

200-1000-620-201708-20170815-20170815121211.356

200-1000-620-201707-20170710-20170710121200.356

200-1000-620-201707-20170710-20170710121100.356

200-1000-620-201707-20170709-20170709121100.356

Then I fill the internal table with the following two records, produced by the SELECT below:

200-1000-620-201708-20170819

200-1000-620-201707-20170710

SELECT bukrs kunnr yearmonth MAX( dat ) AS dat
FROM zcollectoraction
  INTO CORRESPONDING FIELDS OF TABLE it_collectoraction
WHERE bukrs IN so_bukrs AND
      kunnr IN so_kunnr AND
      dat   IN so_date
GROUP BY bukrs kunnr yearmonth.

Finally, I have the following loop, where I get the MAX time for each record and then, using the key fields, read all the remaining fields of that record.

LOOP AT it_collectoraction INTO wa_collectoraction.
  PERFORM progress_bar USING 'Retrieving data...'(035)
                             sy-tabix
                             i_tab_lines.

  " Get the MAX TIME for all lines in order to cover the case we have more than 1 line.
  SELECT SINGLE * FROM zcollectoraction
    INTO CORRESPONDING FIELDS OF wa_collectoraction
    WHERE bukrs = wa_collectoraction-bukrs AND
          kunnr = wa_collectoraction-kunnr AND
          dat   = wa_collectoraction-dat   AND
          time  = ( SELECT MAX( time ) AS time
                      FROM zcollectoraction
                      WHERE bukrs = wa_collectoraction-bukrs AND
                            kunnr = wa_collectoraction-kunnr AND
                            dat   = wa_collectoraction-dat ).

  MODIFY it_collectoraction FROM wa_collectoraction.
ENDLOOP.

This loop takes 5 minutes for 3,000 records. The internal table driving the loop has 30,500 records.

I hope that makes it clearer.

Can someone tell me what to do to make this faster?

Thanks in advance


21 REPLIES

raymond_giuseppi
Active Contributor

Did you try some subquery like

SELECT bukrs kunnr yearmonth dat MAX( time ) AS time
  FROM zcollectoraction AS a
  INTO CORRESPONDING FIELDS OF TABLE it_collectoraction
  WHERE bukrs IN so_bukrs
    AND kunnr IN so_kunnr
    AND dat   IN ( SELECT MAX( dat )
                   FROM zcollectoraction AS b
                   WHERE b~bukrs EQ a~bukrs
                     AND b~kunnr EQ a~kunnr
                     AND b~dat   IN so_date
                     AND b~yearmonth EQ a~yearmonth )
  GROUP BY bukrs kunnr yearmonth dat.

Of course, first check for an adequate index on the database table.

For those (most of us) who don't have access to your Z-table, you can compare performance against any kind of LOOP/SELECT SINGLE/ENDLOOP or FOR ALL ENTRIES with:

  SELECT bukrs blart cpudt MAX( cputm ) AS cputm
    FROM bkpf AS a
    INTO CORRESPONDING FIELDS OF TABLE itab
    WHERE bukrs IN so_bukrs
      AND blart IN so_blart
      AND cpudt IN ( SELECT MAX( cpudt )
                     FROM bkpf AS b
                     WHERE b~bukrs EQ a~bukrs
                       AND b~blart EQ a~blart
                       AND b~cpudt IN so_cpudt )
    GROUP BY bukrs blart cpudt.

0 Kudos

The problem is in this selection. Even with your query it takes the same time to fill the table.

0 Kudos

So did you, as I already suggested, check for available indexes on your Z-table? You should also perform an SQL trace and look with SE11 at the primary key and secondary indexes. An ABAP or performance trace with SAT is also a good idea (use ST12 if available).

Hint: Look for an index with your grouping keys. (client, company, customer, yearmonth and dat would be required)

0 Kudos

Raymond the table keys are client, company, customer, yearmonth, dat and time. Do I need another index?

0 Kudos

IMHO no index is required. Did you analyze an SQL trace (access plan) and an ABAP trace (is the SQL execution the actual problem)? Can you post the access plan?

Don't forget that optimization relies on statistics. For example, if most customers have only one record per month and company, a full select of the required data into an internal table followed by a removal of duplicates could be faster (a rough sketch follows). It also depends on which database type you use.
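As a rough, untested sketch of that remark, using the table and selection-screen names from the question (lt_all is just an illustrative name):

" Read everything in the selection range once ...
DATA lt_all TYPE STANDARD TABLE OF zcollectoraction.

SELECT * FROM zcollectoraction
  INTO TABLE lt_all
  WHERE bukrs IN so_bukrs
    AND kunnr IN so_kunnr
    AND dat   IN so_date.

" ... then keep only the newest row per company/customer/month:
" newest date and time first, then drop the older duplicates of the grouping key.
SORT lt_all BY bukrs kunnr yearmonth dat DESCENDING time DESCENDING.
DELETE ADJACENT DUPLICATES FROM lt_all COMPARING bukrs kunnr yearmonth.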

0 Kudos
runtime-analysis1.txt

The database is DB2. I am trying to find the trace and the other things you mention.

I am sending you the runtime analysis report.

0 Kudos

This should come from your original code, where more than 98% of the time is spent opening the file again and again (the SELECT inside the loop).

Now look at the explain plan from an SQL trace, both for your LOOP/SELECT/ENDLOOP and for the subquery statement.

pokrakam
Active Contributor
0 Kudos

At first glance it looks like you might be causing a nested SELECT at the DB end. Two quick things to try:

  • UP TO 1 ROWS instead of SINGLE
  • Split your SELECT: do the MAX( time ) bit into an itab first and then a second SELECT using FOR ALL ENTRIES (a rough sketch follows the list).
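
For illustration only, here is a rough sketch of that split, with the table and field names taken from the question (lt_maxtime and lt_result are made-up names); the aggregate runs once over the selection ranges, and FOR ALL ENTRIES then fetches the complete rows:

" Rough, untested sketch.
DATA: lt_maxtime TYPE STANDARD TABLE OF zcollectoraction,
      lt_result  TYPE STANDARD TABLE OF zcollectoraction.

" 1) One aggregate SELECT for the MAX time per key ...
SELECT bukrs kunnr dat MAX( time ) AS time
  FROM zcollectoraction
  INTO CORRESPONDING FIELDS OF TABLE lt_maxtime
  WHERE bukrs IN so_bukrs
    AND kunnr IN so_kunnr
    AND dat   IN so_date
  GROUP BY bukrs kunnr dat.

" 2) ... then one SELECT with FOR ALL ENTRIES for the complete rows.
"    (FOR ALL ENTRIES must never run against an empty driver table.)
IF lt_maxtime IS NOT INITIAL.
  SELECT * FROM zcollectoraction
    INTO TABLE lt_result
    FOR ALL ENTRIES IN lt_maxtime
    WHERE bukrs = lt_maxtime-bukrs
      AND kunnr = lt_maxtime-kunnr
      AND dat   = lt_maxtime-dat
      AND time  = lt_maxtime-time.
ENDIF.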

horst_keller
Product and Topic Expert

A SELECT inside a loop, and you wonder why it is slow?

former_member186746
Active Contributor

Also check if it isn't the progress bar that is causing the long execution time.

0 Kudos

No, the progress bar is not responsible for the delay.

Jelena
Active Contributor
0 Kudos

Sorry, I don't understand why you put a subquery on the same table into that SELECT SINGLE. If you are looking for the MAX date value, then just put it in the first SELECT. Of course, you won't be able to use SELECT * with that, but since you use CORRESPONDING I guess you don't even need *. Sure, it's convenient, but that's hardly a reason to sacrifice performance.

It might make sense to provide a data example to explain what you are looking for exactly and what result you're trying to achieve.

ekekakos
Participant
0 Kudos

Jelena,

Well, let me explain. In the first SELECT I take the key fields for the max date of the month. Then I select all the fields of the record with the max time, because there is a good chance that more than one record exists for that max date. Here is an example.

These are the key fields: MANDT-BUKRS-KUNNR-YEARMONTH-DAT-TIME

200-1000-620-201708-20170819-20170819121212.356

200-1000-620-201708-20170819-20170819121211.356

200-1000-620-201708-20170815-20170815121211.356

200-1000-620-201707-20170710-20170710121200.356

200-1000-620-201707-20170710-20170710121100.356

200-1000-620-201707-20170709-20170709121100.356

So I want to display the MAX date of the month, and if more than one record exists for that max date, I want to take the one with the MAX time.

Well, is there any other way?

Thanks in advance

pokrakam
Active Contributor

Reading all this, I’m getting the impression a better idea is to take a step back, think about what you are trying to achieve (still unclear to me), and review your table(s) design.

0 Kudos

Why unclear, Mike? We have the above six records, and the business asks to display the max date of the month and, if more than one record exists for this max date, to take the one with the biggest time.

Is it still unclear, or has it become clear?

Thanks

pokrakam
Active Contributor
0 Kudos

I get that bit, but what is it you're actually trying to achieve? We have no idea where this table data comes from; the only thing we know is that it's some custom setup.

By "step back" I mean review the business scenario and the table. The logic you describe above, the table, and the example are inconsistent with one another - or maybe that's my interpretation. Your revised example shows a timestamp-type field that includes the date, so why not select MAX( ) on that straight away (see the sketch below)? The SELECT doesn't do a conditional lookup of time; there is no logic in your examples to determine whether there are one or many records for the same date ("if for this max date exists more than 1 record then...").
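
As an illustrative sketch only, assuming (as the example data suggests) that the time column is a full timestamp that already contains the date, the whole aggregation could become a single SELECT using the names from the question:

" One aggregate SELECT: newest timestamp per company/customer/month.
SELECT bukrs kunnr yearmonth MAX( time ) AS time
  FROM zcollectoraction
  INTO CORRESPONDING FIELDS OF TABLE it_collectoraction
  WHERE bukrs IN so_bukrs
    AND kunnr IN so_kunnr
    AND dat   IN so_date
  GROUP BY bukrs kunnr yearmonth.

The remaining fields of each newest record could then be fetched with one more SELECT (for example with FOR ALL ENTRIES) instead of one SELECT SINGLE per loop pass.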

Sandra_Rossi
Active Contributor
0 Kudos

You may try the following kind of query before the loop and use a READ TABLE inside the loop. Of course you must check the database execution plan and optimize it as much as possible (it may even perform worse). In my example, the columns CARRID, CONNID, FLDATE and DEPTIME correspond respectively to your columns BUKRS, KUNNR, DAT and TIME:

DATA lt_sflights2 TYPE TABLE OF sflights2.
SELECT * FROM sflights2 AS a INTO TABLE lt_sflights2
  WHERE fldate IN (
      SELECT MAX( fldate ) AS fldate FROM sflights2 AS c
        WHERE c~carrid = a~carrid
          AND c~connid = a~connid
        GROUP BY carrid connid
      )
    AND deptime IN (
      SELECT MAX( deptime ) FROM sflights2 AS b
        WHERE b~carrid = a~carrid
          AND b~connid = a~connid
          AND b~fldate = a~fldate
        GROUP BY carrid connid fldate
      ).

former_member186746
Active Contributor
0 Kudos

Hi,

If there are only 3,000 records in your Z-table, then just select them all into an internal table before the loop and then use a SORT and a READ to get the information you require.

It doesn't make a lot of sense to run 35,000 SELECTs on a table with fewer entries than that.

Kind regards, Rob Dielemans

0 Kudos

Rob, if I select with one company and three months, then I will get 67,000 records.

ACCEPTED SOLUTION

ekekakos
Participant
0 Kudos

First of all, I want to thank all of you for your help.

I changed the logic of the select by using an internal table with all the records from the database table, according to the user's selection criteria.

So the code became as follows:

DATA: wa_collectoraction TYPE zcollectoraction,
      it_collectoraction TYPE TABLE OF zcollectoraction,
      itsort_collectoraction TYPE HASHED TABLE OF zcollectoraction
          WITH UNIQUE KEY mandt bukrs kunnr yearmonth dat time.

FIELD-SYMBOLS: <fs_collectoraction> LIKE LINE OF it_collectoraction.

    SELECT bukrs kunnr yearmonth MAX( dat ) AS dat
      FROM zcollectoraction
        INTO CORRESPONDING FIELDS OF TABLE it_collectoraction
      WHERE bukrs IN so_bukrs AND
            kunnr IN so_kunnr AND
            dat   IN so_date
      GROUP BY bukrs kunnr yearmonth.

    " Keep the total records which will be inserted.
    i_tab_lines = sy-dbcnt.

    SELECT * INTO TABLE itsort_collectoraction
      FROM zcollectoraction
      WHERE bukrs IN so_bukrs AND
          kunnr IN so_kunnr AND
          dat   IN so_date.

    SORT itsort_collectoraction
                BY mandt bukrs kunnr yearmonth dat time DESCENDING.

    LOOP AT it_collectoraction ASSIGNING <fs_collectoraction>.
      PERFORM progress_bar USING 'Retrieving data...'(035)
                                 sy-tabix
                                 i_tab_lines.

      READ TABLE itsort_collectoraction INTO wa_collectoraction
           WITH KEY bukrs     = <fs_collectoraction>-bukrs
                    kunnr     = <fs_collectoraction>-kunnr
                    yearmonth = <fs_collectoraction>-yearmonth
                    dat       = <fs_collectoraction>-dat.
      <fs_collectoraction> = wa_collectoraction.
    ENDLOOP.

This code runs 43,000 records in 1 minute.

The only problem is that after the first 10,000 to 15,000 records the process slows down. I don't know if there is any command to clear something, or what I would need to clear.

Again thanks a lot all of you.

Regards

Elias

PS: In the first 10 seconds it processed 14,000 records, after 1 minute it had processed 38,500, and after 1 minute and 50 seconds it finished all 54,500 records.

It gives me the impression that something fills up and slows the process down.

Any idea?

Hmmm... a SORT of a hashed table? And a READ TABLE on a hashed table without WITH TABLE KEY? I'm afraid that does not perform very well.

Because it's sorted manually, the internal table should be declared:

DATA itsort_collectoraction TYPE STANDARD TABLE OF zcollectoraction. 

, and right after the SORT (needed because of TIME descending), you should add this line:

DELETE ADJACENT DUPLICATES FROM itsort_collectoraction COMPARING bukrs kunnr yearmonth dat.

, and the READ TABLE should have a BINARY SEARCH.

OR transfer the internal table to a new hashed table with key bukrs kunnr yearmonth dat (without the time field), right after the DELETE ADJACENT DUPLICATES, and use a READ TABLE ... WITH TABLE KEY on that hashed table (a rough sketch follows).
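
For illustration, an untested sketch of that second variant, assuming itsort_collectoraction has already been re-declared as a standard table, sorted, and de-duplicated as described above (lt_lookup and the field symbols are names introduced only for this sketch):

DATA lt_lookup TYPE HASHED TABLE OF zcollectoraction
               WITH UNIQUE KEY bukrs kunnr yearmonth dat.
FIELD-SYMBOLS: <fs_action> LIKE LINE OF it_collectoraction,
               <fs_latest> TYPE zcollectoraction.

" After DELETE ADJACENT DUPLICATES the reduced key is unique,
" so the rows can simply be copied into the hashed lookup table.
lt_lookup = itsort_collectoraction.

LOOP AT it_collectoraction ASSIGNING <fs_action>.
  " Hashed key access instead of a linear WITH KEY scan.
  READ TABLE lt_lookup ASSIGNING <fs_latest>
       WITH TABLE KEY bukrs     = <fs_action>-bukrs
                      kunnr     = <fs_action>-kunnr
                      yearmonth = <fs_action>-yearmonth
                      dat       = <fs_action>-dat.
  IF sy-subrc = 0.
    <fs_action> = <fs_latest>.
  ENDIF.
ENDLOOP.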