Application Development Discussions
Join the discussions or start your own on all things application development, including tools and APIs, programming models, and keeping your skills sharp.

Aggregation with FOR ALL ENTRIES (I know it's not possible, but...)

Former Member
0 Kudos

Hi Experts,

I am trying to tune a statement on a big table by removing it from a LOOP (50.000 statement calls).

The SELECT list contains a complicated aggregation that I want to reuse somehow.

Old code:

LOOP AT datapackage ASSIGNING <f_dp>.
  SELECT SINGLE
         soursystem
         doc_number
         s_ord_item
         MAX( i2 ) MAX( i3 ) MIN( date ) MAX( date ) SUM( qty ) MIN( lt1 ) " ...and a lot of others
    INTO (wa-soursystem,
          wa-doc_number,
          wa-s_ord_item,
          ...)
    FROM /bic/abh_dso
    WHERE soursystem = <f_dp>-soursystem AND
          doc_number = <f_dp>-doc_number AND
          s_ord_item = <f_dp>-s_ord_item
    GROUP BY soursystem
             doc_number
             s_ord_item.

  ...

ENDLOOP.

The table datapackage (50.000 rows) can be used with a FOR ALL ENTRIES to filter the rows from the DB table into an internal table RES_TAB in only a few calls (depending on the FAE configuration); RES_TAB then (of course) holds the detailed records.

BUT I want to reuse the aggregation from the statement's SELECT list - which is not allowed with FAE.
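
Roughly, the FAE variant without the aggregation would look like the sketch below (the shortened field list just stands in for the full one):

" Sketch only: FAE fetches the detail rows; the MAX/MIN/SUM
" aggregation would then have to be rebuilt in ABAP afterwards.
SELECT soursystem doc_number s_ord_item
       i2 i3 date qty lt1 " ...and a lot of others
  INTO TABLE res_tab
  FROM /bic/abh_dso
  FOR ALL ENTRIES IN datapackage
  WHERE soursystem = datapackage-soursystem AND
        doc_number = datapackage-doc_number AND
        s_ord_item = datapackage-s_ord_item.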

I guess it would be a pain to rebuild the aggregation on the internal table; it could be really annoying, and the source code would be really hard to understand (which it isn't when using the SELECT's aggregations).

Is there any clue HOW to avoid that?

7 REPLIES

ravi_lanjewar
Contributor
0 Kudos

Hi,

Check the following logic; it should help you resolve your problem.


DATA: count TYPE i.
RANGES: r_vbeln FOR vbak-vbeln.

LOOP AT it_vbak.

  count = count + 1.

  r_vbeln-sign   = 'I'.
  r_vbeln-option = 'EQ'.
  r_vbeln-low    = it_vbak-vbeln.
  APPEND r_vbeln.

  " Fire one aggregated SELECT per 100 collected keys
  IF count = 100 AND r_vbeln[] IS NOT INITIAL.
    SELECT vbeln SUM( netwr ) APPENDING TABLE it_vbap
      FROM vbap
      WHERE vbeln IN r_vbeln
      GROUP BY vbeln.
    REFRESH r_vbeln.
    CLEAR count.
  ENDIF.

  " Flush whatever keys are left after the last row
  AT LAST.
    IF count NE 100 AND r_vbeln[] IS NOT INITIAL.
      SELECT vbeln SUM( netwr ) APPENDING TABLE it_vbap
        FROM vbap
        WHERE vbeln IN r_vbeln
        GROUP BY vbeln.
    ENDIF.
  ENDAT.

ENDLOOP.

The above avoids executing the same SELECT 50K times in the loop and also avoids the redundant data transfer.

In the above program the SELECT is executed for a maximum of 100 entries at a time. You can change that according to your needs.

But before applying this logic you have to check what kind of data is available in the internal table datapackage.

0 Kudos

Hello Ravi,

In your case why not join VBAK & VBAP? Neither of these tables has buffering switched on, so you can't argue that a JOIN would bypass the buffer et al.
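
Roughly, such a JOIN could look like the sketch below, aggregating NETWR per document in a single statement (whether it actually beats the chunked ranges would need measuring):

" Sketch only: one aggregating JOIN instead of chunked range selects
SELECT v~vbeln SUM( p~netwr )
  INTO TABLE it_vbap
  FROM vbak AS v INNER JOIN vbap AS p
    ON v~vbeln = p~vbeln
  GROUP BY v~vbeln.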

What are your arguments for using this approach? Do you have any runtime analysis readings to support your case?

BR,

Suhas

0 Kudos

Hi Suhas,

It is just an example; consider tables VBAK and VBAP. Don't get confused by this logic.

For your convenience I am changing the table names to a generic scenario.



DATA: count TYPE i.

RANGES: r_field1 FOR table1-field1.

LOOP AT it_table1.

  count = count + 1.

  r_field1-sign   = 'I'.
  r_field1-option = 'EQ'.
  r_field1-low    = it_table1-field1.
  APPEND r_field1.

  " Fire one aggregated SELECT per 100 collected keys
  IF count = 100 AND r_field1[] IS NOT INITIAL.
    SELECT field1 SUM( amount ) APPENDING TABLE it_table2
      FROM xyz
      WHERE field1 IN r_field1
      GROUP BY field1.
    REFRESH r_field1.
    CLEAR count.
  ENDIF.

  " Flush whatever keys are left after the last row
  AT LAST.
    IF count NE 100 AND r_field1[] IS NOT INITIAL.
      SELECT field1 SUM( amount ) APPENDING TABLE it_table2
        FROM xyz
        WHERE field1 IN r_field1
        GROUP BY field1.
    ENDIF.
  ENDAT.

ENDLOOP.

Is it OK for now?

kesavadas_thekkillath
Active Contributor
0 Kudos

Hi,

You have to write the logic with the internal table data.

You could make use of the COLLECT statement if required, and sort ascending/descending to get the MAX and MIN values (see the sketch below). Maybe you can also make use of ranges instead of FOR ALL ENTRIES for a set of data in a loop (a SELECT inside a loop ... you should consider the performance).
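
A minimal sketch of that idea, with assumed structures ts_detail/ts_result, and assuming soursystem, doc_number and s_ord_item are the leading fields of the row type (required for the AT ... ENDAT control levels):

" Sketch only: aggregate the FAE detail rows (res_tab) in ABAP.
" After the SORT, MIN( date ) is the first and MAX( date ) the last
" value per key group; SUM( qty ) is accumulated row by row.
DATA: ls_res   TYPE ts_detail,          " assumed row type of res_tab
      ls_agg   TYPE ts_result,          " assumed aggregate structure
      lt_agg   TYPE STANDARD TABLE OF ts_result,
      lv_first TYPE c LENGTH 1.

SORT res_tab BY soursystem doc_number s_ord_item date.

LOOP AT res_tab INTO ls_res.
  AT NEW s_ord_item.                    " first row of each key group
    CLEAR ls_agg.
    lv_first = 'X'.
  ENDAT.

  IF lv_first = 'X'.                    " first row after SORT = MIN
    ls_agg-min_date = ls_res-date.
    CLEAR lv_first.
  ENDIF.

  ls_agg-soursystem = ls_res-soursystem.
  ls_agg-doc_number = ls_res-doc_number.
  ls_agg-s_ord_item = ls_res-s_ord_item.
  ls_agg-sum_qty    = ls_agg-sum_qty + ls_res-qty.
  ls_agg-max_date   = ls_res-date.      " last row after SORT = MAX

  AT END OF s_ord_item.                 " key group complete
    APPEND ls_agg TO lt_agg.
  ENDAT.
ENDLOOP.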

0 Kudos

Hi Keshav,

I wanted to avoid the procedural implementation.

My idea was to materialize the 3 WHERE columns soursystem, doc_number and s_ord_item in a transparent table (a "filter table") and use them as the primary key index.

Then I could do an EXISTS subquery on that DB table and filter / aggregate without FAE.

Disadvantage: I have to maintain the DB table, i.e. write the rows to it and remove them again later.

I should compare the runtime of the additional I/O against the time saved by eliminating the 50.000 loop cycles.

1) Write the datapackage columns soursystem, doc_number and s_ord_item to the transparent table ZFILTER (a sketch follows below the SELECT).

2) SELECT with aggregation, filtering with an EXISTS subquery:

SELECT soursystem
       doc_number
       s_ord_item
       MAX( i2 ) MAX( i3 ) MIN( date ) MAX( date ) SUM( qty ) MIN( lt1 ) " ...and a lot of others
  INTO TABLE res_tab
  FROM /bic/abh_dso AS t1
  WHERE EXISTS
        ( SELECT soursystem FROM zfilter AS t2
            WHERE t1~soursystem = t2~soursystem AND
                  t1~doc_number = t2~doc_number AND
                  t1~s_ord_item = t2~s_ord_item )
  GROUP BY soursystem
           doc_number
           s_ord_item.
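
For step 1, a minimal sketch (assuming ZFILTER has exactly those three columns as its key):

" Sketch only: fill ZFILTER from datapackage before the SELECT,
" and clean it up again afterwards.
DATA: lt_zfilter TYPE STANDARD TABLE OF zfilter,
      ls_zfilter TYPE zfilter.

FIELD-SYMBOLS: <f_dp> LIKE LINE OF datapackage.

LOOP AT datapackage ASSIGNING <f_dp>.
  ls_zfilter-soursystem = <f_dp>-soursystem.
  ls_zfilter-doc_number = <f_dp>-doc_number.
  ls_zfilter-s_ord_item = <f_dp>-s_ord_item.
  APPEND ls_zfilter TO lt_zfilter.
ENDLOOP.

INSERT zfilter FROM TABLE lt_zfilter.   " step 1: fill the filter table

" ... run the aggregating SELECT above ...

DELETE zfilter FROM TABLE lt_zfilter.   " remove the rows again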

Edited by: YukonKid on Mar 31, 2011 3:01 PM

0 Kudos

Inserting 50000 records into a table is also time consuming, because it transfers the 50000 records to the database, reads the same data again via the subquery, applies the search operation, and sends the result back.

But I don't know why you want to avoid the loop. Avoiding the loop will not resolve your problem if it really is a performance issue.

0 Kudos

I don't know if it would be much faster, but maybe you could try something like this:


DATA: lt_buffer TYPE HASHED TABLE OF ts_buffer
                WITH UNIQUE KEY soursystem doc_number s_ord_item,
      ls_buffer TYPE ts_buffer.

FIELD-SYMBOLS: <buffer> TYPE ts_buffer.

" Macros that merge the current row into the buffered aggregate
DEFINE do_max.
  IF ( ls_buffer-&1 > <buffer>-&1 ).
    <buffer>-&1 = ls_buffer-&1.
  ENDIF.
END-OF-DEFINITION.

DEFINE do_min.
  IF ( ls_buffer-&1 < <buffer>-&1 ).
    <buffer>-&1 = ls_buffer-&1.
  ENDIF.
END-OF-DEFINITION.

DEFINE do_sum.
  ADD ls_buffer-&1 TO <buffer>-&1.
END-OF-DEFINITION.

" SELECT loop: aggregate each detail row into the hashed buffer table
SELECT soursystem
       doc_number
       s_ord_item
       i2 i3 date qty lt1 " ...and a lot of others

  INTO ls_buffer

  FROM /bic/abh_dso

  FOR ALL ENTRIES IN datapackage

  WHERE soursystem = datapackage-soursystem AND
        doc_number = datapackage-doc_number AND
        s_ord_item = datapackage-s_ord_item.

  READ TABLE lt_buffer ASSIGNING <buffer>
       WITH TABLE KEY soursystem = ls_buffer-soursystem
                      doc_number = ls_buffer-doc_number
                      s_ord_item = ls_buffer-s_ord_item.

  IF ( sy-subrc NE 0 ).
    INSERT ls_buffer INTO TABLE lt_buffer.
  ELSE.
    do_max i2.
    do_max i3.
    " NB: MIN and MAX of the same date column need two separate buffer fields
    do_min date.
    do_max date.
    do_sum qty.
    do_min lt1.
    ...
  ENDIF.

ENDSELECT.

Edited by: Carsten Grafflage on Apr 8, 2011 1:46 PM