Former Member

Regarding Table Buffering..

Hi,

I have a requirement to read the text field (TXT20) from table SKAT for the corresponding G/L accounts.

My issue is that I have to fetch TXT20 for each of the G/L accounts already selected into an internal table.

But since SKAT is a buffered table, I am concerned about performance, and I understand I have to use SELECT SINGLE.

Please let me know how to do that, i.e. how to get TXT20 from SKAT for the G/L accounts in the internal table.

Regards,

Priyanka.



7 Answers

  • Best Answer
    Former Member
    Posted on May 21, 2009 at 01:21 PM

    > But as SKAT table is buffered it has performance issue.

    Buffering is done to improve performance.

    Rob


    • Former Member

      > Can you please explain?

      Like this:

        loop at it_bseg.
          " first check the internal cache table
          read table it_skat with key
              spras = 'E'
              ktopl = p_bukrs
              saknr = it_bseg-hkont
              binary search.
          if sy-subrc ne 0.
            " not cached yet - read from the buffered table SKAT
            select single spras ktopl saknr txt20
              from skat
              into it_skat
              where spras = 'E'
                and ktopl = p_bukrs
                and saknr = it_bseg-hkont.
            if sy-subrc = 0.
              append it_skat.
              sort it_skat by saknr.
            endif.
          endif.
        endloop.

      This uses standard tables and binary searches. You may want to use sorted or hashed tables.
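      For instance, a hedged sketch of the same lookup cache using a hashed table (the structure and the name it_skat_h are assumptions, not part of the original code):

```abap
" Sketch only: the same caching idea with a HASHED table, so no SORT or
" BINARY SEARCH is needed. The hashed table it_skat_h is hypothetical.
DATA: BEGIN OF ls_skat,
        spras TYPE skat-spras,
        ktopl TYPE skat-ktopl,
        saknr TYPE skat-saknr,
        txt20 TYPE skat-txt20,
      END OF ls_skat.
DATA it_skat_h LIKE HASHED TABLE OF ls_skat
     WITH UNIQUE KEY spras ktopl saknr.

LOOP AT it_bseg.
  READ TABLE it_skat_h INTO ls_skat
       WITH TABLE KEY spras = 'E'
                      ktopl = p_bukrs
                      saknr = it_bseg-hkont.
  IF sy-subrc <> 0.
    SELECT SINGLE spras ktopl saknr txt20
      FROM skat INTO ls_skat
      WHERE spras = 'E'
        AND ktopl = p_bukrs
        AND saknr = it_bseg-hkont.
    IF sy-subrc = 0.
      INSERT ls_skat INTO TABLE it_skat_h.  " O(1) insert, no SORT needed
    ENDIF.
  ENDIF.
ENDLOOP.
```

      The hashed key lookup is constant-time, which matters when it_bseg is large.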

      Rob

    Former Member
    Posted on May 21, 2009 at 01:18 PM

    Hi,

    Use code along these lines ('CAEK' is just a sample chart of accounts):

    TABLES: t004, ska1, skat.
    DATA: BEGIN OF itab OCCURS 0,
            saknr LIKE skat-saknr,
            txt20 LIKE skat-txt20,
          END OF itab.

    PARAMETERS: coa LIKE t004-ktopl DEFAULT 'CAEK'.

    SELECT * FROM ska1 WHERE ktopl = coa.

      SELECT SINGLE * FROM skat WHERE spras = 'E'
                                  AND ktopl = coa
                                  AND saknr = ska1-saknr.

      IF sy-subrc = 0.
        itab-saknr = ska1-saknr.
        itab-txt20 = skat-txt20.
        APPEND itab.
      ENDIF.

    ENDSELECT.


  • Posted on May 22, 2009 at 09:13 AM

    for single-record buffered tables you MUST use

    LOOP AT itab.
      SELECT SINGLE ...
             FROM dbtab
             WHERE ...  " fully specified key with equal conditions
    ENDLOOP.
    

    It will be much faster than any other access. But when you measure, you should not measure how the buffer is filled!

    That is also a reason why "avoid SELECT SINGLE in a loop" is nonsense!

    It always depends.

    Siegfried


  • Posted on May 22, 2009 at 03:02 PM

    > the FAE entries are in bright yellow, but the SELECT SINGLE entries are in dull yellow

    meaning that they appear in the SQL trace, which is actually the database trace.

    The SELECT SINGLE should disappear from the db-trace and move to the buffer trace (must be switched on), then they are blue (dull and bright :).

    I did not look up the table, but here SELECT SINGLE means the primary key must be fully specified;

    not just any key with SELECT SINGLE, otherwise the single-record buffer will not be accessed.
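    To illustrate (a sketch only; 'INT' and '0000113100' are made-up sample values and lv_txt20 is a hypothetical variable), the first SELECT SINGLE specifies the full key SPRAS/KTOPL/SAKNR (MANDT is supplied implicitly) and can be served from the single-record buffer; the second omits SPRAS and therefore bypasses it:

```abap
DATA lv_txt20 TYPE skat-txt20.

" Full primary key specified -> eligible for the single-record buffer.
SELECT SINGLE txt20 FROM skat INTO lv_txt20
  WHERE spras = 'E'
    AND ktopl = 'INT'           " sample chart of accounts
    AND saknr = '0000113100'.   " sample G/L account

" SPRAS missing -> key incomplete, so the single-record buffer is
" bypassed and the statement goes to the database.
SELECT SINGLE txt20 FROM skat INTO lv_txt20
  WHERE ktopl = 'INT'
    AND saknr = '0000113100'.
```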

    Siegfried


    • Hi Rob,

      Sure, I understand your point (and I had a good sleep).

      You are right that in this example it depends.

      It depends on the number of duplicates, the blocking factor size and the database used.

      As far as I know, FAE works this way (sending the values, removing duplicates, doing a nested loop) only on DB6 and MSSQL.

      For me, I try to remove the duplicates from driver tables to avoid unnecessary calls (to whatever layer) if possible, since avoidance is still the best optimization.

      > Rob Burbank wrote:

      > But again, in the real world, I wouldn't use either of the methods.

      Yes, with your internal-table approach you are effectively removing the identical SELECT SINGLEs from the table buffer (and executing those identical calls as a READ on an internal table).

      The speed-up you get has to be paid for with memory consumption, since you now buffer twice (once in the SAP table buffer via your SELECT SINGLE, and once in your ABAP program's internal table).

      "The faster you want to go, the more memory you need" seems to be the general rule 😊

      Thanks for that interesting discussion.

      Kind regards,

      Hermann

  • Posted on May 23, 2009 at 08:47 AM

    I did not follow the whole discussion, but even in the real world: if you expect that a driver table for FAE contains duplicates, then use SORT and DELETE ADJACENT DUPLICATES before you execute the SELECT.
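    As a sketch of that advice (lt_accounts, lt_skat and p_ktopl are hypothetical names, not from the thread):

```abap
" De-duplicate the FAE driver table in ABAP before the SELECT.
DATA lt_accounts TYPE STANDARD TABLE OF bseg-hkont.
DATA lt_skat     TYPE STANDARD TABLE OF skat.

SORT lt_accounts.
DELETE ADJACENT DUPLICATES FROM lt_accounts.

" Guard: an empty FOR ALL ENTRIES table would select ALL rows.
IF lt_accounts IS NOT INITIAL.
  SELECT spras ktopl saknr txt20
    FROM skat INTO CORRESPONDING FIELDS OF TABLE lt_skat
    FOR ALL ENTRIES IN lt_accounts
    WHERE spras = 'E'
      AND ktopl = p_ktopl      " hypothetical chart-of-accounts parameter
      AND saknr = lt_accounts-table_line.
ENDIF.
```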

    DISTINCT for FAE: I don't understand how this could be done on the database side. The blocks are sent to the DB and processed independently, so duplicates in different blocks must be removed on the application server.

    => but if anybody knows better, please tell me how it works.

    And please never SORT inside a loop!

    loop at it_bseg.
        read table it_skat with key
            spras = 'E'
            ktopl = p_bukrs
            saknr = it_bseg-hkont
            binary search.
        if sy-subrc ne 0.
          select single spras ktopl saknr txt20
            from skat
            into it_skat
            where spras = 'E'
               and ktopl = p_bukrs
               and saknr = it_bseg-hkont.
          if sy-subrc = 0.
            append it_skat.
            sort it_skat by saknr.
          endif.
        endif.
      endloop.
    

    Use a sorted table, or combine a READ ... BINARY SEARCH with an INSERT ... INDEX (this is the insert for the sorted standard table).
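    A hedged sketch of that pattern, reusing the names from the quoted code (lv_tabix is an added helper variable):

```abap
" Keep a standard table sorted without SORT inside the loop:
" a failed READ ... BINARY SEARCH leaves the insertion point in
" sy-tabix, and INSERT ... INDEX puts the new row exactly there.
DATA lv_tabix TYPE sy-tabix.

LOOP AT it_bseg.
  READ TABLE it_skat WITH KEY saknr = it_bseg-hkont
       BINARY SEARCH.
  lv_tabix = sy-tabix.          " insertion point, even when READ fails
  IF sy-subrc <> 0.
    SELECT SINGLE spras ktopl saknr txt20
      FROM skat INTO it_skat
      WHERE spras = 'E'
        AND ktopl = p_bukrs
        AND saknr = it_bseg-hkont.
    IF sy-subrc = 0.
      INSERT it_skat INDEX lv_tabix.  " table stays sorted, no SORT call
    ENDIF.
  ENDIF.
ENDLOOP.
```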

    Siegfried


    • Hi,

      small addition:

      Removing duplicates within blocks is an interesting topic. DB6 and MSSQL remove duplicates from the input before executing. (MSSQL does a UNION on the input, which removes duplicates, and then a nested loop on the remaining values.)

      I just realized that ORACLE does it as well on 10.2.0.4, although it is not obvious from the plan or the documentation.

      I have no clue about DB2, DB4 and MaxDB.

      You can guess from the plans...

      MaxDB (UNION ALL for each value, then DISTINCT)

      - seems to execute all (no removal of duplicates from the input) but removes the duplicates from the result set (output).

      DB4 (OR concatenation for each value)

      - seems to execute and return all to the DBI.

      DB2 (UNION ALL for each value)

      - seems to execute and return all to the DBI.

      For MaxDB I checked the page accesses; it looks like all values are executed, with 5 page accesses per row. In MaxDB the primary index has more levels since it contains the data.

      For the others, you never know... see the ORACLE example.

      It is (for me) not so easy to find out, because it is DB-specific and sometimes you cannot get these details within SAP but only on DB level. And even when the information is available on SAP level, you have to know how to get it, where to find it and how to read it... But I'll try when I have time.

      Removing duplicates within blocks is of course more efficient if we have big blocks:

      DB6 uses a blocking factor of 30 for OR concatenation and 30 or 60 for IN (default, depends on SAP release).

      MSSQL uses 5 for OR concatenation and 255 for IN (default, depends on SAP release).

      ORACLE uses 5 in any case.

      Generally these values must not be changed at system-wide level. For optimization reasons we sometimes increase them at statement level, on ORACLE databases.

      But removing duplicates is not the reason for that. As already said, this should be done in ABAP (DELETE ADJACENT DUPLICATES).

      We do it for another reason: to have fewer DB calls and therefore less communication overhead. But this is a different topic.

      Kind regards,

      Hermann

  • Posted on May 23, 2009 at 08:54 AM

    > But I also tried another method

    you should actually not mix the two buffering methods: if table buffering is applicable, then you should use it and not a self-made buffer.

    Buffer methods/functions/forms are very useful, and necessary, in cases where table buffering does not work.


  • Posted on May 24, 2009 at 04:59 PM

    Hello Hermann,

    thank you for the detailed explanation; actually I did not see that you were talking about in-block duplicates.

    Larger block sizes are however quite rare:

    I saw them only on MaxDB.

    > DB6 uses for OR concat 30 and for IN 30 or 60 (default, depends on SAP release).

    > MSSQL uses for OR concat 5 and for IN 255 (default, depends on SAP release).

    > ORACLE uses 5 in any case.

    this is true, but usually rsdb/prefer_in_itab_opt is zero on DB6 and MSSQL.

    Siegfried


    • Hi Siegfried,

      >

      > this is true, but usually rsdb/prefer_in_itab_opt is zero on DB6 and MSSQL.

      > Siegfried

      yes, rsdb/prefer_join is set instead (since they do a join).

      DB6:

      max_blocking_factor = 30, max_in_blocking_factor = 60,

      min_blocking_factor = 1, min_in_blocking_factor = 1,

      prefer_union_all = 1, prefer_join = 1,

      prefer_fix_blocking = 0, prefer_in_itab_opt = 0,

      MSSQL:

      max_blocking_factor = 50, max_in_blocking_factor = 255,

      min_blocking_factor = 5, min_in_blocking_factor = 10,

      prefer_union_all = 1, prefer_join = 1,

      prefer_fix_blocking = 0, prefer_in_itab_opt = 0,

      With a SELECT like this (I know it doesn't make sense; let's just assume MSGNR were the primary key):

        select * from t100 into table it_t100 bypassing buffer
          for all entries in it
          where msgnr = it-msgnr.

      I get 60 bind variables per block on DB6 with the above settings.

      It seems max_in_blocking_factor = 60 is used here.

      On MSSQL it's 50; it seems max_blocking_factor = 50 is used there.

      It is not so uncommon to see such sizes, I think.

      Since I work more often on DB6 than on MSSQL, I can only tell for DB6.

      There I see it working quite well a lot of the time, I would say.

      I think that could explain why Rob's FAE with 10,000 values and 68 distinct values on DB6 is slightly faster than 10,000 SELECT SINGLEs on the table buffer...

      Kind regards,

      Hermann
