Application Development Discussions
Join the discussions or start your own on all things application development, including tools and APIs, programming models, and keeping your skills sharp.

Static or global internal table vs buffered table.

former_member198308
Active Participant
0 Kudos

Hi experts

Please let me know your thoughts about which statement is better in terms of performance.

Facts:

my_table is a transparent table with 10 records.

The statement is called 5000 times in the program (like a formula in SD Pricing)

1st Statement:

(the table my_table is NOT buffered in SE13)

STATICS: gt_my_table TYPE STANDARD TABLE OF my_table.
DATA:    ls_my_table TYPE my_table.

IF gt_my_table[] IS INITIAL.
  SELECT * FROM my_table INTO TABLE gt_my_table.
ENDIF.
READ TABLE gt_my_table INTO ls_my_table WITH KEY field1 = lv_field1.

2nd Statement:

(in the second statement the table is fully buffered in SE13)

DATA: wa_my_table TYPE my_table.

SELECT SINGLE * FROM my_table INTO wa_my_table
  WHERE field1 = lv_field1.

Thanks in advance

Matías

1 ACCEPTED SOLUTION

yuri_ziryukin
Employee
0 Kudos

Hello Matias,

I would advise combining the 1st and 2nd methods: buffer the table fully in SE13 and, at the same time, access it the way your example 1) does.

By the way, keep in mind the delay in synchronizing the table buffer between application servers. But I guess your table with 10 records does not change often.

Regards,

Yuri
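
A minimal sketch of what the combined approach could look like (my_table and field1 come from the question; the FORM wrapper and the variable names are illustrative assumptions):

* my_table is assumed to be fully buffered in SE13, so even the one
* real SELECT below is served from the table buffer after the first
* access on this application server.
FORM read_my_table USING    iv_field1    TYPE any
                   CHANGING cs_my_table  TYPE my_table.
  STATICS: gt_my_table TYPE STANDARD TABLE OF my_table.

  " Load the table once per program run.
  IF gt_my_table[] IS INITIAL.
    SELECT * FROM my_table INTO TABLE gt_my_table.
  ENDIF.

  " Every call after the first is a pure internal-table read.
  READ TABLE gt_my_table INTO cs_my_table
       WITH KEY field1 = iv_field1.
ENDFORM.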

11 REPLIES

Sandra_Rossi
Active Contributor
0 Kudos

Hi,

Based on rules of thumb from the ADM315 training, reading a buffered table is said to be about 100 times faster than reading from the database. Since you run the SELECT 5000 times, those buffered reads cost roughly the equivalent of 50 database accesses, so doing only 1 SELECT (first solution) could be about 50 times faster.

But buffering could also be useful in the first solution, if the table is accessed many times by many users during the day.

Sandra

yuri_ziryukin
Employee
0 Kudos

Hello Matias,

I would advise combining the 1st and 2nd methods: buffer the table fully in SE13 and, at the same time, access it the way your example 1) does.

By the way, keep in mind the delay in synchronizing the table buffer between application servers. But I guess your table with 10 records does not change often.

Regards,

Yuri

0 Kudos

Hi Yuri,

> I would advise combining the 1st and 2nd methods: buffer the table fully in SE13 and, at the same time, access it the way your example 1) does.

Really? Seriously? That's what I would call double buffering (and generally don't recommend).

I would definitely go for either 1. or 2., depending on how the table is used.

I would put it in the table buffer if other programs use it in the same way and the table can be buffered; use STATICS if the table is used in only one program. Performance-wise, the 1st option should be slightly faster than the table buffer: internal tables can be accessed in the single-digit microsecond range, and a single-record table buffer access could take a little more. In the buffer trace (done in the DBI) it is single-digit microseconds as well, but measured from the ABAP VM (ATRA) it might be a (very) little more.

Kind regards,

Hermann

0 Kudos

Hello Hermann,

why not?

First of all, I would always buffer customizing tables like that in SE13. In the end, who knows what other programs read this table and how often (a where-used list might not work because of dynamic SQL calls). This might not be the only place in the code where the table is used.

Next, I expect reading from the table buffer through the DBI to be a little slower, as you mentioned. Considering that the read may take place 5000 times here, even one extra microsecond per read already adds up to 5 milliseconds.

And you know, sometimes in real-life examples there are even more executions: not 5000 but 500 000. Then the difference is even larger.

I don't think I was recommending a very bad thing in this particular case.

You are certainly right that this recommendation is not a general one and should not be applied in any case. Here I agree with you.

Cheers,

Yuri

0 Kudos

Hi Yuri,

I didn't mean that you recommended a bad thing, not at all.

If we want to squeeze out the last bit of performance, it's absolutely right.

But in general the table buffer should be good enough; that's what I meant.

In customer code I often see internal-table buffers built on top of tables that are already in the table buffer, and I usually recommend not to double-buffer data but to stick to the table buffer, since that is usually good enough.

Kind regards,

Hermann

0 Kudos

Hi Hermann,

> I usually recommend not to double-buffer data but to stick to the table buffer, since that is usually good enough.

Totally agree.

volker_borowski2
Active Contributor
0 Kudos

> Hi experts

> Please let me know your thoughts about which statement is better in terms of performance.

> Facts:

> my_table is a transparent table with 10 records.

> The statement is called 5000 times in the program (like a formula in SD Pricing)

Sorry if I am late to this topic.

If your program runs long and your 10-record table might receive updates during the runtime of the report:

-> definitely use a local table to ensure a consistent dataset throughout the runtime.

If the table is fixed and will not be changed at all -> go SE13 buffered.

my 2 cents (either Euro or USD, both down anyway)

Volker

0 Kudos

Hi to everyone.

I created 3 programs

1st Program: a SELECT on a NON-buffered table (10 records)

REPORT zmat1.

DATA: rtime   TYPE i,
      ls_anul TYPE tab_anul.

SET RUN TIME CLOCK RESOLUTION LOW.

GET RUN TIME FIELD rtime.  " start of measurement

SELECT SINGLE * INTO ls_anul FROM tab_anul
  WHERE einri = 'CMFD'
    AND stoid = '002'
    AND blart = 'RV'.

GET RUN TIME FIELD rtime.  " microseconds since the first call

WRITE rtime.

2nd Program: buffering in the program (internal table)

REPORT zmat2.

DATA: rtime   TYPE i,
      lt_anul TYPE STANDARD TABLE OF tab_anul WITH HEADER LINE.

SET RUN TIME CLOCK RESOLUTION LOW.

" Fill the internal table once; the load is outside the measured interval.
SELECT * INTO TABLE lt_anul FROM tab_anul
  WHERE einri = 'CMFD'
    AND stoid = '002'
    AND blart = 'RV'.

GET RUN TIME FIELD rtime.  " start of measurement

READ TABLE lt_anul WITH KEY einri = 'CMFD' stoid = '002' blart = 'RV'.

GET RUN TIME FIELD rtime.  " microseconds for the READ alone

WRITE rtime.

3rd Program: a SELECT on a FULLY buffered table (10 records)

REPORT zmat3.

DATA: rtime    TYPE i,
      ls_qxtag TYPE tab_qxtag.

SET RUN TIME CLOCK RESOLUTION LOW.

GET RUN TIME FIELD rtime.  " start of measurement

SELECT SINGLE * INTO ls_qxtag FROM tab_qxtag
  WHERE einri = 'CMFD'
    AND tagru = '09-DMBA'.

GET RUN TIME FIELD rtime.  " microseconds since the first call

WRITE rtime.

The results were:

1st Program: from 500 to 1500 microseconds

2nd Program: from 15 to 25 microseconds (only the READ on the internal table)

3rd Program: from 50 to 100 microseconds

So it's up to you, depending on the case. With multiple reads, definitely go for the buffer.

With best regards

Matías

0 Kudos

Matias - you must run each program multiple times and then take the lowest run time from each for comparison.

Rob
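
A minimal sketch of that best-of-N measuring approach (the loop count, report name, and variable names are illustrative; tab_anul and the WHERE values are taken from the programs above):

REPORT zmat_min.  " hypothetical report name

DATA: rtime   TYPE i,
      t_start TYPE i,
      t_min   TYPE i VALUE 2147483647,  " start at max int
      ls_anul TYPE tab_anul.

SET RUN TIME CLOCK RESOLUTION LOW.

" Repeat the measurement and keep the minimum; this filters out
" one-off effects such as the first, uncached database access.
DO 100 TIMES.
  GET RUN TIME FIELD t_start.
  SELECT SINGLE * INTO ls_anul FROM tab_anul
    WHERE einri = 'CMFD'
      AND stoid = '002'
      AND blart = 'RV'.
  GET RUN TIME FIELD rtime.
  rtime = rtime - t_start.
  IF rtime < t_min.
    t_min = rtime.
  ENDIF.
ENDDO.

WRITE: / 'Best of 100 runs (microseconds):', t_min.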

former_member194613
Active Contributor
0 Kudos

In recent releases and on current hardware you will see that the buffer is quite fast, i.e. in the range of a few microseconds. But internal tables are of course still faster (after all, it is your local memory)!

However, I find the task a bit strange: 5000 accesses to a table with only 10 records looks very unbalanced. Usually I would expect only a factor of 10, i.e. about 100 accesses. Maybe there is much more to save if you question your logic and manage to reduce the number of accesses. Not doing something is always the best performance optimization!

And for any internal table used as a buffer, you should use a sorted table rather than a standard table. Sooner or later the circumstances change and you have 100 or more records in the internal table. Then the runtime evolves in the direction of a DB SELECT and can become even slower if the table contains several thousand records.
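
A minimal sketch of the sorted-table variant (my_table and field1 come from the original question; treating field1 as a unique key is an assumption):

STATICS: gt_my_table TYPE SORTED TABLE OF my_table
                     WITH UNIQUE KEY field1.
DATA:    ls_my_table TYPE my_table.

IF gt_my_table IS INITIAL.
  SELECT * FROM my_table INTO TABLE gt_my_table.
ENDIF.

" A key access on a sorted table is a binary search, so the lookup
" stays fast even if the table grows to thousands of rows.
READ TABLE gt_my_table INTO ls_my_table
     WITH TABLE KEY field1 = lv_field1.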

0 Kudos

> Not doing something is always the best performance optimization!

Not related to this particular discussion, but just some fun for the weekend.

We had a customer that requested performance optimization for the top 10 batch jobs in their system.

The first job in the list was somewhat strange from the functional point of view. We asked the customer what it was doing. They checked with each and every application department and all internal documents, and finally found out that nobody needed this job. It had been created years ago but was no longer relevant.

So they simply deleted the job from the scheduler: a 100% performance improvement!