Application Development Discussions
Join the discussions or start your own on all things application development, including tools and APIs, programming models, and keeping your skills sharp.

Buffer trace list shows only variables of executed stmts

Former Member

Hello,

I switched on an SQL trace and a buffer trace.

If I use both, all SQL statements targeting buffered tables are logged.

I left the trace on for some time and then took a look at the list...

For most statements executed on buffered tables (appearing with a blue background), only the OPEN statements appear. Most of the corresponding PREPARE statements that precede the OPEN statements do not appear. If I want to reuse these statements, I will either have to use or develop an SQL parser (I hope one already exists), or search the trace for the PREPARE statement that corresponds to each OPEN statement to find out what the full statement looks like...

Moreover, each OPEN statement shows only the variables or parameters, not the whole SQL statement!

How can I see the corresponding SQL statements?

Thanks very much...

9 REPLIES

christian_wohlfahrt
Active Contributor

Hi Holger!

A general rule is: use fields in the same order as they are defined for the database table in SE11.

It also makes more sense to avoid redundant selects altogether than to merely keep them identical within one program (two programs is of course a different situation).

But otherwise, individually restricting the set of selected fields (no SELECT *) and using the most restrictive WHERE clause possible (including as many index fields as possible) bring much greater performance advantages than keeping select statements identical just to avoid a PREPARE (which costs about 300 microseconds, more than I expected - still, every fetch costs at least as much).

If you want to help many different developers use identical (optimized) select statements: write a function module (or class method) to access the tables. Then the SQL coding is centralized and everyone uses the identical access path.
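A minimal sketch of such a centralized wrapper (the table T005 and all names here are my own illustration, not something from this thread):

* Hypothetical local class centralizing access to a buffered table.
* T005 (countries) is only an example - adapt table and key fields.
CLASS lcl_country_access DEFINITION.
  PUBLIC SECTION.
    CLASS-METHODS read_country
      IMPORTING iv_land1       TYPE land1
      RETURNING VALUE(rs_t005) TYPE t005.
ENDCLASS.

CLASS lcl_country_access IMPLEMENTATION.
  METHOD read_country.
    " One optimized access path: every caller reuses this statement,
    " so the table buffer is hit consistently and only one PREPARE
    " is ever needed for it. The single-record read with the full key
    " is served from the buffer.
    SELECT SINGLE * FROM t005
      INTO rs_t005
      WHERE land1 = iv_land1.
  ENDMETHOD.
ENDCLASS.

Callers then write lcl_country_access=>read_country( 'DE' ) instead of coding their own SELECT, so every program goes through the same statement.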

To identify the SQL statements: press F5 (Display all positions in ABAP programs) on an OPEN statement in the trace. Then you will see the statement clearly. (At least in my system this works and jumps to the correct position in the program.)

Regards,

Christian


Thanks for answering so fast...

F5 works. But I would like to see the original SQL statement for each OPEN statement, so I would have to look manually in the corresponding program, for each OPEN statement, to see what the original SQL statement looks like...

For 10,000 statements, it is impossible to do that manually...

Do you have an idea how to automatically get the corresponding SQL statement for each OPEN statement without looking into each program?

Thanks...


Have you checked ST04?

Click on 'Detail analysis menu' and then on the 'SQL Request' button to analyse the shared cursor buffer.

There you cannot see each OPEN statement, but you will get a better overall view of how effective the selects are.

Christian


Hi!

Looks like you really want to put some effort into optimizing a lot of programs.

So this way might be an option for you:

Debug the SQL trace display: the necessary information is stored in the internal table 'buf_trace_tab'. Save this table 'as Excel' (CTRL+F11).

Make an extract with the filter 'OPEN' on the operation field. Then write a report; use the coding of form 'LOAD_REPORT' in program 'SAPLSSQ0' as an example.

After LOAD_REPORT you can find the source code include in RCONT_TAB-SOURCE. Instead of CALL FUNCTION 'editor_program', you can use READ REPORT RCONT_TAB-SOURCE INTO itab.

Loop starting at CURSORLINE and print the lines (until the next '.').

This should produce a list where all the selects are displayed.
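A minimal sketch of that extraction loop (my own variable names; it assumes the OPEN entries with include name and cursor line were already exported from buf_trace_tab):

* Hypothetical sketch: print the source of one traced OPEN statement.
* lv_include / lv_cursorline would come from the exported trace record.
DATA: lv_include    TYPE sy-repid,
      lv_cursorline TYPE i,
      lt_source     TYPE STANDARD TABLE OF string,
      lv_line       TYPE string.

READ REPORT lv_include INTO lt_source.
IF sy-subrc = 0.
  " Print from the traced line until the statement's closing period
  " (a rough heuristic, as described above).
  LOOP AT lt_source INTO lv_line FROM lv_cursorline.
    WRITE / lv_line.
    IF lv_line CS '.'.
      EXIT.
    ENDIF.
  ENDLOOP.
ENDIF.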

With a little more filtering and sorting, this might even become useful.

Of course you can also download this into a file -> more analysis can be done with Excel / Access (/OpenOffice).

Regards,

Christian


Thanks for your answers, Christian and Christian

I do need all SQL statements that were executed on buffered tables (valid or invalid is unimportant). I would like to take these statements and execute them in another system for test purposes. In that system I would adjust the buffer synchronisation time and some other parameters, and check what effect the different parameters have on the bytes transferred between the database server and the application server.

Therefore I think a snapshot of the executed SQL statements would be the best choice...

Do you think this is a good way to do it?


Hi Holger!

No, that does not sound good. Normally some programs (a job list of daily tasks) or transactions (CATT for article changes, order creation, ...) are executed to produce workload in a system.

If you gather some (or a lot of) SQL statements, put them into a test report and execute that, you have a (totally?) different situation. E.g. no further internal tables are built up -> no additional memory consumption -> fewer memory space problems.

Also, the time between accesses is different -> tuning of the buffer synchronization becomes artificial (but a simple rule: the higher the number of seconds, the fewer refreshes and re-reads from the DB).

I guess the 'other parameters' are linked to the RAM settings of the application server. Especially here, real programs cause a different situation than plain SQL execution. Also, DB changes produce additional workload on the DB - so tuning has to find a different balance between DB and application server load than in a purely 'read' system.

E.g. number range buffering or enqueue problems won't be analyzed with your approach - but why spend time and work if your problem might be in a different corner?

I had several customers with critical performance problems (before I came, of course), but the PREPARE of SQL statements was never the issue.

You should identify the bottlenecks and concentrate your analysis on the slow parts - otherwise you might chase improvements in an area that accounts for only 10% of the runtime anyway; a 50% improvement there can't be noticed if the other 90% stay just as slow.

Pure (intensive) system tuning will seldom help - of course some 'basic' stuff should be tuned, but soon the greater part is likely to be changing SQL statements (using indices) and the process design in programs (switching from SELECT ... ENDSELECT to SELECT ... INTO TABLE and similar, as sketched below).
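For illustration, the kind of rewrite meant here (SFLIGHT is just the standard demo table, my example):

DATA: ls_flight  TYPE sflight,
      lt_flights TYPE STANDARD TABLE OF sflight.

* Row by row: the database cursor stays open for the whole loop.
SELECT carrid connid fldate FROM sflight
    INTO CORRESPONDING FIELDS OF ls_flight
    WHERE carrid = 'LH'.
  " ... process ls_flight ...
ENDSELECT.

* Array fetch: one bulk transfer, then purely in-memory processing.
SELECT carrid connid fldate FROM sflight
    INTO CORRESPONDING FIELDS OF TABLE lt_flights
    WHERE carrid = 'LH'.
LOOP AT lt_flights INTO ls_flight.
  " ... process ls_flight ...
ENDLOOP.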

But general advice is difficult - maybe your situation looks different.

Regards,

Christian


Hello Christian,

thanks for your helpful answer.

The traced SQL statements would not be that representative, especially for other systems...

You are right that the SQL statements must be checked for their use of indices, and the programs for coding that performs well...

But I would like to know how much the system load can be decreased just by concentrating on buffered tables - how much performance gain is possible here. I think the important factor, or metric, is the number of bytes transferred between the database and the application server. In most database systems this is the bottleneck...

The synchronisation time and the buffering type are the only parameters I could adjust in the system. Moreover, I can decide whether to buffer a table at all. These three factors influence how many bytes must be transferred between the database and the application server.

Now I would like to run some test cases.

First I would adjust the parameter 'synchronisation time'. Then I would execute some representative SQL statements on generically, fully and single-record buffered tables, and look at how much data was transferred. An example: a fully buffered table is synchronized every 60 seconds and written every 59 seconds (say). If it is then read twice - once after each change and synchronisation - two reloads are needed. If the synchronisation time is 120 seconds instead, just one reload is done and less bandwidth is consumed - but the probability of reading invalid (stale) data gets higher!
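Rough numbers for this example (my own back-of-the-envelope, assuming at least one write and one read fall into every sync interval):

reloads per hour ~ 3600 s / T_sync
T_sync = 60 s  -> ~60 reloads/hour
T_sync = 120 s -> ~30 reloads/hour

So doubling the synchronisation time halves the reload traffic, but a read can then be up to T_sync seconds stale.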

This test case is more interesting for single-record and generically buffered tables, because there only individual records are invalidated. The synchronisation time could be shorter, because when data is invalid, only the affected region or records are reloaded. This does not consume as much bandwidth as with fully buffered tables...

The final question is: what would it bring if I could use different synchronisation times for generically, fully and single-record buffered tables?

I know that this is not possible, but I would still like to know...

Moreover, I do not know the reasons why SAP decided to distinguish between three types of buffering, why the synchronisation process exists, and all that...

I never found the reasons anywhere...

For such a test case, I need representative SQL statements. Do you think I should run a benchmark or anything like that?

I am only interested in transferred bytes, not in request response times (which are hard to measure).

Thanks for your help in advance,

Holger


Hi Holger!

I would take a more theoretical approach - you won't get much more information from a lot of artificial tests. You put too many assumptions into them (e.g. a change every 59 seconds) to get independent results (objective, not subjective).

There are some additional parameters / reasons:

Why the difference between full / generic / single buffering? There are several effects: with full buffering you use the most buffer space and cause the most data transfer in case of invalidation, but need the least memory for validity bookkeeping. Generic buffering is the middle way; with single-record buffering you use the least buffer memory and cause the least data transfer on invalidation, but need the most memory for validity bookkeeping.

So the overhead is crucial for the single-record buffer strategy. This bookkeeping part is exchanged every minute (or whatever interval you maintain), so here more harm than good can result. It is only useful for very large tables where normally just a small number of entries (not describable generically) are used often. Sorry, I currently can't think of an example.

Generic buffering might help in the case of several clients or sales organizations. If one group of users only accesses one sales organization, then a separate logon group with its own application server will buffer ('fully') just the needed sales organization.

If you are thinking about change periods below 5 minutes, don't buffer. Your hit/miss ratio will be too bad; there will be other tables with a better ratio that would like to be buffered first.

The buffer synchronization check is executed in any case: read or no read, write or no write. This produces a constant workload (data transfer). Not least because of this I would choose 1 or 2 minutes - anything else is philosophical anyway.

The typical 'I change something and immediately jump into the transaction again to check the booking' user normally books on the same application server as his dialog process. Only seldom is the update done remotely on other servers - but then, with such a heavy workload, short synchronization times are forbidden anyway.

Also, grouping users into areas like FI, MM, SD, Logistics and distributing them across different application servers will bring much more effect than fine-tuning a few seconds. Then invalidation due to insufficient buffer memory won't happen as often as with mixed user groups per server.

Assigning the (at least in earlier times) limited total buffer space to the most important tables is an interesting part. Also, the decision fully / generically / not at all buffered, based on change probability and memory consumption, has more (visible) effect than I expect from your tuning.

Also, some business knowledge might be important: one bunch of tables might be changed heavily in January (because of new contracts...) and then stay unchanged for the rest of the year - don't make your analysis in January without this background...

So in summary: your approach still looks very academic. Just have a look at the typical analysis tools (ST04, SM50, SM66, ...) and start with the bottlenecks. Normally even simple rules (no SELECT *) are broken and cause much bigger problems than something like the synchronization time.

I hope I could shed light on a few more aspects,

regards,

Christian


Hi Christian,

my approach is indeed rather academic.

First of all, I tried to find out what the benefit of buffering a table actually is. I tried to develop a cost-benefit model that determines the benefit, or rather the value, of a buffered table - an equation comprising all relevant parameters:

• Table size

• Number of SELECT and SELECT SINGLE statements executed on buffer

• Number of all executed SELECT and SELECT SINGLE statements

• Number of DELETE, INSERT and UPDATE statements

• Number of transferred records or rather transferred data volume

• Type of buffering

• Time of the most recent access

• Growth and expected size of the table

• Number and pattern of near-future accesses

The cost-benefit model (equation) would say how valuable it is to buffer a table. A big problem is the weighting of the parameters within the equation. An even bigger problem remains: verifying the cost-benefit model with its weighted parameters. With such a model we could evict inefficient tables from the buffer in order to increase performance.
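Purely as an illustration of what such an equation could look like (every term and weight here is my own assumption, not a validated model), using only the parameters listed above:

V = w1 * (N_buf / N_sel) * B_saved  -  w2 * N_mod * S_reload  -  w3 * (S_table / S_buffer)

where N_buf / N_sel is the fraction of selects served from the buffer, B_saved the bytes saved per buffered read, N_mod the number of DELETE/INSERT/UPDATE statements (each triggering a reload of roughly S_reload bytes), and the last term penalizes the buffer space the table occupies. One would buffer the table only while V > 0; choosing the weights w1..w3 is exactly the open problem mentioned above.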

Another approach is to determine how many bytes must have been transferred from the database to the application server for the buffered table (due to buffer-bypassing select statements and invalidations). Then we measure how many bytes would have been transferred if the table had been held in the database only.

If the first byte count exceeds the second, it is not good to buffer the table.
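Stated as a simple decision rule (just the sentence above in symbols; the labels are mine):

buffer the table only if  B_bypass + B_reload  <  B_db_only

where B_bypass and B_reload are the bytes transferred for the buffered table (bypassing selects plus invalidation reloads) and B_db_only the bytes that would have been transferred without buffering.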

Again, this would let us evict inefficient tables from the buffer in order to increase performance.

Next I asked myself whether it makes sense to adjust the parameters concerning buffering - sense in terms of a significant performance increase. The only parameters I can adjust are the sync time and the buffering type. Adjusting these parameters would be a more general approach than the other two.

What I would also like to know is whether the whole sync process and the buffering types (as established by SAP) should have been organized differently, or whether anything in the overall process concerning buffering types and synchronisation could be changed...

What do you think....?

Thanks very much in advance for any suggestions...

Holger