Application Development Discussions
Join the discussions or start your own on all things application development, including tools and APIs, programming models, and keeping your skills sharp.

Run Time Analysis And Performance Tuning.

Former Member
0 Kudos

Hi All,

What is the main difference between runtime analysis and performance tuning?

Thanks in Advance.

Regards,

Ramana Prasad

3 REPLIES

Former Member
0 Kudos

Runtime analysis is used to find out how a program performs, how your SQL statements behave, and so on.

Performance tuning, on the other hand, is about improving the code that has been written, so that the program performs better.

AjithC
Employee
0 Kudos

Hi Ramana,

Runtime analysis is done to measure the performance of a program in terms of CPU time, response time, memory consumption, RFC calls, database accesses, and so on. Transaction STAD is used for this.

Performance tuning means optimizing your program. You first perform a runtime analysis to check the performance, and then decide, based on the results, whether to optimize. SQL and RFC optimization can be done using the traces in transaction ST05, and memory snapshots can be used for memory optimization.

Regards

Ajith Chandran

Former Member
0 Kudos

Hi,

You can analyze a report's performance with SE30 (runtime analysis) and with the SQL trace (ST05). ST05 shows you the list of SQL statements that were executed.

Keep the following points in mind when tuning your code:

- Use the GET RUN TIME command to help evaluate performance. It is hard to know whether an optimization technique really helps unless you measure it; this tool tells you what is effective, and under what conditions. GET RUN TIME has problems on systems with multiple CPUs, so use it to test small pieces of your program rather than the whole program (see the measurement sketch after this list).

- Generally, try to reduce I/O first, then memory, then CPU activity. I/O operations that read from or write to the hard disk are the most expensive operations. Memory, if not controlled, may have to be written to swap space on the hard disk, which in turn increases your disk I/O. CPU activity can be reduced by careful program design and by using commands such as SUM (SQL) and COLLECT (ABAP).

- Avoid SELECT *, especially for tables that have a lot of fields. Use SELECT A B C INTO instead, so that fields are only read if they are used. This can make a very big difference.

- Field-groups can be useful for multi-level sorting and displaying. However, they write their data to the system's paging space rather than to memory (internal tables use memory). For this reason, field-groups are only appropriate for processing large lists (e.g. over 50,000 records). If you have large lists, work with the system administrator to decide the maximum amount of RAM your program should use and, from that, calculate how much space your lists will use. Then you can decide whether to write the data to memory or to swap space.

- Use as many table key fields as possible in the WHERE clause of your SELECT statements.

- Whenever possible, design the program to access a relatively constant number of records. For instance, if you only access the transactions for one month, there will probably be a reasonable range, like 1200-1800, for the number of transactions entered within that month. Then use a SELECT A B C INTO TABLE ITAB statement.

- Get a good idea of how many records you will be accessing. Log on to your productive system, use SE80 -> Dictionary Objects (press Edit), enter the table name you want to see, and press Display. Then go to Utilities -> Table Contents to query the table contents and see the number of records. This is extremely useful for optimizing a program's memory allocation.

- Try to design the user interface so that the program gradually unfolds more information to the user, rather than presenting a huge list of information all at once.

- Declare your internal tables using OCCURS NUM_RECS, where NUM_RECS is the number of records you expect to be accessing. If the number of records exceeds NUM_RECS, the data will be kept in swap space rather than in memory (see the sketch after this list).

- Use SELECT A B C INTO TABLE ITAB whenever possible. This reads all of the records into the internal table in one operation, rather than the repeated operations that result from a SELECT A B C INTO ITAB ... ENDSELECT loop. Make sure that ITAB is declared with OCCURS NUM_RECS, where NUM_RECS is the number of records you expect to access.

- If the number of records you are reading is constantly growing, you may be able to break the read into chunks of relatively constant size. For instance, if you have to read all records from 1991 to the present, you can break the read into quarters and read the records one quarter at a time. This reduces I/O operations. Test extensively with GET RUN TIME when using this method (see the sketch after this list).

- Know how to use the COLLECT command. It can be very efficient (a short sketch follows this list).

- Use the SELECT SINGLE command whenever possible.

- Many tables contain totals fields (such as monthly expense totals). Use these to avoid wasting resources by recalculating a total that has already been calculated and stored.

- Try to avoid joining more than two tables.
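
A few minimal sketches of the points above. The table names (VBAK, MSEG) and field selections are only illustrative assumptions, not part of the original tips.

Measuring a small piece of code with GET RUN TIME (the value is returned in microseconds):

DATA: t1    TYPE i,
      t2    TYPE i,
      rtime TYPE i.

GET RUN TIME FIELD t1.
* ... statements to be measured ...
GET RUN TIME FIELD t2.
rtime = t2 - t1.
WRITE: / 'Runtime in microseconds:', rtime.

Declaring an internal table with OCCURS and reading it one quarter at a time (assuming sales orders in VBAK, selected by creation date):

DATA: BEGIN OF itab OCCURS 1500,
        vbeln LIKE vbak-vbeln,
        erdat LIKE vbak-erdat,
      END OF itab.

SELECT vbeln erdat FROM vbak
  INTO TABLE itab
  WHERE erdat BETWEEN '19980101' AND '19980331'.
* ... process this quarter, then repeat with the next date range ...

Summing with COLLECT (lines with identical key fields are added up instead of appended; the MSEG fields are just an example):

DATA: BEGIN OF itab_mseg OCCURS 0,
        matnr LIKE mseg-matnr,
        menge LIKE mseg-menge,
      END OF itab_mseg.

DATA: BEGIN OF sum_tab OCCURS 100,
        matnr LIKE mseg-matnr,
        menge LIKE mseg-menge,
      END OF sum_tab.

* itab_mseg is assumed to have been filled, e.g. from table MSEG.
LOOP AT itab_mseg.
  sum_tab-matnr = itab_mseg-matnr.
  sum_tab-menge = itab_mseg-menge.
  COLLECT sum_tab.
ENDLOOP.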

FOR ALL ENTRIES

FOR ALL ENTRIES creates a WHERE clause in which all the entries in the driver table are combined with OR. If the number of entries in the driver table is larger than rsdb/max_blocking_factor, several similar SQL statements are executed to limit the length of the WHERE clause.

The plus:
- Large amounts of data
- Mixing processing and reading of data
- Fast internal reprocessing of data
- Fast

The minus:
- Difficult to program/understand
- Memory could be critical (use FREE or PACKAGE SIZE)

Some steps that might make FOR ALL ENTRIES more efficient:
- Remove duplicates from the driver table
- Sort the driver table
- If possible, convert the data in the driver table to ranges so that a BETWEEN comparison is used instead of an OR:

FOR ALL ENTRIES IN i_tab
  WHERE mykey >= i_tab-low AND
        mykey <= i_tab-high.
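
A more complete sketch of this pattern, with the driver table prepared as described above (VBAK/VBAP and the selected fields are illustrative assumptions; the check for an empty driver table matters because FOR ALL ENTRIES with an empty driver table selects all rows):

DATA: BEGIN OF i_vbak OCCURS 0,
        vbeln LIKE vbak-vbeln,
      END OF i_vbak.

DATA: BEGIN OF i_vbap OCCURS 0,
        vbeln LIKE vbap-vbeln,
        posnr LIKE vbap-posnr,
        matnr LIKE vbap-matnr,
      END OF i_vbap.

* Fill the driver table.
SELECT vbeln FROM vbak INTO TABLE i_vbak
  WHERE erdat BETWEEN '19980101' AND '19980331'.

* Prepare the driver table: sort it and remove duplicates.
SORT i_vbak BY vbeln.
DELETE ADJACENT DUPLICATES FROM i_vbak COMPARING vbeln.

* Never execute FOR ALL ENTRIES with an empty driver table.
IF NOT i_vbak[] IS INITIAL.
  SELECT vbeln posnr matnr FROM vbap
    INTO TABLE i_vbap
    FOR ALL ENTRIES IN i_vbak
    WHERE vbeln = i_vbak-vbeln.
ENDIF.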

Nested selects

The plus:
- Small amounts of data
- Mixing processing and reading of data
- Easy to code and understand

The minus:
- Large amounts of data
- When mixed processing isn't needed
- Performance killer no. 1

Select using JOINs

The plus:
- Very large amounts of data
- Similar to nested selects, when the accesses are planned by the programmer
- In some cases the fastest
- Not so memory critical

The minus:
- Very difficult to program/understand
- Mixing processing and reading of data not possible
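
A minimal sketch of such a join (again using the illustrative VBAK/VBAP assumption from above):

DATA: BEGIN OF i_join OCCURS 0,
        vbeln LIKE vbak-vbeln,
        erdat LIKE vbak-erdat,
        posnr LIKE vbap-posnr,
        matnr LIKE vbap-matnr,
      END OF i_join.

* Header and item data are read in a single database access.
SELECT a~vbeln a~erdat b~posnr b~matnr
  INTO TABLE i_join
  FROM vbak AS a INNER JOIN vbap AS b
    ON a~vbeln = b~vbeln
  WHERE a~erdat BETWEEN '19980101' AND '19980331'.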

Use the selection criteria

Slower:

SELECT * FROM SBOOK.
  CHECK: SBOOK-CARRID = 'LH' AND
         SBOOK-CONNID = '0400'.
ENDSELECT.

Faster:

SELECT * FROM SBOOK
  WHERE CARRID = 'LH' AND
        CONNID = '0400'.
ENDSELECT.

Use the aggregate functions

Slower:

C4A = '000'.
SELECT * FROM T100
  WHERE SPRSL = 'D' AND
        ARBGB = '00'.
  CHECK: T100-MSGNR > C4A.
  C4A = T100-MSGNR.
ENDSELECT.

Faster:

SELECT MAX( MSGNR ) FROM T100 INTO C4A
  WHERE SPRSL = 'D' AND
        ARBGB = '00'.

Select with view

Slower:

SELECT * FROM DD01L
  WHERE DOMNAME LIKE 'CHAR%'
    AND AS4LOCAL = 'A'.
  SELECT SINGLE * FROM DD01T
    WHERE DOMNAME    = DD01L-DOMNAME
      AND AS4LOCAL   = 'A'
      AND AS4VERS    = DD01L-AS4VERS
      AND DDLANGUAGE = SY-LANGU.
ENDSELECT.

Faster:

SELECT * FROM DD01V
  WHERE DOMNAME LIKE 'CHAR%'
    AND DDLANGUAGE = SY-LANGU.
ENDSELECT.

Select with index support

Slower:

SELECT * FROM T100
  WHERE ARBGB = '00'
    AND MSGNR = '999'.
ENDSELECT.

Faster:

SELECT * FROM T002.
  SELECT * FROM T100
    WHERE SPRSL = T002-SPRAS
      AND ARBGB = '00'
      AND MSGNR = '999'.
  ENDSELECT.
ENDSELECT.

Select … INTO TABLE

Slower:

REFRESH X006.
SELECT * FROM T006 INTO X006.
  APPEND X006.
ENDSELECT.

Faster:

SELECT * FROM T006 INTO TABLE X006.

Select with selection list

Slower:

SELECT * FROM DD01L
  WHERE DOMNAME LIKE 'CHAR%'
    AND AS4LOCAL = 'A'.
ENDSELECT.

Faster:

SELECT DOMNAME FROM DD01L
  INTO DD01L-DOMNAME
  WHERE DOMNAME LIKE 'CHAR%'
    AND AS4LOCAL = 'A'.
ENDSELECT.

Key access to multiple lines

Slower:

LOOP AT TAB.
  CHECK TAB-K = KVAL.
  " ...
ENDLOOP.

Faster:

LOOP AT TAB WHERE K = KVAL.
  " ...
ENDLOOP.

Copying internal tables

Slower:

REFRESH TAB_DEST.
LOOP AT TAB_SRC INTO TAB_DEST.
  APPEND TAB_DEST.
ENDLOOP.

Faster:

TAB_DEST[] = TAB_SRC[].

Modifying a set of lines

Slower:

LOOP AT TAB.
  IF TAB-FLAG IS INITIAL.
    TAB-FLAG = 'X'.
  ENDIF.
  MODIFY TAB.
ENDLOOP.

Faster:

TAB-FLAG = 'X'.
MODIFY TAB TRANSPORTING FLAG
  WHERE FLAG IS INITIAL.

Deleting a sequence of lines

Slower:

DO 101 TIMES.
  DELETE TAB_DEST INDEX 450.
ENDDO.

Faster:

DELETE TAB_DEST FROM 450 TO 550.

Linear search vs. binary search

Slower:

READ TABLE TAB WITH KEY K = 'X'.

Faster (the table must be sorted by K):

READ TABLE TAB WITH KEY K = 'X' BINARY SEARCH.

Comparison of internal tables

Slower:

DESCRIBE TABLE: TAB1 LINES L1,
                TAB2 LINES L2.
IF L1 <> L2.
  TAB_DIFFERENT = 'X'.
ELSE.
  TAB_DIFFERENT = SPACE.
  LOOP AT TAB1.
    READ TABLE TAB2 INDEX SY-TABIX.
    IF TAB1 <> TAB2.
      TAB_DIFFERENT = 'X'. EXIT.
    ENDIF.
  ENDLOOP.
ENDIF.
IF TAB_DIFFERENT = SPACE.
  " ...
ENDIF.

Faster:

IF TAB1[] = TAB2[].
  " ...
ENDIF.

Modify selected components

Slower:

LOOP AT TAB.
  TAB-DATE = SY-DATUM.
  MODIFY TAB.
ENDLOOP.

Faster:

WA-DATE = SY-DATUM.
LOOP AT TAB.
  MODIFY TAB FROM WA TRANSPORTING DATE.
ENDLOOP.

Appending two internal tables

Slower:

LOOP AT TAB_SRC.
  APPEND TAB_SRC TO TAB_DEST.
ENDLOOP.

Faster:

APPEND LINES OF TAB_SRC TO TAB_DEST.

Deleting a set of lines

Slower:

LOOP AT TAB_DEST WHERE K = KVAL.
  DELETE TAB_DEST.
ENDLOOP.

Faster:

DELETE TAB_DEST WHERE K = KVAL.

Tools available in SAP to pinpoint a performance problem:

- Runtime analysis (SE30)
- SQL trace (ST05)
- Tips and Tricks tool
- The performance database

Optimizing the load of the database

Using table buffering

Using buffered tables improves performance considerably. Note that some statements cannot be served from a buffered table; when such a statement is used, the buffer is bypassed. These statements are:

- SELECT DISTINCT
- ORDER BY / GROUP BY / HAVING clauses
- Any WHERE clause that contains a subquery or an IS NULL expression
- JOINs
- SELECT ... FOR UPDATE

If you want to explicitly bypass the buffer, use the BYPASSING BUFFER addition in the SELECT statement.
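
A minimal sketch, using T100 only as an example of a buffered table:

TABLES: T100.

* Read the current database state, bypassing the table buffer.
SELECT SINGLE * FROM T100 BYPASSING BUFFER
  WHERE SPRSL = SY-LANGU
    AND ARBGB = '00'
    AND MSGNR = '999'.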

Use the ABAP SORT Statement Instead of ORDER BY

The ORDER BY clause is executed on the database server, while the ABAP SORT statement is executed on the application server. The database server will usually be the bottleneck, so it is sometimes better to move the sort from the database server to the application server.

If you are not sorting by the primary key (i.e. not using ORDER BY PRIMARY KEY) but by another key, it can be better to use the ABAP SORT statement to sort the data in an internal table. Note, however, that for very large result sets this may not be feasible, and you would then want to let the database server do the sorting.
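
A minimal sketch of this pattern (VBAK and the field list are illustrative assumptions): read without ORDER BY, then sort on the application server:

DATA: BEGIN OF i_orders OCCURS 0,
        vbeln LIKE vbak-vbeln,
        erdat LIKE vbak-erdat,
        kunnr LIKE vbak-kunnr,
      END OF i_orders.

* No ORDER BY on the database ...
SELECT vbeln erdat kunnr FROM vbak
  INTO TABLE i_orders
  WHERE erdat BETWEEN '19980101' AND '19980331'.

* ... the sort runs on the application server instead.
SORT i_orders BY kunnr erdat.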

Avoid the SELECT DISTINCT Statement

As with the ORDER BY clause, it can be better to avoid SELECT DISTINCT if some of the fields are not part of an index. Instead, use ABAP SORT plus DELETE ADJACENT DUPLICATES on an internal table to delete duplicate rows.
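
A minimal sketch of the replacement (again with illustrative table and field names):

DATA: BEGIN OF i_cust OCCURS 0,
        kunnr LIKE vbak-kunnr,
      END OF i_cust.

* Instead of SELECT DISTINCT kunnr FROM vbak ...
SELECT kunnr FROM vbak INTO TABLE i_cust
  WHERE erdat BETWEEN '19980101' AND '19980331'.

SORT i_cust BY kunnr.
DELETE ADJACENT DUPLICATES FROM i_cust COMPARING kunnr.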

Regards

Sudheer