Application Development Discussions
Join the discussions or start your own on all things application development, including tools and APIs, programming models, and keeping your skills sharp.

Native SQL performance issue

Former Member

Hello Experts,

I'm getting a timeout when executing a process chain. As far as I can see, the problem is caused by native SQL.

Here is where the dump is generated.

-->call function 'DB_COMMIT'

  FUNCTION DB_COMMIT.
*"----------------------------------------------------------------------
*"  Lokale Schnittstelle (local interface): none
*"----------------------------------------------------------------------
    EXEC SQL.
      COMMIT WORK
    ENDEXEC.
  ENDFUNCTION.

Is native SQL slower than Open SQL? How can I resolve this issue?

Thanks in advance and regards!!!!


13 Replies

Former Member

I ran a test a number of years ago that showed that native SQL was a bit slower than open SQL, but I think a COMMIT is just a COMMIT, wherever it is done.

How many database records will be changed when this COMMIT is executed?

Rob


Hi Rob,

There's only 1 record to be changed.

This is what I have before the commit statement

  exec sql.
    INSERT INTO "/BIC/DICCCB_C044"
      ( "DIMID", "SID_0CLEAR_DATE", "SID_0REASON_CDE", "SID_0PYMT_METH",
        "SID_0PMNT_BLOCK", "SID_0PMNTTRMS", "SID_0COUNTRY", "SID_C_MCOMP",
        "SID_0CLR_DOC_NO", "SID_0DOC_NUMBER", "SID_C_ZUONR", "SID_C_CBSGTXT",
        "SID_C_ANOCOMP", "SID_C_MESCOMP", "SID_0DCINDIC" )
      VALUES
      ( :L_S_D4-DIMID, :L_S_D4-SID_0CLEAR_DATE, :L_S_D4-SID_0REASON_CDE,
        :L_S_D4-SID_0PYMT_METH, :L_S_D4-SID_0PMNT_BLOCK, :L_S_D4-SID_0PMNTTRMS,
        :L_S_D4-SID_0COUNTRY, :L_S_D4-SID_C_MCOMP, :L_S_D4-SID_0CLR_DOC_NO,
        :L_S_D4-SID_0DOC_NUMBER, :L_S_D4-SID_C_ZUONR, :L_S_D4-SID_C_CBSGTXT,
        :L_S_D4-SID_C_ANOCOMP, :L_S_D4-SID_C_MESCOMP, :L_S_D4-SID_0DCINDIC )
  endexec.

Thanks!!!

Former Member

Then I doubt if the COMMIT is the problem. Can you post the portion of the dump that gives both the error and the code where it dumps?

Rob


Rob,

this is the dump info:

Runtime error TIME_OUT (original: "Erro tempo execução TIME_OUT")

What happened?
    The program "GP479JAS7VRSIRBLRO7A78XQDQV" has exceeded the maximum
    permitted runtime without interruption, and has therefore been terminated.

Information on where terminated
    The termination occurred in the ABAP program "GP479JAS7VRSIRBLRO7A78XQDQV"
    in "GET_DIMID_D4". The main program was "SAPMSSY1".
    The termination occurred in line 3869 of the source code of the (Include)
    program "GP479JAS7VRSIRBLRO7A78XQDQV" (when calling the editor 38690).

3858       :L_S_D4-SID_C_CBSGTXT
3859     ,
3860       :L_S_D4-SID_C_ANOCOMP
3861     ,
3862       :L_S_D4-SID_C_MESCOMP
3863     ,
3864       :L_S_D4-SID_0DCINDIC
3865     )
3866
3867   endexec.
3868   c_rc = sy-subrc.
=====> call function 'DB_COMMIT'.  "geändert (changed) MKKP 8
3870   exec sql. set connection default endexec.
3871   IF c_RC = 4.
3872     clear c_rc.
3873     SELECT DIMID FROM /BIC/DICCCB_C044
3874       up to 1 rows
3875       INTO L_S_D4-DIMID WHERE
3876       SID_0CLEAR_DATE =
3877       L_S_D4-SID_0CLEAR_DATE
3878       AND
3879       SID_0REASON_CDE =
3880       L_S_D4-SID_0REASON_CDE
3881       AND
3882       SID_0PYMT_METH =

Edited by: Rob Burbank on Jul 21, 2010 3:04 PM


Well, I have no idea. I guess it could be some sort of database connection problem. If you don't get any positive response today, bump this to the top again tomorrow. There are a number of people who follow this forum who know a lot more about databases than I do.

(This isn't being done in a loop, is it?)

Rob


Well, yes, the subroutine is within a loop that updates the InfoCube.

607   LOOP AT I_T_CUBE INTO L_S_CUBE.
608     l_recno = sy-tabix.
609     MOVE-CORRESPONDING L_S_CUBE TO G_S_/BIC/FICCCB_C04.
610     IF l_subrc1 = 0 or i_s_minfo-simulation = rs_c_true.
611       PERFORM GET_DIMID_DP USING L_S_CUBE

I was not able to determine how many records are being processed, as that data is not available in the dump analysis. I'm trying to debug this to find something.

thank you for your help!


Well then, you should be able to get around this by putting in a counter and doing a COMMIT for each set of records (say 1,000 or 10,000).

Note that in cases of rerunning this, you will have to get rid of the already updated records.

Doing a COMMIT for each record can be expensive.
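Rob's suggestion of a counter with a periodic commit could be sketched roughly as follows. This is only a sketch: the variable name l_count, the batch size of 1,000, and the loop skeleton (borrowed from the generated code posted above) are assumptions, not the actual generated program.

```abap
DATA l_count TYPE i.

LOOP AT i_t_cube INTO l_s_cube.
  " ... insert the record via native SQL here, but WITHOUT a COMMIT per record ...
  l_count = l_count + 1.
  IF l_count >= 1000.            " commit once per 1,000 records instead
    EXEC SQL.
      COMMIT WORK
    ENDEXEC.
    CLEAR l_count.
  ENDIF.
ENDLOOP.

" final commit for the last, possibly incomplete batch
EXEC SQL.
  COMMIT WORK
ENDEXEC.
```

As Rob notes, a rerun after an abort then has to skip or delete the records that were already committed, since they are no longer rolled back.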

Rob

Former Member

> Rob wrote:
> I ran a test a number of years ago that showed that native SQL was a bit slower than open SQL, but I think a COMMIT is just a COMMIT, wherever it is done.

Please correct me if I'm wrong, but I see (at least) the following differences between an Open SQL commit and a native SQL commit. The native SQL commit does not:

- trigger any registered update routines or functions, or
- release any logical locks set via the SAP enqueue mechanism.

Thus if we measure the overall time I'd always expect that the Open SQL commit is slower, because it does perform the native SQL commit and possibly further actions. (Assuming that we really look at overall time and not some perceived fast update, because it's done asynchronously in an update process.)

> Emanuel wrote:
> I'm getting a timeout when executing a Process chain.

Sounds like you're running your code in a dialog process (so either foreground user processing or an RFC) and thus are subject to the maximum allowed runtime configured via profile parameter rdisp/max_wprun_time. Now if you get a timeout it's obviously misleading to just look at the statement and surrounding coding where it dumped. I.e. you could easily be looking at some very fast and efficient coding, that just happens to be in the wrong place at the wrong time...

So to avoid jumping to wrong conclusions it's best to follow the instructions in thread for doing an analysis of your performance issue. I actually also like to "eyeball" the process first in SM50, which often gives you also a better starting point for further investigations...

Now as Rob pointed out already doing a commit per record is not a smart solution for mass updates.

If you simply have a long running process it might also be interesting to check if you could substitute your native SQL commit by an Open SQL [commit work|http://help.sap.com/abapdocu_70/en/ABAPCOMMIT.htm]. As far as I understand, the Open SQL commit (and only that) resets the clock for checking against the maximum allowed runtime (thus effectively allowing you to have a dialog process that runs longer than the limit imposed by rdisp/max_wprun_time). Of course this is a risky business if you don't fully understand the application, because the database commit does less (see above) than the Open SQL commit.
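The substitution Harald describes amounts to replacing the native commit statement with the Open SQL one; a minimal sketch (whether this is safe depends on the application, as he stresses):

```abap
" Before: native SQL commit - commits on the database only and,
" as discussed above, does not reset the rdisp/max_wprun_time clock:
* EXEC SQL.
*   COMMIT WORK
* ENDEXEC.

" After: Open SQL commit - also triggers registered update routines,
" releases enqueue locks, and resets the runtime clock:
COMMIT WORK.
```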

@All: Please correct me if any of my statements are wrong...

Cheers, harald


>

> Thus if we measure the overall time I'd always expect that the Open SQL commit is slower, because it does perform the native SQL commit and possibly further actions. (Assuming that we really look at overall time and not some perceived fast update, because it's done asynchronously in an update process.)

That's what I expected, and I was surprised when it didn't pan out that way; however, this was in an earlier release (3.X or 4.X), so it would be hard to test now. I think a later test I did showed that this is no longer the case.

I didn't think of the extra stuff going on in the open SQL commit, so I guess it would be a bit slower.

In any event, I don't think the use of native rather than Open SQL is the cause here.

Rob


> As far as I understand the Open SQL commit (and only that) resets the clock for checking against the maximum allowed runtime (thus effectively allowing you to have a dialog process that runs longer than the limit imposed by rdisp/max_wprun_time. Of course this is a risky business if you don't fully understand the application, because the database commit does less (see above) than the Open SQL commit.

For more information, read [SAP Note 25528 - Parameter rdisp/max_wprun_time|http://service.sap.com/sap/support/notes/25528]

About the issue, I really wonder why there's a timeout during the database commit (the "native SQL commit", as was said above). From my point of view, all the database time is spent during the updates done before the commit; the commit simply adds a "line" saying "everything before this line is committed". So I think it would be best to ask the database administrator to look for database issues.

Also, make sure to read SAP notes, and contact SAP support if you find nothing.

Moreover, it would be interesting to know which database is used (Oracle, MS SQL Server, ...).


Hi Rob,

Hi Harald,

the native SQL COMMIT WORK should be faster than the Open SQL COMMIT (which does more work).

A little test shows this as well:


  DO 3000 TIMES.
    INSERT t100 FROM wa.
    EXEC SQL.
      COMMIT WORK
    ENDEXEC.
    DELETE t100 FROM wa.
    EXEC SQL.
      COMMIT WORK
    ENDEXEC.
  ENDDO.

Call                                                 No.       Gross =         Net Gross (%) Net (%) NetDB(%) Program (called program)         Type
Insert T100                                        3.000  2.216.362  =  2.216.362       3,2     3,2      3,2  Z_COMMIT_TEST                    DB
NatSQL Commit                                      3.000  8.193.734  =  8.193.734      12,0    12,0     12,0  Z_COMMIT_TEST                    DB
Delete T100                                        3.000  2.302.980  =  2.302.980       3,4     3,4      3,4  Z_COMMIT_TEST                    DB
NatSQL Commit                                      3.000  8.668.774  =  8.668.774      12,7    12,7     12,7  Z_COMMIT_TEST                    DB

  DO 3000 TIMES.
    INSERT t100 FROM wa.
    COMMIT WORK.
    DELETE t100 FROM wa.
    COMMIT WORK.
  ENDDO.

Call                                                 No.       Gross =         Net Gross (%) Net (%) NetDB(%) Program (called program)         Type
Insert T100                                        3.000  2.389.157  =  2.389.157       3,5     3,5      3,5  Z_COMMIT_TEST                    DB
Perform(Ext) %_BEFORE_COMMIT                       3.000    170.548             0       0,2     0,0      0,0  SAPMSSY0
Perform(Ext) %_COMMIT                              3.000    114.361             0       0,2     0,0      0,0  SAPMSSY0
Commit Work                                        3.000 20.923.157  = 20.923.157      30,6    30,6     30,6  Z_COMMIT_TEST                    DB
Perform(Ext) %_AFTER_COMMIT                        3.000    351.661             0       0,5     0,0      0,0  SAPMSSY0
Delete T100                                        3.000  2.513.981  =  2.513.981       3,7     3,7      3,7  Z_COMMIT_TEST                    DB
Perform(Ext) %_BEFORE_COMMIT                       3.000    170.859             0       0,2     0,0      0,0  SAPMSSY0
Perform(Ext) %_COMMIT                              3.000    113.374             0       0,2     0,0      0,0  SAPMSSY0
Commit Work                                        3.000 19.589.516  = 19.589.516      28,6    28,6     28,6  Z_COMMIT_TEST                    DB
Perform(Ext) %_AFTER_COMMIT                        3.000    349.961             0       0,5     0,0      0,0  SAPMSSY0

We can see that the Open SQL commit takes longer AND does more work (additional visible AND invisible calls).

They clearly have different functionality. The Open SQL commit can (depending on the situation), amongst other things: trigger updates, PROCESS ON COMMIT blocks, qRFC/tRFC, raise events (raise_commit_requested/finished, raise_transaction_finished), call function 'BDC_END_TRANSACTION', call function 'SPI_AGENT_COMMIT', close files, prepare a roll-out, ...

If you are really interested, switch on system debugging and debug the Open SQL statement COMMIT WORK.

However, back to the problem (which is NOT the commit, I think): as already mentioned, the number of calls (is each record committed?) and the amount of work to be processed in a dialog process should be the focus, in my opinion.

Kind regards,

Hermann


Hi Emanuel,

Does this problem persist? Please give feedback to the thread.

It was excellently clarified above that the COMMIT shouldn't be the root cause of the problem.

I'd like to ask why the process chain is running in the foreground. Are you trying to debug it, or is it mandatory to run it synchronously or triggered by a web application?

Normally these process chain routines all run in background work processes, where the timeout doesn't happen.

Try scheduling it in the background and see if the dump disappears. If it does, we can focus on the long execution time, but that may be "normal" behavior due to the large data volume.

Regards, Fernando Da Ró


Hello All,

Sorry for the delay in replying. I was trying to get more information about this. Actually, I'm not the one executing this process; I was told that it is being run in the background, and so far it's not abending anymore. It was probably due to the amount of data processed; I have checked that they are now processing less data.

thank you all for your replies!!