on 02-24-2010 12:29 PM
Hi all,
We recently upgraded our BW solution from SAPKW70020 to SAPKW70023. Since the upgrade we have been experiencing runtime errors when executing a DTP. We have a process chain that loads data from three different DataSources into three different DSOs; the loads run in parallel.
One of the DTPs that loads data into a DSO fails with the runtime error DBIF_RSQL_SQL_ERROR and the exception CX_SY_OPEN_SQL_DB:
Short text: SQL error in the database when accessing a table.
What happened?: The database system detected a deadlock and avoided it by rolling back your transaction.
What can you do?: If possible (and necessary), repeat the last database transaction in the hope that locking the object will not result in another deadlock.
Error analysis: An exception occurred that is explained in detail below. The exception, which is assigned to class 'CX_SY_OPEN_SQL_DB', was not caught in procedure "DD_DD04T_RANGE_DEL" "(FORM)", nor was it propagated by a RAISING clause. Since the caller of the procedure could not have anticipated that the exception would occur, the current program is terminated.
The reason for the exception is: The database system recognized that your last operation on the database would have led to a deadlock. Therefore, your transaction was rolled back to avoid this.
ORACLE always terminates any transaction that would result in deadlock. The other transactions involved in this potential deadlock are not affected by the termination.
If I delete data from the DSO and load manually, everything works ok.
The process chain was running without errors before the upgrade.
Has anyone experienced this problem?
Does anyone know what the table DD04T contains and why it is accessed during the DTP?
Thanks for your help:)
Best regards,
Linda
Hello,
If you face this issue, please do the following:
Double-click your DTP --> "Go To" --> "Settings for Batch Manager" --> set "Number of Processes (Parallel Processing)" to 2 or 1, then save.
Run your DTP and it will work.
Thanks,
Koushik
Hi Linda,
I had the same problem and solved it with Note 750033; it worked fine. I set the parameter named there up to 40.
BR
Benni
From memory (it has been a while), the most common reason is that Oracle cannot handle parallel inserts into the same index block.
So if you have InfoPackages or DTPs running in parallel and many of them insert lots of new dimension entries, they end up trying to update the same index block.
That is what causes the deadlock.
The workaround is to drop the indexes so that no parallel index writes occur (there was no fix the last time I checked, about 4 years ago), or to run the InfoPackages serially.
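For anyone who wants to script the drop-and-rebuild workaround described above at the database level, a minimal Oracle SQL sketch looks like this. The table and index names here are hypothetical placeholders; substitute the actual DSO active table and secondary index names from your system (in BW itself you would normally use the index-maintenance process types in the chain instead):

```sql
-- Hypothetical DSO active table and secondary index names; replace
-- them with the objects named in your short dump.

-- Before the parallel loads: drop the secondary index so that the
-- parallel inserts cannot collide on the same index block.
DROP INDEX "/BIC/AZDSO0100~010";

-- ... run the parallel DTPs / InfoPackages here ...

-- After the loads: rebuild the index for query performance.
CREATE INDEX "/BIC/AZDSO0100~010"
  ON "/BIC/AZDSO0100" ("/BIC/ZCHAR01");
```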
Hi,
We are experiencing the deadlock while executing the DTPs from PSA to DSO. We have five DTPs that are executed in parallel.
I've had a look at Note 84348, and it says: If the error occurs when jobs are being executed in parallel (usual data loads), the INITRANS value should be set to the maximum number of parallel running jobs. Otherwise, 20 is an appropriate value.
I've asked the Basis team to check the INITRANS value for the table where the deadlock occurs, and they say it is 1.
What are the consequences of changing the INITRANS value?
Thanks:)
BR,
Linda
We have also tried changing the "Settings for Batch Manager" on the DTPs from 3 to 1, but that did not help.
Edited by: Linda Solberg on Apr 7, 2010 2:29 PM
INITRANS is an Oracle table parameter. It specifies the initial number of concurrent transaction slots at the Oracle block level. For a small table with heavy concurrent writes this parameter should be increased. There is no harm in increasing it to 20.
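For reference, checking and raising INITRANS is done with standard Oracle SQL. The table and index names below are placeholders; use the object named in the short dump. Note that a new INITRANS value only applies to newly formatted blocks, so existing blocks keep their old setting until the segment is reorganized:

```sql
-- Check the current INITRANS of the table (placeholder name;
-- replace with the table from your short dump).
SELECT table_name, ini_trans
  FROM user_tables
 WHERE table_name = '/BIC/AZDSO0100';

-- Raise INITRANS on the table and its primary index. This only
-- affects blocks formatted from now on; reorganize the segment
-- (e.g. ALTER TABLE ... MOVE plus an index rebuild) to apply it
-- to existing blocks.
ALTER TABLE "/BIC/AZDSO0100" INITRANS 20;
ALTER INDEX "/BIC/AZDSO0100~0" INITRANS 20;
```

The cost of a higher INITRANS is modest: each reserved transaction slot takes roughly 24 bytes of space in every block, which is why it is safe to raise it to the number of parallel jobs.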
By the way, we are also facing the deadlock error, especially for large loads. SAP expects us to be optimistic that it won't happen a second time. Weird logic on their part.
Any solutions?
regards
Samir
I forgot to mention the datasources in use. They are:
- 0FI_GL_10 General Ledger: Leading Ledger Balances
- 3FI_GL_L1_TT General Ledger: Non-leading Ledger local GAAP L1
- 3FI_GL_L2_TT General Ledger: Non-leading Ledger tax GAAP L2
It is for the last one, 3FI_GL_L2_TT, that the DTP loading data from PSA to DSO fails.
We have several other process chains set up to run every night, and they all run fine.
BR,
Linda