on 01-30-2007 7:59 AM
Hi everyone,
we are on SP 10 together with Oracle 10.2.0.2.0. In our development system I have already run some big test loads (~1 million entries) from a Data Store object to a cube. As sometimes happened in 3.0B, we get a self-deadlock because the parallel processes kill each other, but now it happens more often. The bad thing about such an error on a Data Store-to-cube load is that there is no possibility of a manual update, so a restart is necessary.
Has anyone run into the same problem? Any proposals other than loading less data or using PSA and update rules?
My current workaround is to set the data package size so high that the whole load is done in one package, which is really not a good solution. Is there any other way to reduce the parallelism to one for a cube load?
Best regards
Harald.
Hi Harald,
Kindly look into the following SAP Notes:
980555 ORA 600 [kdtdelrow-2] during insert with deadlock
1015152 ORA 600 [kdtdelrow-2] during insert in Oracle 10.2.0.2
1003217 ORA-600 [kdiblLockRange:not validated] during an insert
Hope this helps,
Regards
KK
Hi,
Almost all the time we are able to avoid this problem by deleting the indexes before we start the upload to the cube.
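For what it's worth, in plain database terms the idea is roughly the sketch below (a generic Oracle table accessed via cx_Oracle; the table and index names are hypothetical and this is not the BW-internal mechanism, where you would use the cube's own index delete/rebuild steps instead of direct SQL):

# Sketch only: the drop-load-rebuild pattern on a hypothetical Oracle fact table.
# In BW the same effect is achieved with the cube's index handling steps,
# not with direct SQL like this.
import cx_Oracle

rows = [(1, 100, 42.0), (2, 101, 17.5)]  # example fact rows

conn = cx_Oracle.connect("bw_user", "secret", "dbhost/ORCL")
cur = conn.cursor()

# 1. Drop the secondary (bitmap) index so parallel insert processes
#    no longer contend on the same index blocks.
cur.execute("DROP INDEX demo_fact_dim1_idx")

# 2. Run the bulk load; without the secondary index the inserts only
#    touch the table segment, which avoids the deadlock situation.
cur.executemany(
    "INSERT INTO demo_fact (req_id, dim1, amount) VALUES (:1, :2, :3)",
    rows,
)
conn.commit()

# 3. Recreate the index once the load has finished.
cur.execute("CREATE BITMAP INDEX demo_fact_dim1_idx ON demo_fact (dim1)")

cur.close()
conn.close()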
With rgds,
Anil Kumar Sharma .P