on 02-14-2011 7:04 AM
Hi all,
We are facing issues regarding the PSAPUNDO tablespace: "ORA-30036: unable to extend segment by 8 in undo tablespace 'PSAPUNDO'" during cube compression.
According to the BI team, the cube has around 6 crore (60 million) records. We have extended the PSAPUNDO tablespace four times; the current size is 60 GB. Still the same error occurs.
Please help.
Hi,
if you're not getting ORA-1555, it might help to set "undo_retention" to a smaller value.
This can be done dynamically at the system level without restarting the DB, in fact even without
stopping a running transaction that is eating up your undo.
Just run, from a second session: "alter system set undo_retention=1200 scope=memory;"
Lowering that value increases the possibility of ORA-1555 in concurrent transactions,
so you should reduce parallel activity during that compression run.
Hope this helps
Volker
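To see whether the compression transaction is really the one consuming the undo, and how much, you can query the standard Oracle dynamic performance views (a sketch for 10g; the views and columns are standard, the 1200-second value just mirrors the suggestion above):

```sql
-- Show the current undo_retention setting (in seconds).
SHOW PARAMETER undo_retention;

-- Undo consumed per active transaction, in MB, largest first.
-- v$transaction.used_ublk is in database blocks, so multiply by db_block_size.
SELECT s.sid,
       s.username,
       t.used_ublk * p.value / 1024 / 1024 AS undo_mb
  FROM v$transaction t
  JOIN v$session s
    ON s.taddr = t.addr
  CROSS JOIN (SELECT value FROM v$parameter
               WHERE name = 'db_block_size') p
 ORDER BY undo_mb DESC;

-- Lower undo_retention dynamically, as suggested above:
ALTER SYSTEM SET undo_retention = 1200 SCOPE = MEMORY;
```

If the compression session dominates the first query's output, shrinking undo_retention (and reducing concurrent load) is more likely to help than extending the tablespace again.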
Hi there.
It's very likely not about the size of the InfoCube or the undo tablespace.
Go and check: a) the database software version and parameter setup, b) the size of the request to be compressed, and c) the partitioning of the E fact table.
I'd also check whether the P index on the E fact table is present and usable, as this one is crucial for the compression logic.
If these points don't lead to the solution, open a support message, so that we can check on this.
regards,
Lars
> Hi Lars,
>
> I am a basis person and have no idea how to check the E table. Could you give me the steps?
> By the way, we are running Oracle 10g (10.2.0.4).
>
> Thanks,
> Abdul.
> Edited by: Abdul Sami on Feb 14, 2011 11:27 PM
Hi Abdul!
As a basis guy you can easily check the indexes of table "/BIC/E<infocube name>" and see if they are present.
Use transaction SE14 and look for the P index. It's a unique index on all KEY fields of the table.
regards,
Lars
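If you prefer to verify this from the database side rather than via SE14, the index state can also be read from the Oracle data dictionary. A sketch, where the schema owner SAPSR3 and the cube name ZSALES are placeholders you must replace with your own:

```sql
-- Check that the P index on the E fact table exists, is unique, and is VALID.
-- Replace SAPSR3 with your SAP schema owner and ZSALES with the InfoCube name.
SELECT index_name,
       uniqueness,
       status
  FROM dba_indexes
 WHERE table_owner = 'SAPSR3'
   AND table_name  = '/BIC/EZSALES';

-- For partitioned indexes the overall status is N/A, so check each partition:
SELECT index_name,
       partition_name,
       status
  FROM dba_ind_partitions
 WHERE index_owner = 'SAPSR3'
   AND index_name LIKE '/BIC/EZSALES%';
```

Any partition reported as UNUSABLE would explain why the compression falls back to a far more undo-intensive execution path.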
Hi Lars,
Yes, the P index exists for the cube. A BI consultant at a different company says the compression should be split up and run in parts to avoid PSAPUNDO issues.
We are not able to decide what the size of each split load should be. Is there a limit, like 1 or 2 million records at a time?
Thanks,
Abdul
> Yes, the P index exists for the cube. A BI consultant at a different company says the compression should be split up and run in parts to avoid PSAPUNDO issues.
>
> We are not able to decide what the size of each split load should be. Is there a limit, like 1 or 2 million records at a time?
Hmm... you can only compress request-wise, and there is always a commit after every request you compress.
So basically, no, there's no way to control the number of records.
I'd propose you open a support message for that.
regards,
Lars