
PSAPUNDO issue during cube compression in BI 7.01 EHP1

Former Member

Hi all,

We are facing an issue with the PSAPUNDO tablespace: the error "ORA-30036: unable to extend segment by 8 in undo tablespace 'PSAPUNDO'" occurs during cube compression.

According to the BI team, the cube has around 6 crore (60 million) records. We have extended the PSAPUNDO tablespace four times; its current size is 60 GB. Still the same issue occurs.

Please help.

Accepted Solutions (0)

Answers (2)

volker_borowski2
Active Contributor

Hi,

if you are not getting ORA-1555, it might help to set "undo_retention" to a smaller value.

This can be done dynamically without restarting the database, in fact even without stopping a running transaction that is eating up your undo.

Just run, from a second session: "alter system set undo_retention=1200 scope = memory;"

Lowering that value increases the possibility of ORA-1555 in concurrent transactions, so you should reduce parallel activity during that compression run.
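
A rough sketch of how you could watch undo consumption from that second session while the compression runs (standard Oracle dictionary views; the 1200 seconds are just the example value from above, not a recommendation):

-- Sketch only: run as a DBA user in a second session.
-- Shows which sessions currently hold the most undo blocks.
SELECT s.sid, s.username, s.program,
       t.used_ublk AS undo_blocks,
       t.used_urec AS undo_records
  FROM v$transaction t
  JOIN v$session s ON s.saddr = t.ses_addr
 ORDER BY t.used_ublk DESC;

-- Lower undo_retention dynamically (value in seconds, memory only, no restart needed)
ALTER SYSTEM SET undo_retention = 1200 SCOPE = MEMORY;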

Hope this helps

Volker

lbreddemann
Active Contributor

Hi there.

It's very likely not about the size of the InfoCube or the undo tablespace.

But go and check a) database software version and parameter setup, b) size of the request to be compressed and c) partitioning of the E-Facttable.

I'd also check whether the P-index on the E-facttable is present and usable, as it is crucial for the compression logic.
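
For the partitioning check (point c), here is a rough dictionary-query sketch you could run as a DBA; the SAPSR3 owner and the <CUBE> placeholder are assumptions, adjust them to your system:

-- Sketch: count the partitions of the E-facttable
SELECT table_name, COUNT(*) AS partition_count
  FROM dba_tab_partitions
 WHERE table_owner = 'SAPSR3'
   AND table_name = '/BIC/E<CUBE>'
 GROUP BY table_name;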

If these points don't lead to the solution, open a support message, so that we can check on this.

regards,

Lars

Former Member

Hi Lars,

I am a Basis person and have no idea how to check the E table. Could you give me the steps?

By the way, we are running Oracle 10g 10.2.0.4.

thanks,

Abdul.

Edited by: Abdul Sami on Feb 14, 2011 11:27 PM

lbreddemann
Active Contributor

> I am a Basis person and have no idea how to check the E table. Could you give me the steps?
> By the way, we are running Oracle 10g 10.2.0.4.

Hi Abdul!

As a basis guy you could easily check the indexes of table "/BIC/E<infocube name>" and see if they are present.

Use transaction SE14 and look for the P-Index. It's a unique index on all KEY-fields of the table.
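
If you want to double-check from the database side as well, here is a rough sketch (the SAPSR3 owner and the <CUBE> placeholder are assumptions; adjust them to your system):

-- Sketch: list the indexes on the E-facttable and their status
SELECT index_name, uniqueness, status
  FROM dba_indexes
 WHERE table_owner = 'SAPSR3'
   AND table_name = '/BIC/E<CUBE>';

-- For partitioned indexes, also check for unusable index partitions
SELECT index_name, partition_name, status
  FROM dba_ind_partitions
 WHERE index_owner = 'SAPSR3'
   AND status <> 'USABLE';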

regards,

Lars

Former Member

Hi lars,

Yes, the P-index exists for the cube. A BI consultant at a different company says that the compression should be split up and run in pieces to avoid PSAPUNDO issues.

We are not able to decide what the size of each split load should be. Is there a limit, like 1 or 2 million records at a time?

Thanks,

Abdul

lbreddemann
Active Contributor

> Yes, the P-index exists for the cube. A BI consultant at a different company says that the compression should be split up and run in pieces to avoid PSAPUNDO issues.
> We are not able to decide what the size of each split load should be. Is there a limit, like 1 or 2 million records at a time?

Hmm... you can only compress request-wise, and there is always a commit for every request you compress.

So, basically - no, there's no way to control the number of records.

I'd propose you open a support message for that.

regards,

Lars