Hi all,
we have a compression run on a standard InfoCube that is supposed to compress the first request from the F-table into an empty E-table. The request has about 54 million rows.
At the time of writing, this run has been going for more than 60 hours!
We are on BI 7.0 (SP 24) with Oracle 11.2.0.2.
The secondary indexes on the E-table have not been deleted, i.e. there are still indexes like /BIC/E[cube]~[nnn], where [nnn] is for instance 010, 020, and so on.
The ~0 index and the P-index also exist.
In the Session Monitor in DB02 I see an INSERT statement with the following execution plan:

Looking at V$SQL_MONITOR, I can see that the statement has a high user I/O wait time; most of the wait events are "db file sequential read". I can also see that PHYSICAL_READ_BYTES is currently at 192,136,183,808 bytes = ~180 GB !!! Yet the entire InfoCube is only about 15 GB in size according to statistics.
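For reference, this is roughly the query I used against V$SQL_MONITOR (column names as in the Oracle 11.2 reference; '&sql_id' is a placeholder for the SQL_ID of the INSERT statement):

```sql
-- Sketch of the check: I/O wait time and physical reads of the running INSERT
SELECT sql_id,
       status,
       elapsed_time,          -- microseconds
       user_io_wait_time,     -- microseconds
       physical_read_bytes
  FROM v$sql_monitor
 WHERE sql_id = '&sql_id';
```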
A look at V$ACTIVE_SESSION_HISTORY shows that the vast majority of the sampled events are I/O waits ("db file sequential read") on object /BIC/EAPSAL011~P, which is actually the P-index!
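The per-object breakdown came from a query along these lines (a sketch, joining ASH samples to DBA_OBJECTS via CURRENT_OBJ#; again '&sql_id' is a placeholder):

```sql
-- Sketch: which objects the I/O wait samples of this statement hit
SELECT o.object_name,
       ash.event,
       COUNT(*) AS samples
  FROM v$active_session_history ash
  JOIN dba_objects o
    ON o.object_id = ash.current_obj#
 WHERE ash.sql_id = '&sql_id'
 GROUP BY o.object_name, ash.event
 ORDER BY samples DESC;
```

In my case, almost all "db file sequential read" samples pointed at /BIC/EAPSAL011~P.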
Do you have any explanation for why there is so much I/O on this P-index? It seems to me that the wait time for these I/O operations is what is causing the poor runtime.
Regards,
Philipp