At my client's site, I came across a piece of code that extracts data from an InfoCube using a single complex SQL statement with nearly 8 joins (5 dimension tables, 1 fact table, and 2 SID tables). Performance was very bad, and one fine day the extraction job started timing out. We checked all the statistics and indexes on these tables, and they were fine.
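For context, here is a simplified sketch of the kind of star-schema join such a statement performs. The table and column names (/BIC/FZSALES, /BIC/DZSALEST, /BI0/SCALMONTH, KEY_ZSALEST, SID_0CALMONTH) are illustrative examples following BW naming conventions, not the actual objects in our system:

```sql
-- Illustrative read on an InfoCube's extended star schema (names are examples).
-- The fact table joins each dimension table on its DIMID, and the dimension
-- table joins the SID table to resolve the characteristic value.
SELECT s_cal.CALMONTH,
       SUM(f.QUANTITY) AS total_qty
FROM   "/BIC/FZSALES"   f                                          -- fact table
JOIN   "/BIC/DZSALEST"  d_time ON d_time.DIMID = f.KEY_ZSALEST    -- time dimension
JOIN   "/BI0/SCALMONTH" s_cal  ON s_cal.SID    = d_time.SID_0CALMONTH  -- SID table
-- ... plus similar joins to the remaining dimension and SID tables ...
WHERE  s_cal.CALMONTH = '200401'
GROUP  BY s_cal.CALMONTH;
```

Our actual statement had several more dimension/SID joins of this shape, which is what made it so expensive.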
I then suggested that this is not an optimised way of getting data out of a cube, and that ideally we should use an export DataSource or InfoSpokes, as applicable.
The very next day, to our surprise, the extraction that had been timing out completed in a few minutes! When we then checked the indexes, they were corrupted/dropped! I'm still wondering what could have happened!
My questions are:
1> Is it advisable to read a cube using a direct SQL statement? What is the performance impact if we do so? After all, even a BEx query uses SQL joins to get data out of the cube, so what difference does that make?
2> Say the number of rows read by an SQL statement on a huge cube (e.g. 25 million records) is small, thanks to a WHERE clause on a selective characteristic in a smaller dimension (e.g. 0CALMONTH). Would avoiding the indexes for this SELECT benefit read performance?
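One way we considered testing this on Oracle 9i is the optimizer's FULL hint, which forces a full table scan instead of index access, so the two plans can be compared directly. The table and column names below are again illustrative BW-style examples, not our real objects:

```sql
-- Hypothetical comparison: run the same query with and without the FULL hint
-- on the fact-table alias f, then compare elapsed times and EXPLAIN PLAN output.
SELECT /*+ FULL(f) */ SUM(f.QUANTITY)
FROM   "/BIC/FZSALES" f                         -- illustrative fact table
WHERE  f.KEY_ZSALEST IN
       (SELECT d.DIMID
        FROM   "/BIC/DZSALEST"  d
        JOIN   "/BI0/SCALMONTH" s ON s.SID = d.SID_0CALMONTH
        WHERE  s.CALMONTH = '200401');          -- selective 0CALMONTH restriction
```

Is this a sensible way to judge whether the indexes are actually helping this SELECT, or is there a better approach?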
I don't know if I've made myself clear. Please post if you need more details.
I would appreciate the opinion of any BW expert with a strong DB background (I would request that you refrain from very generic comments on this post, please).
By the way, we are on Oracle 9i.
Thanks & Regards,