Cube partitioning and the use of the BIA

Former Member

Hi there,

Can anyone tell me whether it is still wise and useful to partition a cube when also using the BIA? Does the BIA read the cube data differently (faster) if the cube is partitioned, or is using the BIA enough, so that partitioning has no influence on performance?

Hope someone can help me with this!

Steven

Accepted Solutions (1)


Former Member

I assume we are talking about using the DB range partitioning feature to partition the E fact table on either 0FISCPER or 0CALMONTH. If that's the case, I have to agree with Srikanth for the reasons outlined.

I haven't upgraded to BI 7.0 with BIA yet, but it sounds like Sascha is talking about some sort of partitioning on the BIA side.

A couple of other advantages of partitioning the E fact table: depending on the criteria used, a partitioned E fact table can also help with the performance of selective deletion and archiving runs.
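To make the selective-deletion advantage concrete, here is a minimal Python sketch of range partitioning on 0CALMONTH. It is a toy model, not SAP code: the function names and record layout are illustrative. The point is that deleting one month becomes a cheap partition drop instead of a scan of the whole fact table.

```python
from collections import defaultdict

def partition_key(record):
    """Range-partition on 0CALMONTH: every record with the same
    calendar month lands in the same physical partition."""
    return record["0CALMONTH"]  # e.g. "200701"

def load(records):
    """Group fact rows into per-month partitions (a toy model of
    the database's range partitioning of the E fact table)."""
    partitions = defaultdict(list)
    for rec in records:
        partitions[partition_key(rec)].append(rec)
    return partitions

def selective_delete(partitions, month):
    """With range partitioning, deleting one month is a partition
    drop -- the other partitions are never touched."""
    partitions.pop(month, None)
    return partitions

facts = [
    {"0CALMONTH": "200701", "amount": 100},
    {"0CALMONTH": "200701", "amount": 250},
    {"0CALMONTH": "200702", "amount": 75},
]
parts = load(facts)
selective_delete(parts, "200701")
print(sorted(parts))  # only the "200702" partition remains
```

The same reasoning applies to archiving runs that select by fiscal period or calendar month: whole partitions can be processed or dropped at once.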

Answers (2)


Former Member

Hi Steven, I agree with Srikanth. It is wise to have a partitioning strategy for the InfoCube in place, even if you use BIA.

One additional aspect is the maintainability of the database. I believe it is much easier to administer and reorganize the database if the InfoCube is partitioned.

I hope this helps, Michael

Former Member

Hi Steve,

Yes, I'd say it's wise to use partitioning on cubes, irrespective of whether they are on BIA or not. This has the following advantages:

1. A parallel initial fill of the BIA is only possible if partitions exist on the E table. Otherwise, filling the indexes on the BIA will run as a single job over the entire E table.

2. Blades may sometimes go down, or you may wish to upgrade the BIA hardware. With partitioning we can ensure business continuity despite BIA non-availability (it minimizes query runtime to a certain extent in case the BIA does not function).

3. There will be no change from a read-performance perspective, because the fact data is maintained as a single object in the BIA.

So it is better to always keep the same partitioning strategy as before.
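Point 1 above can be illustrated with a small Python sketch. This is a toy model only: the function and partition names are made up, not SAP APIs. With E-table partitions, one fill job per partition can run in parallel; an unpartitioned table gets a single sequential job over all rows.

```python
from concurrent.futures import ThreadPoolExecutor

def index_partition(name, rows):
    """Stand-in for one BIA index-fill job over a single
    E-table partition (names are illustrative, not SAP APIs).
    Returns the partition name and the number of rows indexed."""
    return (name, len(rows))

# Partitioned E table: one fill job per partition, run in parallel.
e_partitions = {
    "200701": [{"amount": 100}, {"amount": 250}],
    "200702": [{"amount": 75}],
    "200703": [{"amount": 10}, {"amount": 20}, {"amount": 30}],
}

with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(lambda kv: index_partition(*kv),
                            e_partitions.items()))

# Unpartitioned E table: a single sequential job over everything.
all_rows = [r for rows in e_partitions.values() for r in rows]
single = index_partition("E_TABLE", all_rows)

print(dict(results))  # each partition was indexed independently
print(single)         # one job had to cover all 6 rows
```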

Hope this helps.

//BR,

Srikanth.

Former Member

Hi Srikanth,

Please allow me to add some technical remarks to your statements as I respectfully disagree.

Each table that is part of a cube on BW is mapped, so to speak, to an index on the BIA side. When a cube is indexed in BIA, the BW sends an estimate of the size of each table when its respective index gets created. Depending on a certain (configurable) criterion, the BIA decides whether this index will be split into physical parts. The number of parts also depends on certain settings made with respect to the number of processor cores on each blade. However, from the BW system's point of view, there always exists only one (logical) index, no matter how many parts it was split into. Usually, only large tables like the F table are split on the BIA. As a consequence, what you do on the BW side will not have an effect on performance, since the BW cannot influence at all how the BIA handles its indexes.
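The split decision described above can be sketched in Python. Everything here is an assumption for illustration: the threshold, core counts, and function name are invented, not actual BIA parameters. The shape of the logic is what matters: small tables stay whole, large tables are split, and the number of parts is bounded by available cores, while BW always sees one logical index.

```python
def plan_index_parts(estimated_rows, split_threshold=5_000_000,
                     cores_per_blade=8, blades=4):
    """Toy model of the BIA split decision (all numbers are
    illustrative assumptions -- the real criteria are configurable
    on the BIA side and invisible to BW).

    Returns the number of physical parts; from the BW point of
    view there is always exactly one logical index regardless."""
    if estimated_rows <= split_threshold:
        return 1  # small tables (e.g. dimension tables) stay whole
    # Large tables (typically the F table) get split; the part
    # count is capped by the total number of processor cores.
    max_parts = cores_per_blade * blades
    needed = -(-estimated_rows // split_threshold)  # ceiling division
    return min(needed, max_parts)

print(plan_index_parts(1_000_000))   # 1: below threshold, no split
print(plan_index_parts(40_000_000))  # 8: split into several parts
print(plan_index_parts(10**9))       # 32: capped at total core count
```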

As a side remark, keep in mind that there is a trade-off between the number of physical parts into which an index is split and the search performance. The physical parts determine the degree of parallelization of requests on the BIA; however, the part results also need to be joined together at the end to produce the final result.

In case the BIA is down, there is the possibility to have the BI system automatically redirect all queries to the database. You just need to configure the failover within the BW. Then, if BW realizes that the BIA is not available, it will run all queries against the database for half an hour and then check the availability of the BIA again. If the BIA is available again, it will switch back to it. Nevertheless, I think the probability that the whole BIA system fails is quite low. There are also approaches to configure the BIA itself for high availability.
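The failover behaviour just described can be sketched in Python. This is a toy model under stated assumptions, not the actual BW implementation: the class, method names, and injectable clock are all illustrative. It captures the timing rule: while BIA is unreachable, queries go to the database, and BIA is only rechecked after half an hour.

```python
import time

RETRY_INTERVAL = 30 * 60  # seconds: recheck BIA every half hour

class QueryRouter:
    """Toy model of the BW failover described above (names are
    illustrative, not SAP APIs)."""

    def __init__(self, bia_available, now=time.monotonic):
        self._bia_available = bia_available  # callable returning bool
        self._now = now                      # injectable clock for testing
        self._db_until = 0.0                 # use the DB until this time

    def route(self):
        t = self._now()
        if t < self._db_until:
            return "database"  # still inside the half-hour failover window
        if self._bia_available():
            return "BIA"       # BIA reachable: use (or switch back to) it
        self._db_until = t + RETRY_INTERVAL
        return "database"      # BIA down: open a new failover window

# Simulated timeline: BIA is down at first, then comes back up.
clock, bia_up = [0.0], [False]
router = QueryRouter(lambda: bia_up[0], now=lambda: clock[0])
print(router.route())  # "database": BIA is down
clock[0], bia_up[0] = 10 * 60, True
print(router.route())  # "database": half hour has not yet elapsed
clock[0] = 31 * 60
print(router.route())  # "BIA": availability rechecked, switch back
```

Injecting the clock makes the half-hour window testable without waiting; the real BW presumably uses its own scheduler for the recheck.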

I hope this has helped to clarify the issue a little bit.

Best regards,

Sascha.