
DB index requirement when using BI accelerator

Masaaki
Advisor

Dear all,

I am now considering a modeling approach that fulfills the following requirements.

- 1.3 billion records are stored.

- 14 million records are uploaded daily.

- Data is extracted from SD.

My current thinking is:

- use the BI Accelerator (rolling up 14 million records to the BIA index takes less than 5 minutes)

- delta data is loaded into the InfoCube directly (not via a DataStore object)

- data is uploaded in parallel

- data older than 5 years is deleted monthly, so DB partitioning is required

- a DB index is not required for BIA queries or for rolling up to the BIA index, and upload time will be shorter without creating DB indexes

- but it is better to create DB indexes when executing compression

In this case, I wonder:

- whether it is better to use partitioning to shorten the data deletion time

- whether it is better to execute compression (without it, the DB table is partitioned too finely, by request and by time characteristic, but compression takes a long time...)

- then whether it is better to create DB indexes before compression (creating the indexes takes some time, but without them compression would take a long time)

- periodically the BIA delta index should be merged with the main index; is it possible to report during this merge process?

Kind regards,

Masaaki

Accepted Solutions (0)

Answers (1)

Former Member

I will let SAP do the major part of the talking. However, with regard to compression and deleting data, here is the rationale:

<b>Compression -</b>

With BI accelerator indexes you do not have to compress after rolling up data packages. The data on the BI accelerator server already exists in a read-optimized format.

However, in the following case it may be useful to rebuild the BI accelerator index, although this is not strictly necessary:

A BI accelerator index is created for an InfoCube that is not aggregated, or a large number of data packages are later loaded to this InfoCube. If you compress this InfoCube, more data is contained in the BI accelerator index than in the InfoCube itself and the data in the BI accelerator index is more granular. If compression results in a large aggregation factor (>1.5), it may be useful to rebuild the BI accelerator index. This ensures that the dataset is reduced in the BI accelerator index too.
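The rule of thumb above can be sketched as a short calculation (a minimal sketch; the function names and threshold handling are illustrative, not an SAP API — only the >1.5 factor comes from the text). The aggregation factor is the ratio of fact rows before compression to fact rows after it:

```python
def aggregation_factor(rows_before: int, rows_after: int) -> float:
    """Ratio of fact-table rows before compression to rows after it."""
    if rows_after <= 0:
        raise ValueError("compressed row count must be positive")
    return rows_before / rows_after


def should_rebuild_bia_index(rows_before: int, rows_after: int,
                             threshold: float = 1.5) -> bool:
    """Suggest a BIA index rebuild when compression aggregated the data
    strongly, so the BIA index does not keep the larger, more granular
    pre-compression dataset."""
    return aggregation_factor(rows_before, rows_after) > threshold


# Strong aggregation (factor 2.0): a rebuild is worthwhile.
print(should_rebuild_bia_index(14_000_000, 7_000_000))
# Weak aggregation (factor ~1.17): keep the existing index.
print(should_rebuild_bia_index(14_000_000, 12_000_000))
```

This mirrors the advice in the paragraph: only a large aggregation factor justifies the cost of rebuilding, because that is when the BIA index holds noticeably more (and more granular) data than the compressed InfoCube.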

<b>Deleting Data -</b>

If you delete data from the InfoCube selectively, the BI accelerator index has to be rebuilt. When you execute selective deletion, the system automatically deletes the affected BI accelerator index.

When you delete a data package (that is not aggregated) from an InfoCube, the index for the package dimension table is deleted and rebuilt. The facts in the fact index remain but are "hidden" because they are no longer referenced by an entry in the package dimension table. Therefore, more entries exist in the index than in the table of the InfoCube. If you regularly delete data packages, the number of unused records increases, increasing memory consumption. This can have a negative effect on performance. In this case you should consider rebuilding the BI accelerator index regularly.
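The accumulation of hidden facts can be estimated with a back-of-the-envelope sketch (illustrative numbers taken from the question's 14-million-record daily loads; the function is hypothetical, not an SAP API): every package deleted without a rebuild leaves its facts unreferenced in the BIA fact index.

```python
def hidden_rows(deletions: int, rows_per_package: int) -> int:
    """Facts left 'hidden' (unreferenced) in the BIA fact index after
    repeated package deletions with no intervening index rebuild."""
    return deletions * rows_per_package


# Deleting one 14-million-record package per month for six months
# without a rebuild leaves 84 million dead rows consuming BIA memory.
print(hidden_rows(6, 14_000_000))
```

This is why the text recommends rebuilding the BIA index regularly when packages are deleted on a schedule: the dead rows never shrink on their own.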

Hope it helps,

Chetan

@CP..