Semantically Partitioned InfoCube and DSO Questions

Former Member

Hi,

We are interested in using Semantic Partitioning, but have a few questions that I was hoping you would be able to assist me with:

1. How would one re-model a Semantically Partitioned InfoCube/DSO object? It seems you can modify the template, but how will that affect the data already loaded into the partitions (as well as the transport of the template and the re-generation of the objects in the new environment)? Or should re-partitioning be used (although I'm not able to see the semantically partitioned object in the re-partitioning selection)?

2. I've read a document that states that for DataStore objects, SAP recommends not saving more than 20 million data records; otherwise the activation process takes considerably longer. For InfoCubes, SAP recommends not saving more than 200-400 million data records; otherwise the compression and the reconstruction of aggregates take too long. The semantically partitioned object does not have these implicit restrictions, as the data quantity is distributed across several data containers.

I then read a further document that mentions that once an InfoCube has more than 100 million records, one should start looking at semantically partitioning it (depending on the hardware and database used), but it also states that one should try to keep fewer than 3 million records in each partition.

Therefore, I would like to raise the question: what is the recommended number of records per partition if one is using a BI Accelerator and has InfoCubes that contain more than 700 million records, but must still execute reports within a reasonable response time?
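For context, a minimal back-of-the-envelope sketch (plain Python; the per-partition cap is a hypothetical figure, not an SAP recommendation) of how a chosen cap translates into a partition count for a cube of this size:

    import math

    # Illustrative arithmetic only; all figures are assumed.
    total_records = 700_000_000      # InfoCube size mentioned above
    per_partition_cap = 100_000_000  # hypothetical cap per partition

    partitions_needed = math.ceil(total_records / per_partition_cap)
    print(f"{partitions_needed} partitions of at most {per_partition_cap:,} records")
    # -> 7 partitions of at most 100,000,000 records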

3. Are there any other factors that should be taken into account when using Semantic Partitioning?

I would really appreciate it if you could answer these questions, as we would like to use semantic partitioning going forward, but would like to structure it correctly.

Regards,
Tanya

Accepted Solutions (1)

Former Member

Hi,

we are using SPOs as both InfoCube and DSO. A special situation is the remodeling of the SPO. If you extend your model, the system changes the partitions and the transformations correctly in the dev system. But after the import into the subsequent systems, we have big issues with the activation of the internal transformation from the internal InfoSource to the partitions. Right now we have a problem with the activation of data in the DSO after remodeling the SPO: we receive the correct data in the new data table of the partition, but after the activation process the new fields are empty.

Remodeling, as far as we can tell, seems to require a deletion of the content of the partitions in every case. But we are not certain, because this makes no sense in the case of an extension of a partition.

Regarding your questions about sizing: we are working with a smaller amount of data, so I can't help you there.

A remark on using an SPO as a DSO: the DSO as SPO only supports full loads from the DSO to a target InfoProvider. Delta is only possible from the partitions.

Hope this helps you. Feel free to contact me in case of further questions.

J.

Former Member

Hi Jurgen,

Thank you very much for the response. This is very useful, because if we decide to go the SPO route we need to be able to extend our model with ease, and with the volumes we have we are not able to easily drop and reload the data.

Regards,

Tanya

Answers (0)