We are interested in using Semantic Partitioning and have a few questions I was hoping you could assist me with:
1. How would one re-model a Semantically Partitioned InfoCube/DSO object? It seems you can modify the template, but how will that affect the data already loaded into the partitions (as well as the transport of the template and the re-generation of the objects in the new environment)? Or should re-partitioning be used instead (I am not able to see the semantically partitioned object in the re-partitioning selection)?
2. I've read a document stating that for DataStore objects, SAP recommends not saving more than 20 million data records; otherwise the activation process takes considerably longer. For InfoCubes, SAP recommends not saving more than 200-400 million data records; otherwise the compression and reconstruction of aggregates takes too long. The semantically partitioned object does not have these implicit restrictions, as the data quantity is distributed across several data containers.
I then read a further document that mentions that once an InfoCube exceeds 100 million records, one should start looking at semantically partitioning it (depending on the hardware and database used), but it also states that one should try to keep fewer than 3M records in each partition.
Therefore, I would like to raise the question: what is the recommended number of records per partition if one is using a BI Accelerator and has InfoCubes that contain more than 700 million records, while still executing reports within a reasonable response time?
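To make the sizing question concrete, here is a minimal sketch (plain Python; the per-partition targets are illustrative assumptions only, not SAP recommendations) of how the number of partitions scales with the chosen per-partition limit for a 700-million-record InfoCube:

```python
def partitions_needed(total_records: int, target_per_partition: int) -> int:
    """Ceiling division: smallest partition count so that no
    partition exceeds the target record count."""
    return -(-total_records // target_per_partition)

# Total records from the scenario above.
total = 700_000_000

# Candidate per-partition targets (hypothetical values for illustration).
for target in (50_000_000, 100_000_000, 200_000_000):
    count = partitions_needed(total, target)
    print(f"target {target:>12,} records/partition -> {count} partitions")
```

Whatever per-partition figure is recommended, the same arithmetic shows whether the resulting number of partitions is still manageable for loading and administration.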
3. Are there any other factors that should be taken into account when using Semantic Partitioning?
I would really appreciate it if you could answer these questions, as we would like to use semantic partitioning going forward and want to structure it correctly.