Hello,
We redesigned a cube here and are now loading the data from the old cube into the new one. It is a huge cube - 1.1 billion records - fully compressed and partitioned by 0FISCPER, with approximately 55M records per 0FISCPER. I am filtering the load by individual 0FISCPER values, so my understanding is that the system should read only the corresponding partition. Instead, it does a full scan of the whole cube, and the load takes forever. I already dropped the indexes on the target cube to speed up the load, but it still takes 6 hours just to read the data for a single 0FISCPER (and another 6 hours to load it into the target cube). Do you have any suggestions for improving the load performance, and for getting the system to read only the specific partition rather than the whole cube when extracting the data?
Thanks,
Karina