
Performance Problem: MultiProvider or Aggregate

Former Member

Hi All,

     I want to build a report on InfoCube A, which holds 100 million records of outfit sales from 2011 to 2013. Querying cube A directly is too slow, so I am trying to improve it in two different ways.

     The first way:

     I plan to create twelve InfoCubes, each with six dimensions, twelve characteristics, and three key figures. Each cube stores one calendar month's sales from each year in the time range (so three months' worth of sales data per cube).

     Since each cube only has twelve characteristics, I could put each one into its own dimension and flag those dimensions as line item dimensions, right?

     Then I would set error handling to "deactivated" on the DTP's Update tab, which should improve transfer performance. Is there any other operation I can do to improve it?

     In this case I have no routines and no transformations of the data format. That's all.

     The second way:

     This way is simpler. I would create twelve aggregates on the big old cube, each restricted on the year characteristic, to get the same effect as a logical partition on time. Can that work?

Accepted Solutions (1)


former_member182470
Active Contributor

Hi Li,

I am sorry to say your ideas are not so good.

You can consider the re-partitioning concept to split the cube into multiple partitions using 0CALMONTH. This will also improve the data loads.

You should create a powerful aggregate to be used by your query, containing only the characteristics that appear in your query definition.
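Conceptually, an aggregate is a pre-summarized copy of the cube containing only the characteristics the query needs. A rough Python analogy (the sample rows and field names here are invented for illustration, not real BW objects):

```python
from collections import defaultdict

# Invented fact rows: (year, month, material, store, quantity, revenue)
fact_rows = [
    (2011, 1, "M1", "S1", 10, 100.0),
    (2011, 1, "M1", "S2", 5, 50.0),
    (2012, 3, "M2", "S1", 7, 70.0),
    (2012, 3, "M2", "S1", 3, 30.0),
]

# Build the "aggregate": keep only the characteristics the query uses
# (year, material) and pre-sum the key figures (quantity, revenue).
aggregate = defaultdict(lambda: [0, 0.0])
for year, month, material, store, qty, rev in fact_rows:
    key = (year, material)
    aggregate[key][0] += qty
    aggregate[key][1] += rev

# A query on year/material now reads 2 summarized rows instead of
# 4 detail rows; on 100 million records the ratio is far more dramatic.
print(dict(aggregate))
```

The saving comes entirely from dropping characteristics the query never filters or displays, which is why the aggregate should contain only the characteristics from the query definition.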

You can do logical partitioning by creating one cube for each year. Finally, build a MultiProvider on top of them.
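The logical-partitioning idea can be sketched with a simple Python analogy (cube contents and names are invented): each year lives in its own "cube", and a query through the "MultiProvider" only touches the cubes whose year matches the filter.

```python
# Invented per-year "cubes": lists of (month, quantity) rows
cube_2011 = [(1, 10), (2, 20)]
cube_2012 = [(1, 15), (3, 5)]
cube_2013 = [(6, 8)]

# The "MultiProvider" is just a union view over the partitioned cubes.
multiprovider = {2011: cube_2011, 2012: cube_2012, 2013: cube_2013}

def query(years):
    """Sum quantity over the requested partitions only;
    cubes for other years are never read at all."""
    total = 0
    for y in years:
        for month, qty in multiprovider.get(y, []):
            total += qty
    return total

print(query([2012]))        # touches only cube_2012
print(query([2011, 2013]))  # touches two of the three cubes
```

This is the benefit over a single big cube: a query restricted to one year scans roughly one third of the data, and loads/compression can run per cube.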

Answers (4)


Former Member

Hi all,

Thanks to everyone here.

What about this solution?

Actually, I need data for 2011, 2012, and 2013, so I should create three cubes as a logical partition.

Then I should physically partition each of them.

Finally, create aggregates for them.

I don't know how many resources re-partitioning will consume.

I have already tried aggregation without re-partitioning. That operation interrupted many process chains, and compression took a long time.

So I consider logical partitioning the most important of these suggestions.

former_member182470
Active Contributor

Hi Li,

It's better to go with logical partitioning. Your thought below is correct:

What about this solution?

Actually, I need data for 2011, 2012, and 2013, so I should create three cubes as a logical partition.

Then I should physically partition each of them.

Finally, create aggregates for them.

You have to create a MultiProvider at the end and do your reporting on it.

Re-partitioning is only physical partitioning. You would still be forced to continue with a single cube, which is again a headache for you.

Regards,

Suman

RamanKorrapati
Active Contributor

Hi Li,

In that case you can go with creating three InfoCubes, each storing one year's data. It is time-consuming, but it is a one-time activity, and reporting performance will be fast since each cube stores only one year of data.

You need to combine them in a MultiProvider; that is the most useful way to do reporting.

Thanks

Former Member

Hi Raman,

Do you mean that running the cubes in parallel can improve performance?

I'm following your advice in my modeling these days. I think it will be an interesting experiment.

I'll start a new thread to ask some questions about the RSRT transaction. I've never used it before, but I think RSRT will surely be useful in this test.

RamanKorrapati
Active Contributor

Hi Li,

RSRT is a transaction for debugging BEx queries; it has many functions for checking query performance and OLAP cache settings.

It's always better to raise a new thread when the topic is not related to the content of the open thread.

Hope you understand.

Thanks

RamanKorrapati
Active Contributor

Hi Li,

In this case you can check the options below:

1. You can create variables, either mandatory or optional. If users need data month-wise, the data will be retrieved according to the user's input.

2. Aggregates: first find which InfoObjects are used most frequently at the BEx level and build aggregates on them. This may improve BEx reporting performance.

3. You have two years of data; try partitioning on 0CALMONTH if your cube has the 0CALMONTH object.

If you go with 0CALMONTH-based partitioning, you will need to create 26 partitions (12 x 2 = 24, plus 2).

4. Considering line item dimensions is also a useful concept.
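The partition arithmetic in point 3 can be checked directly: range partitioning on 0CALMONTH creates one partition per month in the range, plus the two extra partitions BW adds for values below and above the range. A tiny sketch:

```python
def calmonth_partitions(years):
    """Monthly partitions for the given number of years, plus the two
    catch-all partitions for values outside the 0CALMONTH range."""
    return years * 12 + 2

print(calmonth_partitions(2))  # two years of monthly data -> 26 partitions
```

For the three years of data the original poster actually has (2011-2013), the same formula would give 38 partitions per cube.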

Thanks

Former Member

Hi Li,

Where exactly are you looking at performance: at the query level or at the loading level?

At the query level, use a customer exit variable to restrict the data.

At the loading level, use the re-partitioning concept as Suman Chakravarthy suggested in the post above.

Thanks,

Purushotham.

Former Member

Hi Li,

Why not try using variables, so the user has to choose just one month when displaying data? Then try creating secondary indexes on your InfoCube and using aggregates. That should help you improve performance without creating as many cubes as you described.

Regards

Salah

Former Member

Hi Salah

Thank you for your reply.

My boss needs all three years of data to make decisions.