
High volume model

Former Member

Hi gurus,

I am working on a project with a high volume of data. The forecast is to load 1,250,000 records daily between new records and modifications. The queries will run over monthly partitions in the database of about 10,000,000 records each.

I built my model by creating a DSO for operational reporting at the highest granularity, plus one InfoCube at low granularity. My problem is that I have now seen that 90% of the reports require the document and item numbers.

I am wondering whether this InfoCube makes sense, and I am considering rebuilding it with a high level of detail, adding the document number.

I have thought about partitioning the DSO by month in Oracle, since Oracle allows partitioning the DSO's underlying table directly in the database.
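As a rough sketch of that partitioning idea, the DSO's active table can be range-partitioned by month directly in Oracle. The table name "/BIC/AZSALES00" and the CALMONTH column below are hypothetical placeholders, not names from this thread; an actual DSO active table and its time characteristic would differ per system.

```sql
-- Hypothetical example: range-partition a DSO active table by calendar month.
-- "/BIC/AZSALES00" and the column names are illustrative placeholders.
CREATE TABLE "/BIC/AZSALES00" (
  DOC_NUMBER  VARCHAR2(10),
  DOC_ITEM    VARCHAR2(6),
  CALMONTH    VARCHAR2(6),    -- YYYYMM, used as the partitioning key
  AMOUNT      NUMBER(17,2)
)
PARTITION BY RANGE (CALMONTH) (
  PARTITION P_200801 VALUES LESS THAN ('200802'),
  PARTITION P_200802 VALUES LESS THAN ('200803'),
  PARTITION P_MAX    VALUES LESS THAN (MAXVALUE)
);
```

With monthly partitions, queries restricted to one month benefit from partition pruning, and old months can be dropped or archived partition by partition instead of with mass deletes.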

I would appreciate it if somebody with experience in projects with high volumes of data could give me some advice on how to manage the data.

The reports are very operational and don't require navigation; they only display the data.

Thanks in advance.

Accepted Solutions (1)


Former Member

Hi,

Define the document and item numbers as line item dimensions in your cube.

For the query itself, don't include the document numbers and items in the initial drill-down; offer them as free characteristics instead. That way the initial query execution will be faster.

But see if you can convince the client to keep the document numbers and items in the ODS itself. There will be huge overhead if you put them in the cube with this amount of data, even after performance tuning.

Also, for the ODS I would suggest you switch off BEx reporting/SID activation if you are on BI 7, and create an InfoSet on top of the ODS for the granular reports.

Cheers,

Kedar

Answers (4)


Former Member

Thanks a lot for your replies.

Former Member

Hi,

Thanks for your replies, but I have a couple of follow-up questions.

The first one is about the BI Accelerator. I know it can improve performance when you work with aggregated information, but does it make sense to implement the BI Accelerator if you want to see the information at document-level detail?

The other question concerns the first post: why is performance better when you create the queries on an InfoSet rather than directly on the DSO?

Thanks.

former_member345199
Active Contributor

Hi,

Adding to what has already been said above, here are some general tips to improve performance.

Modelling

Cube modelling:

Use aggregates and compression.

Note 356732 - Performance Tuning for Queries with Aggregates

Note 166433 - Options for finding aggregates (find optimal aggregates for an InfoCube)

Query level:

Use caching.

1. Avoid using too many navigational attributes.

2. Avoid RKFs and CKFs (restricted and calculated key figures).

3. Avoid too many characteristics in the rows.

4. Use free characteristics wherever possible.

ODS Query Performance

In your case, why don't you explore the possibility of using the BI Accelerator, since your data volume is very high?

There are also some SAP papers on installations with very high data volumes. You will get some insight from these.

Large Data Warehouse Implementations

https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/c04aa1fe-0fa2-2a10-b78f-be514604...

https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/media/uuid/dcd29c7a-0701-0010-ceb3-8ac3918f1a...

Experiences with SAP NetWeaver® Business Intelligence at 20 Terabytes

https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/406a36d7-99b0-2a10-8b89-f91b4a02...

https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/d56bbf90-0201-0010-53aa-92351212...

Hope this helps.

Thanks,

JituK

Former Member

Hi,

Take a look at the performance and tuning site; there you can find related documents.

Regards.