on 06-13-2008 5:06 PM
Hi gurus,
I am working on a project with a high volume of data. The forecast is to load 1,250,000 records daily, between new records and modifications. The queries will run over partitions in the DB holding 10,000,000 records per month.
I built my model with a DSO for operational reporting at the highest granularity and an InfoCube at low granularity. My problem now is that I have seen that 90% of the reports require the document and position numbers.
I am wondering whether this InfoCube makes sense, and I am considering rebuilding it with a high level of detail by adding the document number.
I am thinking of partitioning the DSO by month in Oracle, since Oracle allows the DSO's tables to be partitioned directly in the database.
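For illustration, a minimal sketch of what that database-level partitioning could look like. The table name /BIC/AZSALES00 and the column list are hypothetical (the real active table of a DSO is named /BIC/A&lt;DSO technical name&gt;00, and its columns depend on the DSO definition); this only shows the monthly range-partitioning pattern in Oracle:

```sql
-- Hypothetical sketch: monthly range partitioning of a DSO active table.
-- Table name and columns are illustrative, not an actual system's definition.
CREATE TABLE "/BIC/AZSALES00" (
    CALMONTH   VARCHAR2(6)  NOT NULL,  -- calendar month, e.g. '200806'
    DOC_NUMBER VARCHAR2(10) NOT NULL,  -- document number
    S_ORD_ITEM VARCHAR2(6)  NOT NULL,  -- position (item) number
    AMOUNT     NUMBER(17,2)
)
PARTITION BY RANGE (CALMONTH) (
    PARTITION P200806 VALUES LESS THAN ('200807'),
    PARTITION P200807 VALUES LESS THAN ('200808'),
    PARTITION PMAX    VALUES LESS THAN (MAXVALUE)  -- catch-all partition
);
```

With one partition per month, queries restricted to a month range only scan the relevant partitions, and old months can be dropped or archived per partition.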
I would appreciate it if somebody with experience in high-volume projects could give me some advice on how to manage the data.
The reports are very operational and do not require navigation; they only display the data.
Thanks in advance.
Hi,
Define the document and position numbers as line-item dimensions in your cube.
For the query itself, don't include the document numbers and positions in the initial drill-down; have them as free characteristics instead. That way, initial query execution will be faster.
But see if you can convince the client to keep the document numbers and positions in the ODS itself. There will be huge overhead if you put them in the cube with this volume of data, even after performance tuning.
Also, for the ODS, I would suggest you switch off BEx reporting/SID activation (if on BI 7) and create an InfoSet on top of the ODS for the granular reports.
Cheers,
Kedar
Thanks a lot for your replies.
Hi,
Thanks for your replies, but I have some follow-up questions.
The first is about the BI Accelerator. I know it can improve performance when you work with aggregated information, but does it make sense to implement the BI Accelerator if you want to see the information at document-level detail?
My other question concerns the first reply: why is performance better if you create the queries on an InfoSet rather than directly on a DSO?
Thanks.
Hi,
Adding to what has already been said above, here are some general tips to improve performance.
Modelling
- Cube modelling: use aggregates and compression.
- Note 356732 - Performance Tuning for Queries with Aggregates
- Note 166433 - Options for finding aggregates (find optimal aggregates for an InfoCube)
Query level
1. Use fewer and less complex cell definitions where possible.
2. Use caching.
3. Avoid using too many navigational attributes.
4. Avoid restricted and calculated key figures (RKFs and CKFs).
5. Avoid putting many characteristics in the rows.
6. Use free characteristics wherever possible.
ODS Query Performance
In your case, why don't you explore the possibility of using the BI Accelerator, since your data volume is very high?
Also, there are some SAP papers on installations with very high data volumes. You may get some insight from these:
Large Data Warehouse Implementations
Experiences with SAP NetWeaver® Business Intelligence at 20 Terabytes
Hope this helps.
Thanks,
JituK