
Compression & Response time

chris_robertson
Participant
0 Kudos

We compressed a cube for the first time on 4/16. I was very pleased that we compressed the data about 37% from 8.9 million rows to 5.6. I expected response time to improve.

I hear today that performance was very poor last week. I used ST03N to compare the week before to the week after compression. There is definitely a decline in query performance and it is specifically the database time that declined.

Though I'm not positive that the compression had anything to do with the poor response time, I can't think of any other activity that occurred doing this time frame. Has anybody ever run into a scenario where response time degrades after compression? Any idea of what I can look at to get my hands around this?

Thanks,

Chris

Accepted Solutions (1)

Former Member
0 Kudos

I assume you are using Oracle. Be sure to refresh the database statistics (Performance tab in the InfoCube manage screen). The database optimizer is probably basing its execution path on the old statistics of the E-table from when it contained zero records, so it decides to perform full-table scans rather than using the indexes.
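
For reference, a statistics refresh can also be triggered directly on the database. A minimal Oracle sketch, where the schema and table names are placeholders, not your actual objects:

-- Oracle sketch; 'SAPR3' and '/BIC/EMYCUBE' are placeholders for the real
-- schema and the E fact table of the compressed cube.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'SAPR3',
    tabname => '/BIC/EMYCUBE',
    cascade => TRUE);  -- refresh the index statistics as well
END;
/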

chris_robertson
Participant
0 Kudos

We're using DB2. Statistics seem to be up to date, but I will ask a few people here to make sure I understand this.

somnathkumar
Active Participant
0 Kudos

How about deleting and recreating the indexes on the InfoCube? The indexes might have been degraded by the compression.

chris_robertson
Participant
0 Kudos

Thank you for the posts. They got me thinking, and I feel we need to do a table reorganization. Since we had 8.9 million rows in the 'F' table before compression, we probably have an 8.4 million-row hole in the 'F' table after compression. I verified it with Basis and we are going to schedule a reorg soon. (The reorg might be a DB2 thing; I'm not sure how other databases handle this.)

0 Kudos

Hi Chris,

We have a similar issue, as we are using UDB. The issue occurs when the highly populated F table jacks up the high water mark to its maximum size, which in your case is 8.9 million rows, or about 4-8 GB (I presume). The only way around this is a reorg, which in itself is a tedious and time-consuming activity. A necessary evil, but it has to be done.
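
For what it's worth, on DB2 UDB/LUW the reorg itself boils down to something like the following command-line sketch; the table name is a placeholder, and the mechanics differ on other DB2 platforms:

-- DB2 UDB/LUW sketch; '/BIC/FMYCUBE' stands in for the real F fact table.
REORG TABLE SAPR3."/BIC/FMYCUBE";

-- Refresh the optimizer statistics afterwards so the reorganized layout is reflected.
RUNSTATS ON TABLE SAPR3."/BIC/FMYCUBE" WITH DISTRIBUTION AND DETAILED INDEXES ALL;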

Regards,

Zubin

Former Member
0 Kudos

Just an FYI on the way Oracle handles this situation -

Each request is loaded to its own partition in the F fact table. When compression is performed, the data is summarized/moved to the E fact table and the F fact table partitions are dropped. Consequently, Oracle does not have the issue with statistics that DB2 has in this particular situation, and there is never a need to reorg the F fact table.
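
If you want to see this for yourself on an Oracle-based BW, a quick sketch (the F table name is a placeholder) lists the request partitions of the F fact table:

-- Oracle sketch; each row is one request partition of the (placeholder) F table.
SELECT partition_name, num_rows
FROM   user_tab_partitions
WHERE  table_name = '/BIC/FMYCUBE'
ORDER  BY partition_position;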

I believe this goes back to a lack of range partitioning in DB2. I think if you get the F fact table reorged, you'll get the better results you are looking for.

It's always worth a look at the explain plan to see what other full table scans might be occurring besides the fact tables (Actually, in Oracle with Star Transformation enabled, I don't believe you ever see a full table scan of the fact tables). You might have some large dimension tables or master data tables that could use a secondary index.
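
(If you want to confirm that setting on an Oracle system, a quick sketch:)

-- Oracle sketch: check whether star transformation is switched on.
SELECT name, value
FROM   v$parameter
WHERE  name = 'star_transformation_enabled';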

Former Member
0 Kudos

FYI,

We are going to compress a PCA cube that has 46 million records, and we are using DB2. Do I need to drop the indexes and rebuild them? What would be the best bet:

1) compress the data quarter by quarter and rebuild the indexes after each quarterly compression, or

2) compress all the data at once and then drop/rebuild the indexes?

Any ideas will be highly appreciated, plus points (;

Thanks

Qandeel

Former Member
0 Kudos

To all.

There is a useful OSS Note, 407260 'FAQs: Compression of InfoCubes', that (even if dedicated to Oracle-based BW) gives a lot of info about this topic!

Hope it helps!

Bye,

Roberto

somnathkumar
Active Participant
0 Kudos

The best option is:

1. Delete the indexes

2. Compress the cube

3. Recreate the indexes

Compression will also take less time if there are no indexes.
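
In BW this is normally done from the Performance tab of the InfoCube manage screen or via the index delete/build steps of a process chain; at the database level the drop/recreate amounts to something like this rough SQL sketch, where the index, table, and column names are placeholders:

-- Generic SQL sketch; all names are placeholders for the real fact-table objects.
DROP INDEX "/BIC/FMYCUBE~010";
-- (run the compression here, while the secondary index is gone)
CREATE INDEX "/BIC/FMYCUBE~010" ON "/BIC/FMYCUBE" ("KEY_MYCUBE1");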

Former Member
0 Kudos

I would start out compressing a relatively small portion of the table to establish a benchmark, perhaps one month of data. Then increase the amount of data you compress if you find that the run time/resource usage of the compression won't impact your daily loading.

I would not perform it all at once. You never know if you might get bit by a temp space problem or something else. Compressions can fail, and there are recovery processes, but recovering a small amount of data is going to be easier.

former_member93896
Active Contributor
0 Kudos

Hello,

when I hear numbers like these - 8.9 million records, 46 million records... argh! Compression is highly recommended. Don't let the F-table become that big and you will be spared many issues.

Everyone please read SAP note 590370 (even if your system is not Oracle). Compress, compress, compress.

Regards

Marc

SAP NetWeaver RIG

chris_robertson
Participant
0 Kudos

Marc,

I appreciate the note. I've pulled it up to read it. But if I may say so respectfully, we understand the 'benefits' of compression; that's why we did it. Our problem right now is that the reports are actually performing much worse after our compression.

Any notes on debugging compression related issues?

Chris

Former Member
0 Kudos

You really need to compare Explain Plans from before and after compression to see if there are any differences besides the reference to the E or F fact table. It might be too late if you have already compressed and don't have a way to generate an Explain Plan from before compression, unless you want to reload your cube (ughh!).

Teaching Moment - Any time you are going to make a change, like compression, you should get an Explain Plan for some of the typical queries before the change, and then get them after the change. This would really, really, really help identify where you might be having a problem.
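
On Oracle, for example, capturing a plan is as simple as the sketch below; the query itself is purely illustrative (made-up table and column names), so substitute one of your own report queries. DB2 has equivalent explain facilities.

-- Oracle sketch; the table and column names are made up for illustration.
EXPLAIN PLAN FOR
  SELECT d.sid_0material, SUM(f.quantity)
  FROM   "/BIC/FMYCUBE" f
  JOIN   "/BIC/DMYCUBE1" d ON d.dimid = f.key_mycube1
  GROUP BY d.sid_0material;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);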

When you have data in both the F and E fact tables, you now have two queries to run, one against F and one against E, and the results are merged by the OLAP processor - there could be something there causing a problem, but I wouldn't think so.

So -

What you should look for in the E and F fact table query Explain Plans is whether you have some large dimension or master data tables that the query must join to the fact table, and whether full scans (sequential reads) of those dimension or master data tables are occurring and represent a large portion of your total query cost.

Since compression now results in two queries running, you may be joining to the large dimension or master data tables twice, which could account for your performance drop.

If this is the case, you may or may not be able to do much about it depending on the query and / or cube design.

You may be able to add additional indexes to some of the dimension or master data tables involved which could eliminate the large table scans.

You'll probably need a DBA with performance tuning experience at this point, depending on your own RDBMS experience. Secondary indexes on dimension and master data tables would need to be created by your DBA.
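
As a rough illustration only (the table and column names are placeholders, not your actual dimension objects), such a secondary index would look something like:

-- Generic SQL sketch; placeholder dimension table and SID column.
CREATE INDEX "/BIC/DMYCUBE1~Z01"
  ON "/BIC/DMYCUBE1" ("SID_0MATERIAL");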

If the Explain Plans don't identify the problem, I would open a message with SAP to see if they can get you someone with BW on DB2 experience to talk to since your problem seems to defy common sense - fewer rows to read should = faster query time.

Also, an SQL trace might identify possible problem areas.

chris_robertson
Participant
0 Kudos

Thanks Pizzaman.

I've spent most of my morning looking at the explain plan, so I feel we are on the right track. I see two possible issues in most queries. One is a possible need for an index on a master data table that is almost always used. I'm also thinking that I might be seeing some full table scans. I'm meeting shortly with some people who can help me understand what I'm seeing.

chris_robertson
Participant
0 Kudos

It looks like it is an issue with the execution plans. I don't have the 'before' plans to compare with, so I'm not certain. But for all who helped or are curious, here is what I'm seeing.

Query X took 15 seconds to execute on the database before compression, when F had 8.9 million rows & E was empty.

Query X takes 125 seconds to execute on the database after compression, where F has 0.5 million rows & E has 5.9 million rows.

The SQL trace shows that the F query takes 122 seconds & the E query takes 3 seconds.

The SQL on E & F looks identical, with the exception of the E & F table names. The execution plans do not look the same.

F chooses the primary key as its index and then does an 'Index Scan - Key Selection'.

E chooses three different dimension indexes, then sets up what looks to me like a temporary file and does various table scans. (Table scans, and it still performs better than the other plan?? I'm still learning what it all means.)

My assumption is that before the compression, the execution plan on F was similar to what the execution plan on E is now. Now that F has changed, it chose a different execution plan, and thus the poor performance.

Now we need to find out how to optimize the execution plan. We are establishing contact with some experts to help us understand/correct this problem.

Thanks to all those who contributed. Your input helped give us things to look at.

Chris

Former Member
0 Kudos

Chris,

Thanks for the insight and for sharing your knowledge. If you come up with an optimized execution plan, please share it with us.

Good luck

Thanks

Spall

Former Member
0 Kudos

Have you checked the secondary indexes on your cube with DB02?

Former Member
0 Kudos

See Note: 710814

former_member93896
Active Contributor
0 Kudos

Hello Chris,

It is not your responsibility to "optimize" execution plans. It is the job of the database to find the optimal execution plan. If it doesn't - assuming indexes and DB stats are OK - then it is a DB bug. Open a message on the BW-SYS-DB-* component and provide logon info and the query example so DB experts can take a look at it. If you add your findings regarding SQL and execution plans to the message, it will certainly help narrow down the issue.

Regards,

Marc

SAP NetWeaver RIG

chris_robertson
Participant
0 Kudos

R B,

Yes, the indexes look OK based on DB02.

We checked out the note. It applies to DB2 on Unix. Unix is not our operating system, but we are still looking at it and looking for a similar one that might apply to the OS/400 operating system.

Thanks.

chris_robertson
Participant
0 Kudos

Marc,

Thanks. We are opening a message. Thanks for helping us identify the component.

Chris

chris_robertson
Participant
0 Kudos

Final follow-up in case anyone was still following this.

We changed the number of uncompressed days from 14 to 3 and ran compression. The results were dramatically positive. The execution plan with 14 uncompressed days was different from the execution plan with 3.

As to why the execution plan performed poorly before: SAP found two errors (SQL7917 & one with DBMON) for which they recommended various PTFs. We are also going to upgrade our kernel, as our current one is quickly becoming out of date.

Thanks once more to all those who posted.

Chris Robertson

Answers (1)

Former Member
0 Kudos

Hi Chris,

Kindly inform us of any developments on this problem. I have encountered this problem before. Query performance is the same whether compressed or not. Still slow. We are running Oracle 9.2.0.4.0.

I just compress for the purpose of saving disk space. Performance-wise, it's still the same. No great impact whatsoever. Compression is always scheduled last in the process chain, after all InfoPackage loads are finished.

Regards,

--Jkyle

chris_robertson
Participant
0 Kudos

Jkyle,

I plan to post my results once I understand them. Yesterday afternoon we did a reorg on the 'E' & 'F' tables involved in our compression.

One developer of the reports thought it ran better. I didn't see much difference. We'll have a better understanding by the end of today.

Thanks,

Chris

chris_robertson
Participant
0 Kudos

FYI - Table Reorg did not help.

Statistics up to date, tables reorged, indexes rebuilt.

Open to more ideas...

Former Member
0 Kudos

Check your database parameters using RSRV.

Are all the parameters showing green with RSRV?

chris_robertson
Participant
0 Kudos

I gave it a try. Green light for database parameters.

Thanks, Chris