Compression on SQL Server 7.0

Former Member

Does anyone have experience of compression on SQL Server 7.0?

Is the E table structured any differently from the F table? For example, in Oracle the F table is partitioned by request ID and the E table by time, so even if the number of records is not reduced during compression, performance improves.

Any other suggestions for improving performance on SQL Server, such as not using auto update statistics?

Accepted Solutions (1)

Former Member

Hi

There are no partitions in SQL Server, so the E and F tables are almost the same (except for the request ID etc.). So what do you gain from compression? If you load large amounts of data you would normally drop the indexes before the load; the indexes are only dropped on the F table, so if you don't compress, the F table keeps growing and regenerating the indexes takes a long time.
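
To make that concrete, here is a minimal T-SQL sketch of the drop/rebuild step, assuming a hypothetical F fact table [/BIC/FSALESCUBE] with a made-up secondary index IDX_FSALES_1 on two made-up dimension key columns. In a real BW system these indexes are created and dropped by BW itself (for example from the InfoCube manage screen or a process chain), so this only illustrates why a small F table keeps the rebuild step cheap:

-- Names below are placeholders; substitute your own cube's F fact table and index.
-- 1. Drop the secondary index before a large load.
DROP INDEX [/BIC/FSALESCUBE].[IDX_FSALES_1]

-- 2. Run the data load into the F fact table here.

-- 3. Recreate the index afterwards; the larger the uncompressed F table,
--    the longer this rebuild takes.
CREATE NONCLUSTERED INDEX [IDX_FSALES_1]
    ON [/BIC/FSALESCUBE] (KEY_SALESCUBEP, KEY_SALESCUBET)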

I haven't found any other hints for improving performance specific to SQL Server, but if you find some, please let me know.

Regards

Kristian

Answers (2)

Former Member

Hi,

Maybe the following document will help you. It is about SQL Server 2000, but it also contains some recommendations for SQL Server 7.0.

https://websmp205.sap-ag.de/~sapidb/011000358700001409362005

Former Member

Aneesh,

In SQL Server 7.0 partitioning is not possible. It will be available in SQL Server 2005, but that is not released yet.

The only difference between the E fact and the F fact tables is one index which is not present in the E fact table. This index includes all fields, including the P dimension (request ID), which is useless in the E fact table.

You can see these differences in SE11 via the Indexes button.
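
If you prefer to check this at the database level rather than in SE11, a small sketch using sp_helpindex (the table names are placeholders for your own cube's F and E fact tables):

-- The F fact table should show one extra index covering the package/request
-- dimension, which the E fact table does not need.
EXEC sp_helpindex '/BIC/FSALESCUBE'
EXEC sp_helpindex '/BIC/ESALESCUBE'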

However, by compressing the data you may reduce the number of records, which may result in an increase in performance. Don't expect too much.

The best thing to increase performance is to build some aggregates on your cube and test the results using RSRT in debug mode.

Second, you can use precalculation to preheat the OLAP cache. This is not as flexible as aggregates (fewer drill-down possibilities) but it boosts performance. This option is great for reports without drill-down capabilities.

Former Member

We are already doing all of those things. What we are missing is what can be done on SQL Server itself to improve performance. CPU utilization is low, disk I/O is low, but SQL statements still take forever to execute!
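
If statements stay slow while CPU and disk are idle, one thing to rule out is stale optimizer statistics (the auto update setting mentioned above). A minimal SQL Server 7.0 sketch, with 'BWP' and [/BIC/FSALESCUBE] as placeholder database and table names:

-- 1. Check whether automatic statistics maintenance is switched on.
EXEC sp_dboption 'BWP', 'auto create statistics'
EXEC sp_dboption 'BWP', 'auto update statistics'

-- 2. Refresh statistics by hand, e.g. after large loads.
UPDATE STATISTICS [/BIC/FSALESCUBE]
-- or for every user table in the current database:
EXEC sp_updatestats

-- 3. Measure one slow statement: compare CPU time, logical reads and
--    elapsed time to see where the waiting happens.
SET STATISTICS IO ON
SET STATISTICS TIME ON
SELECT COUNT(*) FROM [/BIC/FSALESCUBE]   -- stand-in for the slow statement
SET STATISTICS TIME OFF
SET STATISTICS IO OFF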