Former Member

Sybase Compress Level

What's the ideal compression level for Sybase DB backups?


2 Answers

  • Oct 09, 2017 at 04:08 PM

    There is no value that is ideal for everyone. The levels are a tradeoff (with diminishing returns) between the time spent compressing and the size of the resulting output file. You have to decide what matters more to you: speed or size.

    However, in general the two newer levels 100 and 101 are better than the older levels 1-9.
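    As a rough sketch, a compression level is specified in the `with compression` clause of the dump command (the database name and dump path here are placeholders; the exact options available depend on your ASE/Backup Server version):

    ```sql
    -- Older levels 1-9: higher level = smaller dump, more CPU time
    dump database mydb to "/dumps/mydb.dmp"
        with compression = 6

    -- Newer levels 100 and 101, introduced in later ASE versions
    dump database mydb to "/dumps/mydb.dmp"
        with compression = 101
    ```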


  • Oct 09, 2017 at 04:36 PM

    Depends on:

    - The ASE version, which in turn determines which compression options are available to you (::compress:: vs. compression levels 1-9 vs. levels 100-101).

    - Available CPU cycles on the host where the compression takes place. Compression requires CPU cycles, so the longer it takes to compress your data, and the more concurrent compression operations you run, the less CPU is available for other processes (e.g., the dataserver) running on the same host.

    - In some environments I've seen database/log dump compression disabled because already-compressed dumps can slow down the compression algorithms of OS-level/filesystem-level backups.

    - Why you're compressing in the first place. Do you have limited disk space for dumps? Are you writing to slow disks (and therefore need to limit the number of MBs written to disk)? Do you need to copy the dumps across a slow network (i.e., need to reduce the size of the files being copied)?

    Ultimately you'll need to run tests in your environment to see what makes sense.


    As Bret mentioned, the newer compression levels (100/101) are typically more efficient and faster than the previous set of compression levels (1-9), which in turn are more efficient and faster than the first-generation compression library (::compress::).
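    For illustration, the three generations look roughly like this in the dump command (database name and paths are placeholders; check your version's documentation for exactly which forms it supports):

    ```sql
    -- First generation: compression invoked via the archive name
    dump database mydb to "compress::6::/dumps/mydb.dmp"

    -- Second generation: levels 1-9 via the with compression clause
    dump database mydb to "/dumps/mydb.dmp"
        with compression = 6

    -- Newest generation: levels 100 and 101
    dump database mydb to "/dumps/mydb.dmp"
        with compression = 101
    ```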
