
Sybase IQ 16.0 SP04 - How to compress a backup

Dear Gurus,

Our Sybase IQ backup size is 1.66 TB. We want to compress the backup.

While searching the admin guides below, I couldn't find any compression feature or command.

Is there any alternative way to take a smaller backup?

SyBooks Online

Sybase IQ version : 16.0 SP04

We are using Sybase IQ for ETL with Data Services and for other BO tools.

Best Regards



2 Answers

  • Best Answer
    May 03, 2016 at 09:28 AM


There is no IQ functionality to compress the backup. You have to compress it afterwards using OS-level commands.
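A minimal sketch of the post-backup approach: once BACKUP DATABASE has finished writing its stripe files, compress each stripe with an OS-level tool such as gzip. The directory and stripe names below are made up for illustration; substitute the paths your BACKUP statement actually writes to.

```shell
# Work in a scratch directory; create a stand-in stripe file
# (in real life these are the files produced by BACKUP DATABASE).
mkdir -p /tmp/iq_bkp && cd /tmp/iq_bkp
dd if=/dev/urandom of=stripe_1.bkp bs=1k count=64 2>/dev/null

# Compress every stripe in place; gzip replaces each file
# with a .gz version (use 'gzip -k' to keep the originals).
for f in *.bkp; do
  gzip -f "$f"
done

ls -l *.bkp.gz
```

Remember that the restore has to decompress the stripes first, so factor that step into your recovery procedure and its runtime.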

    Best regards,



    • Some of the available compression tools may have a source or target file size limit (e.g. 4GB with a 32-bit compressor). What are the sizes of the stripes you are backing up to?

You will get some compression of the IQ backup files, but remember that IQ itself performs final buffer compression before writing the pages to disk anyway. Any compression of the backup may be minimal or a waste of time/CPU cycles, so test first.
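"Test first" can be as simple as compressing a single stripe and computing the ratio before committing CPU time to all 1.66 TB. The sketch below uses a synthetic file so it is self-contained; point the `stripe` variable at one of your real stripes instead. Note the synthetic zero-filled file compresses far better than real IQ pages will, since IQ pages are already compressed.

```shell
# Hypothetical path: replace with one real backup stripe.
stripe=/tmp/ratio_test.bkp
dd if=/dev/zero of="$stripe" bs=1k count=256 2>/dev/null

orig=$(wc -c < "$stripe")
gzip -kf "$stripe"               # -k keeps the original for comparison
comp=$(wc -c < "$stripe.gz")

echo "compressed to $(( comp * 100 / orig ))% of original size"
```

If the measured ratio on a real stripe is only a few percent, skip the compression step entirely.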

Besides tools such as WinZip, you may also want to investigate tools such as Cygwin, which has a Linux/Unix-compatible 'tar' command (tar -cz for compression) that can compress the stripes.
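The tar route bundles all stripes into a single compressed archive, which is convenient if you move backups to another host. The directory and file names below are assumptions for illustration:

```shell
# Stand-in stripe directory; in practice this is where BACKUP wrote its stripes.
mkdir -p /tmp/iq_stripes
printf 'stand-in stripe data\n' > /tmp/iq_stripes/stripe_1.bkp

# -c create, -z gzip-compress, -f output file; -C archives relative paths.
tar -czf /tmp/iq_backup.tar.gz -C /tmp/iq_stripes .

# List the archive contents to verify it before deleting the originals.
tar -tzf /tmp/iq_backup.tar.gz
```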

Also, if you only want to compress to save disk space without moving the files, you might mark the NTFS folder as compressed in Windows and back up the stripes directly to that folder, or copy them there later.


  • May 03, 2016 at 11:36 AM

    Hi Kemal,

The official SAP tool is SAPCAR [SAP Archive - unpacking a SAR file with SAPCAR], but I guess it's not significantly better or worse than any other .zip or .rar or similar compression utility.

    I don't think you should expect too much from compression, regardless.

The content of the IQ backup archive consists of the catalog store (always complete, .db + .log) and the persistent IQ store (complete or partial, depending on the type of backup you execute). IQ data are stored with (sometimes double) compression applied anyway: dictionary compression is optional, but storage-level compression is applied to all data. Depending on the algorithm and parameters, some further compression may be possible, but I'd be very surprised if you could achieve a double-digit percentage, since storage compression does a similar job anyway, transparently and on the fly.

So the only part of your archive with significant potential for compression is the catalog store, and the overall compression rate will depend on the mix of catalog store data and IQ data in the archive. If you follow the good practice of keeping your catalog store small, getting 70% out of 500 MB (as an example) will buy you 350 MB, which I consider too little to start the hassle with whatever packer you prefer.
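The back-of-envelope estimate above can be reproduced in one line of shell arithmetic; both numbers are the assumed example values, not measurements:

```shell
catalog_mb=500   # assumed catalog store size (example from above)
ratio=70         # assumed compression rate on the catalog portion, in percent
saved=$(( catalog_mb * ratio / 100 ))
echo "estimated saving: ${saved} MB"   # 350 MB
```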

I think the recommendation to keep the .log file short (which means you'll have to cut and restart it occasionally) may over time be more beneficial than packing the archive.
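One way to cut and restart the transaction log is the dbbackup utility that ships with SQL Anywhere / IQ; its -xo option deletes and restarts the log without taking a backup. The connection string below is a made-up example, and the sketch guards the call so it degrades gracefully on a machine without the IQ binaries:

```shell
# Hedged sketch: UID/PWD/ENG values are assumptions - use your own.
if command -v dbbackup >/dev/null 2>&1; then
  MSG=$(dbbackup -c "UID=dba;PWD=sql;ENG=my_iq_server" -xo 2>&1)
else
  MSG="dbbackup not on PATH - run this on the IQ host"
fi
echo "$MSG"
```

Run this only at a quiet moment and only if your recovery strategy does not depend on the log contents being retained.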


