
MaxDB backup to file and deduplication


We make a daily full backup of our MaxDB to a single file on disk.

We would expect that there aren't many differences between these files (approx. 310 GB each day).

At the moment we are testing deduplication on these files (ProtecTIER software from Diligent). Unfortunately, the dedupe engine cannot detect any recurring patterns; only the familiar compression ratio of about 4:1 is reached. We tried the backup with both CommVault and NTBackup, but the result was the same: no dedupe possible.

Do you know of any scenario combining MaxDB with deduplication? Are there any known problems? Do you have an explanation for why two backups of the same SAP system differ so much? Are there any parameters we can use to make the backup dedupe-able?

Thanks in advance!



4 Answers

  • Best Answer
    Apr 29, 2008 at 03:10 PM


    Unfortunately, the order of data pages in a backup file is non-deterministic unless you have just a single data volume. So deduplication will only work if it operates at an 8 KB granularity.



    • Hi Sebastian,

      Yes, "data volume" is the term that replaced "devspace" as of MaxDB 7.5.

      Anyhow, for good system performance it's absolutely necessary to have multiple devspaces so that I/O can be parallelized.

      But even with a single data volume the write-on-change behaviour of MaxDB would prevent an efficient "de-duplication".

      Whenever a single row in a page is changed in MaxDB, the page is copied, the data is changed and the copy is marked as the "current" version of the page.

      With the next savepoint, these changed pages are written to disk at a location that is currently free, somewhere in the data area.

      When this happens, the converter is also updated to point to the new (changed) physical locations of the pages.

      Now, when you perform a backup, the pages are read by walking through the converter and looking up the page locations from it. So the order in which the pages appear in the backup file is simply the order in which they are found by iterating through the converter.

      Thus even if only a few pages are changed, the order of pages in the converter will change quite a lot.
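      As a rough sketch of this effect (illustrative Python, not MaxDB code): the same set of 8 KB pages, concatenated in a different order, yields a different whole-file fingerprint, while the per-page fingerprints stay the same.

      ```python
      import hashlib

      PAGE_SIZE = 8 * 1024  # MaxDB data pages are 8 KB

      # 64 distinct dummy pages standing in for database content
      pages = [bytes([i]) * PAGE_SIZE for i in range(64)]

      backup_a = b"".join(pages)                  # one converter order
      backup_b = b"".join(pages[1:] + pages[:1])  # another converter order

      # Whole-file fingerprints differ, so file-level dedup sees two unrelated files.
      print(hashlib.sha256(backup_a).digest() == hashlib.sha256(backup_b).digest())  # False

      # The set of 8 KB block fingerprints is identical, so dedup at page
      # granularity would still recognize every block.
      def block_hashes(data):
          return {hashlib.sha256(data[i:i + PAGE_SIZE]).digest()
                  for i in range(0, len(data), PAGE_SIZE)}

      print(block_hashes(backup_a) == block_hashes(backup_b))  # True
      ```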

      For a more detailed description of how the storage mechanism works, you may want to check the MaxDB Internals Course, in particular the chapter "No-Reorganization Principle; Data Storage Without I/O Bottlenecks".

      One option for you to get smaller database backups is to use incremental backups. For these the database itself determines which pages had been changed since the last complete data backup and saves only these.
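      A minimal dbmcli session for this might look roughly as follows (database name, medium name, path and credentials are placeholders, and the exact command syntax can vary between MaxDB versions):

      ```
      dbmcli -d MAXDB1 -u dbm,dbm
      # define a file medium for incremental (PAGES) backups
      medium_put BackIncr /backup/MAXDB1_incr FILE PAGES
      # a utility session is needed before starting a backup
      util_connect dbm,dbm
      # save only the pages changed since the last complete DATA backup
      backup_start BackIncr PAGES
      ```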

      Finally, to give you something to look forward to :-):

      adding backup compression/encryption to MaxDB has already been considered.

      But don't ask when this will be implemented... (my guess - MaxDB 8.0 😊)

      Best regards,


  • Apr 29, 2008 at 05:07 PM


    thank you for your explanation.

    One last question:

    Do you know, how other databases with a SAP system handle the deduplication?




    • Hi Sebastian,

      To my knowledge, the other supported RDBMS (Oracle, MSS, Informix and DB2) all use fixed positions for the rows.

      (Only because of this does a ROWID make sense as a physical locator.)

      The big downside of this is the need for reorganisation - something that doesn't need to be done with MaxDB.

      KR Lars

  • Former Member
    May 09, 2008 at 02:58 PM

    Hi Sebastian,

    We will be testing a range of backup products, including de-dup vendor appliances, for our Windows 2003 x64 / MaxDB x64 environment over the next 4 to 6 weeks.

    We plan to trial the following software products:

    1. Symantec - Netbackup

    2. CommVault - Galaxy Data Protector

    Hardware, de-dup appliances:

    1. Data Domain

    2. Quantum

    I will let you know how we get on.




  • Jul 16, 2008 at 09:48 AM


    Data Domain has introduced the DD Compression Type (10) in DD OS v4.6. This solves the problem; it has been tested in different environments.

    Please contact Data Domain in case of further questions.

    Best regards,

